[mpas-developers] Block Decomposition Revision

Doug Jacobsen jacobsen.douglas at gmail.com
Fri Jan 27 14:40:41 MST 2012


Hi Michael,


> Since we don't currently have a way to define weights for cells or edges,
> I'd not object to dealing with the extension of partial_global_graph_info
> later, especially since it doesn't fundamentally affect the work of
> assigning multiple blocks to an MPI task.
>

That sounds like a good idea to me, thanks!


>  I'd not argue that it couldn't be useful for an MPI task to know how many
> blocks another task owns, but I think we should consider whether the
> mpas_get_blocks_per_proc() routine or the blocks_per_proc vector are
> needed to meet any of the requirements. If not, do we want there to be a
> requirement for such, and if so, why? Also, while recognizing that the
> number of MPI tasks should be at most O(10^6) for the foreseeable future,
> I think we should be wary of storing global information in memory.
>

OK, I will change it to a public routine that simply returns the number of
blocks for a given processor. The routine was also intended to let a
processor figure out how many blocks it is supposed to have. I had planned
to make the array public so the I/O routines could easily determine how
many blocks they needed and allocate that many, but it would be very easy
to switch to just using the routine instead. The routine isn't
computationally expensive, so calling it a few times wouldn't be a
significant penalty. I don't anticipate any of the cores needing to call
it, just other parts of the framework. I'll make this change to the design
document.


> I think at some point in the past, the design document template may have
> had a section named "Implementation"; so, maybe we could keep the
> implementation details in this section, and focus on a detailed interface
> specification in the Design section. Does that sound like a reasonable
> option?
>

There was an implementation section; I removed it because it wasn't useful
at the time, but I'll add it back and put the actual implementations of
the routines in that section. Thanks.

Doug