[mpas-developers] derived data type redesign [with attachments!]
duda at ucar.edu
Thu Dec 22 12:32:12 MST 2011
Thinking a bit more about the issue of halos, perhaps we can deal with
flexibility in the number of halo layers, and in which halo layers are
exchanged on a per-field basis, as part of the current DDT redesign.
The issue of exchanging arbitrary halo layers (e.g., at one point only
exchanging the inner-most layer of halo cells for a field, and later
exchanging all layers) doesn't sound terribly difficult; as I mentioned
earlier, Conrad has already been working on this a bit. We could
generalize, and have the parallel_info type look like
   type parallel_info

      type (dm_exchange_list), dimension(:), pointer :: cellsToSend
      type (dm_exchange_list), dimension(:), pointer :: cellsToRecv
      type (sm_exchange_list), dimension(:), pointer :: cellsToCopy

      type (dm_exchange_list), dimension(:), pointer :: edgesToSend
      type (dm_exchange_list), dimension(:), pointer :: edgesToRecv
      type (sm_exchange_list), dimension(:), pointer :: edgesToCopy

      type (dm_exchange_list), dimension(:), pointer :: verticesToSend
      type (dm_exchange_list), dimension(:), pointer :: verticesToRecv
      type (sm_exchange_list), dimension(:), pointer :: verticesToCopy

   end type parallel_info
where each of the exchange lists (whether for shared-memory copies or
distributed-memory communication) is an array, with index 1 listing the
cells/edges/vertices to communicate for the first (inner-most) layer,
index 2 listing the cells/edges/vertices for the second layer, etc.
Then, a call to mpas_dmpar_exch_halo could look like
call mpas_dmpar_exch_halo(theta, (/1,2/))
to exchange layers 1 and 2 for the field theta, or just
call mpas_dmpar_exch_halo(theta, (/1/))
to exchange just the first layer. The mpas_dmpar_exch_halo routine could
also check whether the field theta even has a second layer of halo cells
associated with it, and return an error if not.
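To make the proposal concrete, the generalized routine might look
something like the sketch below. This is only an illustration of the
idea, not actual MPAS code: the field type name, the parinfo path, and
the abort call are all assumed names, and the layer-list argument is
shown as optional so that existing calls (which exchange all layers)
would continue to work unchanged.

```fortran
   ! Hypothetical sketch of a generalized exchange routine; all names
   ! other than mpas_dmpar_exch_halo are illustrative assumptions.
   subroutine mpas_dmpar_exch_halo(field, layerList)

      type (field2DReal), intent(inout) :: field      ! e.g., theta
      integer, dimension(:), intent(in), optional :: layerList

      integer :: i, nLayers

      ! Number of halo layers actually built for this field
      nLayers = size(field % parinfo % cellsToSend)

      if (present(layerList)) then
         ! Check that every requested layer exists for this field
         do i = 1, size(layerList)
            if (layerList(i) < 1 .or. layerList(i) > nLayers) then
               call halo_abort('exch_halo: requested halo layer not present')
            end if
         end do
         ! ... pack/exchange/unpack only the requested layers ...
      else
         ! Default: exchange all layers, preserving current behavior
         ! ... pack/exchange/unpack layers 1..nLayers ...
      end if

   end subroutine mpas_dmpar_exch_halo
```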
Internally, the implementation of the current mpas_dmpar_exch_halo just
copies the required cells from the field array into a send buffer,
receives MPI messages in a receive buffer, and copies from the buffer
into the field array; so I think this could be generalized -- albeit
probably not in an optimal way -- to handle calls specifying cells from
a list of halo layers (essentially, we just copy multiple lists of
cells into the buffer rather than a single list).
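The generalized pack step could be as simple as an outer loop over the
requested layers, appending each layer's cells to the send buffer. A
rough sketch follows; the exchange-list members (nList, list) and the
other variable names here are assumptions, not the actual dm_exchange_list
layout.

```fortran
   ! Illustrative pack loop: append each requested layer's cells to the
   ! send buffer instead of copying from a single exchange list.
   nPacked = 0
   do iLayer = 1, size(layerList)
      sendList => parinfo % cellsToSend(layerList(iLayer))
      do i = 1, sendList % nList
         nPacked = nPacked + 1
         sendBuf(nPacked) = fieldArray(sendList % list(i))
      end do
   end do
   ! sendBuf(1:nPacked) now holds the cells from all requested layers
```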
A second issue concerns how we might deal with different numbers of halo
layers on a per-field basis, since we currently assume that
grid % nCells gives the total number of cells for many different fields.
An obvious solution would be to have nCells be part of the field type,
rather than the mesh type; however, this would require us to update
essentially every loop in the code to loop over, e.g.,
state % theta % nCells rather than grid % nCells.
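For reference, moving the dimension into the field type might look like
the sketch below. The type and member names are hypothetical; the point
is only that each field would carry its own horizontal extent, so two
fields on the same mesh could have halos of different depths.

```fortran
   ! Hypothetical field type carrying its own dimensions; names assumed.
   type field2DReal
      real (kind=RKIND), dimension(:,:), pointer :: array
      integer :: nCells           ! total cells, including this field's halo
      integer :: nCellsSolve      ! owned cells only
      type (parallel_info), pointer :: parinfo
   end type field2DReal

   ! Loops would then reference the field's own extent:
   do iCell = 1, state % theta % nCells
      ! ... operate on state % theta % array(:,iCell) ...
   end do
```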
Do others have any thoughts on the above proposal? In particular, would
having separate exchange lists for each halo layer preclude any
foreseeable performance optimizations, and would associating
nCells/nEdges/nVertices with each field lead to significant
code-modification headaches?
On Thu, Dec 22, 2011 at 11:45:23AM -0700, Michael Duda wrote:
> Hi, Doug.
> In principle, the code that builds layers of halo cells and creates
> communication lists for them can be used to build halos of arbitrary
> extent; at the moment, though, this code is heavily intertwined with the
> code to read in fields. It's been quite a while since I last looked at
> this code, though Conrad has been doing some work recently on the halo
> code in MPAS, specifically, to enable us to loop over subsets of halo
> edges/cells/vertices and to exchange subsets of the halos; he may be
> able to comment on how easy it might be to compute a set of 4-halo
> exchange lists, and to add the extra cells for just one field.
> My suspicion is that in its current form, the code may not easily handle
> different halo extents for each field; but, as you've probably
> experienced, it's often not too difficult to add some quick-and-dirty
> code to handle a specific case for the purposes of testing out an idea.
> I'll let Conrad weigh in with any comments he might have, and I'll also
> take a look through the code in module_io_input.F to see where we might
> be able to fit in code for a 4-halo.
> On Wed, Dec 21, 2011 at 02:34:07PM -0700, Doug Jacobsen wrote:
> > Hey Michael,
> > Something we were discussing in the MPAS-O meeting today was adding
> > extensible halos to variables, i.e., letting each variable contain a
> > possibly different-sized halo. This could be useful for our purposes
> > with respect to the split-explicit time stepping methods, mainly
> > because we could have a halo equal to the number of subcycles on
> > barotropic fields, which would negate the need to do halo updates
> > inside the barotropic solve. In other places, we could stick with the
> > conventional 2-layer halo. It might be good to add that into the
> > rewrite of the registry, since the halo is already fairly extensible
> > just by adding more cells between nCellsSolve and nCells+1.
> > Also, along these lines: do you know of a way to grow the halo beyond
> > 2 layers in the current implementation? I have a halo update I want to
> > experiment with, and an easy test is to make it a 4-layer halo instead
> > of a 2-layer halo, at least for this test.
> > Thanks,
> > Doug