Hi again all,<br><br>I have attached an updated version of the design document, incorporating several changes from discussions I have had with various people. These changes should also address many of the concerns raised in previous emails on this topic.<br>
<br>I have also included some bigger-picture information and a general timeline for the whole project (all of tasks 1 through 5, not just this one).<br><br>Please let me know if you have any questions or concerns about the updated document.<br>
<br>Doug<br><br><div class="gmail_quote">On Thu, Jan 26, 2012 at 11:01 AM, Michael Duda <span dir="ltr"><<a href="mailto:duda@ucar.edu">duda@ucar.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi, Doug.<br>
<br>
Thanks very much for the document. I'm still thinking through the design<br>
section, but I have a couple of quick comments on the requirements.<br>
First, I think we need to be careful about including implementation<br>
details in the requirements themselves. For example, the requirement<br>
<br>
"Domain decomposition information needs to be read from a file based on<br>
number of total blocks as opposed to number of MPI tasks."<br>
<br>
says quite a bit about how this should be implemented ("needs to be read<br>
from a file"). Secondly, even though the initial design may take the<br>
simplest approach to assigning multiple blocks to a task, I think it<br>
will be good to ensure that the requirements describe our long-term<br>
requirements for the module.<br>
<br>
Two of the requirements could be something along the lines of:<br>
<br>
1) the user must be able to specify the number of blocks to be owned by<br>
each task; and<br>
<br>
2) the block_decomp module must return information describing the cells<br>
contained in the specified number of blocks for the task.<br>
<br>
As for additional requirements, we could consider whether we ultimately<br>
want to be able to specify a different number of blocks for each MPI<br>
task, and whether we want to place additional constraints on which<br>
blocks are assigned to a task (much harder, I think; e.g., preference<br>
could be given to contiguously located blocks to maximize shared-memory<br>
copies, or to blocks spread as evenly around the global domain as<br>
possible to help with load balancing).<br>
<br>
I apologize for taking so long to have a look through the document; I'll<br>
send along any other comments that might come up as soon as I can.<br>
<br>
Regards,<br>
Michael<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
On Tue, Jan 24, 2012 at 01:26:53PM -0700, Doug Jacobsen wrote:<br>
> Hi All,<br>
><br>
> Continuing with the work that Michael previously proposed before the<br>
> derived data type revision, I am starting work on revisions to the block<br>
> decomposition module to support multiple blocks per MPI process. As a<br>
> refresher, this item was previously described as<br>
><br>
> 4) Modify the block_decomp module to enable a task to get a list of<br>
> cells in more than one block that it is to be the owner of.<br>
> Implemented in the simplest way, there could simply be a namelist<br>
> option to specify how many blocks each task should own, and the<br>
> block_decomp module could look for a graph.info.part.n file, with<br>
> n=num_blocks_per_task*num_tasks, and assign blocks k, 2k, 3k, ...,<br>
> num_blocks_per_task*k to task k.<br>
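[Editor's note: the quoted scheme above can be sketched roughly as follows. This is a hypothetical illustration in Python, not MPAS code; it assumes a METIS-style graph.info.part.n file in which line i holds the block id of cell i, and reads the block-to-task mapping as a simple contiguous one (task k owns blocks k*m through (k+1)*m - 1, with m = num_blocks_per_task), which is one interpretation of the shorthand "blocks k, 2k, ..." in the quoted text.]

```python
def blocks_for_task(task, num_blocks_per_task):
    """Return the block ids owned by the given MPI task, assuming a
    contiguous assignment of num_blocks_per_task blocks per task."""
    start = task * num_blocks_per_task
    return list(range(start, start + num_blocks_per_task))

def cells_for_task(partition_lines, task, num_blocks_per_task):
    """Given the lines of a graph.info.part.n file (one block id per
    cell, 0-based cell numbering), return the cells contained in all
    blocks owned by `task` -- the information the block_decomp module
    would hand back to that task."""
    owned = set(blocks_for_task(task, num_blocks_per_task))
    return [cell for cell, line in enumerate(partition_lines)
            if int(line) in owned]

# Example: 8 cells partitioned into n = 4 blocks, for 2 MPI tasks
# with 2 blocks per task (n = num_blocks_per_task * num_tasks).
part = ["0", "0", "1", "2", "1", "3", "2", "3"]
print(cells_for_task(part, 0, 2))  # → [0, 1, 2, 4]  (blocks 0 and 1)
print(cells_for_task(part, 1, 2))  # → [3, 5, 6, 7]  (blocks 2 and 3)
```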
><br>
> I have attached a design document (pdf and latex) related to this task.<br>
> Please let me know if you have any comments or suggestions. I would like to<br>
> have this implemented by the end of the week. Thanks for any feedback.<br>
><br>
> Doug<br>
<br>
<br>
<br>
</div></div><div class="HOEnZb"><div class="h5">> _______________________________________________<br>
> mpas-developers mailing list<br>
> <a href="mailto:mpas-developers@mailman.ucar.edu">mpas-developers@mailman.ucar.edu</a><br>
> <a href="http://mailman.ucar.edu/mailman/listinfo/mpas-developers" target="_blank">http://mailman.ucar.edu/mailman/listinfo/mpas-developers</a><br>
<br>
</div></div></blockquote></div><br>