[mpas-developers] Restructuring high-level MPAS driver

Todd D. Ringler ringler at lanl.gov
Thu Oct 28 12:32:53 MDT 2010


Hi Michael,

Delaying the issues related to graph.info.part.* sounds appropriate.

Cheers,
Todd

> Hi, Todd.
>
> Thanks for the comments. The issues you've identified -- specifying
> names for graph files and running with a single MPI task -- should
> certainly be addressed, and I would guess that these shouldn't take much
> work to implement. I also think that these are something of a separate
> issue from high-level code structure, so I'd propose to address them in
> a separate set of commits. Unless someone else has a strong need to
> tackle them sooner, I can take a look at these in the next week or so.
>
> Cheers,
> Michael
>
>
> On Wed, Oct 27, 2010 at 01:12:20PM -0600, Todd Ringler wrote:
>>
>> Hi Michael,
>>
>> Thanks for pushing on this. It is definitely an aspect of the model that
>> needs a bit of work. In addition to my comments below, I think that Mark
>> Petersen will be scoping the potential advantages/disadvantages of the
>> proposed changes.
>>
>> I think that it is entirely reasonable to assume that some form of
>> output will be generated. If possible, it might be a good idea to make
>> a runtime check to see whether this is actually the case, given the
>> namelist file, so that we can exit gracefully if needed.
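>>
>> For illustration, a minimal sketch of such a check might look like the
>> following; config_output_interval, config_restart_interval, and the
>> routine name are placeholders for this sketch, not the actual MPAS names:
>>
>>    subroutine check_output_configured(config_output_interval, config_restart_interval)
>>       ! Sketch only: abort cleanly when the namelist implies that no
>>       ! output of any kind would ever be written.
>>       use mpi
>>       implicit none
>>       integer, intent(in) :: config_output_interval, config_restart_interval
>>       integer :: ierr
>>
>>       if (config_output_interval <= 0 .and. config_restart_interval <= 0) then
>>          write(0,*) 'Namelist requests no output and no restart stream; stopping.'
>>          call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
>>       end if
>>    end subroutine check_output_configured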
>>
>> On a related note (related because it also happens at init), there is
>> the issue of graph.info.part.* . I have two recommendations that we
>> could consider either as part of this reorganization or address
>> separately.
>>
>> The first is that we might want to be able to specify the
>> "graph.info.part" portion of the input file name through the namelist.
>> We are generating a lot of meshes, and the graph files go with the
>> meshes. Metis names the graph files analogously to the grid files, which
>> makes sense. As a result, I do a lot of copying like "cp
>> x1.2562.graph.info.part.8 graph.info.part.8" into my run directory. A
>> generalization would make the graph input similar to the grid input,
>> i.e. we could specify the form of both. The difference here is that we
>> would need a way to get the number of tasks at runtime (before opening
>> the graph file) in order to complete the graph file name.
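>>
>> Just to sketch what I mean (config_graph_prefix here is a hypothetical
>> namelist variable, and the actual plumbing would live in the framework):
>>
>>    program graph_name_example
>>       ! Sketch: build the partition file name from a namelist-supplied
>>       ! prefix plus the number of MPI tasks obtained at runtime.
>>       use mpi
>>       implicit none
>>       character(len=256) :: config_graph_prefix   ! hypothetical namelist entry
>>       character(len=16)  :: ntasks_str
>>       character(len=280) :: graph_filename
>>       integer :: ntasks, ierr
>>
>>       call MPI_Init(ierr)
>>       call MPI_Comm_size(MPI_COMM_WORLD, ntasks, ierr)
>>
>>       config_graph_prefix = 'x1.2562.graph.info.part.'
>>       write(ntasks_str, '(i0)') ntasks
>>       graph_filename = trim(config_graph_prefix) // trim(ntasks_str)
>>       ! With 'mpirun -np 8', graph_filename is 'x1.2562.graph.info.part.8'
>>
>>       call MPI_Finalize(ierr)
>>    end program graph_name_example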
>>
>> The second graph issue is that we should be able to run the model with
>> "mpirun -np 1 ocean.exe" with no graph file present. I have verified
>> that the model runs in this mode by creating my own graph.info.part.1
>> files (kmetis will not produce these!). This configuration essentially
>> turns off MPI and provides a means of testing certain aspects of the
>> model. I guess that this would be pretty trivial if, again, we could
>> retrieve the number of processors before opening the graph file.
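>>
>> As a rough sketch of what I have in mind (the routine and variable names
>> are made up for illustration, not the actual MPAS ones):
>>
>>    subroutine get_cell_blocks(graph_prefix, ncells, cell_block)
>>       ! Sketch: only read a Metis partition file when there is more than
>>       ! one MPI task; a single-task run assigns every cell to block 0.
>>       use mpi
>>       implicit none
>>       character(len=*), intent(in)  :: graph_prefix
>>       integer,          intent(in)  :: ncells
>>       integer,          intent(out) :: cell_block(ncells)
>>
>>       character(len=16) :: ntasks_str
>>       integer :: ntasks, ierr, i
>>       integer, parameter :: funit = 27
>>
>>       call MPI_Comm_size(MPI_COMM_WORLD, ntasks, ierr)
>>
>>       if (ntasks == 1) then
>>          cell_block(:) = 0          ! no graph.info.part.1 file needed
>>       else
>>          write(ntasks_str, '(i0)') ntasks
>>          open(unit=funit, file=trim(graph_prefix)//trim(ntasks_str), &
>>               status='old', action='read')
>>          do i = 1, ncells
>>             read(funit, *) cell_block(i)    ! one block index per cell
>>          end do
>>          close(funit)
>>       end if
>>    end subroutine get_cell_blocks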
>>
>> This might be more of a "graph" issue than an "init" issue and we might
>> be better off pushing this to a later reorganization. I just wanted to
>> bring it up now in case you think that it is easy to address at present.
>>
>> Thanks again for putting together a reorg of the init procedure.
>>
>> Cheers,
>> Todd
>>
>>
>> Begin forwarded message:
>>
>> > From: "Todd D. Ringler" <ringler at lanl.gov>
>> > Date: October 27, 2010 11:06:37 AM MDT
>> > To: todd.ringler at mac.com
>> > Subject: [Fwd: [mpas-developers] Restructuring high-level MPAS driver]
>> > Reply-To: ringler at lanl.gov
>> >
>> > ---------------------------- Original Message ----------------------------
>> > Subject: [mpas-developers] Restructuring high-level MPAS driver
>> > From:    "Michael Duda" <duda at ucar.edu>
>> > Date:    Tue, October 26, 2010 4:30 pm
>> > To:      mpas-developers at ucar.edu
>> > --------------------------------------------------------------------------
>> >
>> > Hi, Developers.
>> >
>> > I've been working to restructure the higher levels of the MPAS
>> > software (driver and subdriver) to enable us to separate the test case
>> > initialization from the model proper, among other things, and I've
>> > settled on what I think is a set of changes that will take us in the
>> > proper direction.
>> >
>> > In the current top-level code (mpas.F and module_subdriver.F),
>> > the setup_test_cases() routine is called directly, which precludes
>> > separating it from the model. I view the initialization code that is
>> > currently in module_test_cases.F as something of its own "core" that
>> > uses the MPAS software infrastructure in the same way that the sw,
>> > nhyd_atmos, and ocean cores use this infrastructure. Consequently, I'd
>> > like to generalize both mpas.F and module_subdriver.F by removing any
>> > core-specific code, where the definition of "core" now includes
>> > initialization, post-processing, etc. Under this generalization, the
>> > content of mpas.F would be a main program that simply calls init(),
>> > run(), and finalize() routines in the subdriver module, and the
>> > subdriver module's init() implementation would be responsible for
>> > initializing the framework (infrastructure) and then the core; to
>> > further abstract the details of the infrastructure and core, I'm also
>> > proposing that we provide main modules for each of these, which would
>> > implement their own init, run, and finalize routines. Calls to
>> > setup_test_cases() could still exist, if at all, within either the
>> > init() or run() routines implemented by the core's main module;
>> > however, with a separated initialization, the initialization would
>> > exist as its own core, whose run() routine would perform the work
>> > currently done in setup_test_cases().
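>> >
>> > As a rough illustration of the proposed layering (the module and
>> > routine names here are only indicative; the actual names are in the
>> > working copy mentioned below):
>> >
>> >    program mpas
>> >       ! Sketch: the main program knows nothing about any particular core;
>> >       ! it simply delegates to the subdriver.
>> >       use mpas_subdriver
>> >       implicit none
>> >
>> >       call mpas_init()       ! initialize the framework, then the core
>> >       call mpas_run()        ! core's run(); for an "init" core, this does
>> >                              ! the work now in setup_test_cases()
>> >       call mpas_finalize()   ! finalize the core, then the framework
>> >
>> >    end program mpas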
>> >
>> > I've placed a tar file to illustrate what these changes look like at
>> > http://www.mmm.ucar.edu/people/duda/files/mpas/mpas_newdriver.tar.gz.
>> > This code is an svn working copy, so it should be possible to 'svn
>> > status' and 'svn diff' to see which files have changed. During the
>> > restructuring, I found that the mpas_query() routine is no longer
>> > needed, since the number of time levels for fields can be determined
>> > purely from the registry, so I removed the code related to this as well.
>> >
>> > One side effect of the current reorganization is that both the output
>> > and restart files are always opened at the start of MPAS execution,
>> > regardless of whether a core intends to write a restart file at all.
>> > This raises the question of whether control over output (and perhaps
>> > even input) streams should be given to the individual MPAS cores.
>> > Would it be reasonable for now to assume that a core always writes at
>> > least an output stream, but may not write a restart stream? If so, we
>> > could probably relocate the code to open/write/close the restart file
>> > down to the core-specific code, for example.
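>> >
>> > (For instance, the core-specific init could contain a fragment along
>> > these lines, where config_restart_interval and the stream routine are
>> > placeholder names used purely for illustration:)
>> >
>> >    ! Sketch: open the restart stream only if this core will write restarts.
>> >    if (config_restart_interval > 0) then
>> >       call open_restart_stream(restart_obj, config_restart_name)   ! placeholder call
>> >    end if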
>> >
>> > On the upside, the restructuring that I've proposed has very little
>> > impact on the actual solver code in the MPAS cores, and I've found
>> > that the high-level code is lightweight enough that it should be easy
>> > to make further changes as we move forward with future development.
>> >
>> > I apologize for the lengthy e-mail, but if anyone has any comments,
>> > particularly concerning other requirements besides separating
>> > initialization from model integration, I'd be very glad to hear them.
>> > Assuming no objections that cannot be remedied, I'd like to commit
>> > the proposed changes to the repository trunk fairly soon.
>> >
>> > Cheers,
>> > Michael
>> > _______________________________________________
>> > mpas-developers mailing list
>> > mpas-developers at mailman.ucar.edu
>> > http://mailman.ucar.edu/mailman/listinfo/mpas-developers
>> >
>>
>
>


