[Wrf-users] Nesting and Domain Decomposition

Tabish Ansari tabishumaransari at gmail.com
Thu Feb 25 09:37:26 MST 2016


Hi Doug,

I'm not too knowledgeable in this area, but I have some literature that might
be relevant. Please have a look at the attached files.

Cheers,

Tabish

Tabish U Ansari
PhD student, Lancaster Environment Center
Lancaster University
Bailrigg, Lancaster,
LA1 4YW, United Kingdom

On 25 February 2016 at 13:59, Douglas Lowe <Douglas.Lowe at manchester.ac.uk>
wrote:

> Hi all,
>
> I'm running WRF-Chem with a nest of 3 domains, using the settings listed
> below. I'd like to be able to split this across as many processes as
> possible in order to speed things up (currently I'm only managing 3x real
> time, which isn't very good when running multi-day simulations).
> Unfortunately, I am finding that WRF hangs when calling the photolysis
> driver for my 2nd domain (which is the smallest of the domains) if I use
> too many processors.
>
> The (relevant) model domain settings are:
> max_dom            = 3,
> e_we               = 134,   81,   91,
> e_sn               = 146,   81,   91,
> e_vert             = 41,    41,   41,
> num_metgrid_levels = 38,
> dx                 = 15000, 3000, 1000,
> dy                 = 15000, 3000, 1000,
>
> WRF will run when I split over up to 168 processes (7 nodes on the ARCHER
> supercomputer), but won't work if I split over 192 (or more) processes
> (8 nodes on ARCHER).
>
> Looking at the log messages, I *think* that WRF is splitting each domain
> into the same number of patches, and sending one patch from each domain to
> a single process for computation. However, this means that the smallest
> domain limits how many patches I can split each domain into before we end
> up with patches that are dwarfed by the halos around them.
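
As a rough illustration of that point, here is a back-of-the-envelope sketch
(Python, not WRF source code). It assumes the default decomposition is a
near-square factorisation of the MPI task count applied identically to every
domain, and the halo width is only a placeholder (WRF-Chem's chemistry and
monotonic advection need wider halos than the dynamics alone), so treat the
numbers as indicative only:

    import math

    # (e_we, e_sn) from the namelist quoted above
    DOMAINS = {"d01": (134, 146), "d02": (81, 81), "d03": (91, 91)}
    HALO_WIDTH = 5  # illustrative halo depth, in grid points, on each side

    def near_square_factors(ntasks):
        """Return the factor pair (nproc_x, nproc_y) closest to square."""
        best = (1, ntasks)
        for nx in range(1, int(math.sqrt(ntasks)) + 1):
            if ntasks % nx == 0:
                best = (nx, ntasks // nx)
        return best

    for ntasks in (168, 192):
        nx, ny = near_square_factors(ntasks)
        print(f"{ntasks} tasks -> {nx} x {ny} patches per domain")
        for name, (e_we, e_sn) in DOMAINS.items():
            px, py = e_we // nx, e_sn // ny  # interior points per patch
            hx, hy = px + 2 * HALO_WIDTH, py + 2 * HALO_WIDTH  # patch plus halos
            halo_frac = 1.0 - (px * py) / (hx * hy)
            print(f"  {name}: ~{px} x {py} interior points, "
                  f"~{halo_frac:.0%} of the haloed patch is halo")

At 168 or 192 tasks, the 81 x 81 second domain is already down to patches of
only about 5-6 points on a side, so the halo regions are several times larger
than the patch interiors, which fits the limit you describe.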
>
> Would it not make more sense to be able to split each domain into a
> different number of patches (so that each patch is of a similar size,
> regardless of which domain it comes from), and to send one patch from one
> domain to a single process (or, perhaps, send more patches from the outer
> domains to a single process, if needed to balance the computational
> demands)? And is there any way for me to do this with WRF?
>
> Thanks,
> Doug
Attachments:
- WRF-HPC.pdf (application/pdf, 243897 bytes):
  http://mailman.ucar.edu/pipermail/wrf-users/attachments/20160225/02afb9aa/attachment-0003.pdf
- WRF-chapter-multicore.pdf (application/pdf, 230093 bytes):
  http://mailman.ucar.edu/pipermail/wrf-users/attachments/20160225/02afb9aa/attachment-0004.pdf
- CUDA-WRF_ppt.pdf (application/pdf, 2314206 bytes):
  http://mailman.ucar.edu/pipermail/wrf-users/attachments/20160225/02afb9aa/attachment-0005.pdf

