[Wrf-users] scalability of WRF w/OpenMPI

Dominikus Heinzeller climbfuji at ymail.com
Fri Jul 1 09:14:16 MDT 2016


Hi Marc,

One thing you could check is the interconnect over which the MPI traffic is routed. In particular, if you use different interconnects for I/O and MPI (e.g. Ethernet for I/O and InfiniBand for MPI) and this is not configured correctly, MPI might actually route its traffic over the Ethernet connection, too. Do you see similar problems with parallel benchmarks on your network?
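
As a rough sketch (assuming an Open MPI 1.6.x build with the openib BTL available; the interface names and rank count below are placeholders for your system), you can check which transports your build knows about and pin the job to InfiniBand with MCA parameters:

    ompi_info | grep btl                                        # list the byte-transfer layers compiled into this build
    mpirun --mca btl openib,sm,self -np 256 ./wrf.exe           # force InfiniBand (openib) plus on-node shared memory
    mpirun --mca btl_tcp_if_exclude lo,eth0 -np 256 ./wrf.exe   # or keep TCP but exclude the Ethernet/I/O interface

If the openib-only run refuses to start, that is a hint that Open MPI cannot use the InfiniBand fabric and your earlier runs were probably falling back to Ethernet.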

Cheers

Dom

> On 1/07/2016, at 3:05 PM, Marcella, Marc <MMarcella at AIR-WORLDWIDE.COM> wrote:
> 
> Hi all,
> 
> I'm trying to run WRF 3.6.1 in parallel using OpenMPI on a Linux HPC, with openmpi-1.6.5 and WRF both built with the 64-bit PGI compiler 13.6-0.  I've tried configure option 3, and what we're finding is that WRF doesn't seem to be scaling properly.  For a domain of 450x200, anything past 256 cores actually gives negative returns, and linear scaling stops at essentially 16 cores.  When we introduce the DM+SM option we see better returns, but still no real scalability (4 cores = 30 hr simulated, 16 cores = 74 hr simulated, 256 cores = 432 hr simulated).
>  
> From what I've researched online, it appears WRF should scale much better, close to linearly, up to a significantly larger number of cores, particularly in DM mode.  Are there any known issues with WRF 3.6, particular versions of MPI we should use, or anything else we should be aware of to get parallel WRF to scale properly?
>  
> Any experience shared would be greatly appreciated…
>  
> Thanks,
> Marc
>  
>  
> _______________________________________________
> Wrf-users mailing list
> Wrf-users at ucar.edu
> http://mailman.ucar.edu/mailman/listinfo/wrf-users