On 2/07/2016, at 5:42 PM, Marcella, Marc <MMarcella@AIR-WORLDWIDE.COM> wrote:

Hi Dom,

Thanks for the reply. Yes, unfortunately we've already gone through all of the kernel TCP tuning for 10 Gb Ethernet, and for MPI over 10 Gb Ethernet. We are coming nowhere near saturating the interconnects between the nodes, so we are open to other suggestions. Do you see near-linear scaling for, say, 256, 512, 1024, etc. cores? Or should we at least see something somewhat close?

Marc


From: Dominikus Heinzeller [mailto:climbfuji@ymail.com]
Sent: Friday, July 01, 2016 11:14 AM
To: Marcella, Marc
Cc: wrf-users@ucar.edu
Subject: Re: [Wrf-users] scalability of WRF w/OpenMPI
class=""><o:p class=""> </o:p></div><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class="">Hi Marc,<o:p class=""></o:p></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><o:p class=""> </o:p></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class="">One thing you could check is the interconnect over which the MPI traffic is routed. In particular if you use different interconnects for I/O and MPI (e.g. Ethernet for I/O and Infiniband for MPI - if this is not configured correctly, MPI might actually use the Ethernet connection, too). Do you see similar problems with parallel benchmarks on your network?<o:p class=""></o:p></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><o:p class=""> </o:p></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class="">Cheers<o:p class=""></o:p></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><o:p class=""> </o:p></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class="">Dom<o:p class=""></o:p></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><o:p class=""> </o:p></div></div><div class=""><div class=""><blockquote style="margin-top: 5pt; margin-bottom: 5pt;" class=""><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class="">On 1/07/2016, at 3:05 PM, Marcella, Marc <<a href="mailto:MMarcella@air-worldwide.com" style="color: purple; text-decoration: underline;" class="">MMarcella@AIR-WORLDWIDE.COM</a>> wrote:<o:p class=""></o:p></div></div><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><o:p class=""> </o:p></div><div class=""><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><span style="font-size: 11pt; font-family: Calibri, sans-serif;" class="">Hi all,<o:p class=""></o:p></span></div></div><div class=""><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif;" class=""><span style="font-size: 11pt; font-family: Calibri, sans-serif;" class=""><br class="">Im trying to run WRF3.6.1 in parallel using OpenMPI on a Linux HPC with an openmpi-1.6.5 build with the PGI compiler 13.6-0 64-bit, WRF3.6. I’ve tried the configure 3 option, and what we’re finding is that WRF doesn’t seem to be scaling properly. For a domain of 450x200, anything past 256 cores and we actually see negative returns. Linear scaling stops at essentially 16 cores. 
On 1/07/2016, at 3:05 PM, Marcella, Marc <MMarcella@AIR-WORLDWIDE.COM> wrote:

Hi all,

I'm trying to run WRF 3.6.1 in parallel with OpenMPI on a Linux HPC, using an openmpi-1.6.5 build made with the PGI 13.6-0 64-bit compiler. I've tried the configure option 3 build, and what we're finding is that WRF doesn't seem to scale properly. For a domain of 450x200, anything past 256 cores actually gives negative returns, and linear scaling stops at essentially 16 cores. When we introduce the DM+SM option we see better returns, but still nothing like real scalability (4 cores = 30 hr simulated, 16 cores = 74 hr simulated, 256 cores = 432 hr simulated).

From what I have researched online, WRF should scale close to linearly out to a significantly larger number of cores, particularly in DM mode. Are there any known issues with WRF 3.6, certain versions of MPI we should use, or anything else we should be aware of in order to scale parallel WRF properly?

Any experience shared would be greatly appreciated…

Thanks,
Marc

_______________________________________________
Wrf-users mailing list
Wrf-users@ucar.edu
http://mailman.ucar.edu/mailman/listinfo/wrf-users
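A note on the numbers quoted above: with a 450x200 horizontal grid there is not much work left per MPI task once the core count grows, which by itself can flatten the scaling curve independently of the interconnect. Below is a rough back-of-the-envelope sketch in C; it assumes WRF splits the domain into near-square patches (the actual decomposition is chosen at run time, or can be forced with nproc_x/nproc_y in the namelist) and ignores nesting, I/O and load imbalance.

/* Rough per-task patch sizes for a 450x200 domain (illustrative only). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const int we = 450, sn = 200;            /* west-east, south-north points */
    const int tasks[] = {16, 64, 256, 1024};

    for (int i = 0; i < 4; i++) {
        int n = tasks[i];
        int ny = (int)floor(sqrt((double)n)); /* near-square factorisation */
        while (n % ny != 0) ny--;
        int nx = n / ny;
        double px = (double)we / nx, py = (double)sn / ny;
        /* Halo traffic scales with the patch perimeter, computation with its
         * area, so perimeter/area approximates the communication share. */
        printf("%5d tasks: patch ~ %3.0f x %3.0f points, perimeter/area = %.2f\n",
               n, px, py, 2.0 * (px + py) / (px * py));
    }
    return 0;
}

At 256 tasks each patch is only around 28 x 13 points, so halo exchange and other fixed per-step costs dominate the interior computation, which fits the common advice not to let WRF patches shrink much below a few tens of points per side.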