[Wrf-users] Wrf-users Digest, Vol 145, Issue 3

Wang, Yaoping wang.3866 at buckeyemail.osu.edu
Fri Sep 2 19:07:13 MDT 2016


Hi Jagan,


Thank you. Using "./real.exe" directly was not the problem, though, because I also ran it with mpiexec and got the same segfault. (I also realized I had already tried unlimiting the stack size, which did not solve the problem either.)
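For the record, the stack-related workaround I tried looks roughly like this (a sketch for bash; the 512M value is just an illustrative number, not a recommendation, and the launch line is commented out):

```shell
# Raise the process stack limit before launching real.exe/wrf.exe.
# Startup segfaults with OpenMP-enabled WRF builds are often stack-related.
ulimit -s unlimited 2>/dev/null || true

# OpenMP threads have their own per-thread stacks as well; OMP_STACKSIZE
# controls those (512M here is only an example value).
export OMP_STACKSIZE=512M

# mpiexec -np 12 ./real.exe   # then launch as usual on the cluster
```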


When you said "OpenMPI" is not required, did you mean "OpenMP" or OpenMPI? I actually use MVAPICH2, which seems to load by default with the Intel compilers and to be the only implementation compatible with the NetCDF module on our system. Or did you mean that OpenMP is not a big boost to performance?


Thanks,

Yaoping


________________________________
From: wrf-users-bounces at ucar.edu on behalf of jagan TNAU <jagan at tnau.ac.in>
Sent: Friday, September 2, 2016 8:30:50 PM
To: wrf-users at ucar.edu
Subject: Re: [Wrf-users] Wrf-users Digest, Vol 145, Issue 3

Yaoping,

Since you compiled with MPI, you should not simply run ./real.exe; you should also indicate how many processors to use, and the command should be "mpirun -np 12 ./real.exe". I do not know why you are running it without indicating the number of processors you would like to use.

OpenMPI is not a requirement; you can use any open-source MPI implementation, like MPICH2.
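The advice above can be sketched as a small launch script (the rank count and paths are placeholders; the count should match whatever your job scheduler allocated, and "mpirun" may be "mpiexec" depending on the MPI stack):

```shell
# Hypothetical launch script for an MPI-compiled real.exe.
# NP must match the number of processors your job was granted.
NP=12
CMD="mpirun -np $NP ./real.exe"
echo "launching: $CMD"
# $CMD   # uncomment on a system where real.exe and an MPI runtime exist
```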

On Sat, Sep 3, 2016 at 4:08 AM, <wrf-users-request at ucar.edu> wrote:
Send Wrf-users mailing list submissions to
        wrf-users at ucar.edu

To subscribe or unsubscribe via the World Wide Web, visit
        http://mailman.ucar.edu/mailman/listinfo/wrf-users
or, via email, send a message with subject or body 'help' to
        wrf-users-request at ucar.edu

You can reach the person managing the list at
        wrf-users-owner at ucar.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Wrf-users digest..."


Today's Topics:

   1. Re: What is a reasonable speed for WRF / how to increase it?
      (Wang, Yaoping)


----------------------------------------------------------------------

Message: 1
Date: Fri, 2 Sep 2016 22:33:30 +0000
From: "Wang, Yaoping" <wang.3866 at buckeyemail.osu.edu>
Subject: Re: [Wrf-users] What is a reasonable speed for WRF / how to
        increase it?
To: Mike Dvorak <mike at sailtactics.com>, "wrf-users at ucar.edu"
        <wrf-users at ucar.edu>, "cf.ross at gmail.com" <cf.ross at gmail.com>
Message-ID:
        <CY4PR01MB2822D7E68D45A748E49217D9B2E50 at CY4PR01MB2822.prod.exchangelabs.com>

Content-Type: text/plain; charset="windows-1252"

Hi,


Could you explain more about "using 2 processes in one core" and how to find out whether that is happening and address it? I am not very familiar with the technical side of supercomputing. I thought one core meant one CPU; did you mean that each CPU is itself made up of multiple smaller cores, or that two cores can be on the same CPU?


I am compiling with MPI using the Intel compiler. But here is another problem: OpenMP does not work at all on my system. The compilation finishes successfully, but then, when I run "./real.exe", it segfaults without even creating an rsl file. I tried using "./configure -d" and updating my WRF version from 3.8 to 3.8.1, but the segfault was the same. Do you know what else I might try? I attached my "configure.wrf" and "configure.wps" files.
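One way to confirm which parallelism a build actually enabled is to look at the generated configure.wrf itself: the OMP variable carries the OpenMP compiler flag (empty for MPI-only builds) and DM_FC/DM_CC name the MPI wrappers. A sketch, assuming a typical configure.wrf layout, with a fake minimal file standing in for the real one:

```shell
# Sketch: inspect a configure.wrf for its parallel settings.
# A fabricated minimal example file stands in for a real configure.wrf.
cat > configure.wrf.example <<'EOF'
DM_FC           =       mpif90 -f90=$(SFC)
DM_CC           =       mpicc -cc=$(SCC)
OMP             =       # an OpenMP flag such as -qopenmp appears here for smpar builds
EOF

# Show the distributed-memory and OpenMP settings the build will use.
grep -E '^(OMP|DM_FC|DM_CC)' configure.wrf.example
```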


I also tested 44 cores with no luck. Increasing the number of nodes from 4 to 6 only increased the throughput marginally.


Thank you,

Yaoping

________________________________
From: wrf-users-bounces at ucar.edu on behalf of Mike Dvorak <mike at sailtactics.com>
Sent: Thursday, September 1, 2016 5:05:28 PM
To: wrf-users at ucar.edu
Subject: Re: [Wrf-users] What is a reasonable speed for WRF / how to increase it?

Hi Yaoping,

What parallelization option did you compile WRF with (e.g. MPI only)? Also, I've found the Intel compilers to be 3 times faster than the GNU compilers on some WRF configurations (unfortunately). What compiler did you use?

You may also want to experiment using less than the number of total cores on the machine. For example, you could try using 44 cores instead of 48. I think WRF EMS is set to do this by default. I've verified on some of my multi-core machines that this does indeed reduce the runtime.
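A hedged sketch of the undersubscription idea (the reserved-core count is arbitrary; MV2_ENABLE_AFFINITY is MVAPICH2's CPU-binding switch and applies only to that MPI stack):

```shell
# Leave a few cores free for OS and filesystem daemons: on a 48-core
# allocation, launch 44 ranks instead of 48. Disabling MVAPICH2's
# default core binding (MV2_ENABLE_AFFINITY=0) may also be worth
# testing when ranks could otherwise be pinned two to a core.
TOTAL_CORES=48
RESERVED=4
NP=$((TOTAL_CORES - RESERVED))
export MV2_ENABLE_AFFINITY=0
echo "mpirun -np $NP ./wrf.exe"   # echoed, not executed, in this sketch
```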

Cheers,
Mike


On 09/01/2016 03:15 PM, Carlos Ross wrote:
I think it should be faster. Xeon X5650 CPUs have 6 cores and 12 threads, so you may be running 2 processes on one core, and that is slowing it down.

2016-08-31 18:36 GMT-03:00 Wang, Yaoping <wang.3866 at buckeyemail.osu.edu>:

Hi All,


I am running WRF on a ~6 km resolution, 91 x 121 domain in the eastern United States, with 34 vertical levels. I am using an adaptive time step, which settles at mostly 72-second increments. I use 4 x 12 cores on an Intel Xeon X5650 machine. The throughput is about 1.2 hours of wall time per 24 hours of model time.


Is this a reasonable speed? I found some information here (http://www.ecmwf.int/sites/default/files/elibrary/2014/13662-performance-analysis-operational-implementaion-wrf.pdf), and after accounting for the domain difference, my run still seems a touch slow. Is there any way I could figure out how to make the model run faster?
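For comparison purposes, the numbers above work out to a model-to-wall-clock speedup of about 20x; a quick sketch of the arithmetic:

```python
# Back-of-the-envelope throughput check for the run described above:
# 24 hours of model time completes in 1.2 hours of wall time.
model_hours = 24.0
wall_hours = 1.2

speedup = model_hours / wall_hours  # how much faster than real time
print(f"simulation runs {speedup:.0f}x faster than real time")

# Grid size for scale: 91 x 121 horizontal points, 34 vertical levels.
grid_points = 91 * 121 * 34
print(f"{grid_points} grid points per time step")
```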


Thank you,

Yaoping Wang

_______________________________________________
Wrf-users mailing list
Wrf-users at ucar.edu
http://mailman.ucar.edu/mailman/listinfo/wrf-users







--
Mike Dvorak, PhD
Founder
Sail Tactics, LLC
Corpus Christi, TX
+1 650-454-5243
http://sailtactics.com

-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure.wps
Type: application/vnd.ms-works
Size: 3328 bytes
Desc: configure.wps
Url : http://mailman.ucar.edu/pipermail/wrf-users/attachments/20160902/b084de44/attachment.bin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure.wrf
Type: application/octet-stream
Size: 26315 bytes
Desc: configure.wrf
Url : http://mailman.ucar.edu/pipermail/wrf-users/attachments/20160902/b084de44/attachment.obj

------------------------------



End of Wrf-users Digest, Vol 145, Issue 3
*****************************************



--
With regards

Dr.R.Jagannathan
Professor & Former Dean
Tamil Nadu Agricultural University
Coimbatore - 641 003 India

PHONE:  Mob: +91 94438 89891

DO NOT PRINT THIS E-MAIL UNLESS NECESSARY. THE ENVIRONMENT CONCERNS US ALL.

