[Wrf-users] Benchmarking problems
Craig.Tierney at noaa.gov
Mon Mar 14 14:30:59 MDT 2011
On 3/14/11 1:08 PM, Bart Brashers wrote:
> I'm trying to benchmark WRF on two comparable systems, Intel X5660 and
> AMD 6174, before I buy. I'm also trying to do a benchmark for those of
> us who have many 5-day WRF runs to complete -- many runs with relatively
> low core counts, rather than a single run with large core counts (the
> focus of most benchmarks).
> I downloaded the WRF 3.0 Benchmark parts from
> http://www.mmm.ucar.edu/wrf/WG2/bench/. I compiled using option 2
> (smpar for PGI/gcc) with no problems. In the namelist.input I specified
> (for one particular run):
> numtiles = 1,
> nproc_x = 3,
> nproc_y = 2,
> num_metgrid_levels = 40,
> I set OMP_NUM_THREADS to 6 in my run script that calls wrf.exe.
> And yet, when I look in the resulting wrf.out file I see:
> WRF NUMBER OF TILES FROM OMP_GET_MAX_THREADS = 6
> WRF NUMBER OF TILES = 6
> Hey! I told you to use 1 tile and split it 3 by 2!
> Is this a problem with WRF v3.0? Looking at some WRF 3.2.1 runs where I
> have numtiles = 1 and specified 4 by 1, I got more verbose output like "
> WRF TILE 1 IS 1 IE 165 JS 1 JE 33".
> Scaling is poor after only 4 cores, so I suspect something is going
> wrong. Any suggestions you have would be greatly appreciated.
I am not sure about the split, but I do know that at low core counts
you will get better performance using MPI, even if you are only
using a single node.
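For a dmpar (MPI) build, a minimal single-node launch would look something like the sketch below; the 3x2 decomposition matches the settings quoted above, but the `./wrf.exe` path and the `mpirun` invocation are assumptions that depend on your install and MPI stack:

```shell
# Decompose the domain into 3 x 2 = 6 patches, one per MPI rank.
# nproc_x * nproc_y must equal the number of MPI tasks launched.
cat >> namelist.input <<'EOF'
&domains
 nproc_x = 3,
 nproc_y = 2,
/
EOF

# Launch 6 ranks on the local node (path to wrf.exe is an assumption):
# mpirun -np 6 ./wrf.exe
```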
If you are using OpenMP, you need to look into socket affinity to
ensure that your OpenMP threads stay local to one socket. If you
don't get the affinity right, it will hurt scalability.