[Wrf-users] hardware questions
Kevin Matthew Nuss
wrf at nusculus.com
Thu Aug 20 07:56:18 MDT 2009
Hi Mike,
I don't keep up well with hardware and don't even know if my quad core Intel
Q9550 is part of the Nehalem line. But no one else seemed to respond to your
question, so I thought a little information may (or may not) be better than
none.
I ran a bunch of tests on my quad-core using different compile options and
different numbers of participating cores. Going from 2 cores to 4 cores with
smpar only, my sample runtime dropped from 14:12 to 10:12. With dmpar only,
going from mpiexec -n 2 to mpiexec -n 4 cut the same sample runtime from
18:54 to 16:52. So smpar seems to scale better than dmpar when all the cores
are on the same processor. I also tested other combinations, including some
mixed smpar/dmpar runs; to see those results, check my webpage:
http://www.nusculus.com/Home/main-research-page/wrf-benchmarks-and-profiling
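In case it helps, here is roughly how the two modes get launched (a sketch
only; the executable path, thread count, and MPI launcher depend on your
build and MPI installation):

  # smpar build: one wrf.exe process, OpenMP threads share memory on the node
  export OMP_NUM_THREADS=4
  ./wrf.exe

  # dmpar build: one MPI task per core requested
  mpiexec -n 4 ./wrf.exe

A dm+sm build combines the two, e.g. mpiexec -n 2 with OMP_NUM_THREADS=2 on
a quad-core.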
Message passing between boxes adds another factor. If you have to use
ethernet, pay extra for high-quality network cards with low latency;
otherwise, doubling the number of boxes involved in your WRF run provides
minimal benefit. Expensive interconnects like Myrinet and InfiniBand scale
better, mostly because of their low latency. Because WRF has to exchange
information across boxes after every time step, latency matters far more
than raw bandwidth (gigabit vs. 100 Mbps). Sorry, I don't have enough
specialized knowledge to say where the price-vs-performance sweet spot is
for either processors or networks. Even a hardware guru might not know the
specifics for a WRF implementation.
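If you want a rough feel for what a given interconnect does, a plain ping
between two compute nodes is a quick sanity check (the hostname below is a
placeholder, and this is not a real benchmark):

  # the average round-trip time is the number to watch; bandwidth barely
  # shows up at these message sizes
  ping -c 100 node02 | tail -n 1

As a rough guide, gigabit ethernet round trips tend to be on the order of a
tenth of a millisecond or more, while Myrinet and InfiniBand are quoted down
in the low microseconds at the MPI level, which is why they handle the
per-timestep exchanges so much better.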
Hope someone else responds with better information, but that's the best I
can do.
Kevin Nuss
On Mon, Aug 17, 2009 at 10:49 AM, Zulauf, Michael <
Michael.Zulauf at iberdrolausa.com> wrote:
> Hi all. . .
>
> --------------
>
> I sent this message a few days ago, but since my email address had
> changed since I subscribed, it wasn't immediately accepted (pending
> moderator approval). I've changed my settings, and I'm resubmitting. I
> apologize if this yields duplicate postings. . .
>
> --------------
>
> I have a quick question for WRF users and anybody else who might feel
> like responding. My group currently runs WRF on a cluster of dual-core /
> dual-CPU nodes (i.e., 4 cores per node). These nodes are a couple of years
> old. We're looking to upgrade our system gradually (adding some newer
> nodes, and then maybe eventually retiring old nodes or just building a
> separate cluster).
>
> The question is: what are people's thoughts on the best hardware
> choices, keeping price/performance in mind? At the WRF Workshop in
> June, I heard a lot about the quad-core Nehalems. It looks like you can
> get dual quad-core systems for a pretty decent price (i.e., 8 cores per
> node). Does this sound like a good setup?
>
> In general, most of our simulations are for pretty small grids, and
> don't need to be run on more than a node or two. On the other hand, we
> do run some that have large grids, and those currently need to run on 8 to
> 10 nodes due to memory or performance constraints. And with
> faster processors, better scaling, and a quick interconnect, I could see
> us starting to run much larger grids.
>
> Thoughts?
>
> Thanks,
> Mike
>
> --
> Mike Zulauf
> Meteorologist
> Wind Asset Management
> Iberdrola Renewables
> 1125 NW Couch, Suite 700
> Portland, OR 97209
> Office: 503-478-6304 Cell: 503-913-0403
>