Hi,

I think the dx and dy values in your met_em files differ from those in your namelist.input. Run ncdump on one of the met_em files and check DX and DY there, then use the same dx and dy in namelist.input.
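
For example (the met_em file name below is just an illustration; use any of your own met_em.d01.* files):

  ncdump -h met_em.d01.1994-01-01_00:00:00.nc | grep -E ':DX|:DY'
  grep -iE '^ *(dx|dy) *=' namelist.input

The DX/DY global attributes in the met_em header are in meters and should match the dx/dy entries under &domains in namelist.input.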

You should be good to go thereafter.

Thanks.


On Fri, Apr 16, 2010 at 9:08 AM, <wrf-users-request@ucar.edu> wrote:
Send Wrf-users mailing list submissions to
        wrf-users@ucar.edu

To subscribe or unsubscribe via the World Wide Web, visit
        http://mailman.ucar.edu/mailman/listinfo/wrf-users
or, via email, send a message with subject or body 'help' to
        wrf-users-request@ucar.edu

You can reach the person managing the list at
        wrf-users-owner@ucar.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Wrf-users digest..."


Today's Topics:

   1. Re: how to plot the 3 or 4 nesting domain? (Feng Liu)
   2. WRF 3.2 jobs hanging up sporadically on wrfout output (Zulauf, Michael)
   3. Re: WRF 3.2 jobs hanging up sporadically on wrfout output (Don Morton)


----------------------------------------------------------------------

Message: 1
Date: Thu, 15 Apr 2010 09:35:35 -0700
From: "Feng Liu" <fliu@mag.maricopa.gov>
Subject: Re: [Wrf-users] how to plot the 3 or 4 nesting domain?
To: "Asnor Muizan Ishak" <asnorjps@yahoo.com.my>, "Jie TANG" <totangjie@gmail.com>, <wrf-users@ucar.edu>
Message-ID: <A01A26511CAA69409D6D8095B3E104BD0254C919@MAIL.mag.maricopa.gov>
Content-Type: text/plain; charset="us-ascii"

Hi,

I suppose you have an inconsistent parent domain definition between WPS and
WRF. Please double-check the dx and dy settings in namelist.wps and in the
namelist.input used for running real.exe/wrf.exe. Otherwise, you may send me
both of your namelist files. Thanks.
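
A quick way to compare the two side by side (the relative paths below are just
the typical layout, with WPS next to the WRF run directory; adjust to your setup):

  grep -iE '^ *(dx|dy|parent_grid_ratio|e_we|e_sn) *=' ../WPS/namelist.wps
  grep -iE '^ *(dx|dy|parent_grid_ratio|e_we|e_sn) *=' namelist.input

Keep in mind that namelist.wps gives dx/dy once, for the parent domain, and
derives the nest spacing from parent_grid_ratio, while namelist.input lists
dx/dy for every domain, so the parent values must match exactly and each nest
value must equal the parent spacing divided by its ratio.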

Feng


From: Asnor Muizan Ishak [mailto:asnorjps@yahoo.com.my]
Sent: Thursday, April 15, 2010 8:51 AM
To: Feng Liu; Jie TANG; wrf-users@ucar.edu
Subject: Re: [Wrf-users] how to plot the 3 or 4 nesting domain?

Dear all,

May I have your guidance on how to solve a real.exe error? I ran ./real.exe
and it failed with the error shown below:

Namelist dfi_control not found in namelist.input. Using registry defaults for variables in dfi_control
Namelist tc not found in namelist.input. Using registry defaults for variables in tc
Namelist scm not found in namelist.input. Using registry defaults for variables in scm
Namelist fire not found in namelist.input. Using registry defaults for variables in fire
REAL_EM V3.1.1 PREPROCESSOR
*************************************
Parent domain
ids,ide,jds,jde 1 34 1 35
ims,ime,jms,jme -4 39 -4 40
ips,ipe,jps,jpe 1 34 1 35
*************************************
DYNAMICS OPTION: Eulerian Mass Coordinate
alloc_space_field: domain 1, 33593916 bytes allocated
Time period # 1 to process = 1994-01-01_00:00:00.
Time period # 2 to process = 1994-01-01_06:00:00.
Time period # 3 to process = 1994-01-01_12:00:00.
Time period # 4 to process = 1994-01-01_18:00:00.
Time period # 5 to process = 1994-01-02_00:00:00.
Time period # 6 to process = 1994-01-02_06:00:00.
Time period # 7 to process = 1994-01-02_12:00:00.
Time period # 8 to process = 1994-01-02_18:00:00.
Time period # 9 to process = 1994-01-03_00:00:00.
Time period # 10 to process = 1994-01-03_06:00:00.
Time period # 11 to process = 1994-01-03_12:00:00.
Time period # 12 to process = 1994-01-03_18:00:00.
Total analysis times to input = 12.

-----------------------------------------------------------------------------

Domain 1: Current date being processed: 1994-01-01_00:00:00.0000, which is loop # 1 out of 12
configflags%julyr, %julday, %gmt: 1994 1 0.0000000E+00
dx_compare,dy_compare = 30000.00 30000.00
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: <stdin> LINE: 331
DX and DY do not match from the namelist and the input file
-------------------------------------------

Any thoughts on how to fix this problem? Many thanks in advance.

_____

From: Feng Liu <fliu@mag.maricopa.gov>
To: Jie TANG <totangjie@gmail.com>; wrf-users@ucar.edu
Sent: Tuesday, 13 April 2010 16:04:29
Subject: Re: [Wrf-users] how to plot the 3 or 4 nesting domain?

Plotgrids.exe can do that. It is an NCAR-Graphics-based utility that should be
located in WPS/util/src, with a symbolic link in WPS/util, if it compiled
successfully; if not, make sure you set the NCAR Graphics path correctly.
Plotgrids creates an NCAR Graphics metafile, gmeta, and you can view your
nested domains by using the idt command.
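
A minimal sketch of the usual sequence (assuming plotgrids.exe compiled and
idt, which ships with NCAR Graphics, is on your PATH):

  cd WPS                    # your WPS directory, with namelist.wps already set up
  ./util/plotgrids.exe      # reads ./namelist.wps and writes the metafile gmeta
  idt gmeta                 # displays all nested domains in one figure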

Feng


From: wrf-users-bounces@ucar.edu [mailto:wrf-users-bounces@ucar.edu] On Behalf Of Jie TANG
Sent: Monday, April 12, 2010 6:21 PM
To: wrf-users@ucar.edu
Subject: [Wrf-users] how to plot the 3 or 4 nesting domain?

Hello, everyone.

When running WRF with 3 or 4 nested domains, how can I conveniently draw all
the nested domains in one figure, just like TER.PLT in MM5?

I tried to find the command plotgrids.exe, but only the plotgrids.o file is
there. Can anyone tell me how to draw the figure? Thanks.

--

------------------------------

Message: 2
Date: Thu, 15 Apr 2010 14:49:40 -0700
From: "Zulauf, Michael" <Michael.Zulauf@iberdrolausa.com>
Subject: [Wrf-users] WRF 3.2 jobs hanging up sporadically on wrfout output
To: <wrf-users@ucar.edu>
Message-ID: <B2A259FAA3CF26469FF9A7C7402C49970913EB06@POREXUW03.ppmenergy.us>
Content-Type: text/plain; charset="us-ascii"

Hi all,

I'm trying to get WRF V3.2 running by utilizing a setup that I've
successfully run with V3.1.1 (and earlier). The configure/compile
seemed to go fine using the same basic configuration details that have
worked in the past. When I look over the Updates in V3.2, I don't see
anything problematic for me.

We're running with four grids, nesting from 27km to 1km, initialized and
forced with GFS output. The nest initializations are delayed from the
outer grid initialization by 3, 6, and 9 hours, respectively. The 1km
grid has wrfout (netcdf) output every 20 minutes, the other grids every
hour.

What I'm seeing is that the job appears to be running fine for some
time, but eventually the job hangs up during wrfout output - usually on
the finest grid, but not exclusively. Changing small details (such as
changing restart_interval) can make it run longer or shorter. Sometimes
even with no changes it will run a different length of time.

I've got debug_level set to 300, so I get tons of output. When it
hangs, the wrf processes don't die, but all output stops. There are no
error messages or anything else that indicates a problem (at least none
that I can find). What I do get is a truncated (always 32 byte) wrfout
file. For example:

-rw-r--r-- 1 p20457 staff 32 Apr 15 13:02 wrfout_d04_2009-12-14_09:00:00

The wrfout files that get written before it hangs appear to be fine, with
valid data. frames_per_outfile is set to 1, so the files never get
excessively large - maybe on the order of 175MB. All of the previous
versions of WRF that I've used continue to work fine on this hardware/OS
combination (a cluster of dual dual-core Opterons, running CentOS) -
just V3.2 has issues.

Like I said, the wrf processes don't die, but all output ceases, even
with the massive amount of debug info. The last lines in the rsl.error
and rsl.out files are always something of this type:

date 2009-12-14_09:00:00
ds 1 1 1
de 1 1 1
ps 1 1 1
pe 1 1 1
ms 1 1 1
me 1 1 1
output_wrf.b writing 0d real

The specific times and variables being written vary, depending on
when the job hangs.
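
For anyone trying to reproduce this: a quick way to check whether every MPI
rank is stuck at the same write, assuming the standard one rsl.out.NNNN /
rsl.error.NNNN pair per rank, is simply:

  tail -n 2 rsl.out.* rsl.error.*    # last lines from every rank's log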

I haven't dug deeply into what's going on, but it seems like possibly
some sort of race condition or communications deadlock or something.
Does anybody have ideas of where I should go from here? It seems to me
like maybe something basic has changed with V3.2, and perhaps I need to
adjust something in my configuration or setup.

Thanks,
Mike

--
Mike Zulauf
Meteorologist
Wind Asset Management
Iberdrola Renewables
1125 NW Couch, Suite 700
Portland, OR 97209
Office: 503-478-6304  Cell: 503-913-0403

------------------------------

Message: 3
Date: Fri, 16 Apr 2010 08:08:46 -0800
From: Don Morton <Don.Morton@alaska.edu>
Subject: Re: [Wrf-users] WRF 3.2 jobs hanging up sporadically on wrfout output
To: "Zulauf, Michael" <Michael.Zulauf@iberdrolausa.com>
Cc: wrf-users@ucar.edu
Message-ID: <s2j78a8f32a1004160908u6aa331a2p386bec8f7f513457@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

I was having these sorts of problems with WRF 3.1.1 a few weeks ago on our
Sun Opteron cluster. It was always hanging on the writing of wrfout,
typically on an inner nest, and it wasn't consistent from run to run. I had
the luxury of being able to try these cases on other machines, and didn't
experience problems on those.

Our folks here suggested I turn off the MPI RDMA (Remote Direct Memory
Access) optimizations, which slowed performance substantially, but resolved
the issue.
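
The exact knob depends on your MPI stack; purely as an illustration (Open MPI
over InfiniBand, parameter name from its openib BTL, not necessarily what we
used here), disabling the eager-RDMA fast path looks something like:

  mpirun --mca btl_openib_use_eager_rdma 0 -np 64 ./wrf.exe    # -np 64 is just an example

Check ompi_info (or your vendor's equivalent) for the parameters your
installation actually supports.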

It's been my experience over the years with WRF that these problems are
frequently resolved if you turn off optimizations.

If you're using a Sun cluster, I can give you a little more info privately.

--
Arctic Region Supercomputing Center
http://www.arsc.edu/~morton/

------------------------------

_______________________________________________
Wrf-users mailing list
Wrf-users@ucar.edu
http://mailman.ucar.edu/mailman/listinfo/wrf-users


End of Wrf-users Digest, Vol 68, Issue 20
*****************************************

--
Regards,

Ashish Sharma
Graduate Research Associate
Center for Environmental Fluid Dynamics
Aerospace Engineering, PhD Candidate
Arizona State University