[Wrf-users] Wrf-users Digest, Vol 68, Issue 20

Ashish Sharma Ashish.Sharma.1 at asu.edu
Fri Apr 16 11:41:44 MDT 2010


Hi,

I think you have different values of dx and dy in the met_em files and in
namelist.input. Run ncdump on one of the met_em files and check dx and dy
there, then use the same dx and dy in namelist.input.
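For example (the file name below is just an illustration; substitute one of
your actual met_em files):

  ncdump -h met_em.d01.1994-01-01_00:00:00.nc | grep -iE ':DX|:DY'
  grep -iE 'dx|dy' namelist.input

The first command prints the DX/DY global attributes from the met_em header;
the second shows what namelist.input currently has.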

You should be good to go thereafter.

Thanks.




On Fri, Apr 16, 2010 at 9:08 AM, <wrf-users-request at ucar.edu> wrote:

> Send Wrf-users mailing list submissions to
>        wrf-users at ucar.edu
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://mailman.ucar.edu/mailman/listinfo/wrf-users
> or, via email, send a message with subject or body 'help' to
>        wrf-users-request at ucar.edu
>
> You can reach the person managing the list at
>        wrf-users-owner at ucar.edu
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Wrf-users digest..."
>
>
> Today's Topics:
>
>   1. Re: how to plot the 3 or 4 nesting domain? (Feng Liu)
>   2. WRF 3.2 jobs hanging up sporadically on wrfout output
>      (Zulauf, Michael)
>   3. Re: WRF 3.2 jobs hanging up sporadically on wrfout output
>      (Don Morton)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 15 Apr 2010 09:35:35 -0700
> From: "Feng Liu" <fliu at mag.maricopa.gov>
> Subject: Re: [Wrf-users] how to plot the 3 or 4 nesting domain?
> To: "Asnor Muizan Ishak" <asnorjps at yahoo.com.my>,       "Jie TANG"
>        <totangjie at gmail.com>, <wrf-users at ucar.edu>
> Message-ID:
>        <A01A26511CAA69409D6D8095B3E104BD0254C919 at MAIL.mag.maricopa.gov>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi,
>
> I suppose you have an inconsistent parent domain definition in WPS and WRF.
> Please double-check the dx and dy settings in namelist.wps and in the
> namelist.input used for running real.exe/wrf.exe. Otherwise, you may send
> me both of your namelist files. Thanks.
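>
> As an illustration, the parent-domain spacing must agree in both files
> (the 30000 m values below are hypothetical, chosen to match the
> dx_compare value in the log further down):
>
>   &geogrid                 ! namelist.wps
>    dx = 30000,
>    dy = 30000,
>
>   &domains                 ! namelist.input
>    dx = 30000,
>    dy = 30000,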
>
> Feng
>
>
>
>
>
> From: Asnor Muizan Ishak [mailto:asnorjps at yahoo.com.my]
> Sent: Thursday, April 15, 2010 8:51 AM
> To: Feng Liu; Jie TANG; wrf-users at ucar.edu
> Subject: Re: [Wrf-users] how to plot the 3 or 4 nesting domain?
>
>
>
> Dear ALL,
>
>
>
> May I have your guidance on how to solve a real.exe error? I ran
> ./real.exe and the error shown below appeared:
>
>
>
>  Namelist dfi_control not found in namelist.input. Using registry
>  defaults for variables in dfi_control
>  Namelist tc not found in namelist.input. Using registry defaults for
>  variables in tc
>  Namelist scm not found in namelist.input. Using registry defaults for
>  variables in scm
>  Namelist fire not found in namelist.input. Using registry defaults for
>  variables in fire
>  REAL_EM V3.1.1 PREPROCESSOR
>  *************************************
>  Parent domain
>  ids,ide,jds,jde            1          34           1          35
>  ims,ime,jms,jme           -4          39          -4          40
>  ips,ipe,jps,jpe            1          34           1          35
>  *************************************
>  DYNAMICS OPTION: Eulerian Mass Coordinate
>    alloc_space_field: domain            1,     33593916 bytes allocated
> Time period #   1 to process = 1994-01-01_00:00:00.
> Time period #   2 to process = 1994-01-01_06:00:00.
> Time period #   3 to process = 1994-01-01_12:00:00.
> Time period #   4 to process = 1994-01-01_18:00:00.
> Time period #   5 to process = 1994-01-02_00:00:00.
> Time period #   6 to process = 1994-01-02_06:00:00.
> Time period #   7 to process = 1994-01-02_12:00:00.
> Time period #   8 to process = 1994-01-02_18:00:00.
> Time period #   9 to process = 1994-01-03_00:00:00.
> Time period #  10 to process = 1994-01-03_06:00:00.
> Time period #  11 to process = 1994-01-03_12:00:00.
> Time period #  12 to process = 1994-01-03_18:00:00.
> Total analysis times to input =   12.
>
>
> -----------------------------------------------------------------------------
>
>  Domain  1: Current date being processed: 1994-01-01_00:00:00.0000,
> which is loop #   1 out of   12
>  configflags%julyr, %julday, %gmt:        1994           1  0.0000000E+00
>  dx_compare,dy_compare =    30000.00       30000.00
>  -------------- FATAL CALLED ---------------
>  FATAL CALLED FROM FILE:  <stdin>  LINE:     331
>  DX and DY do not match from the namelist and the input file
>  -------------------------------------------
>
> Any thoughts on how to fix this problem? Many thanks in advance.
>
>
>
>
>
>
>
>  _____
>
> From: Feng Liu <fliu at mag.maricopa.gov>
> To: Jie TANG <totangjie at gmail.com>; wrf-users at ucar.edu
> Sent: Tuesday, 13 April 2010 16:04:29
> Subject: Re: [Wrf-users] how to plot the 3 or 4 nesting domain?
>
>
>
>
> plotgrids.exe can do that. This NCAR-Graphics-based utility is built in
> WPS/util/src, with a symbolic link in WPS/util if it compiles successfully.
> If it did not compile, make sure the NCAR Graphics path is set correctly.
> Plotgrids creates an NCAR Graphics metafile, gmeta, and you can view your
> nested domains with the idt command.
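>
> A minimal sketch of the usual sequence (run from the top-level WPS
> directory, assuming NCAR Graphics is installed and namelist.wps is
> already set up):
>
>   cd WPS
>   ./util/plotgrids.exe     # reads namelist.wps, writes gmeta
>   idt gmeta                # view the metafile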
>
> Feng
>
>
>
>
>
> From: wrf-users-bounces at ucar.edu [mailto:wrf-users-bounces at ucar.edu] On
> Behalf Of Jie TANG
> Sent: Monday, April 12, 2010 6:21 PM
> To: wrf-users at ucar.edu
> Subject: [Wrf-users] how to plot the 3 or 4 nesting domain?
>
>
>
> Hello, everyone.
>
> When running a WRF job with 3 or 4 nested domains, how can I conveniently
> draw all of the nested domains in one figure, like TER.PLT in MM5?
>
> I tried to find the command plotgrids.exe, but only the plotgrids.o file
> is there. Can anyone tell me how to draw the figure? Thanks.
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 15 Apr 2010 14:49:40 -0700
> From: "Zulauf, Michael" <Michael.Zulauf at iberdrolausa.com>
> Subject: [Wrf-users] WRF 3.2 jobs hanging up sporadically on wrfout
>        output
> To: <wrf-users at ucar.edu>
> Message-ID:
>        <B2A259FAA3CF26469FF9A7C7402C49970913EB06 at POREXUW03.ppmenergy.us>
> Content-Type: text/plain;       charset="us-ascii"
>
> Hi all,
>
> I'm trying to get WRF V3.2 running with a setup that I've
> successfully run with V3.1.1 (and earlier).  The configure/compile
> seemed to go fine using the same basic configuration details that have
> worked in the past.  When I look over the Updates in V3.2, I don't see
> anything problematic for me.
>
> We're running with four grids, nesting from 27km to 1km, initialized and
> forced with GFS output.  The nest initializations are delayed from the
> outer grid initialization by 3, 6, and 9 hours, respectively.  The 1km
> grid has wrfout (netcdf) output every 20 minutes, the other grids every
> hour.
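>
> In namelist.input terms, that output setup looks roughly like this (one
> column per domain, d01..d04; values here are illustrative):
>
>   &time_control
>    history_interval   = 60, 60, 60, 20,
>    frames_per_outfile = 1,  1,  1,  1,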
>
> What I'm seeing is that the job appears to run fine for some
> time, but eventually it hangs up during wrfout output - usually on
> the finest grid - but not exclusively.  Changing small details (such as
> changing restart_interval) can make it run longer or shorter.  Sometimes
> even with no changes it will run a different length of time.
>
> I've got debug_level set to 300, so I get tons of output.  When it
> hangs, the wrf processes don't die, but all output stops.  There are no
> error messages or anything else that indicate a problem (at least none
> that I can find).  What I do get is a truncated (always 32 byte) wrfout
> file.  For example:
>
> -rw-r--r--  1 p20457 staff 32 Apr 15 13:02
> wrfout_d04_2009-12-14_09:00:00
>
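> A quick way to spot any such truncated files (a sketch; the pattern
> assumes the usual wrfout naming):
>
>   find . -name 'wrfout_d0?_*' -size -1024c -ls
>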
> The wrfout files that get written before it hangs appear to be fine, with
> valid data.  frames_per_outfile is set to 1, so the files never get
> excessively large - maybe on the order of 175MB.  All of the previous
> versions of WRF that I've used continue to work fine on this hardware/OS
> combination (a cluster of dual-dual core Opterons, running CentOS) -
> just V3.2 has issues.
>
> Like I said, the wrf processes don't die, but all output ceases, even
> with the massive amount of debug info.  The last lines in the rsl.error
> and rsl.out files are always something of this type:
>
>  date 2009-12-14_09:00:00
>  ds             1            1            1
>  de             1            1            1
>  ps             1            1            1
>  pe             1            1            1
>  ms             1            1            1
>  me             1            1            1
>  output_wrf.b writing 0d real
>
> The specific times and variables being written vary, depending on
> when the job hangs.
>
> I haven't dug deeply into what's going on, but it seems like possibly
> some sort of race condition or communications deadlock.
> Does anybody have ideas of where I should go from here?  It seems to me
> like maybe something basic has changed with V3.2, and perhaps I need to
> adjust something in my configuration or setup.
>
> Thanks,
> Mike
>
> --
> Mike Zulauf
> Meteorologist
> Wind Asset Management
> Iberdrola Renewables
> 1125 NW Couch, Suite 700
> Portland, OR 97209
> Office: 503-478-6304  Cell: 503-913-0403
>
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 16 Apr 2010 08:08:46 -0800
> From: Don Morton <Don.Morton at alaska.edu>
> Subject: Re: [Wrf-users] WRF 3.2 jobs hanging up sporadically on
>        wrfout output
> To: "Zulauf, Michael" <Michael.Zulauf at iberdrolausa.com>
> Cc: wrf-users at ucar.edu
> Message-ID:
>        <s2j78a8f32a1004160908u6aa331a2p386bec8f7f513457 at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> I was having these sorts of problems with WRF 3.1.1 a few weeks ago on our
> Sun Opteron cluster.  It was always hanging on the writing of wrfout,
> typically on an inner nest, and it wasn't consistent from run to run.  I
> had the luxury of being able to try these cases on other machines, and
> didn't experience problems on those.
>
> Our folks here suggested I turn off the MPI RDMA (Remote Direct Memory
> Access) optimizations, which slowed performance substantially, but resolved
> the issue.
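>
> For example, with Open MPI one way to avoid the RDMA path is to restrict
> the byte-transfer layers on the mpirun line (the flag below is an Open
> MPI mechanism; other MPI stacks use different knobs, so check your own
> stack's documentation):
>
>   mpirun --mca btl self,tcp -np 64 ./wrf.exe
>
> For compiler-level optimizations, the analogous step is usually lowering
> FCOPTIM (e.g. from -O3) in configure.wrf and rebuilding.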
>
> It's been my experience over the years with WRF that these problems are
> frequently resolved if you turn off optimizations.
>
> If you're using a Sun cluster, I can give you a little more info privately.
>
> On Thu, Apr 15, 2010 at 1:49 PM, Zulauf, Michael <
> Michael.Zulauf at iberdrolausa.com> wrote:
>
> > Hi all,
> >
> > [rest of quoted message trimmed - identical to Message 2 above]
>
>
>
> --
> Arctic Region Supercomputing Center
> http://www.arsc.edu/~morton/
>
> ------------------------------
>
> _______________________________________________
> Wrf-users mailing list
> Wrf-users at ucar.edu
> http://mailman.ucar.edu/mailman/listinfo/wrf-users
>
>
> End of Wrf-users Digest, Vol 68, Issue 20
> *****************************************
>



-- 
Regards,

Ashish Sharma
Graduate Research Associate
Center for Environmental Fluid Dynamics
Aerospace Engineering, PhD Candidate
Arizona State University