[Wrf-users] Re: Input data for forecasting
Arnold Moene
arnold.moene at wur.nl
Tue Jan 31 13:52:58 MST 2006
>
> Hi wrf-users!
>
> We have finished setting up WRF on our machine for a 24-hour forecast
> over Central America.
> From the beginning we have been getting the twice-daily input data indirectly,
> meaning that another institution downloads the input data, and then ....
>
> The idea is to have a fully automated process, including a direct way to get
> the input data. So, does anybody know a direct way to get the input data for
> a 24-hour forecast using ftp? Is there any protocol or restriction?
What I do for an MM5 forecast (but it should be equally valid for WRF) is to use
the data from the NOMAD servers of NCEP (http://nomad5.ncep.noaa.gov/ and
http://nomad3.ncep.noaa.gov/). I first defined the data I wanted through the
web interface and then copied the URL that was produced to actually get those
data. Using the command 'wget' you can get the data without a browser.
This is what the result looks like:

Step 1: first define your request (I assume you want current data, not
historic data):
# Note: in the machine= field below, fill in the IP address of your own machine.
# The file= entries can be repeated for as many forecast hours as you want.
wget -O $LISTFILE "http://nomad5.ncep.noaa.gov/cgi-bin/ftp2u_gfs.sh?\
file=gfs.t${STARTHOUR}z.pgrbf00&\
file=gfs.t${STARTHOUR}z.pgrbf03&\
file=gfs.t${STARTHOUR}z.pgrbf06&\
wildcard=&\
all_lev=on&\
all_var=on&\
subregion=&\
leftlon=${LEFTLON}&\
rightlon=${RIGHTLON}&\
toplat=${TOPLAT}&\
bottomlat=${BOTTOMLAT}&\
results=SAVE&\
rtime=1hr&\
machine=xxx.xxx.xxx.xxx&\
user=anonymous&\
passwd=&\
ftpdir=%2Fincoming_1hr&\
prefix=&\
dir="
where STARTHOUR is the base time of the forecast (e.g. 00), and LEFTLON, RIGHTLON,
TOPLAT and BOTTOMLAT contain the boundaries of the subset you want. The part
file=gfs.t${STARTHOUR}z.pgrbf06 can be repeated for as many forecast hours as
you want. Oh yes, LISTFILE is the name of the file where wget should send its
output (which is in fact a new HTML page, containing information about where
the requested data are stored).
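For completeness, a minimal sketch of how these variables could be set (the
values below are illustrative assumptions for a Central American domain, not
taken from my actual script):

# Illustrative settings only -- adjust cycle and domain to your own needs.
STARTHOUR=00                  # GFS cycle to use (00, 06, 12 or 18 UTC)
LEFTLON=-95                   # western boundary of the subset
RIGHTLON=-75                  # eastern boundary
TOPLAT=25                     # northern boundary
BOTTOMLAT=5                   # southern boundary
LISTFILE=ftp2u_result.html    # file where wget writes the returned HTML page
# Check whether the server expects longitudes as -180..180 or 0..360.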
Step 2: download the files one by one from the temporary directory on the
NOMAD server:
# Determine the list of files
FILES=`grep ftp $LISTFILE | awk -F '"' '{print $2}'|grep gfs`
# Count number of files
NFILES=`echo $FILES|wc -w`
# Get the name of the directory where the data are stored
HTTPDIR=`grep http $LISTFILE | awk -F '"' '{print $2}'|grep ftp_data`
# Get the files one-by-one
for file in $FILES; do
localfile=`echo $file | awk -F / '{print $NF}'`
remotefile=${HTTPDIR}/$localfile
wget -O $localfile $remotefile
# Don't overload the server
sleep 10
done
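Since NFILES is counted above anyway, a simple sanity check could follow the
loop. This is only a sketch (it assumes the downloaded files keep names like
gfs.t*z.pgrbf* and end up in the current directory):

# Rough sanity check: compare the number of requested files with the
# number of gfs files that actually arrived.
NGOT=`ls gfs.t*z.pgrbf* 2>/dev/null | wc -l`
if [ "$NGOT" -lt "$NFILES" ]; then
    echo "Warning: only $NGOT of $NFILES requested files were downloaded" >&2
fi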
I hope this helps. To make it really automatic you have to build some extra
intelligence around the above method to allow for the possibility that:
* the server is overloaded
* the data are not there yet
* the ftp2u server is overloaded (in which case you can fall back on
downloading the entire global data set)
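As a very rough illustration of that extra intelligence, the request of step 1
could be wrapped in a retry loop. This is only a sketch: do_request is a
placeholder for the wget command of step 1, and the retry count and waiting
time are arbitrary:

# Sketch of a retry wrapper around the step-1 request (placeholders only).
MAXTRIES=5
try=1
while [ $try -le $MAXTRIES ]; do
    # do_request stands for the wget command of step 1
    do_request
    # Consider the request successful when the returned page mentions ftp_data
    if grep -q ftp_data $LISTFILE; then
        break
    fi
    echo "Request failed (attempt $try), waiting before retrying" >&2
    sleep 600
    try=`expr $try + 1`
done
if [ $try -gt $MAXTRIES ]; then
    echo "Giving up; falling back on the full global data set" >&2
    # ... here one could download the complete GFS files instead
fi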
Best regards,
Arnold Moene
PS: the results of my automatic system can be found at
http://www.met.wau.nl/haarwegdata/model
--
------------------------------------------------------------------------
Arnold F. Moene NEW tel: +31 (0)317 482604
Meteorology and Air Quality Group fax: +31 (0)317 482811
Wageningen University e-mail: Arnold.Moene at wur.nl
Duivendaal 2 url: http://www.met.wau.nl
6701 AP Wageningen
The Netherlands
------------------------------------------------------------------------
Openoffice.org - Freedom at work
Firefox - The browser you can trust (www.mozilla.org)
------------------------------------------------------------------------