[ncl-talk] OPeNDAP issues
Jon Meyer
jonathan.meyer at aggiemail.usu.edu
Mon Jun 1 15:44:01 MDT 2015
Hi all,
I am using NCL to access CFSR data through the NOMADS OPeNDAP portal and
have been running into some intriguing fatal errors, some random and some not-so-random.
First off, I am accessing and opening the 6-hourly CFSR files, concatenating
and subsetting some variables within them, and then writing the results into
my own .nc files.
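Roughly, the per-file access pattern looks like this (a simplified sketch, not my actual script; the variable name VVEL and the output file name are placeholders):

url  = "http://nomads.ncdc.noaa.gov/thredds/dodsC/modeldata/cmd_pgbh/" + \
       "1979/197907/19790726/pgbh00.gdas.1979072600.grb2"
f_in = addfile(url, "r")              ; open the remote file via OPeNDAP
w    = f_in->VVEL                     ; read/subset a variable (name is a placeholder)
fout = addfile("cfsr_subset.nc", "c") ; "c" creates a new netCDF file
fout->VVEL = w                        ; write the subset variable out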
I've encountered two issues; the first is minor and I've created a
work-around, but the second one has me stumped at this point.
Issue one is a failure to pass the URL string through the interface, and it is
completely random which file it fails to find. My work-around is a simple do
loop that tries again if a file is not found to exist. In 100% of the cases
I've checked in the NCL output, the second attempt to access the file works,
hence the randomness. Here are the error and my print statements, followed by
a sketch of the retry loop.
syntax error, unexpected WORD_STRING, expecting WORD_WORD
context: Error { code = 404; message =
"/tmp/%2Fmodeldata%2Fcmd_pgbh%2F1979%2F197907%2F19790726%2Fpgbh00.gdas.1979072600.grb2.gbx8
(No such file or directory)"^;};
ncopen: filename "
http://nomads.ncdc.noaa.gov/thredds/dodsC/modeldata/cmd_pgbh/1979/197907/19790726/pgbh00.gdas.1979072600.grb2":
NetCDF: file not found
(0) OPeNDAP isfilepresent test unsuccessful:---Attempt 1 of 5
(0) FILE FOUND:-----Attempt 2 of 5
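The work-around boils down to something like this (simplified from the attached script; the limit of 5 attempts matches the output above):

max_tries = 5
attempt   = 1
do while (attempt .le. max_tries)
  if (isfilepresent(url)) then
    print("FILE FOUND:-----Attempt " + attempt + " of " + max_tries)
    break
  end if
  print("OPeNDAP isfilepresent test unsuccessful:---Attempt " + attempt + " of " + max_tries)
  attempt = attempt + 1
end do
f_in = addfile(url, "r")   ; open the file once the test finally succeeds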
The second issue is causing me a little more frustration. At about the same
point in the yearly loop (early August, out of a total of 1460 6-hourly
files), a CURL error occurs in which the host name cannot be resolved,
which triggers a fatal error. The pattern seems to be tied to the number of
times the OPeNDAP interface has been used, because if I restart the code in
the middle of the year loop, the error is no longer encountered in early
August. Here is the error for this issue.
Cannot create cookie file
CURL Error: Couldn't resolve host name
curl error details:
fatal:Could not open (
http://nomads.ncdc.noaa.gov/thredds/dodsC/modeldata/cmd_pgbh/1981/198109/19810908/pgbh00.gdas.1981090818.grb2
)
fatal:file (f_in) isn't defined
In the short term, I've resorted to chopping the data-writing into monthly
arrays so I can simply restart the code at the most recent month when the
failure occurs. I am, however, curious whether there is a more permanent
solution I can implement that doesn't involve so much babysitting of NCL jobs.
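In rough form, the monthly restart logic is something like this (file and variable names are illustrative; the real version is in the attached script):

year   = "1981"
months = (/"01","02","03","04","05","06","07","08","09","10","11","12"/)
do m = 0, dimsizes(months) - 1
  outname = "cfsr_wvel_" + year + months(m) + ".nc"
  if (isfilepresent(outname)) then
    continue                 ; this month was already written; skip it on restart
  end if
  ; ... loop over this month's 6-hourly OPeNDAP URLs, read and
  ;     concatenate the subset variable, then write it to outname ...
end do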
Attached is my complete NCL script for reference.
Thanks for any help.
Jon
-------------- next part --------------
A non-text attachment was scrubbed...
Name: get_CFSR_VerticalVelocity.ncl
Type: application/octet-stream
Size: 4683 bytes
Desc: not available
Url : http://mailman.ucar.edu/pipermail/ncl-talk/attachments/20150601/67dc6489/attachment.obj