[ncl-talk] Segmentation fault on a GFS file fragment

David Brown dbrown at ucar.edu
Mon Oct 5 13:40:48 MDT 2015


Hi Peter,
Because this problem showed up only on some systems, and when it did,
the crash occurred in a library used to decode JPEG data (used as a
compression scheme for some GRIB2 data), it was difficult to figure
out what was going on. But thanks to a piece of information on the
wgrib2 web site we now have an answer. Note that wgrib2 depends on
the same libraries for handling GRIB2 data that NCL does. Here is the
key text:

===============
Execution: Segment Faults

Wgrib2 is used in NCEP operations and by many daily processing jobs at
CPC. There are only two known causes of seg faults in the current
released version of wgrib2.

The library used to handle jpeg2000 stores the arrays on the stack and
if the stack is too small, wgrib2 will seg fault. The solution is to
set the stack size to unlimited (requires root) or to set the stack to
some larger value.

   ulimit -a                           : this will show the size of
the stack in bash/linux

   ulimit -s (new stack size in KB)    : set the stack to some larger value

================

It turns out that this fixes the issue in NCL as well. In my test
scenario I roughly doubled the stack size reported by 'ulimit -a'
(8196 KB) by executing
'ulimit -s 16000'
and was then able to read the complete sample variable without error.
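For anyone hitting the same crash, the check-and-raise step can be sketched as a small shell wrapper. This is only an illustration of the workaround above: the 16000 KB value is the one from my test, and the ncl_filedump line at the end is a placeholder for whatever NCL command you actually run.

```shell
#!/bin/sh
# Raise the soft stack limit before running NCL, so the jpeg2000
# library (libjasper) has room for its stack-allocated arrays.
# Raising the soft limit up to the hard limit does not require root.
target_kb=16000
current_kb=$(ulimit -s)
echo "stack limit before: ${current_kb}"
if [ "$current_kb" != "unlimited" ] && [ "$current_kb" -lt "$target_kb" ]; then
    ulimit -s "$target_kb"    # applies to this shell and its children
fi
echo "stack limit after:  $(ulimit -s)"
# ncl_filedump <grib2 file> -v <variable>   # now runs with the larger stack
```

Note that 'ulimit -s' only affects the current shell and processes it starts, so the NCL command must be launched from the same shell (or the same script) in which the limit was raised.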

Since it is unlikely we will be able to change the supporting library
(libjasper.a, for those interested), we will need to document this
issue and its workaround.
 -dave



On Tue, Sep 29, 2015 at 5:40 PM, Peter Novak <P.Novak at tudelft.nl> wrote:
> Dear NCL users and developers,
> I encountered a segmentation fault of NCL on a GRIB2 file extracted from
> NOAA GFS forecast datasets. The file is located here
> http://www.aronde.net/dump/gfs.t18z.sfluxgrbf03.grib2PRES_TOP_high.grib2.
>
> The file is extracted with curl from GFS dataset
> http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/gfs.2015092918/gfs.t18z.sfluxgrbf03.grib2,
> variable "PRES:high cloud top level" (as referred to in the index file
> at http://www.ftp.ncep.noaa.gov/data/nccf/com/gfs/prod/gfs.2015092918/gfs.t18z.sfluxgrbf03.grib2.idx).
> The extraction command with ranges was:
> $ curl -f -s -r 24660742-27538435 <url above> -o <outfile>
>
> I am getting segfault upon the following:
> $ ncl_filedump gfs.t18z.sfluxgrbf03.grib2PRES_TOP_high.grib2 -v PRES_P8_L233_GGA0_avg
>
> The same when I try to read the file with an ncl script:
>         indat=addfile(<file>, "r")
>         vars=getfilevarnames(indat)
>         printVarSummary(indat->$vars(0)$)
>
> At the same time PanoplyJ reads the file without a hiccup. Other files I
> extracted using curl from the same GFS datasets work fine; the only
> difference I spotted is that this one contains missing values (NaNf in
> PanoplyJ). Perhaps that could be the cause of the problem?
>
> I would appreciate any advice regarding how to read and process this
> file with NCL, or what is exactly wrong with it.
>
> As for the NCL setup I have:
> $ ncl
> Copyright (C) 1995-2015 - All Rights Reserved
> University Corporation for Atmospheric Research
> NCAR Command Language Version 6.3.0
> The use of this software is governed by a License Agreement.
> See http://www.ncl.ucar.edu/ for more details.
>
> $ uname -a
> Linux ... 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> Thanks for your help.
>
> Best,
>
> Peter.
>
> _______________________________________________
> ncl-talk mailing list
> ncl-talk at ucar.edu
> List instructions, subscriber options, unsubscribe:
> http://mailman.ucar.edu/mailman/listinfo/ncl-talk
