[Wrf-users] Problems with NetCDF Large File Support with geogrid.exe

Dominikus Heinzeller climbfuji at ymail.com
Tue May 2 21:40:47 MDT 2017


Hi Klemens,

I am not entirely sure, but: the classic netCDF file format imposes a limit of roughly 4 GB on individual fixed-size variables (in the 64-bit-offset variant; the original classic format is even more restrictive). With 4000 x 2500 grid columns (10 million points), you are probably hitting this limit with the 3D variables. The MPAS website (https://mpas-dev.github.io, under MPAS-Atmosphere Download -> MPAS-Atmosphere meshes) describes this problem briefly for the 10 km mesh with 6 million cells (and three times as many edges).
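To get a feel for the numbers, here is a rough back-of-the-envelope sketch (the field count and the 32-bit float assumption are illustrative, not taken from the actual geogrid output):

```python
# Rough arithmetic for the limits discussed above, using the grid
# dimensions from the report below. Values per field and the number
# of fields in a real geo_em file are assumptions for illustration.
nx, ny = 4000, 2500              # grid columns x rows (from the report)
bytes_per_value = 4              # 32-bit float (assumed)

per_2d_field = nx * ny * bytes_per_value          # one 2D variable
print(f"one 2D field: {per_2d_field / 2**20:.0f} MiB")

# Without 64-bit offsets, a pure classic-format file cannot grow much
# past 2 GiB in total, so it takes surprisingly few such fields:
fields_to_2gib = (2 * 2**30) // per_2d_field + 1
print(f"2D fields needed to pass 2 GiB: {fields_to_2gib}")
```

So even though no single 2D field is huge, a file with dozens of fields (or 3D fields with many levels/categories) crosses the classic-format thresholds quickly.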

Can you try to use the netCDF-4 format and see if the problem goes away? NetCDF-4 usage is activated by setting the environment variable NETCDF4 to 1 before running ./configure and compiling the model (http://www2.mmm.ucar.edu/wrf/users/docs/user_guide_V3/users_guide_chap5.htm#_Installing_WRF). Make sure to do a ./clean -a beforehand.
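Something along these lines should work (a sketch only; the exact directory layout, configure options, and compile target depend on your installation):

```shell
# Enable netCDF-4/HDF5 output before configuring (requires a
# netCDF build with HDF5 support, which your module list has).
export NETCDF4=1
export WRFIO_NCD_LARGE_FILE_SUPPORT=1

cd WRF && ./clean -a && ./configure && ./compile em_real
cd ../WPS && ./clean -a && ./configure && ./compile

# Afterwards, ncdump -k should report "netCDF-4" (or "netCDF-4
# classic model") for the geogrid output instead of "64-bit offset":
ncdump -k geo_em.d04.nc
```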

Cheers,

Dom

> On 30/04/2017, at 11:52 AM, Klemens Barfus <klemens.barfus at tu-dresden.de> wrote:
> 
> Dear all,
> 
> I am working with WRF 3.8.1 on the supercomputer of our university and have problems writing files larger than 2.2 GB from geogrid.exe (maybe from other routines, too).
> Large file support has been enabled by setting WRFIO_NCD_LARGE_FILE_SUPPORT to 1. Running geogrid.exe for 4 nested domains, the geo_em.d<>.nc files for the first three domains are written properly in '64-bit offset' format, so I expect that WRFIO_NCD_LARGE_FILE_SUPPORT is recognized. The error message for the last domain (~4000 x 2500 grid points) is
> 
> Processing domain 4 of 4
>   Processing XLAT and XLONG
> ERROR: Error in ext_pkg_write_commit
> 
> There are no limitations on the file size in our system (tested as suggested at https://www.unidata.ucar.edu/software/netcdf/faq-lfs.html). I run geogrid.exe with 30 GB of memory per CPU to avoid problems due to insufficient memory.
> 
> WRF and WPS have been compiled with the following libraries for application in distributed memory mode:
> 
> 8) GCCcore/5.3.0                                      
> 9) binutils/2.26-GCCcore-5.3.0          
> 10) icc/2016.3.210-GCC-5.3.0-2.26
> 11) ifort/2016.3.210-GCC-5.3.0-2.26
> 12) iccifort/2016.3.210-GCC-5.3.0-2.26                
> 13) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.3.0-2.26 
> 14) iimpi/2016.03-GCC-5.3.0-2.26                      
> 15) imkl/11.3.3.210-iimpi-2016.03-GCC-5.3.0-2.26
> 16) intel/2016.03-GCC-5.3                              
> 17) JasPer/1.900.1-intel-2016.03-GCC-5.3
> 18) zlib/1.2.8-intel-2016.03-GCC-5.3
> 19) Szip/2.1-intel-2016.03-GCC-5.3
> 20) HDF5/1.8.16-intel-2016.03-GCC-5.3
> 21) cURL/7.47.0-intel-2016.03-GCC-5.3
> 22) netCDF/4.4.0-intel-2016.03-GCC-5.3
> 23) netCDF-Fortran/4.4.3-intel-2016.03-GCC-5.3
> 
> Does anybody have an idea what the problem could be? Any help is greatly appreciated!
> 
> Many thanks in advance!
> 
> Klemens
> _______________________________________________
> Wrf-users mailing list
> Wrf-users at ucar.edu
> http://mailman.ucar.edu/mailman/listinfo/wrf-users
