[Wrf-users] Compiling for OpenMP on Cray systems?
Dmitry N. Mikushin
maemarcus at gmail.com
Wed Sep 12 11:20:55 MDT 2012
Dear Carl,
In GCC, the OpenMP runtime is separated from the language into a library called
libgomp:
$ readelf -a /usr/lib/gcc/x86_64-linux-gnu/4.6/libgomp.so | grep GOMP_parallel_start
  139: 0000000000006fb0    79 FUNC GLOBAL DEFAULT 12 GOMP_parallel_start@@GOMP_1.0
Try linking against it at the final link step, i.e. add -lgomp.
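For the WRF build that would mean adding -lgomp to the libraries on the final
link line. A minimal sketch, assuming your configure.wrf collects the extra
libraries in a variable such as LIB_EXTERNAL (use whatever variable your file
actually has):

LIB_EXTERNAL = <existing libraries> -lgomp

or, as a quick test, rerun the failing ftn command by hand with -lgomp appended
at the end and see whether the GOMP_* references resolve.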
However, the Cray compiler should provide its own OpenMP runtime, and I'm
wondering whether Cray's and gcc's runtimes can happily coexist in the same application.
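A quick way to see where the GOMP_* references come from and which runtimes
actually get pulled in (just a sketch of the checks I would run, nothing
Cray-specific):

$ nm -A libwrflib.a | grep 'U GOMP_'   # which objects reference gcc's OpenMP runtime
$ ldd wrf.exe | grep -i gomp           # once wrf.exe links: is libgomp actually picked up?

If the GOMP_* references come only from the C objects (setfeenv.o here), those
were presumably compiled by gcc while the Fortran went through the Cray
compiler, so the link step needs gcc's OpenMP runtime in addition to Cray's.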
Best,
- D.
2012/9/12 Carl Ponder <cponder at nvidia.com>
> I downloaded WRFV3.4.1.TAR.gz and am working on a Cray system.
> The configurator gave me these options (among others)
>
> 31. Cray XT CLE/Linux x86_64, Cray CCE compiler with gcc (serial)
> 32. Cray XT CLE/Linux x86_64, Cray CCE compiler with gcc (smpar)
> 33. Cray XT CLE/Linux x86_64, Cray CCE compiler with gcc (dmpar)
> 34. Cray XT CLE/Linux x86_64, Cray CCE compiler with gcc (dm+sm)
>
> and I picked number 34. When I compile, I get these errors:
>
> ftn -o wrf.exe -Oomp -N255 -Onomodinline -em -Onoomp -f free -h byteswapio -Oomp
> wrf.o ../main/module_wrf_top.o libwrflib.a
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/fftpack/fftpack5/libfftpack.a
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/io_grib1/libio_grib1.a
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/io_grib_share/libio_grib_share.a
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/io_int/libwrfio_int.a
> -L/mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/esmf_time_f90 -lesmf_time
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/RSL_LITE/librsl_lite.a
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/frame/module_internal_header_util.o
> /mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/frame/pack_utils.o
> -L/mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/external/io_netcdf -lwrfio_nf
> -L/opt/cray/netcdf/4.2.0/cray/74/lib -lnetcdff -lnetcdf
> libwrflib.a(setfeenv.o): In function `setfeenv_':
> setfeenv.c:(.text+0x46): undefined reference to `GOMP_parallel_start'
> setfeenv.c:(.text+0x53): undefined reference to `GOMP_parallel_end'
> make[1]: [em_wrf] Error 1 (ignored)
> make[1]: Leaving directory
> `/mnt/lustre_server/cponder/CUDA/WRF/UCAR/WRFV3/main'
>
> I can make these go away by removing the definitions
>
> 101 OMPCPP = -D_OPENMP
> 102 OMP = -mp -Mrecursive
> 103 OMPCC = -mp
>
> from configure.wrf, essentially making it run single-threaded.
> Are there some environment settings I need to use to get this to run with
> OpenMP on the Cray?
> I tried loading the GCC module, in case the GOMP_parallel components were
> supposed to come from the gfortran library, but it didn't help.
> If I have to edit the configure.wrf file, I'd say it wasn't generated
> correctly to begin with, and the WRF toolkit ought to be fixed.
> Can any of you send me a configure.wrf and a list of build instructions
> that will work?
> Thanks,
>
> Carl Ponder
>