[Wrf-users] run-time error in WRF-ARW and WRF-NMM (Itanium2 processor, ifort/icc, Intel MPI, RSL_LITE configuration)

Gustafson, William I william.gustafson at pnl.gov
Wed Nov 7 13:06:52 MST 2007


Eric,

We tried both with and without the endian flag. The input netCDF files are
fine, since an ncdump returns the correct values. My suspicion is that
something is being assumed to be 32-bit when data passes between the C and
F90 code in the I/O API, and that is what is corrupting things.
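
In case it helps anyone poke at this off-line, below is a minimal, standalone
C sketch (illustrative only, not the actual WRF I/O API code) of the two
failure modes I have in mind: reading a 32-bit land-use index with the wrong
byte order turns a value like 16 into roughly 2.7e8, and squeezing a 64-bit
pointer through a 32-bit integer silently drops the upper half of the address.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Reverse the byte order of a 32-bit word. */
static uint32_t swap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}

int main(void)
{
    /* 1) Endianness: a 32-bit integer in the 1-25 range, read with the
       wrong byte order, comes back many orders of magnitude too large. */
    int32_t landuse = 16;
    uint32_t raw;
    memcpy(&raw, &landuse, sizeof raw);
    printf("byte-swapped index: %d\n", (int32_t)swap32(raw)); /* 268435456 */

    /* 2) Pointer size: an 8-byte pointer squeezed through a 4-byte
       integer loses its upper half, so later accesses through it land
       in the wrong memory (not dereferenced here).                     */
    double field[4] = {16.0, 16.0, 16.0, 16.0};
    uint32_t truncated = (uint32_t)(uintptr_t)field;
    printf("full pointer:       %p\n", (void *)field);
    printf("truncated pointer:  0x%08x\n", (unsigned)truncated);

    return 0;
}

Either class of bug would scramble small integer fields into garbage, which is
what we are seeing with the land-use type.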

-Bill


On 11/7/07 11:57 AM, "Kemp, Eric M." <Eric.Kemp at ngc.com> wrote:

> 
> 
> Eduardo and Bill:
> 
> Did you use the ifort "-convert big_endian" flag when compiling WRF and WPS?
> 
> -Eric
> 
> Eric M. Kemp
> Meteorologist
> Northrop Grumman Information Technology
> Intelligence Group (TASC)
> 4801 Stonecroft Boulevard
> Chantilly, VA 20151
> (703) 633-8300 x7078 (lab)
> (703) 633-8300 x8278 (office)
> (703) 449-3400       (fax)
> eric.kemp at ngc.com
> 
> 
> 
> -----Original Message-----
> From: wrf-users-bounces at ucar.edu on behalf of Gustafson, William I
> Sent: Wed 11/7/2007 1:35 PM
> To: edu.penabad at meteogalicia.es; WRF Help Users Desk
> Cc: wrf-users at ucar.edu
> Subject: Re: [Wrf-users] run-time error in WRF-ARW and WRF-NMM
> (Itanium2 processor, ifort/icc, Intel MPI, RSL_LITE configuration)
> 
> Eduardo,
> 
> I too have been unsuccessful with WRF on an ifort + Itanium-based machine.
> One of my system administrators and I have both tried adjusting the configure
> settings, but with no luck. We finally got it to compile, but now the data
> read in from the wrfinput file is corrupted. For example, the land-use type
> comes in as something on the order of 10^6 instead of a value between 1 and
> 25. It appears to be either an endian or pointer-size issue, but we haven't
> been able to track it down. If you have any luck, please post back to the
> group so we can all learn together.
> 
> -Bill
> 
> 
> --------------------------------------------------------------------
> William I. Gustafson Jr.
> Atmospheric Science and Global Change Division
> Pacific Northwest National Laboratory
> 3200 Q Ave., MSIN K9-30
> Richland, WA 99352
> (509)372-6110
> 
> 
> On 11/7/07 10:24 AM, "Eduardo Penabad Ramos" <edu.penabad at meteogalicia.es>
> wrote:
> 
>> > Hello!
>> > 
>> > I've successfully compiled both WRF cores on an Itanium2 cluster (RSL_LITE,
>> > Intel compilers & MPI 3.0), but when I try to run a simple configuration
>> > with 2 nested grids I get an error. Moreover, I'm not able to find useful
>> > information in the rsl.out/error files.
>> > 
>> > When I try to run the model "serially" I get a segmentation fault:
>> > 
>> > orballo at rx1:~/EDU/WRF2.2.1/WRFV2/test/em_real> wrf.exe
>> >  starting wrf task            0 of            1
>> > Segmentation fault
>> > 
>> > And when I try to run it with mpiexec (1 processor), this is what I get:
>> > 
>> > orballo at rx1:~/EDU/WRF2.2.1/WRFV2/test/em_real> mpiexec -np 1 wrf.exe
>> >  starting wrf task            0 of            1
>> > rank 0 in job 1 rx1.cesga.es_20637   caused collective abort of all ranks
>> >   exit status of rank 0: killed by signal 9
>> > 
>> > In both cases (and for both cores) the rsl output/error files (with
>> > debug_level=500) don't give very much information. A sample of their
>> > tails is included below.
>> > 
>> > Do you have any suggestions? Thanks in advance.
>> > 
>> > Eduardo Penabad
>> >
>> > 
>> > ARW core rsl.out.0000 tail:
>> > d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
>> >   inc/wrf_bdyin.inc ext_write_field QRAIN memorder XZY Status = 0
>> >  inc/wrf_bdyin.inc ext_write_field QRAIN memorder XZY
>> >   date 2007-10-29_00:00:00
>> >   ds            1           1           1
>> >   de           99          27           5
>> >   ps            1           1           1
>> >   pe           99          27           5
>> >   ms            1           1           1
>> >   me          100          28           5
>> >  d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
>> >   inc/wrf_bdyin.inc ext_write_field QRAIN memorder XZY Status = 0
>> >  d01 2007-10-29_00:00:00  input_wrf: end, fid =            2
>> > Timing for processing lateral boundary for domain        1:    0.06850 elapsed seconds.
>> >  d01 2007-10-29_00:00:00 module_integrate: calling solve interface
>> >
>> > 
>> > 
>> > 
>> > 
>> > NMM core rsl.out.0000 tail:
>> > d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
>> >   inc/wrf_bdyin.inc ext_read_field CWM_BTYS memorder YSZ Status = 0
>> >  inc/wrf_bdyin.inc ext_read_field CWM_BTYE memorder YEZ
>> >   date 2007-10-29_00:00:00
>> >   ds            1           1           1
>> >   de           59          37           1
>> >   ps            1           1           1
>> >   pe           59          37           1
>> >   ms            1           1           1
>> >   me           92          38           1
>> >  d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
>> >   inc/wrf_bdyin.inc ext_read_field CWM_BTYE memorder YEZ Status = 0
>> >  d01 2007-10-29_00:00:00  input_wrf: end, fid =            1
>> > Timing for processing lateral boundary for domain        1:    0.12110 elapsed seconds.
>> >  d01 2007-10-29_00:00:00 module_integrate: calling solve interface
>> >  WRF NUMBER OF TILES =   1
>> >   SOLVE_NMM: TIMESTEP IS     0   TIME IS   0.000 HOURS
>> >  d01 2007-10-29_00:00:00 nmm: in patch
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_ZZ.inc
>> >  ZEROED OUT PRECIP/RUNOFF ARRAYS
>> >  ZEROED OUT SFC EVAP/FLUX ARRAYS
>> >  ZEROED OUT ACCUMULATED SHORTWAVE FLUX ARRAYS
>> >  ZEROED OUT ACCUMULATED LONGWAVE FLUX ARRAYS
>> >  ZEROED OUT ACCUMULATED CLOUD FRACTION ARRAYS
>> >  ZEROED OUT ACCUMULATED LATENT HEATING ARRAYS
>> >  RESET MAX/MIN TEMPERTURES
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_A.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_A.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_B.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_A.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_D.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_F.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_F1.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_G.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_H.inc
>> >  d01 2007-10-29_00:00:00 calling inc/HALO_NMM_I.inc
>> > 
>> > Eduardo Penabad Ramos
>> > Investigación e Predición Numérica - MeteoGalicia
>> > Conselleria de Medio Ambiente e Desenvolvemento Sostible - Xunta de Galicia
>> > Area Central Local 31-C
>> > Poligono de Fontiñas s/n
>> > 15.703 Santiago de Compostela
>> > edu.penabad at meteogalicia.es
>> > http://www.meteogalicia.es
>> > tel: +34 981 957 466
>> > fax: +34 981 957 462
>> >
>> >
>> > _______________________________________________
>> > Wrf-users mailing list
>> > Wrf-users at ucar.edu
>> > http://mailman.ucar.edu/mailman/listinfo/wrf-users
> 
> 
> 
> 


--------------------------------------------------------------------
William I. Gustafson Jr.
Atmospheric Science and Global Change Division
Pacific Northwest National Laboratory
3200 Q Ave., MSIN K9-30
Richland, WA 99352
(509)372-6110


