Subject: Re: [Wrf-users] run-time error in WRF-ARW and WRF-NMM (Itanium2 processor, ifort/icc, Intel MPI, RSL_LITE configuration)

<FONT FACE="Helvetica, Verdana, Arial"><SPAN STYLE='font-size:12.0px'>Eric,<BR>
<BR>
We tried both with and without the endian flag. The input netcdf files are fine, since we can do an ncdump and get the correct values that way. My suspicion is that there is something being assumed as 32-bit when going between the C and F90 code in the I/O API that is corrupting things.<BR>
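A quick sanity check, a minimal sketch not taken from the WRF source, is to print the widths the compiler actually uses and see whether default INTEGER, the C types, and pointers agree on this platform:

    program kind_check
      ! On Itanium2 under Linux (LP64), C "long" and pointers are 64 bits
      ! while Fortran's default INTEGER is 32 bits; an interface in the
      ! I/O layer that mixes these kinds would corrupt values in transit.
      use iso_c_binding
      implicit none
      print *, 'default INTEGER bits:', bit_size(0)
      print *, 'C int bits:          ', bit_size(0_c_int)
      print *, 'C long bits:         ', bit_size(0_c_long)
      print *, 'C pointer bits:      ', bit_size(0_c_intptr_t)
    end program kind_check

Compile it with the same ifort flags used for the WRF build.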

-Bill


On 11/7/07 11:57 AM, "Kemp, Eric M." <Eric.Kemp@ngc.com> wrote:

Eduardo and Bill:

Did you use the ifort "-convert big_endian" flag when compiling WRF and WPS?
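
(Note the flag only changes byte order for Fortran unformatted reads and writes; netCDF data go through the C library either way. To test byte order on a single file without rebuilding, ifort also accepts a CONVERT= specifier on OPEN as an extension; a rough sketch, with a placeholder file name:)

    program endian_probe
      ! Read the first record of an unformatted file as big-endian. If
      ! the values only make sense read this way, the build needs
      ! -convert big_endian (or the F_UFMTENDIAN environment variable).
      implicit none
      integer :: n
      open (10, file='sample.dat', form='unformatted', status='old', &
            convert='big_endian')
      read (10) n
      print *, 'first value, read as big-endian:', n
      close (10)
    end program endian_probe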

-Eric

Eric M. Kemp
Meteorologist
Northrop Grumman Information Technology
Intelligence Group (TASC)
4801 Stonecroft Boulevard
Chantilly, VA 20151
(703) 633-8300 x7078 (lab)
(703) 633-8300 x8278 (office)
(703) 449-3400 (fax)
eric.kemp@ngc.com

-----Original Message-----
From: wrf-users-bounces@ucar.edu on behalf of Gustafson, William I
Sent: Wed 11/7/2007 1:35 PM
To: edu.penabad@meteogalicia.es; WRF Help Users Desk
Cc: wrf-users@ucar.edu
Subject: Re: [Wrf-users] run-time error in WRF-ARW and WRF-NMM (Itanium2 processor, ifort/icc, Intel MPI, RSL_LITE configuration)

Eduardo,

I too have been unsuccessful with WRF on an ifort + Itanium machine. One of my system administrators and I have both tried adjusting the configure settings, but with no luck. We finally got it to compile, but now the data read in from the wrfinput file are corrupted. For example, the land-use type comes in as values on the order of 10^6 instead of between 1 and 25. It appears to be either an endianness or a pointer-size issue, but we haven't been able to track it down. If you have any luck, please post back to the group so we can all learn together.
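
One way to confirm the file itself holds sane values is to read the field directly with the netCDF F90 interface, bypassing WRF's I/O layer entirely; a rough sketch (error checking omitted):

    program read_lu
      ! Pull a 5x5 corner of LU_INDEX straight from wrfinput_d01. If these
      ! values look reasonable (1-25 for USGS land use) while WRF sees
      ! garbage, the corruption is inside WRF's I/O API, not in the file.
      use netcdf
      implicit none
      integer :: ncid, varid, ierr
      real    :: lu(5,5)
      ierr = nf90_open('wrfinput_d01', nf90_nowrite, ncid)
      ierr = nf90_inq_varid(ncid, 'LU_INDEX', varid)
      ierr = nf90_get_var(ncid, varid, lu, start=(/1,1,1/), count=(/5,5,1/))
      print *, 'LU_INDEX corner:', lu
      ierr = nf90_close(ncid)
    end program read_lu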

-Bill


--------------------------------------------------------------------
William I. Gustafson Jr.
Atmospheric Science and Global Change Division
Pacific Northwest National Laboratory
3200 Q Ave., MSIN K9-30
Richland, WA 99352
(509) 372-6110

On 11/7/07 10:24 AM, "Eduardo Penabad Ramos" <edu.penabad@meteogalicia.es> wrote:

> Hello!
>
> I've successfully compiled both WRF cores on an Itanium2 cluster (RSL_LITE, Intel compilers & MPI 3.0), but when I try to run a simple configuration with two nested grids I get an error. Moreover, I cannot find any useful information in the rsl.out/rsl.error files.
>
> When I try to run the model serially, I get a segmentation fault:
>
> orballo@rx1:~/EDU/WRF2.2.1/WRFV2/test/em_real> wrf.exe
> starting wrf task 0 of 1
> Segmentation fault
>
> And when I try to run it with mpiexec (1 processor), this is what I get:
>
> orballo@rx1:~/EDU/WRF2.2.1/WRFV2/test/em_real> mpiexec -np 1 wrf.exe
> starting wrf task 0 of 1
> rank 0 in job 1 rx1.cesga.es_20637 caused collective abort of all ranks
> exit status of rank 0: killed by signal 9
>
> In both cases (and for both cores), the rsl output/error files (with debug_level=500) don't give much information. A sample of their tails follows below.
>
> Do you have any suggestions? Thanks in advance.
>
> Eduardo Penabad
>
> ARW core rsl.out.0000 tail:
> d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
> inc/wrf_bdyin.inc ext_write_field QRAIN memorder XZY Status = 0
> inc/wrf_bdyin.inc ext_write_field QRAIN memorder XZY
> date 2007-10-29_00:00:00
> ds 1 1 1
> de 99 27 5
> ps 1 1 1
> pe 99 27 5
> ms 1 1 1
> me 100 28 5
> d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
> inc/wrf_bdyin.inc ext_write_field QRAIN memorder XZY Status = 0
> d01 2007-10-29_00:00:00 input_wrf: end, fid = 2
> Timing for processing lateral boundary for domain 1: 0.06850 elapsed seconds.
> d01 2007-10-29_00:00:00 module_integrate: calling solve interface
>
> NMM core rsl.out.0000 tail:
> d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
> inc/wrf_bdyin.inc ext_read_field CWM_BTYS memorder YSZ Status = 0
> inc/wrf_bdyin.inc ext_read_field CWM_BTYE memorder YEZ
> date 2007-10-29_00:00:00
> ds 1 1 1
> de 59 37 1
> ps 1 1 1
> pe 59 37 1
> ms 1 1 1
> me 92 38 1
> d01 2007-10-29_00:00:00 module_io.F: in wrf_read_field
> inc/wrf_bdyin.inc ext_read_field CWM_BTYE memorder YEZ Status = 0
> d01 2007-10-29_00:00:00 input_wrf: end, fid = 1
> Timing for processing lateral boundary for domain 1: 0.12110 elapsed seconds.
> d01 2007-10-29_00:00:00 module_integrate: calling solve interface
> WRF NUMBER OF TILES = 1
> SOLVE_NMM: TIMESTEP IS 0 TIME IS 0.000 HOURS
> d01 2007-10-29_00:00:00 nmm: in patch
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_ZZ.inc
> ZEROED OUT PRECIP/RUNOFF ARRAYS
> ZEROED OUT SFC EVAP/FLUX ARRAYS
> ZEROED OUT ACCUMULATED SHORTWAVE FLUX ARRAYS
> ZEROED OUT ACCUMULATED LONGWAVE FLUX ARRAYS
> ZEROED OUT ACCUMULATED CLOUD FRACTION ARRAYS
> ZEROED OUT ACCUMULATED LATENT HEATING ARRAYS
> RESET MAX/MIN TEMPERTURES
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_A.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_A.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_B.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_A.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_D.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_F.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_F1.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_G.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_H.inc
> d01 2007-10-29_00:00:00 calling inc/HALO_NMM_I.inc
>
> Conselleria de Medio Ambiente e Desenvolvemento Sostible - Xunta de Galicia
>
> Eduardo Penabad Ramos
> Investigación e Predición Numérica - MeteoGalicia
> Area Central Local 31-C
> Poligono de Fontiñas s/n
> 15703 Santiago de Compostela
> edu.penabad@meteogalicia.es
> http://www.meteogalicia.es
> tel: +34 981 957 466
> fax: +34 981 957 462
>
> _______________________________________________
> Wrf-users mailing list
> Wrf-users@ucar.edu
> http://mailman.ucar.edu/mailman/listinfo/wrf-users

--------------------------------------------------------------------
William I. Gustafson Jr.
Atmospheric Science and Global Change Division
Pacific Northwest National Laboratory
3200 Q Ave., MSIN K9-30
Richland, WA 99352
(509) 372-6110