[Wrf-users] openmpi error in wrf

Preeti preeti at csa.iisc.ernet.in
Wed Mar 10 23:42:53 MST 2010


Hello

This is regarding running WRF on a Linux machine with Open MPI over an
InfiniBand interconnect. WRF runs fine on 8 processors on a single node, but
there is a problem when I run across nodes. Below is the error I get when
running across multiple nodes. This might be an Open MPI issue, but I wanted
to check on this forum whether anyone has any idea about it.
If anyone has faced a similar issue, please help me out.

Thanks in advance
Preeti

--------------------------------------------------------------------------

--------------------------------------------------------------------------
WARNING: There are more than one active ports on host 'moria08', but the
default subnet GID prefix was detected on more than one of these
ports.  If these ports are connected to different physical IB
networks, this configuration will fail in Open MPI.  This version of
Open MPI requires that every physically separate IB subnet that is
used between connected MPI processes must have different subnet ID
values.

Please see this FAQ entry for more details:

  http://www.open-mpi.org/faq/?category=openfabrics#ofa-default-subnet-gid

NOTE: You can turn off this warning by setting the MCA parameter
      btl_openib_warn_default_gid_prefix to 0.
--------------------------------------------------------------------------
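
MCA parameters such as the one mentioned above are normally passed on the
mpirun command line. A minimal sketch, assuming the job is launched as 12
processes of wrf.exe (the actual launch line is not shown here) and that
only port 1 of the mthca0 device reported further below should be used:

  # silence the default-GID-prefix warning
  mpirun --mca btl_openib_warn_default_gid_prefix 0 -np 12 ./wrf.exe

  # or restrict the openib BTL to a single HCA port, so only one IB
  # fabric is used (device:port syntax; the port number is an assumption)
  mpirun --mca btl_openib_if_include mthca0:1 -np 12 ./wrf.exe

If the two ports really are on separate physical IB networks, each subnet
needs its own subnet prefix, which is typically a subnet-manager/admin
change, as the FAQ entry above explains.
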
 starting wrf task            0  of           12
 starting wrf task            2  of           12
 starting wrf task            5  of           12
 starting wrf task            6  of           12
 starting wrf task            4  of           12
 starting wrf task            1  of           12
 starting wrf task            3  of           12
 starting wrf task            7  of           12
[moria08:28420] 11 more processes have sent help message
help-mpi-btl-openib.txt / default subnet prefix
[moria08:28420] Set MCA parameter "orte_base_help_aggregate" to 0 to
see all help / error messages
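
To see every per-process copy of these help messages (useful for spotting
which hosts are involved), the aggregation noted above can be disabled at
launch; a sketch, reusing the assumed launch line from the note above:

  mpirun --mca orte_base_help_aggregate 0 -np 12 ./wrf.exe
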

--------------------------------------------------------------------------
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

    The total number of times that the sender wishes the receiver to
    retry timeout, packet sequence, etc. errors before posting a
    completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 10).  The actual timeout value used is calculated as:

     4.096 microseconds * (2^btl_openib_ib_timeout)

  See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

  Local host:   moria08
  Local device: mthca0
  Peer host:    moria09

You may need to consult with your system administrator to get this
problem fixed.
--------------------------------------------------------------------------
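
Working out the timeout formula quoted above: with the default
btl_openib_ib_timeout of 10 the ACK timeout is 4.096 microseconds * 2^10,
roughly 4.2 milliseconds; a value of 20 gives about 4.3 seconds. Since
btl_openib_ib_retry_count already defaults to its maximum (7), only the
timeout can usefully be raised. A sketch, again assuming the 12-process
wrf.exe launch, and noting that this only papers over a fabric problem
rather than fixing it:

  mpirun --mca btl_openib_ib_timeout 20 -np 12 ./wrf.exe
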

--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 28423 on
node moria08 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[moria08:28420] 3 more processes have sent help message
help-mpi-btl-openib.txt / pp retry exceeded

00:35:18 vss at moria08