[mpas-developers] MPAS I/O requirements and design doc

Michael Duda duda at ucar.edu
Fri Feb 24 14:08:42 MST 2012


Hi, Folks.

I've been slowly working on a requirements and design document for
a new I/O layer in MPAS that will provide parallel I/O (almost
certainly to be implemented using PIO) and I/O for multiple blocks
per MPI task. The Implementation and Testing chapters are still
blank, as I first wanted to get some feedback on the requirements
and proposed design to see whether I'm headed in the right
direction.

Attached is the document and its source; if anyone has questions,
comments, or other suggestions, I'd be glad to hear them.

Thanks!
Michael
-------------- next part --------------
\documentclass[11pt]{report}

\usepackage{graphicx}
\usepackage{listings}
\usepackage{color}
\usepackage{hyperref}

\setlength{\topmargin}{0in}
\setlength{\headheight}{0in}
\setlength{\headsep}{0in}
\setlength{\textheight}{9.0in}
\setlength{\textwidth}{6.5in}
\setlength{\evensidemargin}{0in}
\setlength{\oddsidemargin}{0in}

\newlength{\hangfunction}

\newenvironment{routine}{\vspace{12pt}\hrule\par\vspace{12pt}}{\vspace{24pt}}
\newenvironment{inputs}{\vspace{12pt} \par \noindent{\large \textbf{Input}\par\vspace{6pt}\par}}{}
\newenvironment{outputs}{\vspace{12pt} \par \noindent{\large \textbf{Output}\par\vspace{6pt}\par}}{}
\newcommand{\function}[2]{\phantomsection\addcontentsline{toc}{subsection}{#1}\noindent{\large function}\par\vspace{-8pt}\settowidth{\hangfunction}{{\Large #1(}}\begin{flushleft}\hangindent=\hangfunction \Large #1(#2)\end{flushleft}\vspace{0pt}}
\newcommand{\subroutine}[2]{\phantomsection\addcontentsline{toc}{subsection}{#1}\noindent{\large subroutine}\par\vspace{-8pt}\settowidth{\hangfunction}{{\Large #1(}}\begin{flushleft}\hangindent=\hangfunction \Large #1(#2)\end{flushleft}\vspace{0pt}}
\newcommand{\summary}[1]{\noindent #1}
\newcommand{\argument}[3]{\hangindent=0.75in #2 :: {\large #1} --- \emph{#3}\par\vspace{4pt}}
\newcommand{\returnvalue}[1]{\vspace{12pt} \par \noindent{\large \textbf{Return value}\par\vspace{6pt}\par}{#1}\par}

\begin{document}

\title{MPAS I/O}
\author{}

\maketitle
\tableofcontents


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Introduction
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Introduction}

In order to support multiple blocks of cells per MPI task, there are a number of
development issues that need to be addressed:

\begin{enumerate}

\item Update/extend the fundamental derived types in mpas\_grid\_types.F.                                
   In order for other parts of the infrastructure to handle multiple                                
   blocks per task in a clean way, we'll need to be able to pass a head                             
   pointer to a field into a routine, and have that routine loop through                            
   all blocks for that field, with information about which cells/edges/vertices                     
   in that field need to be communicated.                                                           
                                                                                                    
\item Decide on a new MPAS I/O abstraction layer, which will provide a                                
   high-level interface to the PIO layer for the rest of MPAS. This layer                           
   should work with blocks of fields, and make it possible to define an                             
   arbitrary set of I/O streams at run-time.                                                        
                                                                                                    
\item Add a new module to parse a run-time I/O configuration file that                                
   will describe which fields are read or written to each of the I/O                                
   streams that a user requests via the file. This module will make calls                           
   to the new MPAS I/O layer to register the requested fields for I/O in                            
   the requested streams.                                          
   
\item Update the mpas\_dmpar module to support communication operations on                              
   multiple blocks per task. This will likely involve revising the                                  
   internal data structures used to define communication of cells                                   
   between tasks, and also require revisions to the public interface                                
   routines themselves.                                                                             
                                                                                                    
\item Modify the block\_decomp module to enable a task to get a list of
   cells in more than one block that it is to be the owner of.
   In the simplest implementation, a namelist option could specify
   how many blocks each task should own, and the block\_decomp
   module could look for a graph.info.part.n file, with
   n=num\_blocks\_per\_task*num\_tasks, and assign blocks k, k+num\_tasks,
   k+2*num\_tasks, ..., k+(num\_blocks\_per\_task-1)*num\_tasks to task k
   (see the sketch following this list).

\end{enumerate}                                                             
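
As a sketch of the simple block assignment suggested in Item 5 (tasks and blocks are numbered from 0 here, and the variable names are purely illustrative):

\begin{verbatim}
! Illustrative sketch only: round-robin assignment of blocks to tasks,
! with blocks numbered 0 through num_blocks_per_task*num_tasks-1
do i = 0, num_blocks_per_task - 1
   myBlockIDs(i+1) = mpi_rank + i * num_tasks
end do
\end{verbatim}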
                                                                                                    
This document addresses the requirements and design of a new MPAS I/O layer (Item 2, above) that will provide
much-needed functionality, including the ability to perform I/O on multiple blocks per MPI task,
to perform input and output in parallel, and to allow an arbitrary number of I/O streams (as well as an arbitrary
set of fields in each of those streams) to be defined by the user.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Requirements
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Requirements}

In order to meet current I/O needs, and to provide flexibility for future extension,
the new I/O layer in MPAS must meet the following requirements.

\begin{itemize}

\item The I/O interface must allow the user to define sets of fields (constituting a ``stream'') that are read or written
as a group from/to a file at a common time. The I/O times, as well as the fields in the stream,
are decided on a per-stream basis.

\item It must be possible to designate each stream as either an input stream or an output stream.

\item There must be no artificial limit (i.e., no limit aside from available memory) on the number of streams that can be in use concurrently.

\item The I/O interface must allow the user to choose which I/O ``format'' (among any that are implemented by the I/O layer) to
use on a per-stream basis. A format refers to the file format and method used to write the file, e.g., netCDF, pNetCDF or binary via MPI-IO. 
At a minimum, the I/O layer must implement both serial netCDF and pNetCDF.

\item The I/O interface must support the ability to read and write variable attributes and global attributes.

\item The I/O interface must support fields with multiple blocks on an MPI task.

\item For an identical field, it must be possible to produce identical file output through the I/O layer regardless of the MPI task count, the distribution
of blocks between MPI tasks, or the distribution of cells between blocks.

\end{itemize}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Design
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Design}

In this chapter, the design of the MPAS I/O layer is described. First, a summary of implementation issues is presented to help
in understanding the constraints placed on the design; a major consideration is the fact that, although partitions of the MPAS SCVT
meshes are currently computed off-line and read at model start-up, MPAS should ultimately be capable of computing these partitions `on-line', that is, at run-time.
After the major design constraints have been discussed, the interface to the new I/O layer is presented in detail.

\section{Issues and constraints}

\subsection{Model bootstrapping}

The process of reading fields at the beginning of an MPAS model run is inherently tied to the process of partitioning the
SCVT mesh at run-time. This follows from the fact that cell-connectivity information must be used in the generation of the partitions, yet this information resides
in the very file to be read as the {\em cellsOnCell} field. Further, in order to partition the edges and vertices of the mesh, information on the connections between the edges, vertices,
and cells is needed, and this information is also stored in a file as {\em edgesOnCell}, {\em cellsOnEdge}, {\em verticesOnCell}, and {\em cellsOnVertex}.
In short, cell-based fields are needed in order to partition the SCVT mesh so that these and other cell-based fields can be read in parallel onto their (computed)
computational partitions (i.e., blocks), and similarly for edge-based and vertex-based fields.

The current procedure for dealing with these issues relies on a bootstrapping procedure, in which:

\begin{enumerate}

\item The total number of cells, edges, and vertices in the mesh are read from the input file from the dimensions {\em nCells}, {\em nEdges}, and {\em nVertices}.

\item A contiguous range of cell, edge, and vertex global indices is assigned to each task, e.g., cells nint(mpi\_rank * nCells / mpi\_size)+1 through nint((mpi\_rank+1) * nCells / mpi\_size) (see the sketch following this list).

\item Each I/O task reads its range of global indices for the fields {\em indexToCellID}, {\em indexToEdgeID}, {\em indexToVertexID}, {\em nEdgesOnCell},
{\em cellsOnCell}, {\em edgesOnCell}, {\em verticesOnCell}, {\em cellsOnEdge}, and {\em cellsOnVertex}.

\item A partitioning of the SCVT mesh is requested from the block\_decomp module, given a distributed description of the mesh connectivity based on the {\em cellsOnCell} field 
(distributed across all I/O tasks); currently, this partitioning is read from a {\tt graph.info} file.

\item The {\em indexToCellID}, {\em nEdgesOnCell}, and {\em cellsOnCell} fields are re-distributed so that each of the tasks owns the global indices of these fields that
were assigned to it by the partitioning of the mesh.

\item Halos are constructed for the cells; a halo consists of all of the cells referenced in the {\em cellsOnCell} array that are not in the {\em indexToCellID} array.

\item The {\em edgesOnCell} and {\em verticesOnCell} fields are re-distributed so that each task has these fields for all cells in its block(s), including halo cells.

\item Each task constructs a list of edges and vertices adjacent to cells in the block(s) owned by that task.

\item The {\em cellsOnEdge} and {\em cellsOnVertex} fields are re-distributed so that each task has these fields for all edges and vertices in its block(s).

\item The edges and vertices in each block are divided into owned and halo edges and vertices based on the {\em cellsOnEdge} and {\em cellsOnVertex} fields; an edge {\em iEdge} is
owned iff {\em cellsOnEdge(1, iEdge)} is an owned cell, and a vertex {\em iVtx} is owned iff {\em cellsOnVertex(1, iVtx)} is an owned cell.

\item Knowing how many (and which) cells, edges, and vertices are in each block (as well as which are owned and which are ghost), block data structures are allocated by each task.

\item Fields are then read in parallel and re-distributed among the tasks into the field arrays of the block data structures on each task.

\end{enumerate}
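
As an illustration of step 2, each task's contiguous range of cell indices might be computed as in the following sketch (the variable names here are illustrative only):

\begin{verbatim}
! Sketch of the contiguous range assignment in step 2; cells are
! numbered 1 through nCells, and tasks 0 through mpi_size-1
cellStart = nint(real(mpi_rank)     * real(nCells) / real(mpi_size)) + 1
cellEnd   = nint(real(mpi_rank + 1) * real(nCells) / real(mpi_size))
! This task reads global cells cellStart through cellEnd in step 3
\end{verbatim}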

In this procedure, it is important to note that every compute task is also an I/O task. This will not be true in the future, when the I/O tasks may be a subset of the MPI tasks (or possibly
even a disjoint set of tasks).

\subsection{Super-arrays}

Currently, the registry-generated I/O code in the mpas\_io\_input and mpas\_io\_output modules handles the details of packing and unpacking individual constituent arrays from super-arrays; for example, in the atmosphere models, the fields {\em qv}, {\em qc}, and {\em qr} exist as individual fields in input and output files, but are packaged together in a ``super-array'' of one higher dimension in the model, namely, as the array {\em scalars}. To support multiple blocks per MPI task, the I/O system will most naturally work with the derived data types for fields and blocks, since these types contain links
between blocks on the same MPI task. However, there is currently no information available in the field types to indicate whether the field is a super-array, and, if so, the names of its constituents. The new I/O system could rely on the MPAS registry to generate code internal to the module to handle super-arrays, but such an approach would not easily facilitate run-time determination of the number of scalar constituents in a model, nor would it lead to completely general I/O code. In the new I/O layer, it would be preferable to have no registry-generated code internal to the module, and to require instead that the field types be extended to contain super-array information, so that the I/O layer would be presented with all necessary information to pack or unpack these super-arrays.
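
As a rough sketch of what such an extension might look like (the member names below are hypothetical, and not part of the current mpas\_grid\_types module), a field type could carry its super-array information directly:

\begin{verbatim}
type field3DReal
   ! ... existing members: the data array, ioinfo, pointers to
   !     other blocks' fields, etc. ...
   logical :: isPacked                   ! does this field pack constituents?
   integer :: nConstituents              ! number of constituent arrays
   character (len=64), dimension(:), pointer :: &
      constituentNames                   ! e.g., 'qv', 'qc', 'qr'
end type field3DReal
\end{verbatim}

With this information carried in the field type itself, the I/O layer could decide at run-time whether to pack or unpack constituent arrays, with no registry-generated code internal to the module.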

\section{High-level approach}

Conceptually, the bootstrapping part of the input procedure described in the preceding section (steps 1 -- 11) is independent of the particular I/O streams that will later be used by the model
to read initial conditions, periodically update boundary conditions, and write history or restart files (step 12); moreover, the bootstrapping only needs to be performed once at model start-up, regardless
of the number of streams used in the model. Toward the goal of presenting a simple stream-oriented interface to the model developer and hiding the details of getting, assigning, and allocating blocks, while minimizing the amount of code to be written and maintained, we propose to split the MPAS I/O interface into two parts. The first part will provide low-level routines
to open and close files, read and write arbitrary index ranges of individual arrays, read and write attributes, etc., with a level of abstraction similar to that of the netCDF or PIO interfaces. The second, high-level part of the interface will provide routines for creating a stream, adding MPAS field types to the stream, and reading or writing the stream. The high-level
routines will work with the derived data types for fields and blocks defined in the mpas\_grid\_types module, and their functionality will be built on that of the low-level interface, which will otherwise be used directly mainly during bootstrapping.

One important consequence of allowing the user to define an arbitrary set of streams, in light of the need for a bootstrapping procedure, is that some file containing the information needed by the bootstrapping procedure must always be designated by the user. The responsibility for meeting this requirement will be taken on by the run-time I/O specification module, described in Item 3 of the Introduction.


\section{High-level interface description}

The high-level interface is expected to be the primary interface to MPAS I/O for the user (i.e., model developer), assuming the bootstrapping procedure needed to 
partition the global mesh and allocate blocks has been done. Using the interface described in this section, a typical set of calls might look something like the following.

\begin{verbatim}
call MPAS_io_init(dminfo, 16, 32, ierr)      ! From the "low-level" interface
call MPAS_createStream(init, 'x1.10242.init.nc', &
                       MPAS_STREAM_PNETCDF, MPAS_STREAM_INPUT, 0, ierr)
call MPAS_streamAddField(init, theta, ierr)
call MPAS_streamAddField(init, u, ierr)
call MPAS_streamAddField(init, w, ierr)
call MPAS_streamAddField(init, qv, ierr)
call MPAS_streamAddField(init, qc, ierr)
call MPAS_streamAddField(init, qr, ierr)
call MPAS_readStream(init, 1, ierr)
call MPAS_readStreamAtt(init, 'on_a_sphere', isSphericalGrid, ierr)
call MPAS_readStreamAtt(init, 'sphere_radius', radius, ierr)
call MPAS_closeStream(init, ierr)
call MPAS_io_finalize(ierr)                  ! From the "low-level" interface
\end{verbatim}   

One point not obvious from the interface description concerns streams that have a mix of time-varying and time-invariant fields. For such
streams, the time-invariant fields will be read only on the first call to MPAS\_readStream for the stream; subsequent calls to MPAS\_readStream will only 
read the specified time frame for time-varying fields. Similarly, calls to MPAS\_writeStream will only write time-invariant fields on the first
call for the stream, or whenever the specified number of frames per file has been exceeded and a new output file must be created; thus, for output streams,
every file created from that stream will contain a copy of the time-invariant fields.
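
For example, an output stream exercising this behavior might be driven as in the following sketch (the field and variable names are illustrative); here, latCell is time-invariant and is written only when a file is first created, while theta is written at every frame:

\begin{verbatim}
call MPAS_createStream(hist, 'history.nc', &
                       MPAS_STREAM_PNETCDF, MPAS_STREAM_OUTPUT, 10, ierr)
call MPAS_streamAddField(hist, latCell, ierr)   ! time-invariant
call MPAS_streamAddField(hist, theta, ierr)     ! time-varying
call MPAS_writeStreamAtt(hist, 'on_a_sphere', isSphericalGrid, ierr)
do iframe = 1, 10
   ! ... advance the model ...
   call MPAS_writeStream(hist, iframe, ierr)    ! latCell written at frame 1 only
end do
call MPAS_closeStream(hist, ierr)
\end{verbatim}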

Although there are routines for reading and writing global attributes, no analogous routines exist in the high-level interface for variable attributes. In the proposed design,
the set of variable attributes is fixed as those attributes in the {\tt io\_info} type described in the mpas\_grid\_types module; the values of these attributes are automatically
written and read when a stream is written or read. The rationale behind this decision is that, while global attributes may frequently be changed to reflect new information that needs to 
be carried around with a dataset, the variable attributes are more likely to be fixed to meet, e.g., CF metadata conventions.

\vspace{24pt}

%\begin{routine}
%\subroutine{MPAS\_streamInit}{dminfo, io\_task\_count, io\_task\_stride, ierr}
%\summary{Initializes the MPAS I/O layer; this routine must be called once by every task before any subsequent calls to MPAS I/O routines are made. }
%\begin{inputs}
%\argument{dminfo}{type(dm\_info)}{The dminfo structure returned by the mpas\_dmpar module}
%\argument{io\_task\_count}{integer}{The number of I/O tasks to use when reading and writing streams}
%\argument{io\_task\_stride}{integer}{The stride between I/O tasks}
%\end{inputs}
%\begin{outputs}
%\argument{ierr}{integer, optional}{The return error code}
%\end{outputs}
%\end{routine}

\begin{routine}
\subroutine{MPAS\_createStream}{stream, filename, io\_format, io\_direction, frames\_per\_file, ierr}
\summary{Creates a new I/O stream, to which fields can be added before reading or writing the stream. For input streams, the number of frames per file must be 0 or 1, and for output streams the number of frames per file can be any number $\ge 0$; if frames\_per\_file $>0$, the first timestamp in the file will be inserted automatically into the filename based on the value of mesh\%xtime. }
\begin{inputs}
\argument{filename}{character (len=*)}{The name of the file to which the stream will be connected; if io\_direction is MPAS\_STREAM\_INPUT, filename must refer to an existing file}
\argument{io\_format}{integer}{The I/O format of the stream, either MPAS\_STREAM\_NETCDF or \break MPAS\_STREAM\_PNETCDF}
\argument{io\_direction}{integer}{Whether the stream is an input or output stream, specified with either of the constants MPAS\_STREAM\_INPUT, MPAS\_STREAM\_OUTPUT}
\argument{frames\_per\_file}{integer}{For time-varying fields, the maximum number of time frames that can exist in a file for the stream; 0 indicates an unlimited number of frames}
\end{inputs}
\begin{outputs}
\argument{stream}{type(MPAS\_Stream\_type)}{The newly created I/O stream}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_streamAddField}{stream, field, ierr}
\summary{Adds an MPAS field type to the set of fields in the stream; the field can be any of the field types defined in the mpas\_grid\_types module, e.g., field2DReal.}
\begin{inputs}
\argument{stream}{type(MPAS\_Stream\_type)}{An MPAS stream previously created with a call to MPAS\_createStream}
\argument{field}{type(field2DReal)}{The field to be added to the stream}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_readStream}{stream, frame, ierr}
\summary{Reads all fields associated with the stream; for time-varying fields, the field will be read at the time frame specified by the frame argument.}
\begin{inputs}
\argument{stream}{type(MPAS\_Stream\_type)}{The I/O stream to read}
\argument{frame}{integer}{For time-varying fields, the time frame to be read; ignored for time-invariant fields}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_writeStream}{stream, frame, ierr}
\summary{Writes all fields associated with the stream; for time-varying fields, the field will be written at the time frame specified by the frame argument.}
\begin{inputs}
\argument{stream}{type(MPAS\_Stream\_type)}{The I/O stream to write}
\argument{frame}{integer}{For time-varying fields, the time frame to be written}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_readStreamAtt}{stream, attName, attValue, ierr}
\summary{Reads a global attribute from the stream; this is an overloaded routine, and the type of the attribute in the stream must match the type of the attValue argument.}
\begin{inputs}
\argument{stream}{type(MPAS\_Stream\_type)}{The I/O stream from which the attribute will be read}
\argument{attName}{character (len=*)}{The name of the attribute to read}
\end{inputs}
\begin{outputs}
\argument{attValue}{various types}{The value of the attribute}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_writeStreamAtt}{stream, attName, attValue, ierr}
\summary{Writes a global attribute to the stream; this is an overloaded routine, and the type of the attribute written is determined by the type of the attValue argument.}
\begin{inputs}
\argument{stream}{type(MPAS\_Stream\_type)}{The I/O stream to which the attribute will be written}
\argument{attName}{character (len=*)}{The name of the attribute to write}
\argument{attValue}{various types}{The attribute value to be written}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_closeStream}{stream, ierr}
\summary{Closes an I/O stream.}
\begin{inputs}
\argument{stream}{type(MPAS\_Stream\_type)}{The I/O stream to be closed}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

%\begin{routine}
%\subroutine{MPAS\_streamFinalize}{ierr}
%\summary{Finalizes the MPAS I/O layer; this routine must be the last MPAS I/O routine called by all tasks. }
%\begin{outputs}
%\argument{ierr}{integer, optional}{The return error code}
%\end{outputs}
%\end{routine}


\section{Low-level interface description}

The main purpose of the low-level MPAS I/O interface is to support the bootstrapping procedure at model start-up and to support the functionality of the high-level I/O interface
(i.e., to allow the high-level interface to be implemented using a package-independent interface). Of course, if the user requires a greater level of control over the reading or writing
of a file, the low-level interface could in principle be used directly without tying the resulting user code to a particular external package (e.g., PIO or netCDF).
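
Using the routines described below, a typical low-level read sequence might look like the following sketch (the file, dimension, and field names are illustrative):

\begin{verbatim}
handle = MPAS_io_open('x1.10242.init.nc', MPAS_IO_READ, MPAS_IO_PNETCDF, ierr)
call MPAS_io_inq_dim(handle, 'nCells', nCells, ierr)

! Tell the I/O layer which global cell indices this task will read,
! then read this task's part of a cell-based field
call MPAS_io_set_var_indices(handle, 'indexToCellID', indices, ierr)
call MPAS_io_get_var(handle, 'indexToCellID', indexToCellID, ierr)

call MPAS_io_get_att(handle, 'on_a_sphere', isSphericalGrid, ierr)
call MPAS_io_close(handle, ierr)
\end{verbatim}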

\vspace{24pt}
 
\begin{routine}
\subroutine{MPAS\_io\_init}{dminfo, io\_task\_count, io\_task\_stride, ierr}
\summary{Initializes the MPAS I/O layer; this routine must be called once by every task before any subsequent calls to MPAS I/O routines are made. }
\begin{inputs}
\argument{dminfo}{type(dm\_info)}{The dminfo structure returned by the mpas\_dmpar module}
\argument{io\_task\_count}{integer}{The number of I/O tasks to use when reading and writing streams}
\argument{io\_task\_stride}{integer}{The stride between I/O tasks}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}
 
\begin{routine}
\function{MPAS\_io\_open}{filename, mode, ioformat, ierr}
\summary{Opens a file, either for reading or writing, using the specified file-level format. }
\returnvalue{A handle (of type MPAS\_IO\_Handle\_type) to the opened file to be used in subsequent calls to the MPAS low-level I/O layer.}
\begin{inputs}
\argument{filename}{character (len=*)}{The name of the file to open}
\argument{mode}{integer}{Either of the constants MPAS\_IO\_READ or MPAS\_IO\_WRITE, specifying whether the file is to be opened for reading or writing}
\argument{ioformat}{integer}{The format of the file, either MPAS\_IO\_NETCDF or MPAS\_IO\_PNETCDF}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_inq\_dim}{handle, dimname, dimsize, ierr}
\summary{Returns the size of a dimension in a file opened for reading. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{dimname}{character (len=*)}{The name of the dimension}
\end{inputs}
\begin{outputs}
\argument{dimsize}{integer}{The size of the dimension}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_def\_dim}{handle, dimname, dimsize, ierr}
\summary{Defines a dimension with the specified size in a file opened for writing. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{dimname}{character (len=*)}{The name of the dimension}
\argument{dimsize}{integer}{The size of the dimension; the constant MPAS\_IO\_UNLIMITED\_DIM indicates an unlimited (record) dimension; only one unlimited dimension may be defined in a file }
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_inq\_var}{handle, fieldname, fieldtype, ndims, dimnames, dimsizes, ierr}
\summary{Returns information about a variable in a file opened for reading; the particular information returned is determined by which of the optional output arguments are passed to the routine. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{fieldname}{character (len=*)}{The name of the field}
\end{inputs}
\begin{outputs}
\argument{fieldtype}{integer, optional}{The type of the field, identified by one of the module constants MPAS\_IO\_REAL, MPAS\_IO\_INTEGER, or MPAS\_IO\_LOGICAL}
\argument{ndims}{integer, optional}{The dimensionality of the field}
\argument{dimnames}{character (len=64), dimension(:), pointer, optional}{An array of dimension names for the field, which will be allocated by the routine with size ndims}
\argument{dimsizes}{integer, dimension(:), pointer, optional}{An array of dimension sizes for the field, which will be allocated by the routine with size ndims}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}
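
For instance, a caller interested in a field's dimensions might use this routine as in the sketch below; note that the dimnames and dimsizes arrays are allocated by the routine itself (deallocation, presumably, is left to the caller):

\begin{verbatim}
character (len=64), dimension(:), pointer :: dimnames
integer, dimension(:), pointer :: dimsizes
integer :: ndims

call MPAS_io_inq_var(handle, 'theta', ndims=ndims, &
                     dimnames=dimnames, dimsizes=dimsizes, ierr=ierr)
! dimnames and dimsizes now have size ndims
deallocate(dimnames)
deallocate(dimsizes)
\end{verbatim}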

\begin{routine}
\subroutine{MPAS\_io\_def\_var}{handle, fieldname, fieldtype, dimnames, ierr}
\summary{Defines a variable in a file opened for writing. The dimensionality of the field is determined by the size of the dimnames argument. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{fieldname}{character (len=*)}{The name of the field}
\argument{fieldtype}{integer}{The type of the field, identified by one of the module constants MPAS\_IO\_REAL, MPAS\_IO\_INTEGER, MPAS\_IO\_LOGICAL }
\argument{dimnames}{character (len=64), dimension(:)}{An array of dimension names, all of which must have been defined previously with calls to MPAS\_io\_def\_dim(), 
                                                                                                     with the size of the array determining the dimensionality of the field}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_get\_var\_indices}{handle, fieldname, indices, ierr}
\summary{Returns the global indices into the decomposed outermost dimension that will be read by the MPI task for the specified field. Each global index is assigned to at most one task. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{fieldname}{character (len=*)}{The name of the field}
\end{inputs}
\begin{outputs}
\argument{indices}{integer, dimension(:), pointer}{An array giving the global indices that will be read on this task for the field; the routine will allocate the array to match the size of the index set to be returned }
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_set\_var\_indices}{handle, fieldname, indices, ierr}
\summary{Sets the global indices into the decomposed outermost dimension that will be read or written by the MPI task for the specified field. Each global index must be specified by at most one task. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{fieldname}{character (len=*)}{The name of the field}
\argument{indices}{integer, dimension(:)}{An array of global indices to be read or written by this task for the field}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_get\_var}{handle, fieldname, array, ierr}
\summary{Reads the part of a field determined by the global indices that were previously specified in a call to MPAS\_io\_set\_var\_indices(); the size of the outermost dimension of the array
                   argument must match the size of the index array passed to MPAS\_io\_set\_var\_indices() for the field. This is an overloaded routine, and the type of the
                   array argument must match the type of the field in the file. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{fieldname}{character (len=*)}{The name of the field}
\end{inputs}
\begin{outputs}
\argument{array}{various types, dimension(:)}{The part of the field to be read by this task }
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_put\_var}{handle, fieldname, array, ierr}
\summary{Writes the part of a field determined by the global indices that were previously specified in a call to MPAS\_io\_set\_var\_indices(); the size of the outermost dimension of the array
                   argument must match the size of the index array passed to MPAS\_io\_set\_var\_indices() for the field. This is an overloaded routine, and the type of the
                   array argument will determine the type of the field written to the file. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{fieldname}{character (len=*)}{The name of the field}
\argument{array}{various types, dimension(:)}{The part of the field to be written by this task }
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_get\_att}{handle, attName, attValue, fieldname, ierr}
\summary{Returns the value of an attribute from a file. If a fieldname is specified, the attribute is a variable attribute; otherwise, the attribute is a global attribute. This is an overloaded
                   routine, and the type of the attValue argument must match the type of the attribute in the file. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{attName}{character (len=*)}{The name of the attribute}
\argument{fieldname}{character (len=*), optional}{If present, the name of the field to which the attribute is attached }
\end{inputs}
\begin{outputs}
\argument{attValue}{various types}{The value of the attribute }
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_put\_att}{handle, attName, attValue, fieldname, ierr}
\summary{Sets the value of an attribute in a file. If a fieldname is specified, the attribute is a variable attribute; otherwise, the attribute is a global attribute.  This is an overloaded
                   routine, and the type of the attValue argument will determine the type of the attribute written to the file. }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\argument{attName}{character (len=*)}{The name of the attribute }
\argument{attValue}{various types}{The value of the attribute }
\argument{fieldname}{character (len=*), optional}{If present, the name of the field for which attName is an attribute}
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}
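
Putting several of these routines together, a low-level write sequence might look like the following sketch (the dimension and field names are illustrative):

\begin{verbatim}
character (len=64), dimension(2) :: dims

handle = MPAS_io_open('output.nc', MPAS_IO_WRITE, MPAS_IO_NETCDF, ierr)
call MPAS_io_def_dim(handle, 'nCells', nCells, ierr)
call MPAS_io_def_dim(handle, 'nVertLevels', nVertLevels, ierr)
dims(1) = 'nVertLevels'
dims(2) = 'nCells'
call MPAS_io_def_var(handle, 'theta', MPAS_IO_REAL, dims, ierr)
call MPAS_io_put_att(handle, 'units', 'K', 'theta', ierr)

! Write this task's part of the decomposed outermost dimension (nCells)
call MPAS_io_set_var_indices(handle, 'theta', indices, ierr)
call MPAS_io_put_var(handle, 'theta', theta, ierr)
call MPAS_io_close(handle, ierr)
\end{verbatim}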

\begin{routine}
\subroutine{MPAS\_io\_close}{handle, ierr}
\summary{Closes a file that was previously opened with a call to MPAS\_io\_open(). }
\begin{inputs}
\argument{handle}{type(MPAS\_IO\_Handle\_type)}{An MPAS file handle  }
\end{inputs}
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}

\begin{routine}
\subroutine{MPAS\_io\_finalize}{ierr}
\summary{Finalizes the MPAS I/O layer. This routine must be called once by every task, and it must be the last MPAS I/O routine called. }
\begin{outputs}
\argument{ierr}{integer, optional}{The return error code}
\end{outputs}
\end{routine}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Implementation
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Implementation}

TBD


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Testing
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Testing}

TBD


\end{document}