[Dart-dev] [5131] DART/trunk/doc/html/Kodiak_release.html: Merge in changes from kodiak branch.
nancy at ucar.edu
Thu Aug 18 17:54:32 MDT 2011
Revision: 5131
Author: nancy
Date: 2011-08-18 17:54:32 -0600 (Thu, 18 Aug 2011)
Log Message:
-----------
Merge in changes from kodiak branch.
Modified Paths:
--------------
DART/trunk/doc/html/Kodiak_release.html
-------------- next part --------------
Modified: DART/trunk/doc/html/Kodiak_release.html
===================================================================
--- DART/trunk/doc/html/Kodiak_release.html 2011-08-18 23:53:20 UTC (rev 5130)
+++ DART/trunk/doc/html/Kodiak_release.html 2011-08-18 23:54:32 UTC (rev 5131)
@@ -1,15 +1,15 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
-<HTML>
-<HEAD>
-<TITLE>DART Kodiak Release Notes</TITLE>
+<html>
+<head>
+<title>DART Kodiak Release Notes</title>
<link rel="stylesheet" type="text/css" href="doc.css" />
<link href="dart.ico" rel="shortcut icon" />
-</HEAD>
-<BODY>
-<A NAME="TOP"></A>
+</head>
+<body>
+<a name="TOP"></a>
-<H1>DART Kodiak Release Notes</H1>
+<h1>DART Kodiak Release Notes</h1>
<table border=0 summary="" cellpadding=5>
<tr>
@@ -17,43 +17,1933 @@
<img src="Dartboard7.png" alt="DART project logo" height=70 />
</td>
<td>
- <P>Jump to <a href="https://proxy.subversion.ucar.edu/DAReS/DART/trunk/index.html">DART Documentation Main Index</a><br />
+ <p>Jump to DART Documentation Main Index
+ <a href="https://proxy.subversion.ucar.edu/DAReS/DART/releases/Kodiak/index.html">Website</a>
+ or <a href="../../index.html">local file</a><br />
<small><small>version information for this file: <br />
<!-- version tag follows, do not edit -->
$Id$</small></small>
- </P></td>
+ </p></td>
</tr>
</table>
-<A HREF="#Overview">Overview</A> /
-<A HREF="#Nonbackward">Non-backwards Compatible Changes</A> /
-<A HREF="#NewFeatures">New Features</A> /
-<A HREF="#NewModels">New Models</A> /
-<A HREF="#ChangedModels">Changed Models</A> /
-<A HREF="#NewObs">New Observations</A> /
-<A HREF="#NewDiagnostics">New Diagnostics and Documentation</A> /
-<A HREF="#NewUtilities">New Utilities</A> /
-<A HREF="#KnownProblems">Known Problems</A> /
-<A HREF="#Legalese">Terms of Use</A>
+<a href="#Overview">Dart Overview</a> /
+<a href="#GettingStarted">Getting Started</a> /
+<a href="#Installation">Installation</a> /
+<a href="#CurrentUsers">Notes for Current Users</a> /
+<a href="#Nonbackward">Non-backwards Compatible Changes</a> /
+<a href="#NewFeatures">New Features</a> /
+<a href="#NewModels">New Models</a> /
+<a href="#ChangedModels">Changed Models</a> /
+<a href="#NewObs">New Observations</a> /
+<a href="#NewDiagnostics">New Diagnostics and Documentation</a> /
+<a href="#NewUtilities">New Utilities</a> /
+<a href="#KnownProblems">Known Problems</a> /
+<a href="#Legalese">Terms of Use</a>
<!--==================================================================-->
-<A NAME="Overview"></A>
-<H2>Overview</H2>
+<a name="Overview"></a>
+<h2>DART Overview</h2>
-<P>
+<p>The Data Assimilation Research Testbed (DART) is designed to
+facilitate the combination of assimilation algorithms, models,
+and real (or synthetic) observations to allow
+increased understanding of all three.
+The DART programs are highly portable, having been
+compiled with many Fortran 90 compilers
+and run on linux compute-servers, linux clusters, OSX laptops/desktops,
+SGI Altix clusters, supercomputers running AIX, and more.
+Read the
+<a href="#customizations">Customizations</a> section
+for help in building on new platforms.</p>
+
+<p>
+DART employs a modular programming approach to apply an Ensemble Kalman Filter
+which nudges models toward a state that is more consistent with information
+from a set of observations. Models may be swapped in and out, as can
+different algorithms in the Ensemble Kalman Filter. The method
+requires running multiple instances of a model to generate an ensemble of
+states. A forward operator appropriate for the type of observation being assimilated
+is applied to each of the states to generate the model's estimate of the observation.
+Comparing these estimates and their uncertainty to the observation and
+its uncertainty ultimately results in the adjustments to the model states.
+There's much more to it, described in detail in the tutorial directory
+of the package.</p>
+
+<p>
+DART diagnostic output includes two netCDF files containing
+the model states just before
+the adjustment (<em class=file>Prior_Diag.nc</em>) and just after the adjustment
+(<em class=file>Posterior_Diag.nc</em>) as well as a file
+<em class=file>obs_seq.final</em> with the model estimates of the observations.
+There is a suite of Matlab® functions that facilitate exploration of the
+results, but the netCDF files are inherently portable and contain all the
+necessary metadata to interpret the contents.
+</p>
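+
+<p>
+For example, a quick way to inspect the structure of one of these diagnostic
+files is the <em class=program>ncdump</em> utility that ships with netCDF
+(shown here only as a sketch; any netCDF-aware tool will do):
+</p>
+<div class=unix>
+ncdump -h Prior_Diag.nc
+</div>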
+
+<p>Throughout this document, links point both to Web-based documentation
+and to the same information in the html files distributed with DART.
+If you have used subversion to check out a local copy of the DART files,
+you can open this file in a browser by loading
+<em class=file>DART/doc/html/Kodiak_release.html</em>
+and then using the <em class=file>local file</em> links to see
+other documentation pages without requiring a connection to
+the internet.
+If you are looking at this documentation from
+the <em class=file>www.image.ucar.edu</em> web server or you are
+connected to the internet you can use the
+<em class=file>Website</em> links to view other documentation pages.
+</p>
+
+<!--==================================================================-->
+
+<a name="GettingStarted"></a>
+<h2>Getting Started</h2>
+
+<h3>What's Required</h3>
+<ol><li>a Fortran 90 compiler</li>
+ <li>a netCDF library including the F90 interfaces</li>
+ <li>the C shell</li>
+ <li>(optional, to run in parallel) an MPI library</li>
+</ol>
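+
+<p>
+As a quick sanity check (assuming gfortran and a netCDF installation recent
+enough to provide the <em class=program>nc-config</em> helper -- older
+installs may not have it), the following commands should all succeed:
+</p>
+<div class=unix>
+which csh<br />
+gfortran --version<br />
+nc-config --version
+</div>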
+<p>
+DART has been tested on many Fortran compilers and platforms.
+We don't have any platform-dependent code sections and we use
+only the parts of the language that are portable across all
+the compilers we have access to.
+We explicitly set the Fortran 'kind' for all real values and do
+not rely on autopromotion or other compile-time flags to set the
+default byte size for numbers.
+It is possible that some model-specific interface code from
+outside sources may have specific compiler flag requirements;
+see the documentation for each model.
+The low-order models and all common portions of the DART code
+compile cleanly.
+<br />
+<br />
+DART uses the
+<a href="http://www.unidata.ucar.edu/packages/netcdf/">netCDF</a>
+self-describing data format with a particular metadata convention to
+describe output that is used to analyze the results of assimilation
+experiments. These files have the extension <em class=file>.nc</em>
+and can be read by a number of standard data analysis tools.
+<br />
+<br />
+Since most of the models being used with DART are
+written in Fortran and run on various UNIX or *nix platforms, the
+development environment for DART is highly skewed to these machines.
+We do most of our development on a small linux workstation and a mac laptop
+running OSX 10.x, and we have an extensive test network.
+(I've never built or run DART on a Windows machine - so I don't even
+know if it's possible. If you have run it (under Cygwin?) please let me
+know how it went -- I'm curious. Tim - thoar 'at' ucar 'dot' edu)
+</p>
+
+<h3>What's Nice to Have</h3>
+
+<strong>ncview</strong>: DART users have used
+<a href="http://meteora.ucsd.edu/~pierce/ncview_home_page.html">ncview</a>
+to create graphical displays of output data fields. The 2D rendering is
+good for 'quick-look' type uses, but I wouldn't want to publish with it.
+<br /><br />
+
+<strong>NCO</strong>: The <a href="http://nco.sourceforge.net">NCO</a> tools
+are able to perform operations on netCDF files like concatenating, slicing,
+and dicing.<br /><br />
+
+<strong>Matlab</strong>®: A set of
+<a href="http://www.mathworks.com/">Matlab®</a> scripts designed to
+produce graphical diagnostics from DART netCDF output files are also part
+of the DART project.<br /><br />
+
+<strong>MPI</strong>: The DART system includes an MPI
+option. MPI stands for 'Message Passing Interface', and is both a library and
+run-time system that enables multiple copies of a single program to run in
+parallel, exchange data, and combine to solve a problem more quickly.
+DART does <b>NOT</b> require MPI to run; the default build
+scripts neither need nor use MPI in any way. However, for larger models with
+large state vectors and large numbers of observations, the data assimilation
+step will run much faster in parallel, which requires MPI to be installed and
+used. If multiple ensemble members of your model fit comfortably (in time
+and memory) on a single processor, you need read no further about MPI.
+<br /><br />
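+<p>
+As a sketch only (the exact invocation varies by site, MPI installation, and
+queueing system), an MPI-enabled build of <em class=program>filter</em> is
+typically launched with something like:
+</p>
+<div class=unix>
+mpirun -np 4 ./filter
+</div>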
+
+<h3>Types of input</h3>
+
+<p>DART programs can require three different types of input.
+First, some of the DART programs, like those for creating synthetic
+observational datasets, require interactive input from the keyboard.
+For simple cases, this input can be entered directly at the
+keyboard. In more complicated cases, a file containing
+the appropriate keyboard input can be created and this file
+can be directed to the standard input of the DART program.
+Second, many DART programs expect one or more input files in
+DART specific formats to be available. For instance,
+<em class=program>perfect_model_obs</em>, which creates a synthetic
+observation set given a particular model and a description
+of a sequence of observations, requires an input file that
+describes this observation sequence.
+At present, the observation files for DART are in a custom format in either
+human-readable ASCII or more compact machine-specific binary.
+Third, many DART modules (including main programs) make use of
+the Fortran90 namelist facility to obtain values of certain parameters
+at run-time. All programs look for a namelist input file
+called <em class=file>input.nml</em> in the directory in which
+the program is executed. The <em class=file>input.nml</em>
+file can contain a sequence of individual Fortran90 namelists
+which specify values of particular parameters for modules that
+compose the executable program.
+</p>
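+
+<p>
+For example (the file name here is just an example), the keyboard responses
+for an interactive program can be captured once in a text file and then
+redirected to the program's standard input on later runs:
+</p>
+<div class=unix>
+./create_obs_sequence &lt; create_obs_sequence.input
+</div>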
+
+<!--==================================================================-->
+
+<h2>Document conventions</h2>
+<p>
+Anything underlined is a URL.
+<br />
+<br />
+<em class=file>All filenames look like this -- (typewriter font, green)</em>.<br />
+<em class=program>Program names look like this -- (italicized font, green)</em>.<br />
+<em class=input>user input looks like this -- (bold, magenta)</em>.
+</p>
+<div class=unix>
+commands to be typed at the command line are contained in an
+indented gray box.
+</div>
+<p>
+And the contents of a file are enclosed in a box with a border:
+</p>
+<div class=routine>
+&hypothetical_nml<br />
+ obs_seq_in_file_name = "obs_seq.in",<br />
+ obs_seq_out_file_name = "obs_seq.out",<br />
+ init_time_days = 0,<br />
+ init_time_seconds = 0,<br />
+ output_interval = 1<br />
+&end</div>
+
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="Installation"></a>
+<h2>Installation</h2>
+
+<p>
+This document outlines the installation of the DART software
+and the system requirements. The entire installation process is summarized in
+the following steps:
+</p>
+
+<ol><li><a href="#compilers">Determine which F90 compiler is available</a>.</li>
+ <li><a href="#netCDFlib">Determine the location of the
+ <em class=code>netCDF</em> library</a>.</li>
+ <li><a href="#download">Download the DART software
+ into the expected source tree</a>.</li>
+ <li><a href="#customizations">Modify certain DART files to reflect
+ the available F90 compiler and location of the
+ appropriate libraries</a>.</li>
+ <li><a href="#building">Build the executables</a>.</li>
+</ol>
+
+<p>
+We have tried to make the code as portable as possible, but we
+do not have access to all compilers on all platforms, so there are no
+guarantees. We are interested in your experience building the system,
+so please email me (Tim Hoar) thoar 'at' ucar 'dot' edu
+(trying to cut down on the spam).
+</p>
+
+<p>
+After the installation, you might want to peruse the following.
+</p>
+
+<ul><li><a href="#Running">Running the Lorenz_63 Model</a>.</li>
+ <li><a href="#matlab">Using the Matlab® diagnostic scripts</a>.</li>
+ <li>A short discussion on
+ <a href="#discussion">bias, filter divergence and covariance inflation.</a></li>
+ <li>And another one on
+ <a href="#syntheticobservations">synthetic observations</a>.</li>
+</ul>
+
+<p>You should <i>absolutely</i> run the DARTLAB
+interactive tutorial (if you have Matlab available) and look at the
+DARTLAB presentation slides
+<a href="https://proxy.subversion.ucar.edu/DAReS/DART/releases/Kodiak/DART_LAB/DART_LAB.html">
+Website</a> or <a href="../../DART_LAB/DART_LAB.html">local file</a>
+in the
+<em class="file">DART_LAB</em> directory, and then take the tutorial
+in the <em class="file">DART/tutorial</em> directory.</p>
+
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="compilers"></a>
+<h3>Requirements: an F90 Compiler</h3>
+
+<p>
+The DART software has been successfully built on several Linux/x86
+platforms with several versions of the
+<a href="http://www.intel.com/software/products/compilers/flin">Intel Fortran
+Compiler for Linux</a>, which (at one point) was free for individual
+scientific use, as well as with the Intel Fortran Compiler for Mac OSX.
+It has also been built and successfully run with several
+versions of each of the following:
+<a href="http://www.pgroup.com">Portland Group Fortran Compiler</a>,
+<a href="http://www.lahey.com">Lahey Fortran Compiler</a>,
+<a href="http://www.pathscale.com">Pathscale Fortran Compiler</a>,
+<a href="http://gcc.gnu.org/fortran">GNU Fortran 95 Compiler ("gfortran")</a>,
+<a href="http://www.absoft.com">Absoft Fortran 90/95 Compiler (Mac OSX)</a>.
+Since recompiling the code is a necessity to experiment
+with different models, there are no binaries to distribute.
+</p>
+
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="netCDFlib"></a>
+<h3>Requirements: the <em class=file>netCDF</em> library</h3>
+
+<p>
+DART uses the
+<a href="http://www.unidata.ucar.edu/packages/netcdf/">netCDF</a>
+self-describing data format for the results of assimilation
+experiments. These files have the extension <em class=file>.nc</em>
+and can be read by a number of standard data analysis tools.
+In particular, DART also makes use of the F90 interface to the library
+which is available through the <em class=file>netcdf.mod</em> and
+<em class=file>typesizes.mod</em> modules.
+<em class=bold>IMPORTANT</em>: different compilers create these modules with
+different "case" filenames, and sometimes they are not <strong>both</strong>
+installed into the expected directory. It is required that both modules
+be present. The normal place would be in the <tt>netcdf/include</tt>
+directory, as opposed to the <tt>netcdf/lib</tt> directory.
+</p>
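+
+<p>
+A quick way to confirm that both modules are present (the installation
+directory below is only an example; substitute your own, and note the
+case-insensitive search because of the "case" issue above) is:
+</p>
+<div class=unix>
+ls /usr/local/netcdf/include | grep -i -e netcdf.mod -e typesizes.mod
+</div>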
+
+<p>
+If the netCDF library does not exist on your system, you must build
+it (as well as the F90 interface modules). The library and instructions
+for building the library or installing from an RPM may be found at
+the netCDF home page:
+<a href="http://www.unidata.ucar.edu/packages/netcdf/">
+http://www.unidata.ucar.edu/packages/netcdf/</a>
+Pay particular attention to the compiler-specific patches that must
+be applied for the Intel Fortran Compiler. (Or the PG compiler, for
+that matter.)
+</p>
+
+<p>
+The location of the netCDF library, <em class=file>libnetcdf.a</em>,
+and the locations of both <em class=file>netcdf.mod</em> and
+<em class=file>typesizes.mod</em> will be needed by the makefile
+template, as described in the <a href="#compiling">compiling</a>
+section. Depending on the netCDF build options, the Fortran 90
+interfaces may be built in a separate library named
+<em class=file>netcdff.a</em> and you may need to add
+<em class=code>-lnetcdff</em> to the library flags.
+</p>
+
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="download"></a>
+<h2>Downloading the distribution.</h2>
+
+<p>
+<strong>HURRAY</strong>! The DART source code is now distributed through
+an anonymous Subversion server! The <strong>big</strong> advantage is
+the ability to patch or update existing code trees at your discretion.
+Subversion (the client-side app
+is '<strong>svn</strong>') allows you to compare your code tree with
+one on a remote server and selectively update individual files or groups of
+files. Furthermore, now everyone has access to any version of any file in
+the project, which is a huge help for developers. I have a brief summary of
+the svn commands I use most posted at:
+<a href="http://www.image.ucar.edu/~thoar/svn_primer.html">
+http://www.image.ucar.edu/~thoar/svn_primer.html</a>
+<br />
+<br />
+The resources to develop and support DART come from our ability to
+demonstrate our growing user base. We ask that you register at our
+download site <a href="http://www.image.ucar.edu/DAReS/DART/DART_download">
+http://www.image.ucar.edu/DAReS/DART/DART_download</a>
+and promise that the information will only be used to notify you
+of new DART releases and shown to our sponsors in an aggregated form:
+"Look - we have three users from Tonawanda, NY". After filling in the form,
+you will be directed to a website that has instructions on how to download
+the code.
+<br />
+<br />
+svn has adopted the strategy that "disk is cheap". In addition to downloading
+the code, it downloads an additional copy of the code to store locally (in
+hidden .svn directories) as well as some administration files. This allows
+svn to perform some commands even when the repository is not available.
+It does double the size of the code tree ... so the download is something
+like 480MB -- pretty big. BUT - all future updates are (usually) just the
+differences, so they happen very quickly.
+<br />
+<br />
+If you follow the instructions on the download site, you should wind up with
+a directory named <em class=file>DART</em>. Compiling the code in this tree
+(as is usually the case) will necessitate much more space.
+<br />
+<br />
+If you cannot use svn, just let me know and I will create a tar file for you.
+svn is so superior to a tar file that a tar file should be considered a last
+resort.
+<br />
+<br />
+The code tree is very "bushy"; there are many directories of support
+routines, etc. but only a few directories involved with the
+customization and installation of the DART software. If you can
+compile and run ONE of the low-order models, you should be able to
+compile and run ANY of the low-order models. For this reason,
+we can focus on the Lorenz 63 model. Consequently, the only
+directories with files to be modified to check the installation
+are:
+ <em class=file>DART/mkmf</em>,
+ <em class=file>DART/models/lorenz_63/work</em>, and
+ <em class=file>DART/matlab</em> (but only for analysis).
+</p>
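+
+<p>
+As a sketch (the actual repository URL is given on the download page after
+you register), a first checkout and a later update look like:
+</p>
+<div class=unix>
+svn checkout <em class=input>URL-from-the-download-page</em> DART<br />
+cd DART<br />
+svn update
+</div>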
+
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="customizations"></a>
+<h2>Customizing the build scripts -- Overview.</h2>
+
+<p>
+DART executable programs are constructed using two tools:
+<em class=program>make</em> and
+<em class=program>mkmf</em>.
+The <em class=program>make</em> utility is a very common
+piece of software that requires a user-defined input file that records
+dependencies between different source files. <em class=program>make</em>
+then performs a hierarchy of actions when one or more of the
+source files is modified. The <em class=program>mkmf</em> utility is
+a custom preprocessor that generates a <em class=program>make</em> input file
+(named <em class=file>Makefile</em>) and an example namelist
+<em class=file>input.nml.<em class=program><i>program</i>_default</em></em>
+with the default values. The <em class=file>Makefile</em> is designed
+specifically to work with object-oriented Fortran90 (and other languages)
+for systems like DART.
+</p>
+
+<p>
+<em class=program>mkmf</em> requires two separate input files.
+The first is a `template' file which specifies details of the commands
+required for a specific Fortran90 compiler and may also contain
+pointers to directories containing pre-compiled utilities required by
+the DART system. <strong>This template file will need to
+be modified to reflect your system</strong>. The second input file is a
+`path_names' file which includes a complete list of the locations
+(either relative or absolute) of all Fortran90 source files that are
+required to produce a particular DART program.
+Each 'path_names' file must contain a path for
+exactly one Fortran90 file containing a main program,
+but may contain any number of additional paths pointing to files
+containing Fortran90 modules.
+An <em class=program>mkmf</em> command is executed which
+uses the 'path_names' file and the mkmf template file to produce a
+<em class=file>Makefile</em> which is subsequently used by the
+standard <em class=program>make</em> utility.
+</p>
+
+<p>
+Shell scripts that execute the mkmf command for all standard
+DART executables are provided as part of the standard DART software.
+For more information on <em class=program>mkmf</em> see
+<a href="http://www.gfdl.gov/fms/pubrel/j/atm_dycores/doc/dycore_public_manual.html#mkmf">
+the FMS mkmf description</a>.
+<br />
+One of the benefits of using <em class=program>mkmf</em> is that it also
+creates an example namelist file for each program. The example namelist is
+called
+<em class=file>input.nml.<em class=program><i>program</i>_default</em></em>,
+so as not to clash with any
+existing <em class=file>input.nml</em> in that directory.
+</p>
+
+<a name="template"></a>
+<h3 class=indent1>Building and Customizing the 'mkmf.template' file</h3>
+
+<p>
+A series of templates for different compilers/architectures exists
+in the <em class=file>DART/mkmf/</em> directory and have names with
+extensions that identify the compiler, the architecture, or both.
+This is how you inform the build process of the specifics of your system.
+Our intent is that you copy one that is similar to your system into
+<em class=file>mkmf.template</em> and customize it.
+For the discussion that follows, knowledge of the contents of one of these
+templates (e.g. <em class=file>mkmf.template.gfortran</em>) is needed.
+Note that only the LAST lines are shown here;
+the head of the file is just a big comment (worth reading, btw).
+</p>
+
+<div class=routine>
+...<br />
+MPIFC = mpif90 <br />
+MPILD = mpif90 <br />
+FC = gfortran <br />
+LD = gfortran <br />
+NETCDF = /usr/local <br />
+INCS = ${NETCDF}/include <br />
+FFLAGS = -O2 -I$(INCS) <br />
+LIBS = -L${NETCDF}/lib -lnetcdf <br />
+LDFLAGS = -I$(INCS) $(LIBS) <br />
+</div>
+
+<p>
+Essentially, each of the lines defines some part of the resulting
+<em class=file>Makefile</em>. Since <em class=program>make</em>
+is particularly good at sorting out dependencies, the order of these
+lines really doesn't make any difference.
+The <em class=code>FC = gfortran</em> line ultimately defines the
+Fortran90 compiler to use, etc.
+The lines which are most likely to need site-specific changes
+start with <em class=code>FFLAGS</em> and <em class=code>NETCDF</em>, which
+indicate where to look for the netCDF F90 modules and the
+location of the netCDF library and modules.
+<br /><br />
+If you have MPI installed on your system, <em class=code>MPIFC</em> and
+<em class=code>MPILD</em> dictate which compiler and loader will be used
+for the MPI builds. If you do not have MPI, these variables are of no consequence.
+</p>
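+
+<p>
+As an illustration only (the compiler name and install path below are
+hypothetical), a site using the Intel compiler with netCDF installed under
+<em class=file>/opt/netcdf</em> might edit those lines to read:
+</p>
+<div class=routine>
+FC = ifort <br />
+LD = ifort <br />
+NETCDF = /opt/netcdf <br />
+</div>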
+
+<a href="netCDF"></a>
+<h4 class=indent2>NETCDF</h4>
+
+<p class=indent1>
+Modifying the <em class=code>NETCDF</em> value should be relatively
+straightforward.<br />
+Change the string to reflect the location of your netCDF installation
+containing <em class=file>netcdf.mod</em> and
+<em class=file>typesizes.mod</em>.
+The value of the <em class=code>NETCDF</em> variable will be used by
+the <em class=code>FFLAGS, LIBS,</em> and <em class=code>LDFLAGS</em>
+variables.<br />
+</p>
+
+
+<a href="fflags"></a>
+<h4 class=indent2>FFLAGS</h4>
+
+<p class=indent1>
+Each compiler has different compile flags, so there is really no way
+to exhaustively cover this other than to say the templates as we supply
+them should work -- depending on the location of your netCDF.
+The low-order models can be compiled without a <em class=code>-r8</em>
+switch, but the <em class=file>bgrid_solo</em> model cannot.
+</p>
+
+<a href="libs"></a>
+<h4 class=indent2>LIBS</h4>
+<p class=indent1>
+The Fortran 90 interfaces may be part of the default
+<em class=file>netcdf.a</em> library and <em class=code>-lnetcdf</em>
+is all you need. However it is also common for the
+Fortran 90
+interfaces to be built in a separate library named
+<em class=file>netcdff.a</em>. In that case you will
+need <em class=code>-lnetcdf</em> and also
+<em class=code>-lnetcdff</em> on the <strong>LIBS</strong> line.
+This is a build-time option when the netCDF libraries
+are compiled so it varies from site to site.
+</p>
+<br />
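+<p class=indent1>
+The two common variants (which one applies depends on how netCDF was built
+at your site) are sketched below:
+</p>
+<div class=routine>
+# F90 interfaces included in the main library:<br />
+LIBS = -L${NETCDF}/lib -lnetcdf<br />
+<br />
+# F90 interfaces built as a separate library:<br />
+LIBS = -L${NETCDF}/lib -lnetcdff -lnetcdf
+</div>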
+
+
+<a name="path_names"></a>
+<h3 class=indent1>Customizing the 'path_names_*' file</h3>
+
+<p>
+Several <em class=file>path_names_*</em> files are provided in
+the <em class=file>work</em> directory for each specific model,
+in this case: <em class=file>DART/models/lorenz_63/work</em>.
+Since each model comes with its own set of files, the <em
+class=file>path_names_*</em> files need no customization.
+</p>
+
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="building"></a>
+<h2>Building the Lorenz_63 DART project.</h2>
+
+<p>DART executables are constructed in a <em class=file>work</em>
+subdirectory under the directory containing code for the given model.
+From the top-level DART directory change to the L63 work
+directory and list the contents:
+</p>
+
+<div class=unix>
+cd DART/models/lorenz_63/work<br />
+ls -1
+</div>
+
+<p>
+With the result:
+</p>
+<pre>
+Posterior_Diag.nc
+Prior_Diag.nc
+True_State.nc
+filter_ics
+filter_restart
+input.nml
+mkmf_create_fixed_network_seq
+mkmf_create_obs_sequence
+mkmf_filter
+mkmf_obs_diag
+mkmf_obs_sequence_tool
+mkmf_perfect_model_obs
+mkmf_preprocess
+mkmf_restart_file_tool
+mkmf_wakeup_filter
+obs_seq.final
+obs_seq.in
+obs_seq.out
+obs_seq.out.average
+obs_seq.out.x
+obs_seq.out.xy
+obs_seq.out.xyz
+obs_seq.out.z
+path_names_create_fixed_network_seq
+path_names_create_obs_sequence
+path_names_filter
+path_names_obs_diag
+path_names_obs_sequence_tool
+path_names_perfect_model_obs
+path_names_preprocess
+path_names_restart_file_tool
+path_names_wakeup_filter
+perfect_ics
+perfect_restart
+quickbuild.csh
+set_def.out
+workshop_setup.csh
+</pre>
+
+<p>
+In all the <em class=file>work</em> directories there
+ will be a
+<em class=file>quickbuild.csh</em> script that
+builds or rebuilds the executables. The following
+instructions do this work by hand to introduce you to
+the individual steps, but in practice running quickbuild
+will be the normal way to do the compiles.
+</p>
+
+<p>
+There are nine <em class=file>mkmf_</em><em class=italic>xxxxxx</em>
+files for the programs
+</p>
+
+<ol><li><em class=program>preprocess</em>,
+ <li><em class=program>create_obs_sequence</em>,
+ <li><em class=program>create_fixed_network_seq</em>,
+ <li><em class=program>perfect_model_obs</em>,
+ <li><em class=program>filter</em>,
+ <li><em class=program>wakeup_filter</em>,
+ <li><em class=program>obs_sequence_tool</em>,
+ <li><em class=program>restart_file_tool</em>, and
+ <li><em class=program>obs_diag</em>,
+</ol>
+
+<p>
+along with the
+corresponding <em class=file>path_names_</em><em class=italic>xxxxxx</em> files.
+There are also files that contain initial conditions, netCDF output, and
+several observation sequence files, all of which will be discussed later.
+You can examine the contents of one of the
+<em class=file>path_names_</em><em class=italic>xxxxxx</em> files,
+for instance <em class=file>path_names_filter</em>, to see a list of
+the relative paths of all files that contain Fortran90 modules
+required for the program <em class=program>filter</em> for
+the L63 model. All of these paths are relative to your
+<em class=file>DART</em> directory.
+The first path is the main program
+(<em class=file>filter.f90</em>) and is followed by all
+the Fortran90 modules used by this program (after preprocessing).
+</p>
+
+<p>
+The <em class=program>mkmf_</em><em class=italic>xxxxxx</em> scripts
+are cryptic but should not need to be modified -- as long as you do not
+restructure the code tree (by moving directories, for example).
+
+The function of the <em class=program>mkmf_</em><em class=italic>xxxxxx</em>
+script is to generate a <em class=file>Makefile</em> and an
+<em class=file>input.nml.<em class=program><i>program</i>_default</em></em>
+file. It does not do the compile; <em class=program>make</em>
+does that:
+</p>
+
+<div class=unix>
+csh mkmf_preprocess<br />
+make
+</div>
+
+<p>
+The first command generates an appropriate <em class=file>Makefile</em> and
+the <em class=file>input.nml.preprocess_default</em> file.
+The second command results in the compilation of a series of
+Fortran90 modules which ultimately produces an executable file:
+<em class=program>preprocess</em>.
+Should you need to make any changes to the
+<em class=file>DART/mkmf/mkmf.template</em>,
+you will need to regenerate the <em class=file>Makefile</em>.
+<br /><br />
+The <em class=program>preprocess</em> program actually builds source code to
+be used by all the remaining modules. It is <strong>imperative</strong> to
+actually <strong>run</strong> <em class=program>preprocess</em> before building
+the remaining executables. This is how the same code can assimilate state
+vector 'observations' for the Lorenz_63 model and real radar reflectivities for WRF
+without needing to specify a set of radar operators for the Lorenz_63 model!
+<br /><br />
+<em class=program>preprocess</em> reads the <em class=code>&preprocess_nml</em>
+namelist to determine what observations and operators to incorporate.
+For this exercise, we will use the values in <em class=file>input.nml</em>.
+<em class=program>preprocess</em> is designed to abort if
+the files it is supposed to build already exist. For this reason, it is necessary
+to remove a couple of files (if they exist) before you run the preprocessor.
+(The <em class=program>quickbuild.csh</em> script will do this for you
+automatically.)
+</p>
+
+<div class=unix>
+<pre>
+\rm -f ../../../obs_def/obs_def_mod.f90
+\rm -f ../../../obs_kind/obs_kind_mod.f90
+./preprocess
+ls -l ../../../obs_def/obs_def_mod.f90
+ls -l ../../../obs_kind/obs_kind_mod.f90
+</pre>
+</div>
+
+<p>
+This created <em class=file>../../../obs_def/obs_def_mod.f90</em> from
+<em class=file>../../../obs_def/DEFAULT_obs_def_mod.F90</em> and several other
+modules. <em class=file>../../../obs_kind/obs_kind_mod.f90</em> was created similarly.
+Now we can build the rest of the project.
+<br /><br />
+The object files for each compiled module will also be
+left in the work directory, since many of them are reused when building
+the other DART executables.
+You can proceed to create the other programs needed to work with
+L63 in DART as follows:
+</p>
+
+<div class=unix>
+csh mkmf_create_obs_sequence<br />
+make<br />
+csh mkmf_create_fixed_network_seq<br />
+make<br />
+csh mkmf_perfect_model_obs<br />
+make<br />
+csh mkmf_filter<br />
+make<br />
+csh mkmf_obs_diag<br />
+make
+</div><br />
+
+<p>
+The result (hopefully) is that six executables now
+reside in your work directory. The most common problem is that the netCDF libraries
+and include files (particularly <em class=file>typesizes.mod</em>) are not found.
+Edit the <em class=file>DART/mkmf/mkmf.template</em>,
+recreate the <em class=file>Makefile</em>, and try again.
+</p>
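+
+<p>
+In other words (paths shown from the <em class=file>work</em> directory;
+use whatever editor you prefer in place of <em class=program>vi</em>),
+the fix-and-retry cycle is simply:
+</p>
+<div class=unix>
+vi ../../../mkmf/mkmf.template<br />
+csh mkmf_filter<br />
+make
+</div>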
+
+<table border=0 cellpadding=1 width=100% summary="executables created">
+<tr><th>program</th><th>purpose</th></tr>
+<tbody valign=top>
+
+<tr><td><em class=program>preprocess</em></td>
+ <td>creates custom source code for just the observation types
+ of interest</td>
+</tr>
+
+<tr><td><em class=program>create_obs_sequence</em></td>
+ <td>specify a (set of) observation characteristics taken
+ by a particular (set of) instruments</td>
+</tr>
+
+<tr><td><em class=program>create_fixed_network_seq</em></td>
+ <td>repeat a set of observations through time to simulate
+ observing networks where observations are taken in
+ the same location at regular (or irregular)
+ intervals</td>
+</tr>
+
+<tr><td><em class=program>perfect_model_obs</em></td>
+ <td>generate "true state" for synthetic observation experiments. Can
+ also be used to 'spin up' a model by running it for a long time.</td>
+</tr>
+
+<tr><td><em class=program>filter</em></td>
+ <td>does the assimilation</td>
+</tr>
+
+<tr><td><em class=program>obs_diag</em></td>
+ <td>creates observation-space diagnostic files to be explored by
+ the Matlab® scripts.</td>
+</tr>
+
+<tr><td><em class=program>obs_sequence_tool</em></td>
+ <td>manipulates observation sequence files. It is not generally needed
+ (particularly for low-order models) but can be used to combine
+ observation sequences or convert from ASCII to binary or vice-versa.
+ We will not cover its use in this document.</td>
+</tr>
+
+<tr><td><em class=program>restart_file_tool</em></td>
+ <td>manipulates the initial condition and restart files.
+ We're going to ignore this one here.</td>
+</tr>
+
+<tr><td><em class=program>wakeup_filter</em></td>
+ <td>is only needed for MPI applications. We're starting at
+ the beginning here, so we're going to ignore this one, too.</td>
+</tr>
+</table>
+
+<!--==================================================================-->
+<!--==================================================================-->
+
+<div><hr /><p align=right><a href="#"><small>[top]</small></a></p></div>
+<a name="Running"></a>
+<h2>Running Lorenz_63.</h2>
+
+<p>
+This initial sequence of exercises includes detailed instructions
+on how to work with the DART code and allows investigation of the
+basic features of one of the most famous dynamical systems, the
+3-variable Lorenz-63 model.
+The remarkable complexity of this simple model will also be used as
+a case study to introduce a number of features of a simple ensemble
+filter data assimilation system.
+To perform a synthetic observation assimilation experiment
+for the L63 model, the following steps must be performed
+(an overview of the process is given first,
+followed by detailed procedures for each step):
+</p>
+
+<h2 class=indent1>Experiment Overview</h2>
+<ol>
+ <li><a href="#integrate">Integrate the L63 model for a long time</a><br />
+ starting from arbitrary initial conditions to generate a model state
+ that lies on the attractor. The ergodic nature of the L63 system
+ means a 'lengthy' integration always converges to some point on
+ the computer's finite precision representation of the model's
+ attractor.<br /><br /></li>
+
+ <li><a href="#ensemblate">Generate a set of ensemble initial conditions</a><br />
+ from which to start an assimilation. Since L63 is ergodic, the
+ ensemble members can be designed to look like random samples from
+ the model's 'climatological distribution'. To generate an ensemble
+ member, very small perturbations can be introduced to the state on
+ the attractor generated by step 1. This perturbed state can then be
+ integrated for a very long time until all memory of its initial
+ condition can be viewed as forgotten. Any number of ensemble
+ initial conditions can be generated by repeating this procedure.<br /><br /></li>
+
+ <li><a href="#simulate">Simulate a particular observing system</a><br />
+ by first creating an 'observation set definition' and then creating
+ an 'observation sequence'. The 'observation set definition' describes the
+ instrumental characteristics of the observations and the 'observation sequence'
+ defines the temporal sequence of the observations.<br /><br /></li>
+
+ <li><a href="#generate">Populate the 'observation sequence' with 'perfect' observations</a><br />
+ by integrating the model and using the information in the
+ 'observation sequence' file to create simulated observations.
+ This entails operating on the model state at the
+ time of the observation with an appropriate forward operator
+ (a function that operates on the model state vector to produce
+ the expected value of the particular observation) and then adding
+ a random sample from the observation error distribution specified
+ in the observation set definition. At the same time, diagnostic
+ output about the 'true' state trajectory can be created.<br /><br /></li>
+
+ <li><a href="#assimilate">Assimilate the synthetic observations</a><br />
+ by running the filter; diagnostic output is generated.</li>
+</ol>
+
+<a name="integrate"></a>
+<h3 class=indent1>1. Integrate the L63 model for a 'long' time.</h3>
+<p>
+<em class=program>perfect_model_obs</em> integrates the model
+for all the times specified in the 'observation sequence definition' file.
+To this end, begin by creating an 'observation sequence definition'
+file that spans a long time. Creating an 'observation sequence definition'
+file is a two-step procedure involving
+<em class=program>create_obs_sequence</em> followed by
+<em class=program>create_fixed_network_seq</em>. After they are both run, it
+is necessary to integrate the model with <em class=program>perfect_model_obs</em>.
+</p>
+
+<h4 class=indent1>1.1 Create an observation set definition.</h4>
+<p>
+<em class=program>create_obs_sequence</em> creates an observation
+set definition, the time-independent part of an observation sequence.
+An observation set definition file only contains the
+<em class=code>location, type,</em>
+and <em class=code>observational error characteristics</em>
+(normally just the diagonal observational error variance)
+for a related set of observations. There are no actual observations,
+nor are there any times associated with the definition.
+For spin-up, we are only interested in integrating the L63 model,
+not in generating any particular synthetic observations.
+Begin by creating a minimal observation set definition.<br />
+<br />
+In general, for the low-order models, only a single observation set need
+be defined. Next, the number of individual scalar observations
+(like a single surface pressure observation) in the set is needed.
+To spin up an initial condition for the L63 model, only a
+single observation is needed.
+Next, the error variance for this observation must be entered.
+Since we do not need (nor want) this observation to have any impact
+on an assimilation (it will only be used for spinning up the model
+and the ensemble), enter a very large value for the error variance.
+An observation with a very large error variance has essentially no
+impact on deterministic filter assimilations like the default variety
+implemented in DART. Finally, the location and type of the
+observation need to be defined. For all types of models,
+the most elementary form of synthetic observation is the
+'identity' observation. These observations are generated simply
+by adding a random sample from a specified observational error
+distribution directly to the value of one of the state variables.
+This defines the observation as being an identity observation of the
+first state variable in the L63 model.
+The program will respond by terminating after generating a file
+(generally named <em class=file>set_def.out</em>)
+that defines the single identity observation of the first
+state variable of the L63 model. The following is a screenshot
+(much of the verbose logging has been left off for clarity),
+the user input looks <em class=input>like this</em>.
+</p>
+
@@ Diff output truncated at 40000 characters. @@