[Dart-dev] [6275] DART/trunk/observations: rename the MODIS directory to 'tpw' since this converter

nancy at ucar.edu nancy at ucar.edu
Tue Jun 25 16:16:46 MDT 2013


Revision: 6275
Author:   nancy
Date:     2013-06-25 16:16:46 -0600 (Tue, 25 Jun 2013)
Log Message:
-----------
rename the MODIS directory to 'tpw' since this converter
starts with ASCII input files instead of reading the HDF-EOS
MODIS files.  rework the code to have more namelist values.
update the superobs code to wrap correctly around the
prime meridian, and average the values in time as well
(instead of using the time of the first obs being
averaged).  write documentation and leave a pointer
to a single data file which can be used for testing.
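
as a side note, the prime meridian issue above is the usual problem of
averaging a cyclic coordinate.  the sketch below is illustrative only
(it is not the actual convert_tpw superob code); it just shows the
behavior the fix is after -- a naive average of longitudes 359.5 and
0.5 gives 180.0, while wrapping the difference first gives 0.0:

   program wrap_avg_example
      ! illustrative only -- not DART code.  average two longitudes
      ! (degrees, 0-360) by wrapping their difference into [-180,180)
      ! so that points on either side of the prime meridian average
      ! to a point near 0 instead of near 180.
      implicit none
      integer, parameter :: r8 = selected_real_kind(12)
      real(r8) :: lon1, lon2, dlon, lon_avg
      lon1 = 359.5_r8
      lon2 =   0.5_r8
      dlon = lon2 - lon1
      if (dlon >  180.0_r8) dlon = dlon - 360.0_r8
      if (dlon < -180.0_r8) dlon = dlon + 360.0_r8
      lon_avg = modulo(lon1 + 0.5_r8*dlon, 360.0_r8)
      print *, 'naive average:   ', 0.5_r8*(lon1 + lon2)   ! 180.0
      print *, 'wrapped average: ', lon_avg                !   0.0
   end program wrap_avg_example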

updated the docs for several of the other observation
converters. the snow doc was basically a copy of the
text_to_obs docs without any changes.  the quikscat
docs had the sense of the thinning backwards (the code
is fine; the description was misleading).  the AIRS
docs didn't include a namelist section at all.

also starting to convert the namelist documentation to show the
actual input format (&bob a=1 b=2 /) instead of the declaration
syntax (namelist /bob/ a, b); see the short example below.
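
for anyone not familiar with the difference, 'bob' is just a
placeholder name, not a real DART namelist.  the old docs showed the
Fortran source declaration; the new docs show the actual text a user
puts in input.nml:

   ! old style shown in the docs: the source-code declaration
   namelist /bob/ a, b

   ! new style shown in the docs: what actually goes in input.nml,
   ! with the default values and a terminating slash
   &bob
      a = 1,
      b = 2,
   /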

Modified Paths:
--------------
    DART/trunk/observations/AIRS/AIRS.html
    DART/trunk/observations/quikscat/QuikSCAT.html
    DART/trunk/observations/quikscat/convert_L2b.nml
    DART/trunk/observations/snow/snow_to_obs.html
    DART/trunk/observations/tpw/convert_tpw.f90
    DART/trunk/observations/tpw/data/README
    DART/trunk/observations/tpw/work/input.nml
    DART/trunk/observations/tpw/work/path_names_convert_tpw

Added Paths:
-----------
    DART/trunk/observations/tpw/
    DART/trunk/observations/tpw/convert_tpw.nml
    DART/trunk/observations/tpw/tpw.html
    DART/trunk/observations/tpw/work/mkmf_advance_time
    DART/trunk/observations/tpw/work/mkmf_obs_seq_to_netcdf
    DART/trunk/observations/tpw/work/mkmf_obs_sequence_tool
    DART/trunk/observations/tpw/work/path_names_advance_time
    DART/trunk/observations/tpw/work/path_names_obs_seq_to_netcdf
    DART/trunk/observations/tpw/work/path_names_obs_sequence_tool

Removed Paths:
-------------
    DART/trunk/observations/MODIS/
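
for reference, the time binning in the reworked convert_tpw converter
(see the diff below) is controlled by four namelist values, all in
fractional hours.  this is just an example using the defaults from the
code, with the other namelist items omitted, not a recommendation:

   &convert_tpw_nml
      time_bin_start      =  0.00,
      time_bin_interval   =  0.50,
      time_bin_half_width =  0.25,
      time_bin_end        = 24.00,
   /

with these defaults the new code bins the obs into half-hour-spaced
bins (centers at 0Z, 0.5Z, 1.0Z, ...), each bin extending plus or
minus 15 minutes around its center.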

-------------- next part --------------
Modified: DART/trunk/observations/AIRS/AIRS.html
===================================================================
--- DART/trunk/observations/AIRS/AIRS.html	2013-06-21 16:26:20 UTC (rev 6274)
+++ DART/trunk/observations/AIRS/AIRS.html	2013-06-25 22:16:46 UTC (rev 6275)
@@ -26,7 +26,6 @@
 
 <A HREF="#DataSources">DATA SOURCES</A> /
 <A HREF="#Programs">PROGRAMS</A> / 
-<A HREF="#Modules">MODULES</A> /
 <A HREF="#Namelist">NAMELIST</A> /
 <A HREF="#Errors">ERRORS</A> /
 <A HREF="#FuturePlans">FUTURE PLANS</A> /
@@ -226,6 +225,15 @@
 into larger time periods.
 </P>
 <P>
+It is possible to restrict the output observation sequence to contain
+data from a region of interest through the use of the namelist
+parameters. If you need a region that spans the Prime Meridian,
+lon1 can be a larger number than lon2;
+for example, a region from 300 E to 40 E and 60 S to 30 S (some of 
+the South Atlantic) would be <em class="input">lon1 = 300, lon2 =
+40, lat1 = -60, lat2 = -30</em>.
+</P>
+<P>
 The scripts directory here includes some shell scripts that
 make use of the fact that the AIRS data is also archived on the NCAR
 HPSS (tape library) in daily tar files.  The script has options 
@@ -340,6 +348,135 @@
 </blockquote>
 
 <!--==================================================================-->
+<!-- Namelist                                                         -->
+<!--==================================================================-->
+
+<A NAME="Namelist"></A>
+<HR />
+<H2>NAMELIST</H2>
+<P>This namelist is read from the file <em class=file>input.nml</em>.
+We adhere to the F90 standard of starting a namelist with an ampersand
+'&amp;' and terminating with a slash '/' for all our namelist input.
+Character strings that contain a '/' must be
+enclosed in quotes to prevent them from prematurely terminating the namelist.
+</P>
+<div class=namelist>
+<pre>
+&amp;convert_airs_L2_nml
+   l2_files           = 'input.hdf',
+   l2_file_list       = '',
+   datadir            = '.',
+   outputdir          = '.',
+   lon1               =   0.0,
+   lon2               = 360.0,
+   lat1               = -90.0,
+   lat2               =  90.0,
+   min_MMR_threshold  = 1.0e-30,
+   top_pressure_level = 0.0001,
+   cross_track_thin   = 0,
+   along_track_thin   = 0,
+/
+</pre>
+</div>
+
+<P> </P>
+
+<div>
+<TABLE border=0 cellpadding=3 width=100%>
+<THEAD align=left>
+<TR><TH>Contents    </TH>
+    <TH>Type        </TH>
+    <TH>Description </TH></TR>
+</THEAD>
+
+<TBODY valign=top>
+<TR>
+ <TD> l2_files </TD>
+ <TD> character(len=128) (:) </TD>
+ <TD> A list of one or more names of the HDF file(s) to 
+read, NOT including the directory. If multiple files
+are listed, each will be read and the results will
+be placed in a separate file with an output filename
+constructed based on the input filename.
+</TD></TR>
+
+<TR>
+ <TD> l2_file_list </TD>
+ <TD> character(len=128) </TD>
+ <TD> The name of an ASCII text file which contains one filename
+per line, NOT including the directory.  Each file will be read
+and the observations converted into an output file where the
+output filename is based on the input filename.  Only one
+of 'l2_files' and 'l2_file_list' can be specified.  The other
+must be ' ' (empty). 
+</TD></TR>
+
+<TR>
+ <TD> datadir </TD>
+ <TD> character(len=128) </TD>
+ <TD> The directory containing the HDF files
+</TD></TR>
+
+<TR>
+ <TD> outputdir </TD>
+ <TD> character(len=128) </TD>
+ <TD> The directory for the output observation sequence files.
+</TD></TR>
+
+<TR>
+ <TD> lon1 </TD>
+ <TD> real(r4) </TD>
+ <TD> the West-most longitude of interest in degrees. [0.0, 360]
+</TD></TR>
+
+<TR>
+ <TD> lon2 </TD>
+ <TD> real(r4) </TD>
+ <TD> the East-most longitude of interest in degrees. [0.0, 360]
+</TD></TR>
+
+<TR>
+ <TD> lat1 </TD>
+ <TD> real(r4) </TD>
+ <TD> the South-most latitude of interest in degrees. [-90.0, 90.0]
+</TD></TR>
+
+<TR>
+ <TD> lat2 </TD>
+ <TD> real(r8) </TD>
+ <TD> the North-most latitude of interest in degrees. [-90.0, 90.0]
+</TD></TR>
+
+<TR>
+ <TD> min_MMR_threshold </TD>
+ <TD> real(r8) </TD>
+ <TD> The data files contain 'Retrieved Water Vapor Mass Mixing Ratio' values.
+This is the minimum value, in g/kg, that will be converted into a
+specific humidity observation.
+</TD></TR>
+
+<TR>
+ <TD>cross_track_thin     </TD>
+ <TD>integer              </TD>
+ <TD>provides the ability to thin the data by keeping only 
+every Nth data value in a particular row, e.g. 3 == keep every third value.
+</TD></TR>
+
+<TR> 
+ <TD>along_track_thin     </TD>
+ <TD>integer              </TD>
+ <TD>provides the ability to thin the data by keeping only 
+every Nth row, e.g. 4 == keep only every 4th row.
+</TD></TR>
+
+</TBODY>
+</TABLE>
+
+</div>
+<br />
+
+<!--==================================================================-->
 <!-- Describe the bugs.                                               -->
 <!--==================================================================-->
 
@@ -347,7 +484,13 @@
 <BR /><HR /><BR />
 <H2>KNOWN BUGS</H2>
 <P>
-none
+Earlier versions of this converter mistakenly put the moisture obs
+at level heights, in the same locations as the temperature observations.
+Each moisture observation is in fact a value integrated across the
+layer between two levels,
+so the location was shifted 1/2 level in the vertical from 
+the center of the layer.  The fixed converter outputs the location
+at the center, in log space, of each layer.
 </P>
 
 <!--==================================================================-->
@@ -358,7 +501,9 @@
 <BR /><HR /><BR />
 <H2>FUTURE PLANS</H2>
 <P>
-none
+If a more accurate moisture observation were needed, the observation value
+could be computed by integrating multiple values between the levels.
+At this point it doesn't seem necessary.
 </P>
 
 <!--==================================================================-->

Modified: DART/trunk/observations/quikscat/QuikSCAT.html
===================================================================
--- DART/trunk/observations/quikscat/QuikSCAT.html	2013-06-21 16:26:20 UTC (rev 6274)
+++ DART/trunk/observations/quikscat/QuikSCAT.html	2013-06-25 22:16:46 UTC (rev 6275)
@@ -193,89 +193,91 @@
 <A NAME="Namelist"></A>
 <HR />
 <H2>NAMELIST</H2>
-<P>We adhere to the F90 standard of starting a namelist with an ampersand
+<P>This namelist is read from the file <em class=file>input.nml</em>.
+We adhere to the F90 standard of starting a namelist with an ampersand
 '&amp;' and terminating with a slash '/' for all our namelist input.
-Consider yourself forewarned that character strings that contain a '/' must be
+Character strings that contain a '/' must be
 enclosed in quotes to prevent them from prematurely terminating the namelist.
-The namelist declaration (i.e. what follows) has a different syntax, naturally.
+The following values are the defaults for these namelist items.
 </P>
 <div class=namelist>
 <pre>
-<em class=call>namelist / convert_L2b_nml / </em> &amp;
-    l2b_file, datadir, outputdir, lon1, lon2, lat1, lat2, &amp;
-    along_track_thin, cross_track_thin 
+&amp;convert_L2b_nml
+   l2b_file = '',
+   datadir = '.',
+   outputdir = '.',
+   lon1 = 0.0, 
+   lon2 = 360.0, 
+   lat1 = -90.0, 
+   lat2 = 90.0,
+   along_track_thin = 0,
+   cross_track_thin = 0
+ /
 </pre>
 </div>
 
-<div class=indent1><!-- Description -->
+<P></P>
 
+<div>
+
 <P>
-This namelist is read in a file called <em class=file>input.nml</em>.
-</P>
-<P>
 It is possible to restrict the output observation sequence to contain
 data from a region of interest through the use of the namelist
-parameters. If you need a region that spans the Prime Meridian,
+parameters. If you need a region that spans the Prime Meridian,
+lon1 can be a larger number than lon2;
 for example, a region from 300 E to 40 E and 60 S to 30 S (some of 
-the South Atlantic), simply specify <em class="input">lon1 = 300, lon2 =
+the South Atlantic) would be <em class="input">lon1 = 300, lon2 =
 40, lat1 = -60, lat2 = -30</em>.
 </P>
 
-<TABLE border=0 cellpadding=3 width=100%>
+<TABLE border=0 cellpadding=3 width=100% summary='QuikSCAT namelist description'>
+<THEAD align=left>
+<TR><TH>Contents    </TH>
+    <TH>Type        </TH>
+    <TH>Description </TH></TR>
+</THEAD>
 
-<TR><TH align=left>Contents    </TH>
-    <TH align=left>Type        </TH>
-    <TH align=left>Description </TH></TR>
+<TBODY valign=top>
+<TR><TD>l2b_file             </TD>
+    <TD>character(len=128)   </TD>
+    <TD>name of the HDF file to read - NOT including 
+        the directory, e.g. QS_S2B44444.20080021548</TD></TR>
 
-<TR><!--contents--><TD valign=top>l2b_file             </TD>
-    <!--  type  --><TD valign=top>character(len=128)   </TD>
-    <!--descript--><TD>name of the HDF file to read - NOT including 
-                       the directory, e.g. QS_S2B44444.20080021548
-                     <em class="unit">[default:'']</em></TD></TR>
+<TR><TD>datadir              </TD>
+    <TD>character(len=128)   </TD>
+    <TD>the directory containing the HDF files</TD></TR>
 
-<TR><!--contents--><TD valign=top>datadir              </TD>
-    <!--  type  --><TD valign=top>character(len=128)   </TD>
-    <!--descript--><TD>the directory containing the HDF files
-                     <em class="unit">[default:'.']</em></TD></TR>
+<TR><TD>outputdir            </TD>
+    <TD>character(len=128)   </TD>
+    <TD>the directory for the output observation sequence files.</TD></TR>
 
-<TR><!--contents--><TD valign=top>outputdir            </TD>
-    <!--  type  --><TD valign=top>character(len=128)   </TD>
-    <!--descript--><TD>the directory for the output observation
-                       sequence files.
-                     <em class="unit">[default:'.']</em></TD></TR>
+<TR><TD>lon1                 </TD>
+    <TD>real(r4)             </TD>
+    <TD>the West-most longitude of interest in degrees. [0.0, 360]</TD></TR>
 
-<TR><!--contents--><TD valign=top>lon1                 </TD>
-    <!--  type  --><TD valign=top>real(r4)             </TD>
-    <!--descript--><TD>the West-most longitude of interest. [0.0, 360]
-                     <em class="unit">[default:0.0]</em></TD></TR>
+<TR><TD>lon2                 </TD>
+    <TD>real(r4)             </TD>
+    <TD>the East-most longitude of interest in degrees. [0.0, 360]</TD></TR>
 
-<TR><!--contents--><TD valign=top>lon2                 </TD>
-    <!--  type  --><TD valign=top>real(r4)             </TD>
-    <!--descript--><TD>the East-most longitude of interest. [0.0, 360]
-                     <em class="unit">[default:360.0]</em></TD></TR>
+<TR><TD>lat1                 </TD>
+    <TD>real(r4)             </TD>
+    <TD>the South-most latitude of interest in degrees. [-90.0, 90.0]</TD></TR>
 
-<TR><!--contents--><TD valign=top>lat1                 </TD>
-    <!--  type  --><TD valign=top>real(r4)             </TD>
-    <!--descript--><TD>the South-most latitude of interest. [-90.0, 90.0]
-                     <em class="unit">[default:-90.0]</em></TD></TR>
+<TR><TD>lat2                 </TD>
+    <TD>real(r8)             </TD>
+    <TD>the North-most latitude of interest in degrees. [-90.0, 90.0]</TD></TR>
 
-<TR><!--contents--><TD valign=top>lat2                 </TD>
-    <!--  type  --><TD valign=top>real(r8)             </TD>
-    <!--descript--><TD>the North-most latitude of interest. [-90.0, 90.0]
-                     <em class="unit">[default:90.0</em></TD></TR>
+<TR><TD>along_track_thin     </TD>
+    <TD>integer              </TD>
+    <TD>provides the ability to thin the data by keeping only
+        every Nth row, e.g. 3 == keep every 3rd row.</TD></TR>
 
-<TR><!--contents--><TD valign=top>along_track_thin     </TD>
-    <!--  type  --><TD valign=top>integer              </TD>
-    <!--descript--><TD>provides ability to thin the data by ignoring 
-                       entire WVC rows. 2 == ignore every other row.
-                     <em class="unit">[default:0</em></TD></TR>
-
-<TR><!--contents--><TD valign=top>cross_track_thin     </TD>
-    <!--  type  --><TD valign=top>integer              </TD>
-    <!--descript--><TD>provides ability to thin the data by ignoring 
-                       wind vector cells in a particular row. 
-                       2 == ignore every other cell.
-                     <em class="unit">[default:0</em></TD></TR>
+<TR><TD>cross_track_thin     </TD>
+    <TD>integer              </TD>
+    <TD>provides the ability to thin the data by keeping only every Nth 
+        wind vector cell in a particular row, 
+        e.g. 5 == keep every 5th cell.</TD></TR>
+</TBODY>
 </TABLE>
 
 </div>
@@ -300,7 +302,7 @@
 <HR />
 <H2>FUTURE PLANS</H2>
 <ol>
-   <li>There is one bit of error-checking that I did not survive the
+   <li>There is one bit of error-checking that did not survive the
        conversion from F77 to F90. I need to restore the check that the
        HDF file being read is a 'Level 2B' product.</li>
    <li>There is a lot of error-checking that is not being done. I need
@@ -309,10 +311,14 @@
        highest-ranked ambiguity.</li>
    <li>We need namelist options to select more QC flags - not 
        just the ones with the 'perfect' QC value of 0</li>
-   <li>The ability to leave the observations as speed and direction
+   <li>Add an option to leave the observations as speed and direction
        instead of converting them to U,V components. This is a 
        natural implementation of the instrument error 
-       characteristics.</li>
+       characteristics. However, it would require writing a specialized 
+       forward operator in order to assimilate obs of this type (speed,
+       direction), and there is still a numerical problem with trying to 
+       do the statistics required during the assimilation of a 
+       cyclic direction value.</li>
 </ol>
 
 <!--==================================================================-->

Modified: DART/trunk/observations/quikscat/convert_L2b.nml
===================================================================
--- DART/trunk/observations/quikscat/convert_L2b.nml	2013-06-21 16:26:20 UTC (rev 6274)
+++ DART/trunk/observations/quikscat/convert_L2b.nml	2013-06-25 22:16:46 UTC (rev 6275)
@@ -1,9 +1,11 @@
 &convert_L2b_nml
-    l2b_file = 'QS_S2B44444.20080021548',
-     datadir = '.',
-   outputdir = '.',
-   lon1 = 0.0, lon2 = 360.0, 
-   lat1 = -90.0, lat2 = 90.0,
+   l2b_file         = '',
+   datadir          = '.',
+   outputdir        = '.',
+   lon1             = 0.0, 
+   lon2             = 360.0, 
+   lat1             = -90.0, 
+   lat2             = 90.0,
    along_track_thin = 0,
    cross_track_thin = 0
  /

Modified: DART/trunk/observations/snow/snow_to_obs.html
===================================================================
--- DART/trunk/observations/snow/snow_to_obs.html	2013-06-21 16:26:20 UTC (rev 6274)
+++ DART/trunk/observations/snow/snow_to_obs.html	2013-06-25 22:16:46 UTC (rev 6275)
@@ -2,14 +2,14 @@
           "http://www.w3.org/TR/html4/strict.dtd">
 <HTML>
 <HEAD>
-<TITLE>program text_to_obs</TITLE>
+<TITLE>Snow observations</TITLE>
 <link rel="stylesheet" type="text/css" href="../../doc/html/doc.css">
 <link href="../../doc/html/dart.ico" rel="shortcut icon" />
 </HEAD>
 <BODY>
 <A NAME="TOP"></A>
 
-<H1>PROGRAM <em class=program>text_to_obs</em></H1>
+<H1>PROGRAM <em class=program>snow_to_obs</em></H1>
 
 <table border=0 summary="" cellpadding=5>
 <tr>
@@ -27,47 +27,22 @@
 
 <A HREF="#DataSources">DATA SOURCES</A> /
 <A HREF="#Programs">PROGRAMS</A> /
-<A HREF="#Decisions">DECISIONS</A> /
-<A HREF="#References">REFERENCES</A> /
+<A HREF="#Namelist">NAMELIST</A> /
 <A HREF="#Errors">ERRORS</A> /
 <A HREF="#FuturePlans">PLANS</A> / 
 <A HREF="#Legalese">TERMS OF USE</A>
 
-<H1>Overview</H1>
 
-<H4>Text File to DART Converter</H4>
+<H2>MODIS Snowcover Fraction Observation Converter</H2>
 
-<P>
-If you have observations in spreadsheet or column format, in text, 
-with a single line per observation, then the files this directory
-are a template for how to convert these observations
-into a format suitable for DART use.
+<H4>Overview</H4>
+
+<P>There are several satellite sources for snow observations.  
+Generally the data is distributed in HDF-EOS format.  The converter
+code in this directory DOES NOT READ HDF FILES as input.
+It expects the files to have been preprocessed to contain text,
+one line per observation, with northern hemisphere data only.
 </P>
-<P>
-The workflow is usually: 
-</P>
-<ul>
-<li>read in the needed information about each
-observation - location, time, data value, observation type - from 
-a data source (usually a file)</li>
-<li>call a series of DART library routines to construct a derived type that
-contains all the information about a single observation</li>
-<li>call another set of DART library
-routines to put it into a time-sorted series</li>
-<li>repeat the last 2 steps until all observations are processed</li>
-<li>finally, call a write subroutine that writes out the entire series to a file in 
-a format that DART can read in</li>
-</ul>
-<P>
-It is not recommended that you try to mimic the ascii file format
-by other means; the format is subject to change and the library routines
-will continue to be supported even if the physical format changes.
-</P>
-<P>
-If your input data is in some kind of format like netCDF or HDF,
-then one of the other converters (e.g. the MADIS ones for netCDF) might
-be a better starting place for adapting code.
-</P>
 
 
 <!--==================================================================-->
@@ -77,12 +52,7 @@
 <H2>DATA SOURCES</H2>
 
 <P>
-This part is up to you.  
-For each observation you will need a location, a data value,
-a type, a time, and some kind of error estimate.  The error 
-estimate can be hardcoded in the converter if they are not 
-available in the input data.  See below for more details on 
-selecting an appropriate error value.
+not sure.
 </P>
 
 <!--==================================================================-->
@@ -91,13 +61,8 @@
 <HR />
 <H2>PROGRAMS</H2>
 <P>
-The <em class=file>text_to_obs.f90</em> file is the source
+The <em class=file>snow_to_obs.f90</em> file is the source
 for the main converter program.
-Look at the source code where it reads the example data file.  You will
-almost certainly need to change the "read" statement to match your data
-format.  The example code 
-reads each text line into a character buffer
-and then reads from that buffer to parse up the data items.
 </P> <P>
 To compile and test,
 go into the work subdirectory and run the <em class=file>quickbuild.csh</em>
@@ -108,42 +73,105 @@
 once they have been created.
 </P>
 <P>
-To change the observation types, look in the 
-<em class=file>DART/obs_def</em> directory.  If you can
-find an obs_def_XXX_mod.f90 file with an appropriate set
-of observation types, change the 'use' lines in the converter
-source to include those types.  Then add that filename in the 
-<em class=file>input.nml</em> namelist file
-to the &amp;preprocess_nml namelist, the 'input_files' variable.
-Multiple files can be listed.  Then run quickbuild.csh again.
-It remakes the table of supported observation types before 
-trying to recompile the source code.
+This converter creates observations of the "MODIS_SNOWCOVER_FRAC" type.
 </P>
 <P>
-An example script for converting batches of files is
-in the <em class=file>shell_scripts</em> directory. A tiny example
-data file is in the <em class=file>data</em> directory.
-These are <em>NOT</em> intended to be turnkey scripts; they will
-certainly need to be customized for your use.  There are comments
-at the top of the script saying what options they include, and
-should be commented enough to indicate where changes will be
-likely to need to be made.
+There is another program in this directory called 
+<em class=file>snow_to_obs_netcdf.f90</em> which is a prototype for
+reading netcdf files that contain some metadata and presumably have
+been converted from the original HDF.  THIS HAS NOT BEEN TESTED but
+if you have such data, please contact 
+<a href='mailto:dart at ucar.edu'>dart at ucar.edu</a>
+for more assistance.  If you write something that reads the HDF-EOS
+MODIS files directly, please, please contact us!  Thanks.
 </P>
 
 <!--==================================================================-->
 
-<A NAME="Decisions"></A>
+<A NAME="Namelist"></A>
 <HR />
-<H2>DECISIONS YOU MIGHT NEED TO MAKE</H2>
-
+<H2>NAMELIST</H2>
 <P>
-See the discussion in the
-<a href="../observations.html#Decisions">observations introduction</a> 
-page about what options are available for the things you need to
-specify.  These include setting a time, specifying an expected error,
-setting a location, and an observation type.
+This namelist is read from the file <em class=file>input.nml</em>.
+Namelists start with an ampersand
+'&amp;' and terminate with a slash '/'.
+Character strings that contain a '/' must be
+enclosed in quotes to prevent them from
+prematurely terminating the namelist.
 </P>
 
+<div class=namelist>
+<pre>
+&amp;snow_to_obs_nml
+  longrid         = 360,
+  latgrid         = 90, 
+  year            = 2000, 
+  doy             = 1,
+  snow_input_file = 'snowdata.input', 
+  missing_value   = -20.0, 
+  debug           = .false.
+/
+</pre>
+</div>
+
+<TABLE border=0 cellpadding=10 width=100% summary='snow_to_obs namelist description'>
+<THEAD align=left>
+<TR><TH> Item </TH>
+    <TH> Type </TH>
+    <TH> Description </TH> </TR>
+</THEAD>
+
+<TBODY valign=top>
+<TR>
+ <TD> longrid </TD>
+ <TD> integer </TD>
+ <TD> The number of divisions in the longitude dimension.
+ </TD> </TR>
+
+<TR>
+ <TD> latgrid </TD>
+ <TD> integer </TD>
+ <TD> The number of divisions in the latitude dimension.
+This converter assumes the data is for the northern hemisphere
+only.  A namelist item could be added to select the northern 
+versus the southern hemisphere if needed.
+ </TD> </TR>
+
+<TR>
+ <TD> year </TD>
+ <TD> integer </TD>
+ <TD> The year number of the data.
+ </TD> </TR>
+
+<TR>
+ <TD> doy </TD>
+ <TD> integer </TD>
+ <TD> The day number in the year.  Valid range 1 to 365 in a non-leap year,
+1 to 366 in a leap year.
+ </TD> </TR>
+
+<TR>
+ <TD> snow_input_file </TD>
+ <TD> character(len=128) </TD>
+ <TD> The name of the input file.
+ </TD> </TR>
+
+<TR>
+ <TD> missing_value </TD>
+ <TD> real(r8) </TD>
+ <TD> The value used to mark missing data.
+ </TD> </TR>
+
+<TR>
+ <TD> debug </TD>
+ <TD> logical </TD>
+ <TD> If set to .true. the converter will print out
+more information as it does the conversion.
+ </TD> </TR>
+
+</TBODY>
+</TABLE>
+
 <!--==================================================================-->
 <!-- Describe the bugs.                                               -->
 <!--==================================================================-->
@@ -152,7 +180,8 @@
 <HR />
 <H2>KNOWN BUGS</H2>
 <P>
-none
+This program is hardcoded to read only northern hemisphere data.
+It should handle global values.
 </P>
 
 <!--==================================================================-->
@@ -163,7 +192,10 @@
 <HR />
 <H2>FUTURE PLANS</H2>
 <P>
-none
+This program should use the HDF-EOS libraries to read the
+native MODIS granule files.  Right now the ASCII intermediate
+files contain no metadata, so if the namelist values don't
+match the actual division of the globe, bad things will happen.
 </P>
 
 <!--==================================================================-->

Modified: DART/trunk/observations/tpw/convert_tpw.f90
===================================================================
--- DART/trunk/observations/MODIS/convert_tpw.f90	2013-06-21 16:26:20 UTC (rev 6274)
+++ DART/trunk/observations/tpw/convert_tpw.f90	2013-06-25 22:16:46 UTC (rev 6275)
@@ -1,4 +1,4 @@
-! DART software - Copyright 2004 - 2013 UCAR. This open source software is
+! DART software - Copyright 2005 - 2013 UCAR. This open source software is
 ! provided by UCAR, "as is", without charge, subject to all terms of use at
 ! http://www.image.ucar.edu/DAReS/DART/DART_download
 !
@@ -7,28 +7,65 @@
 program convert_tpw
 
 ! convert MODIS observations of Total Precipitable Water into
-! DART observation sequence files.
+! DART observation sequence files.  since the original observations
+! are very dense, this program has options to bin both in time and
+! in space.  THIS CODE ASSUMES ALL ELEVATIONS ARE AT 0.0 meters.
+! this is true for all ocean obs.  if you have land obs, you will
+! have to get the surface elevation at the lat/lon and
+! use that instead.
+!
+! assumes data is in text files which match the following read line:
+!      read(iunit, '(f11.6, f13.5, f10.4, 4x, i4, 4i3, f7.3)') &
+!                lat, lon, tpw, iyear, imonth, iday, ihour, imin, seconds
+!
+! constructs an input filename based on:
+!  ObsBase/InfilePrefix + YYYYMMDD + InfileSuffix
+!
+! constructs an output filename based on:
+!  ./OutfilePrefix + YYYYMMDD + OutfileSuffix
+!
+! any of the prefixes or suffixes can be '' (blank)
+! 
+! FIXME:
+! cannot loop past month boundaries.  also does not handle time windows
+! that span day boundaries, in the sense that it reads one day at a time
+! and outputs one obs_seq file for each day.  if a time bin crosses the
+! day boundary it should read in the next day, keep binning the obs for
+! the rest of the current time bin, and write out an obs_seq file that
+! has a full bin of all available observations.  the current code starts
+! at 0Z and ends at 0Z and so you should construct your time bins carefully
+! around the day boundaries.
+!
 
-use        types_mod, only : r8, metadatalength
-use time_manager_mod, only : time_type, set_calendar_type, GREGORIAN, set_time,&
-                             increment_time, get_time, set_date, operator(-),  &
-                             print_date
-use    utilities_mod, only : initialize_utilities, find_namelist_in_file,    &
-                             check_namelist_read, nmlfileunit, do_nml_file,   &
-                             get_next_filename, error_handler, E_ERR, E_MSG, &
-                             nc_check, find_textfile_dims, do_nml_term
-use     location_mod, only : VERTISSURFACE, set_location
-use obs_sequence_mod, only : obs_sequence_type, obs_type, read_obs_seq,       &
-                             static_init_obs_sequence, init_obs, destroy_obs, &
-                             write_obs_seq, init_obs_sequence, get_num_obs,   &
-                             insert_obs_in_seq, destroy_obs_sequence,         &
-                             set_copy_meta_data, set_qc_meta_data, set_qc,    &
-                             set_obs_values, set_obs_def, insert_obs_in_seq
-use obs_def_mod,      only : obs_def_type, set_obs_def_time, set_obs_def_kind, &
-                             set_obs_def_error_variance, set_obs_def_location, &
-                             set_obs_def_key
-use     obs_kind_mod, only : MODIS_TOTAL_PRECIPITABLE_WATER
 
+use         types_mod, only : r8, metadatalength, missing_r8
+use  time_manager_mod, only : time_type, set_calendar_type, GREGORIAN, set_time,&
+                              increment_time, get_time, set_date, operator(-),  &
+                              print_date, decrement_time, operator(>), print_time
+use     utilities_mod, only : initialize_utilities, find_namelist_in_file,      &
+                              check_namelist_read, nmlfileunit, do_nml_file,    &
+                              get_next_filename, error_handler, E_ERR, E_MSG,   &
+                              nc_check, find_textfile_dims, do_nml_term,        &
+                              is_longitude_between, finalize_utilities,         &
+                              open_file, close_file, register_module
+use      location_mod, only : VERTISSURFACE, set_location
+use  obs_sequence_mod, only : obs_sequence_type, obs_type, read_obs_seq,        &
+                              static_init_obs_sequence, init_obs, destroy_obs,  &
+                              write_obs_seq, init_obs_sequence, get_num_obs,    &
+                              insert_obs_in_seq, destroy_obs_sequence,          &
+                              set_copy_meta_data, set_qc_meta_data, set_qc,     &
+                              set_obs_values, set_obs_def, insert_obs_in_seq
+use       obs_def_mod, only : obs_def_type, set_obs_def_time, set_obs_def_kind, &
+                              set_obs_def_error_variance, set_obs_def_location, &
+                              set_obs_def_key
+use      obs_kind_mod, only :  AQUA_TOTAL_PRECIPITABLE_WATER,  &
+                              TERRA_TOTAL_PRECIPITABLE_WATER,  &
+                               AMSR_TOTAL_PRECIPITABLE_WATER,  &
+                              MODIS_TOTAL_PRECIPITABLE_WATER,  &
+                              get_obs_kind_index
+use obs_utilities_mod, only : create_3d_obs, add_obs_to_seq
+
+
 use           netcdf
 
 implicit none
@@ -39,292 +76,375 @@
 character(len=32 ), parameter :: revision = "$Revision$"
 character(len=128), parameter :: revdate  = "$Date$"
 
-integer, parameter ::   num_copies = 1,   &   ! number of copies in sequence
-                        num_qc     = 1,   &   ! number of QC entries
-                        num_new_obs= 5000     ! number of observations 
+!--------------------------------------
 
-character(len = 129) :: ObsBase = '/home/hliu/data'
-character (len=metadatalength) :: meta_data
-character (len=129) :: msgstring, next_infile
-character (len=80)  :: name
-character (len=19)  :: datestr
-character (len=6)   :: subset
 
-type(obs_def_type)      :: obs_def
 type(obs_sequence_type) :: obs_seq
 type(obs_type)          :: obs, prev_obs 
-type(time_type)         :: time_obs
+type(time_type)         :: obs_time, prev_time
+type(time_type)         :: base_time, time_diff
 
-!--------------------------------------
-      integer  :: i, ibin
-      integer  :: year, month, day, hour, minute, tot_days, ierr, io
-      integer  :: iyear, imonth, iday, ihour, imin, isec
-      integer leng, nrec, nlev, idd, max_rec
-      parameter (leng=9, max_rec= 900000)
+integer  :: ibin, nr, io, obstype
+integer  :: iyear, imonth, iday, ihour, imin
+integer  :: nrec, idd, iunit, num_obs, num_avg
+integer  :: nbin, kk, nn, oday, osec, sec_diff
 
-      real(r8) lon, lat, tpw, time, obserr, seconds
-      real(r8) prof(leng), prof0(leng), prof_ave(leng)
-      real(r8) prof_all(max_rec, leng), prof_in(max_rec, leng), &
-               lon_in(max_rec), lat_in(max_rec)
-      real(r8) missing 
-      real(r8) :: obs_val(1), qc_val(1), hght
-      logical :: first_obs
+integer, allocatable :: this_bin(:)
 
-      integer  iqc, ipc , itype, num, numq, j, k, nr, iuf
-      real(r8)  lati, latk, loni, lonk
-      character infile*100
-      character(len = 8 ):: obsdate
+integer, parameter :: ncol       = 9
+integer, parameter :: num_copies = 1
+integer, parameter :: num_qc     = 1
 
-      integer  :: nloc, navg, kk, nn, obs_num, oday, osec
-      real(r8) :: nu, oerr, qc, dlat, dlon
-      real(r8) :: bin_beg, bin_end
-      integer  :: bin_start, bin_interval, bin_half_width
-      character(len=128) :: out_file
+real(r8) :: oerr, qc
+real(r8) :: bin_beg, bin_end
+real(r8) :: lon, lat, tpw, seconds
+real(r8) :: lonk, dlat, hght
+real(r8) :: nloni, nlonk
 
-      namelist /convert_tpw_nml/year, month, day, tot_days, bin_start, bin_interval, bin_half_width, ObsBase
+logical  :: first_obs, lon_between
 
-! Read namelist ...
+character(len = 8)   :: obsdate
+character(len = 256) :: msgstring, infile, outfile
 
-      open (10, file='input.nml')
-      ierr = 1
+type tpw_type
+  real(r8) :: lat
+  real(r8) :: lon
+  real(r8) :: tpw_value
+  type(time_type) :: time
+  real(r8) :: fract_hours
+  logical  :: been_used
+end type
 
-      do while(ierr /= 0)
-      read(10, nml = convert_tpw_nml, iostat = io, end = 11)
-      enddo
-   11 continue
-      close(10)
+type(tpw_type), allocatable :: tpw_all(:)
+type(tpw_type)              :: tpw_base, tpw_next, tpw_avg
 
-      missing = -9999.9
-!--------------------------------------
-      do 3333 idd =day, day+ tot_days
-!--------------------------------------
+! items in namelist, along with default values
+integer  :: start_year  = 2008
+integer  :: start_month = 1
+integer  :: start_day   = 1
+integer  :: total_days  = 31
+integer  :: max_obs     = 150000
+real(r8) :: time_bin_start      =  0.00_r8  ! fractional hours
+real(r8) :: time_bin_interval   =  0.50_r8  ! fractional hours
+real(r8) :: time_bin_half_width =  0.25_r8  ! fractional hours
+real(r8) :: time_bin_end        = 24.00_r8  ! fractional hours
+real(r8) :: delta_lat_box = 1.0_r8
+real(r8) :: delta_lon_box = 1.0_r8
+real(r8) :: min_lon = missing_r8
+real(r8) :: max_lon = missing_r8
+real(r8) :: min_lat = missing_r8
+real(r8) :: max_lat = missing_r8
+! the date, in 'YYYYMMDD' format, will be inserted between
+! the input and output file prefix and suffix.  ObsBase is
+! only prepended to the input file
+character(len=128) :: ObsBase       = '../data'
+character(len=64)  :: InfilePrefix  = 'datafile.'
+character(len=64)  :: InfileSuffix  = '.txt'
+character(len=64)  :: OutfilePrefix = 'obs_seq.'
+character(len=64)  :: OutfileSuffix = ''
+character(len=32)  :: observation_name = 'MODIS_TOTAL_PRECIPITABLE_WATER'
 
-! initialize some values
+namelist /convert_tpw_nml/ start_year, start_month, start_day, &
+      total_days, max_obs, delta_lat_box, delta_lon_box,       &
+      time_bin_start, time_bin_interval, time_bin_half_width,  &
+      min_lon, max_lon, min_lat, max_lat, time_bin_end,        &
+      ObsBase, InfilePrefix, InfileSuffix, OutfilePrefix,      &
+      OutfileSuffix, observation_name
 
-call set_calendar_type(GREGORIAN)
-call initialize_utilities()
 
+! ----------------------------------------------------------------------
+! start of executable program code
+! ----------------------------------------------------------------------
+
+call initialize_utilities('convert_tpw')
+call register_module(source,revision,revdate)
+
+! Initialize the obs_sequence module 
+
 call static_init_obs_sequence()
-call init_obs(obs, num_copies, num_qc)
-call init_obs(prev_obs, num_copies, num_qc)
 
-  call init_obs_sequence(obs_seq, num_copies, num_qc, num_new_obs)
+!----------------------------------------------------------------------
+! Read the namelist
+!----------------------------------------------------------------------
 
-  do k = 1, num_copies
-    meta_data = 'MODIS observation'
-    call set_copy_meta_data(obs_seq, k, meta_data)
-  end do
-  do k = 1, num_qc
-    meta_data = 'MODIS QC'
-    call set_qc_meta_data(obs_seq, k, meta_data)
-  end do
+call find_namelist_in_file('input.nml', 'convert_tpw_nml', iunit)
+read(iunit, nml = convert_tpw_nml, iostat = io)
+call check_namelist_read(iunit, io, 'convert_tpw_nml')
 
+! Record the namelist values used for the run ...
+if (do_nml_file()) write(nmlfileunit, nml=convert_tpw_nml)
+if (do_nml_term()) write(    *      , nml=convert_tpw_nml)
 
-    write(obsdate, '(i4.4, i2.2, i2.2)') year, month, idd
+! some simple error checking - because longitude is cyclic, if you
+! specify either longitude limit you must specify both.  the included
+! area starts at min_lon and extends east to max_lon.   for latitude
+! you can let one default if you only want to set a min or a max.
+if (((min_lon /= missing_r8) .and. (max_lon == missing_r8)) .or. &
+    ((min_lon == missing_r8) .and. (max_lon /= missing_r8))) then
+   write(msgstring, *) 'if you set the min or max limit on longitude you must specify both'
+   call error_handler(E_ERR,'convert_tpw', msgstring, source, revision, revdate)
+endif
 
-!infile=trim(adjustl(ObsBase))//'/sinlaku/AQUA_MODIS_TPW_'//obsdate//'_Sinlaku.txt'
-!out_file = 'tpw_AQUA_sinlaku_obs_seq.'//obsdate
+! convert a string into an observation type number
+obstype = get_obs_kind_index(observation_name)
+if (obstype < 0) then
+   write(msgstring, *) 'unrecognized observation type ', trim(observation_name)
+   call error_handler(E_ERR,'convert_tpw', msgstring, source, revision, revdate, &
+                      text2='check the obs_defs in your &preprocess list')
+endif
 
- infile=trim(adjustl(ObsBase))//'/sinlaku/TERRA_MODIS_TPW_'//obsdate//'_Sinlaku.txt'
- out_file = 'tpw_TERRA_sinlaku_obs_seq.'//obsdate
+call set_calendar_type(GREGORIAN)
 
-        print*, 'input file= ', infile
+! allocate arrays which depend on input values in the namelist
+allocate(tpw_all(max_obs), this_bin(max_obs))
 
-        iuf = 90
-        open(iuf,file=infile,form='formatted')
+!--------------------------------------
 
-      prof_all(:,:) = missing
-      nrec = 0
+DAYLOOP: do idd = start_day, start_day + total_days - 1
 
-!   read in all observation records
+   ! set up for next obs_seq for next day
 
- obsloop: do 
-    read(iuf,888, end=200) lat, lon, tpw, iyear, imonth, iday, ihour, imin, seconds
- 888 format(f11.6, f13.5, f10.4, 4x, i4,4i3, f7.3)
-    nrec = nrec + 1
-     if(lon < 0.0 ) lon = lon + 360.0
-    prof(1) = lat
-    prof(2) = lon
-    prof(3) = tpw
-    prof(4) = iyear
-    prof(5) = imonth
-    prof(6) = iday
-    prof(7) = ihour
-    prof(8) = imin
-    prof(9) = seconds
+   call init_obs(obs,      num_copies, num_qc)
+   call init_obs(prev_obs, num_copies, num_qc)
+   first_obs = .true.
+   num_obs = 0
 
-    do k=1, leng
-    prof_all(nrec, k) = prof(k)
-    end do
+   call init_obs_sequence(obs_seq, num_copies, num_qc, max_obs)
 
- end do obsloop
- 200  continue
-      close(iuf)
+   call set_copy_meta_data(obs_seq, 1, 'MODIS observation')
+   call set_qc_meta_data(obs_seq, 1, 'MODIS QC')
 
-      print*, 'nrec= ', nrec
-      if(max_rec .le. nrec) then
-      print*, 'max_rec too small'
-      stop
+   ! construct the input and output filenames
+   write(obsdate, '(i4.4, i2.2, i2.2)') start_year, start_month, idd
+   write(infile, '(A)') trim(ObsBase)//'/'//trim(InfilePrefix)//obsdate//trim(InfileSuffix)
+   write(*, '(A)') 'reading  file: '//trim(infile)
+   write(outfile, '(A)') trim(OutfilePrefix)//obsdate//trim(OutfileSuffix)
+   write(*, '(A)') 'creating file: '//trim(outfile)
+
+   iunit = open_file(infile,'formatted',action='read')
+
+   nrec = 0
+
+   ! read in all observation records and fill the tpw type array
+   READLOOP: do 
+
+      read(iunit, '(f11.6, f13.5, f10.4, 4x, i4, 4i3, f7.3)', iostat=io) &
+         lat, lon, tpw, iyear, imonth, iday, ihour, imin, seconds
+      if (io /= 0) exit READLOOP
+
+      nrec = nrec + 1
+
+      if(nrec > max_obs) then
+         write(msgstring, *) 'number of input observations larger than "max_obs" limit'
+         call error_handler(E_ERR,'convert_tpw', msgstring, source, revision, revdate, &
+                            text2='increase parameter "max_obs" in namelist and rerun')
       endif
 
- first_obs = .true.
+      tpw_all(nrec)%lat = lat
+      tpw_all(nrec)%lon = lon
+      tpw_all(nrec)%tpw_value = tpw
+      tpw_all(nrec)%time = set_date(iyear, imonth, iday, ihour, imin, int(seconds))
+      tpw_all(nrec)%fract_hours = ihour + real(imin)/60.0_r8 + seconds/3600.0_r8
+      tpw_all(nrec)%been_used = .false.
+     
+   enddo READLOOP
 
+   call close_file(iunit)
+
+   write(msgstring, '(A,I12)') 'number of observations in input file is ', nrec
+   call error_handler(E_MSG,'', msgstring, source, revision, revdate)
+
 ! ---------------------------------------------------
 !   select observations for specific time interval
 ! ---------------------------------------------------
-      navg = 0
-      obs_num = 0
-      ibin = 0
-      bin_beg = 0.0
-      bin_end = 0.0
 
-!    time bin loop
-      binloop:  do while(bin_end < 25.0)
+   ibin = 0
 
-      nloc = 0
+   TIMEBINLOOP:  do
 
-!  read in all of the observations in the selected bin info into arrays
+      bin_beg = time_bin_start + ibin*time_bin_interval - time_bin_half_width
+      bin_end = time_bin_start + ibin*time_bin_interval + time_bin_half_width
+      if (bin_beg >= time_bin_end) exit TIMEBINLOOP
 
       ibin = ibin + 1
-      bin_beg = bin_start + (ibin-1)*bin_interval - bin_half_width
-      bin_end = bin_start + (ibin-1)*bin_interval + bin_half_width
-      print*, 'bin= ' , bin_beg, bin_end
 
-      do 6000 nr =1, nrec
+      ! make a list of which tpw index numbers are in this time bin
 
-       do k=1, leng
-       prof0(k) = prof_all(nr, k)
-       end do
+      nbin = 0
+      NRECLOOP: do nr = 1, nrec
 
-      time  = prof0(7) + prof0(8)/60.0 + prof0(9)/3600.0
+         if(tpw_all(nr)%fract_hours <= bin_beg .or. tpw_all(nr)%fract_hours > bin_end) cycle NRECLOOP
 
-      if(time .le. bin_beg .or. time .gt. bin_end) go to 600
+         nbin = nbin + 1
+         this_bin(nbin) = nr
 
-       nloc = nloc + 1
-!     print*, 'nloc=', nloc, lat, lon, time
-       do k=1, leng
-       prof_in(nloc, k) = prof0(k)
-       lat_in(nloc)  = prof0(1)
-       lon_in(nloc)  = prof0(2)
-       end do
+      enddo NRECLOOP
 
- 600 continue
- 6000 continue

@@ Diff output truncated at 40000 characters. @@

