# [Met_help] [rt.rap.ucar.edu #63639] History for Several questions regarding MET application

John Halley Gotway via RT met_help at ucar.edu
Mon Nov 25 21:04:41 MST 2013

----------------------------------------------------------------
Initial Request
----------------------------------------------------------------

I have several questions regarding the application of MET:

1: The threshold setting for a variable (e.g. >273) appears frequently
in the tutorial. Will the threshold be irrelevant if I just calculate
and compare the continuous statistics? (That is, will MET get rid of
the data which is less than 273 for continuous verification?)

2: For the neighborhood method applied in gridded-gridded comparison,
is this method only useful for categorical variables, or can it be
applied to the continuous statistics? I don't quite understand why the
width value for the square must be an odd integer. Also, in the gridded
comparison, I don't quite understand why the fcst and obs fields need
to be smoothed before comparison.

3: In both point-stat and grid-stat, the tutorial states that it is not
recommended to use an analysis field for comparison. I don't quite get
what the analysis field means. If I compare two wrfout files produced
with different physical schemes, does that count as the situation the
tutorial describes?

4: If I compare the gridded fcst and gridded obs for T2 at a specific
time (setting beg/end=0), I will get some statistics values, such as ME
and MSE. I am not quite sure about the calculation process. For
example, in the fcst field, does MET first sum the T2 values from all
grid points and then compare with the obs? Or does it compare the fcst
and obs values at each point and then do the statistics calculation?

5: If I want to compare variable values at the eta-levels set in the
wrf namelist, is there any method for me to do that instead of just
setting a specific height?

6: For the MODE tool, I don't understand the convolution process. The
expression is written as C(x,y)=∑a(u,v)f(x-u)(x-v); is it the same as
C(x,y)=∑a(u,v)f(x-u,y-v)? I know that we need to first set the R and H
values, but I don't know the real meaning of setting them. If H is
large, then R would be small, and vice versa. However, for the value of
C(x,y), it is hard to compare (large area * lower height) versus (small
area * large height). Could you explain a little bit more under what
conditions I should set a larger H or a smaller R?

7: If I want to verify gridded data from CMAQ output, like the NO2
concentration, can I do that with MET? How do I set the 'field' in the
config file?

9: My last question is regarding the ascii to nc tool. My obs data is
neither bufr nor the standard ascii format for MET, so I used both
Fortran and Matlab to convert my data to the standard ascii format for
MET. The fortran version produced a lot of warnings like these:
WARNING:
WARNING: process_little_r_obs() -> the number of data lines specified in
the header (10) does not match the number found in the data (1) on line
number 4087.
WARNING:
WARNING:
WARNING: process_little_r_obs() -> the number of data lines specified in
the header (10) does not match the number found in the data (1) on line
number 4091.
WARNING:
WARNING:
WARNING: process_little_r_obs() -> the number of data lines specified in
the header (10) does not match the number found in the data (1) on line
number 4095.

But in the end the nc file could still be produced. The Matlab version
ran correctly; could you please tell me the reason? Is it related to
the data type written to the file, like string versus float? The format
I set is the same in both scripts. I have also attached the data
converted by fortran and matlab to this email.

Also, since the data does not come from bufr, I just wrote 'ADPUPA' for
the Message_Type; will this influence the statistics result? The
heights of the different observation stations might differ; is there
any method for me to compare the fcst and obs at different specific
heights instead of just setting one height value (e.g. 2m)?

Sincerely,

Jason

----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #63639] Several questions regarding MET application
From: John Halley Gotway
Time: Tue Oct 29 11:12:31 2013

Jason,

Thanks,
John

On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
>
> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
> Transaction: Ticket created by xingchenglu2011 at u.northwestern.edu
>         Queue: met_help
>       Subject: Several questions regarding MET application
>         Owner: Nobody
>    Requestors: xingchenglu2011 at u.northwestern.edu
>        Status: new
>   Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>
>
>
> I have several questions regarding the application of MET:
>
> 1:The threshold setting for variable(e.g. >273) is frequent in the
> tutorial, whether the threshold will be invalid if I just calculate
> and compare the continuous statistics.(Like if MET will get rid of
> the data which is less than 273 for continuous verification?)

The "cat_thresh" setting stands for "categorical threshold".  That is
used when computing contingency table counts and statistics (the CTC
and CTS output line types).  The "cat_thresh" is used to
define what constitutes an "event" when computing a 2x2 contingency
table.  It has no impact on the continuous statistics and partial sums
in the CNT and SL1L2 output line types.

However, in the future we may add a parameter to filter the matched
pairs that go into the continuous statistics.  Some users have
requested the ability to do conditional verification like that -
where you throw out some of the matched pairs before computing
continuous stats.  But that capability does not exist in the current
METv4.1 release.
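As a sketch (not taken from this ticket; the variable, level, and
threshold below are made-up examples), a "field" entry in a
METv4.1-style config might look like this, with cat_thresh affecting
only the categorical line types:

```
fcst = {
   field = [
      {
         name       = "TMP";        // example variable
         level      = [ "Z2" ];     // example level
         cat_thresh = [ >273.0 ];   // used only for CTC/CTS lines;
                                    // CNT/SL1L2 use all matched pairs
      }
   ];
};
```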

>
> 2:For the neighborhood method applied in gridded-gridded comparison,
> whether this method is just useful for the categorical variables? Can
> it be applied in the continuous statistics? I don't quite understand
> that why the width value for the square must be an odd integer. Also,
> in the gridded comparison, I don't quite understand why before
> comparison, fcst and obs fields needed to be smoothed first.

To answer your second question first, they do not need to be smoothed
first.  Typically, grid_stat is run with no "interpolation", or
smoothing, done.  That's why the default looks like this:

interp = {
   field      = BOTH;
   vld_thresh = 1.0;

   type = [
      {
         method = UW_MEAN;
         width  = 1;
      }
   ];
};

However, this provides an easy way to smooth the data before computing
statistics.  And that is called "upscaling".  So you could see how the
performance of your model improves the more you smooth it.
Typically, smoother forecasts score much better than more detailed
ones.  But, as I mentioned, typically no smoothing is performed.

The neighborhood methods implemented in Grid-Stat must be performed
using a threshold.  First, the raw fields are thresholded to create a
0/1 bitmap in each.  Then, for each neighborhood width, a
"coverage" value is computed as the percentage of grid squares in that
box that are turned on.  The neighborhood stats are computed over
those coverage values.  The widths must be odd so that they're
centered on each grid point.  A width of 5 means you have 2 grid
points to the left and right.  7 means there's 3 on each side.  A
width of 4 wouldn't be centered on the grid box.
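As a rough illustration of that fractional-coverage computation (a
minimal sketch with made-up values, not MET's actual implementation):

```python
import numpy as np

def coverage(field, thresh, width):
    """For each grid point, the fraction of the width x width box
    around it whose raw values exceed the threshold."""
    assert width % 2 == 1, "width must be odd so the box is centered"
    bitmap = (field > thresh).astype(float)   # step 1: 0/1 bitmap
    half = width // 2
    ny, nx = bitmap.shape
    out = np.full((ny, nx), np.nan)           # edge points left undefined here
    for j in range(half, ny - half):
        for i in range(half, nx - half):
            box = bitmap[j - half:j + half + 1, i - half:i + half + 1]
            out[j, i] = box.mean()            # step 2: coverage value
    return out

field = np.array([[270.0, 275.0, 276.0],
                  [272.0, 274.0, 271.0],
                  [277.0, 273.0, 278.0]])
print(coverage(field, 273.0, 3))  # center point: 5 of the 9 values exceed 273
```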

>
> 3:In both point-stat and grid-stat, the tutorial states that it is
> not recommended to use analysis field for comparison. I don't quite
> get the point what the analysis field means. If I compare two wrfout
> by using different physical schemes, is it counted as the situation
> the tutorial states?

An analysis field is just the 0-hour forecast from a model.  Users
will often compare a 24-hour forecast from the previous day to the 0-
hour forecast of the current day.  They're assuming that the
model analysis is "truth".  The problem is that the model analysis is
typically very far from truth.  The model analysis will contain the
same type of biases and errors that the forecast will.
Verifying against a model analysis won't really tell you how good your
model is doing.

However, we set up the MET tools in a general way to enable users to
perform whatever type of comparison they'd like.  As you mention, you
can compare the output of two different physical schemes.
But the tough part will be interpreting the meaning of the resulting
statistics.

>
> 4: If I compare the grid fcst and grid obs for T2 in a specific
> time(Setting beg/end=0),then I will get some statistics values, such
> as ME,MSE. I am not quite sure about the calculation process, for
> example, in the fcst field, whether MET first sum the T2 value from
> all grid points first, then compare with the obs? Or it compares the
> value between fcst and obs for each point and do the statistics
> calculation.

For gridded verification, MET looks grid-point by grid-point.  For
each grid point, it considers the forecast value (f) and the
observation value (o).  If either of those contain bad data, it skips
that point.  If both data values are good, it computes an error value
as f - o.  The mean error (ME) is the average error over all grid
points.  The mean squared error (MSE) is the average squared
error over all grid points.
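A minimal sketch of that per-point computation (made-up values; bad
data is represented here by NaN):

```python
import numpy as np

fcst = np.array([300.0, 301.5, np.nan, 299.0])
obs  = np.array([299.5, 302.0, 300.0,  np.nan])

# Skip any grid point where either field contains bad data
valid = ~np.isnan(fcst) & ~np.isnan(obs)
err = fcst[valid] - obs[valid]   # per-point error, f - o

me  = err.mean()                 # mean error
mse = (err ** 2).mean()          # mean squared error
print(me, mse)                   # errors are +0.5 and -0.5
```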

>
> 5: If I want to compare the variables value at the eta-level set in
> the wrf namelist, any method for me to do that instead of just
> setting the specific height?

No.  MET assumes that you've post-processed your raw WRF output for
two reasons.  First, post-processing destaggers the data and puts it
on a regular grid.  MET doesn't support staggered grids.
Second, post-processing interpolates the model output onto pressure
levels.  Point observations are defined at pressure levels, not hybrid
eta-levels.  In order to compare your model output to point
data, it needs to be interpolated to pressure levels.

For post-processing, we recommend using the Unified Post-Processor
which writes out GRIB files that MET supports very well.

>
> 6: For the MODE tool, I don't understand the convolution process.
> The expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the same
> with C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first set the
> R and H value, but I don't know the true meaning for setting them. If
> H is large, then R would be small, vice and versa.  However, to the
> value of C(x,y), it is hard to compare (large area* lower height)
> versus (small area *large height). Could you explain to me a little
> bit more under what condition should I set larger H or smaller R?

I don't think it's really necessary to understand the convolution
process.  It's just a circular smoothing filter.  The convolution
radius is set in the config file and is defined in grid units.  The
value at each grid point is just replaced by the average value of all
grid points falling within the circle of that radius around
the point.  I do suggest playing around with it.  Keep the threshold
set the same and see how the objects change as you increase/decrease
the radius.

Ultimately, you should play around with both the convolution threshold
and radius to define objects that capture the phenomenon of interest.
For example, if you're interested in studying large MCS's,
you'd set the convolution radius high and the convolution threshold
low (small number of large objects).  For small scale convection,
you'd set the convolution radius low and the threshold high (large
number of small objects).
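A sketch of that disc-shaped averaging (illustrative only; see the MET
source for MODE's actual convolution code):

```python
import numpy as np

def circular_smooth(field, radius):
    """Replace each grid point with the mean of all points within
    `radius` grid units, i.e. convolve with a normalized disc kernel."""
    r = int(radius)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = ((xx ** 2 + yy ** 2) <= radius ** 2).astype(float)
    kernel /= kernel.sum()                  # weights sum to 1
    ny, nx = field.shape
    padded = np.pad(field, r, mode="edge")  # simple edge handling
    out = np.empty_like(field)
    for j in range(ny):
        for i in range(nx):
            window = padded[j:j + 2 * r + 1, i:i + 2 * r + 1]
            out[j, i] = (window * kernel).sum()
    return out

# A single spike gets spread over the 13 points within radius 2
field = np.zeros((7, 7))
field[3, 3] = 1.0
smoothed = circular_smooth(field, 2)
```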

>
> 7: If I want to verify the grid data from CMAQ output, like the NO2
> concentration, can I do that with MET? How to set the 'field' in the
> config file?
>

I'm not familiar with that data set.  If you have a gridded data file
that MET supports and have questions about extracting data from it,
just post a sample data file to our anonymous ftp site
following these instructions:
http://www.dtcenter.org/met/users/support/met_help.php#ftp

Then send us a met-help ticket about it.

>
> 9:My last question is regarding the ascii to nc tool. My obs data is
> not bufr nor the standard ascii format for MET. I then used both
> Fortran and Matlab to transfer my data to the standard ascii format
> for MET. To the fortran one, it showed a lot of such warnings:
> WARNING:
> WARNING: process_little_r_obs() -> the number of data lines specified
> in the header (10) does not match the number found in the data (1) on
> line number 4087.
> WARNING:
> WARNING:
> WARNING: process_little_r_obs() -> the number of data lines specified
> in the header (10) does not match the number found in the data (1) on
> line number 4091.
> WARNING:
> WARNING:
> WARNING: process_little_r_obs() -> the number of data lines specified
> in the header (10) does not match the number found in the data (1) on
> line number 4095.
>
> But at last, the nc file can be produced. To the Matlab one, the
> process is correct, could you please tell me the reason. Is that
> related to the data type written onto the file, like the string or
> the float? But the format I set is the same in both scripts. I have
> also attached the data transformed by fortran and matlab to this
> email.

I ran the two data files you sent through ascii2nc and both ran fine
without any warnings.  The warnings about "little_r" you're seeing are
odd.  ascii2nc supports multiple ascii file formats, one of
which is named little_r.  So for some reason, it was not interpreting
the format of the ascii data you passed it correctly.  You can
explicitly tell it the file format with the "-format" command line
option.  I'd suggest passing the "-format met_point" option to
ascii2nc to explicitly tell it to interpret your data using the MET
point format.

>
> Also, since the data is not coming from bufr, to the Message_Type I
> just write 'ADPUPA', whether this will influence the statistics
> result? The height for different observation stations might be
> different, is there any method for me to compare the fcst and obs for
> different specific heights instead of just setting a height value
> (e.g. 2m)?

For surface data, you should set the message type to ADPSFC.  When
comparing 2-meter temperature to the ADPSFC message type, no vertical
interpolation is done.  For upper-air verification at pressure
levels, vertical interpolation is done linearly in the log of pressure.
When verifying a certain number of meters above/below ground (like
winds at 30m or 40m), vertical interpolation is done linearly in
height.
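For instance, interpolating temperature to an observation pressure
level, linear in log pressure (the numbers are hypothetical, not from
MET):

```python
import math

def interp_log_p(p, p_lo, p_hi, v_lo, v_hi):
    """Interpolate a value to pressure p, linear in log(pressure),
    given values v_lo at p_lo and v_hi at p_hi."""
    w = (math.log(p) - math.log(p_lo)) / (math.log(p_hi) - math.log(p_lo))
    return v_lo + w * (v_hi - v_lo)

# Model levels at 1000 hPa (288.0 K) and 850 hPa (280.0 K);
# observation reported at 925 hPa:
t925 = interp_log_p(925.0, 1000.0, 850.0, 288.0, 280.0)
print(round(t925, 2))
```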

>
>
> Sincerely,
>
> Jason
>

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Wed Oct 30 10:54:33 2013

Hi John,

I still don't quite understand the neighborhood method. I know that we
first need to set a threshold to enclose other points that are close
to the center point, but which factor decides whether a grid point
within the searching radius is turned on or not?
I ran the Ascii fortran one just now, and it worked! I don't know why;
maybe it was due to my cluster issue. By the way, what kind of data
can I use if I want to apply the little_r option?

I just made a comparison between my observation data and forecast data
for Z0. I made a test and found that for ADPUPA, only when the
elevation is zero can the observation and forecast be matched.
However, the observation height and the elevation are the same in my
obs data: if the elevation is 5 meters, the observation height is also
5m. I don't know whether under such a condition the obs can be counted
as Z0. If yes, I don't know why it cannot be matched by MET. But if I
set ADPSFC, all the obs can be matched.

My data has exact pressure values, and for Z0 they range from
990-1014. However, for both ADPUPA and ADPSFC, the results for
P960-1013 and Z0 are not the same. The results suggest that the
temperature related to pressure is not the same as that related to
height at the same location. I am wondering whether there is any
interpolation done for the temp value related to the pressure? (I have
attached one of my results to this email.)

Also, I need to make a full comparison between point obs and forecast
at the surface; do you have any idea which interpolation method is
more reliable? Also, for the surface temperature, I wrote ADPSFC in
the first column of the obs ascii and set Z0 in the pointstat config
file; is that correct or not? For the UW_MEAN and DW_MEAN methods, I
need to first set the width; any suggestion for that?

Regards,

Jason

2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>

> [quoted text of the previous reply omitted]

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Wed Oct 30 10:54:33 2013

```
OBS_VALID_BEG   OBS_VALID_END   FCST_VAR FCST_LEV  OBS_VAR OBS_LEV
OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH
COV_THRESH ALPHA   LINE_TYPE TOTAL FBAR      FBAR_NCL  FBAR_NCU
FBAR_BCL  FBAR_BCU  FSTDEV  FSTDEV_NCL FSTDEV_NCU FSTDEV_BCL
FSTDEV_BCU OBAR      OBAR_NCL  OBAR_NCU  OBAR_BCL  OBAR_BCU  OSTDEV
OSTDEV_NCL OSTDEV_NCU OSTDEV_BCL OSTDEV_BCU PR_CORR PR_CORR_NCL
PR_CORR_NCU PR_CORR_BCL PR_CORR_BCU SP_CORR KT_CORR RANKS FRANK_TIES
ORANK_TIES ME       ME_NCL   ME_NCU   ME_BCL   ME_BCU   ESTDEV
ESTDEV_NCL ESTDEV_NCU ESTDEV_BCL ESTDEV_BCU MBIAS   MBIAS_BCL
MBIAS_BCU MAE     MAE_BCL MAE_BCU MSE     MSE_BCL MSE_BCU BCMSE
BCMSE_BCL BCMSE_BCU RMSE    RMSE_BCL RMSE_BCU E10      E10_BCL
E10_BCU  E25      E25_BCL  E25_BCU  E50      E50_BCL  E50_BCU  E75
E75_BCL  E75_BCU  E90     E90_BCL E90_BCU
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      Z2        TMP     Z2
ADPSFC FULL    UW_MEAN     1           NA          NA         NA
0.05000 CNT       487   299.44346 299.36257 299.52434 299.36132
299.52702 0.91071 0.85688    0.97181    0.86064    0.95662
300.07402 300.00502 300.14303 300.00418 300.14220 0.77694 0.73102
0.82907    0.70418    0.85780    0.46674 0.39424     0.53347
0.38474     0.54189     0.47523 0.33760 487   8          443
-0.63057 -0.70863 -0.55251 -0.70536 -0.55437 0.87894 0.82699
0.93791    0.81180    0.95094    0.99790 0.99765   0.99815   0.83268
0.77318 0.89461 1.16856 1.01152 1.34779 0.77095 0.65767   0.90243
1.08100 1.00574  1.16095  -1.75654 -1.87600 -1.63606 -1.11810 -1.31902
-1.07500 -0.53810 -0.61378 -0.46370 -0.10220 -0.17059 -0.00320 0.37589
0.26645 0.50667
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      Z2        TMP     Z2
ADPSFC FULL    DW_MEAN     9           NA          NA         NA
0.05000 CNT       487   299.41606 299.34054 299.49157 299.33962
299.49664 0.85024 0.79998    0.90728    0.80473    0.89476
300.07402 300.00502 300.14303 300.00273 300.14223 0.77694 0.73102
0.82907    0.69955    0.85985    0.50723 0.43812     0.57038
0.42820     0.58121     0.51575 0.36955 487   1          443
-0.65797 -0.72992 -0.58601 -0.73249 -0.58375 0.81019 0.76230
0.86454    0.73882    0.88283    0.99781 0.99756   0.99806   0.81872
0.76074 0.87654 1.08797 0.93872 1.26669 0.65505 0.54473   0.77778
1.04306 0.96888  1.12547  -1.65465 -1.78447 -1.53520 -1.06929 -1.16390
-0.99037 -0.64062 -0.70725 -0.55806 -0.15923 -0.26177 -0.04226 0.26677
0.14691 0.35878
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      Z2        TMP     Z2
ADPSFC KK2     UW_MEAN     1           NA          NA         NA
0.05000 CNT       467   299.47961 299.39849 299.56074 299.40508
299.56103 0.89451 0.84059    0.95589    0.84294    0.93845
300.10053 300.03204 300.16903 300.03415 300.17099 0.75519 0.70967
0.80701    0.68436    0.83290    0.44147 0.36537     0.51171
0.35718     0.52862     0.45613 0.32356 467   8          425
-0.62092 -0.70071 -0.54113 -0.69993 -0.54019 0.87978 0.82674
0.94015    0.80026    0.95350    0.99793 0.99767   0.99820   0.82490
0.76119 0.88753 1.15790 0.98080 1.35312 0.77236 0.63905   0.90721
1.07606 0.99035  1.16324  -1.73287 -1.87431 -1.59785 -1.10609 -1.27691
-1.04972 -0.51649 -0.60010 -0.43209 -0.09700 -0.16780 -0.00379 0.37850
0.24042 0.51271
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      Z2        TMP     Z2
ADPSFC KK2     DW_MEAN     9           NA          NA         NA
0.05000 CNT       467   299.45420 299.37883 299.52958 299.37913
299.53343 0.83104 0.78094    0.88806    0.78081    0.88001
300.10053 300.03204 300.16903 300.03178 300.17164 0.75519 0.70967
0.80701    0.67861    0.83116    0.48598 0.41348     0.55236
0.40282     0.56319     0.50030 0.35804 467   1          425
-0.64633 -0.71951 -0.57316 -0.72163 -0.57369 0.80681 0.75817
0.86217    0.73309    0.88599    0.99785 0.99760   0.99809   0.80839
0.75303 0.86739 1.06730 0.90870 1.25757 0.64955 0.53626   0.78330
1.03310 0.95326  1.12141  -1.60111 -1.78067 -1.50215 -1.06416 -1.16383
-0.98020 -0.63106 -0.69577 -0.54302 -0.15599 -0.25269 -0.03697 0.27048
0.13101 0.36077
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      P1014-990 TMP     P1014-990
ADPSFC FULL    UW_MEAN     1           NA          NA         NA
0.05000 CNT       487   299.52154 299.48164 299.56144 299.48111
299.55897 0.44922 0.42266    0.47935    0.41335    0.48367
300.07402 300.00502 300.14303 300.00071 300.14344 0.77694 0.73102
0.82907    0.70184    0.85677    0.49324 0.42292     0.55765
0.41552     0.56182     0.47464 0.33988 487   67         443
-0.55248 -0.61280 -0.49217 -0.61471 -0.49397 0.67907 0.63893
0.72463    0.60978    0.75032    0.99816 0.99795   0.99835   0.73014
0.68671 0.77511 0.76543 0.66528 0.89497 0.46019 0.37107   0.56183
0.87489 0.81565  0.94603  -1.22846 -1.33425 -1.18943 -0.93725 -0.99876
-0.88319 -0.63524 -0.69743 -0.55721 -0.22576 -0.30875 -0.08925 0.25315
0.14776 0.44460
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      P1014-990 TMP     P1014-990
ADPSFC FULL    DW_MEAN     9           NA          NA         NA
0.05000 CNT       487   299.51926 299.48015 299.55838 299.48138
299.55960 0.44039 0.41436    0.46993    0.40368    0.47432
300.07402 300.00502 300.14303 300.00357 300.13912 0.77694 0.73102
0.82907    0.70261    0.85508    0.50452 0.43517     0.56791
0.42863     0.57338     0.48337 0.34598 487   1          443
-0.55476 -0.61449 -0.49503 -0.60968 -0.49221 0.67256 0.63281
0.71768    0.60450    0.74632    0.99815 0.99797   0.99836   0.72766
0.68488 0.76939 0.75916 0.65718 0.87444 0.45141 0.36467   0.55585
0.87130 0.81067  0.93511  -1.23263 -1.33668 -1.18534 -0.93305 -1.00190
-0.88440 -0.64215 -0.69772 -0.55372 -0.22279 -0.31837 -0.08355 0.25169
0.17523 0.42864
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      P1014-990 TMP     P1014-990
ADPSFC KK2     UW_MEAN     1           NA          NA         NA
0.05000 CNT       467   299.54594 299.50721 299.58468 299.50799
299.58726 0.42708 0.40134    0.45639    0.39569    0.45699
300.10053 300.03204 300.16903 300.03220 300.17078 0.75519 0.70967
0.80701    0.68142    0.83186    0.48441 0.41177     0.55093
0.40971     0.55715     0.45651 0.32760 467   64         425
```
-0.55459 -0.61477 -0.49441 -0.61869 -0.49110 0.66351 0.62351
0.70903    0.59943    0.72818    0.99815 0.99794   0.99836   0.72167
0.67971 0.76594 0.74687 0.64896 0.86363 0.43930 0.35854   0.52912
0.86422 0.80558  0.92932  -1.22665 -1.32172 -1.18904 -0.93476 -0.99224
-0.87919 -0.62924 -0.70525 -0.55624 -0.23025 -0.30875 -0.09924 0.24294
0.12475 0.41374
V4.1    WRF   200000    20110701_200000 20110701_200000 000000
20110701_200000 20110701_200000 TMP      P1014-990 TMP     P1014-990
ADPSFC KK2     DW_MEAN     9           NA          NA         NA
0.05000 CNT       467   299.54275 299.50455 299.58094 299.50436
299.58170 0.42111 0.39572    0.45000    0.38895    0.45027
300.10053 300.03204 300.16903 300.03094 300.16565 0.75519 0.70967
0.80701    0.68025    0.83459    0.49466 0.42291     0.56026
0.41608     0.57219     0.46500 0.33358 467   1          425
-0.55779 -0.61747 -0.49811 -0.61630 -0.49372 0.65805 0.61838
0.70320    0.58930    0.72979    0.99814 0.99795   0.99835   0.72025
0.67618 0.76351 0.74323 0.63272 0.86255 0.43210 0.34653   0.53145
0.86211 0.79543  0.92874  -1.22097 -1.32473 -1.17870 -0.93131 -0.99993
-0.88159 -0.64215 -0.70491 -0.55261 -0.22659 -0.32272 -0.09476 0.23589
0.16326 0.38847

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Wed Nov 06 07:07:12 2013

Dear John,

I ran into another problem when running MET. In my ASCII observation
data, the height and elevation are the same. In the config file I set
both Z0(TMP) and Z2(TMP) and found that the RMSE for Z0 reached around
40 while for Z2 it was only around 2. In theory, I think my observation
data should be the temperature near the ground (not the soil temperature
from WRF) because elevation = height. So I want to know: if I set
Z0(TMP), will MET use the soil temperature from WRF to compare with the
observation data?

Also, if possible, I hope you can answer my question about the pressure
issue I asked about one week ago, at your convenience. Thank you in
advance.

Sincerely,

Jason

2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>

> Hi John,
>
>
> I still don't quite understand the neighborhood method. I know that we
> first need to set a threshold to enclose other points that are close to
> the center point, but which factor decides whether a grid point within
> the search radius is turned on or not?
>
> I ran the ASCII Fortran one just now, and it worked! I don't know why;
> maybe it was due to my cluster issue. By the way, what kind of data can
> I use if I want to apply the little_r option?
>
> I just made a comparison between my observation data and forecast data
> for Z0. I ran a test and found that for ADPUPA, only when the elevation
> is zero can the observation and forecast be matched. However, the
> observation height and elevation are the same in my obs data; for
> example, if the elevation is 5 meters, the observation height is also
> 5 m. I don't know whether, under such a condition, the obs can be
> counted as Z0. If yes, I don't know why it cannot be matched by MET.
> But if I set the message type to ADPSFC, all the obs can be matched.
>
> My data has exact pressure values, and at Z0 they range from 990-1014.
> However, for both ADPUPA and ADPSFC, the results for P960-1013 and Z0
> are not the same. These results seem to say that the temperature
> matched by pressure is not the same as the temperature matched by
> height at the same location. I am wondering whether there is any
> interpretation for the temperature value related to pressure? (I have
> attached one of my results to this email.)
>
> Also, I need to make a full comparison between point obs and forecasts
> at the surface; do you have any idea which interpolation method is more
> reliable? Also, for the surface temperature, I wrote ADPSFC in the
> first column of the obs ASCII file and set Z0 in the Point-Stat config
> file; is that correct or not? For the UW_MEAN and DW_MEAN methods, I
> need to first set the width; any suggestion for that?
>
>
> Regards,
>
> Jason
>
>
> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
>
>> Jason,
>>
>>
>> Thanks,
>> John
>>
>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
>> >
>> > Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
>> > Transaction: Ticket created by xingchenglu2011 at u.northwestern.edu
>> >         Queue: met_help
>> >       Subject: Several questions regarding MET application
>> >         Owner: Nobody
>> >    Requestors: xingchenglu2011 at u.northwestern.edu
>> >        Status: new
>> >   Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
>> >
>> >
>> >
>> > I have several questions regarding the application of MET:
>> >
>> > 1:The threshold setting for variable(e.g. >273) is frequent in
the
>> > tutorial, whether the threshold will be invalid if I just
calculate and
>> > compare the continuous statistics.(Like if MET will get rid of
the data
>> > which is less than 273 for continuous verification?)
>>
>> The "cat_thresh" setting stands for "categorical threshold".  That
is
>> used when computing contingency table counts and statistics (the
CTC and
>> CTS output line types).  The "cat_thresh" is used to
>> define what constitutes an "event" when computing a 2x2 contingency
>> table.  It has no impact on the continuous statistics and partial
sums in
>> the CNT and SL1L2 output line types.
>>
>> However, in the future we may add a parameter to filter the matched
pairs
>> that go into the continuous statistics.  Some users have requested
the
>> ability to do conditional verification like that -
>> where you throw out some of the matched pairs before computing
continuous
>> stats.  But that does not currently exist in the current METv4.1
release.
>>
>> >
>> > 2:For the neighborhood method applied in gridded-gridded
comparison,
>> > whether this method is just useful for the categorical variables?
Can
>> it be
>> > applied in the continuous statistics? I don't quite understand
that why
>> the
>> > width value for the square must be an odd integer. Also, in the
gridded
>> > comparison, I don't quite understand why before comparison, fcst
and obs
>> > fields needed to be smoothed first.
>>
>> To answer your second question first, they do not need to be
smoothed
>> first.  Typically, grid_stat is run with no "interpolation", or
smoothing,
>> done.  That's why the default looks like this:
>> interp = {
>>     field      = BOTH;
>>     vld_thresh = 1.0;
>>
>>     type = [
>>        {
>>           method = UW_MEAN;
>>           width  = 1;
>>        }
>>     ];
>> };
>>
>> However, this provides an easy way to smooth the data before
computing
>> statistics.  And that is called "upscaling".  So you could see how
the
>> performance of your model improves the more you smooth it.
>>   Typically, smoother forecasts score much better than more detailed ones.
>>  But, as I mentioned, typically no smoothing is performed.
>>
>> The neighborhood methods implemented in Grid-Stat must be performed
using
>> a threshold.  First, the raw fields are thresholded to create a 0/1
bitmap
>> in each.  Then, for each neighborhood width, a
>> "coverage" value is computed as the percentage of grid squares in
that
>> box that are turned on.  The neighborhood stats are computed over
those
>> coverage values.  The widths must be odd so that they're
>> centered on each grid point.  A width of 5 means you have 2 grid
points
>> to the left and right.  7 means there's 3 on each side.  A width of
4
>> wouldn't be centered on the grid box.
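The coverage computation described above can be sketched in a few lines of Python. This is an illustration, not MET's code; it assumes a 2-D NumPy array and zero-pads the edges (MET instead handles incomplete boxes at the grid edge via its "vld_thresh" setting):

```python
import numpy as np

def neighborhood_coverage(field, thresh, width):
    """Fraction of grid squares exceeding `thresh` in a width x width
    box centered on each point (width must be odd to center the box)."""
    if width % 2 == 0:
        raise ValueError("width must be an odd integer")
    event = (field > thresh).astype(float)  # 0/1 bitmap of "events"
    half = width // 2
    # Zero-pad the edges; a simplification of MET's vld_thresh handling.
    padded = np.pad(event, half, mode="constant", constant_values=0.0)
    ny, nx = event.shape
    cov = np.empty_like(event)
    for j in range(ny):
        for i in range(nx):
            cov[j, i] = padded[j:j + width, i:i + width].mean()
    return cov
```

Neighborhood statistics are then computed from these fractional coverage fields rather than from the raw values.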
>>
>> >
>> > 3:In both point-stat and grid-stat, the tutorial states that it
is not
>> > recommended to use analysis field for comparison. I don't quite
get
>> > the point what the analysis field means. If I compare two wrfout
by
>> using
>> > different physical schemes, is it counted as the situation the
tutorial
>> > states?
>>
>> An analysis field is just the 0-hour forecast from a model.  Users
will
>> often compare a 24-hour forecast from the previous day to the 0-
hour
>> forecast of the current day.  They're assuming that the
>> model analysis is "truth".  The problem is that the model analysis
is
>> typically very far from truth.  The model analysis will contain the
same
>> type of biases and errors that the forecast will.
>> Verifying against a model analysis won't really tell you how good
your
>> model is doing.
>>
>> However, we set up the MET tools in a general way to enable users
to
>> perform whatever type of comparison they'd like.  As you mention,
you can
>> compare the output of two different physical schemes.
>> But the tough part will be interpreting the meaning of the
resulting
>> statistics.
>>
>> >
>> > 4: If I compare the grid fcst and grid obs for T2 in a specific
>> > time(Setting beg/end=0),then I will get some statistics values,
such as
>> > ME,MSE. I am not quite sure about the calculation process, for
example,
>> in
>> > the fcst field, whether MET first sum the T2 value from all grid
points
>> > first, then compare with the obs? Or it compares the value
between fcst
>> and
>> > obs for each point and do the statistics calculation.
>>
>> For gridded verification, MET looks grid-point by grid-point.  For
each
>> grid point, it considers the forecast value (f) and the observation
value
>> (o).  If either of those contain bad data, it skips
>> that point.  If both data values are good, it computes an error
value as
>> f - o.  The mean error (ME) is the average error over all grid
points.  The
>> mean squared error (MSE) is the average squared
>> error over all grid points.
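That point-by-point procedure can be sketched in Python as follows (a simplified illustration, not MET's source code; the -9999 bad-data flag is an assumption):

```python
import numpy as np

BAD_DATA = -9999.0  # assumed missing-data flag

def me_mse(fcst, obs):
    """Mean error and mean squared error over matched grid points,
    skipping any pair where either value is bad data."""
    fcst = np.asarray(fcst, dtype=float)
    obs = np.asarray(obs, dtype=float)
    good = (fcst != BAD_DATA) & (obs != BAD_DATA)
    err = fcst[good] - obs[good]          # error at each point: f - o
    return err.mean(), (err ** 2).mean()  # ME, MSE
```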
>>
>> >
>> > 5: If I want to compare the variables value at the eta-level set
in the
>> wrf
>> > namelist, any method for me to do that instead of just setting
the
>> specific
>> > height?
>>
>> No.  MET assumes that you've post-processed your raw WRF output for
two
>> reasons.  First, post-processing destaggers the data and puts it on
a
>> regular grid.  MET doesn't support staggered grids.
>> Second, post-processing interpolates the model output onto pressure
>> levels.  Point observations are defined at pressure levels, not
hybrid
>> eta-levels.  In order to compare your model output to point
>> data, it needs to be interpolated to pressure levels.
>>
>> For post-processing, we recommend using the Unified Post-Processor
which
>> writes out GRIB files that MET supports very well.
>>
>> >
>> > 6: For the MODE tool, I don't understand the convolution process.
The
>> > expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the same
with
>> > C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first set the R
and H
>> > value, but I don't know the true meaning for setting them. If H
is
>> large,
>> > then R would be small, vice and versa.  However, to the value of
>> C(x,y), it
>> > is hard to compare (large area* lower height) versus (small area
*large
>> > height). Could you explain to me a little bit more under what
condition
>> > should I set larger H or smaller R?
>>
>> I don't think it's very necessary to understand the convolution
>> process.  It's just a circular smoothing filter.  The convolution
>> process is controlled by the convolution radius (the "conv_radius"
>> entry in the config file).  That defines the convolution radius in
>> grid units.  The value at each grid point is just replaced by the
>> average value of all grid points falling within the circle of that
>> radius around the point.  I do suggest playing around with it.  Keep
>> the threshold set the same and see how the objects change as you
>> increase/decrease the radius.
>>
>> Ultimately, you should play around with both the convolution
threshold
>> and radius to define objects that capture the phenomenon of
interest.  For
>> example, if you're interested in studying large MCS's,
>> you'd set the convolution radius high and the convolution threshold
low
>> (small number of large objects).  For small scale convection, you'd
set the
>> convolution radius low and the threshold high (large
>> number of small objects).
>>
>> >
>> > 7: If I want to verify the grid data from CMAQ output, like the
NO2
>> > concentration, can I do that with MET? How to set the 'field' in
the
>> config
>> > file?
>> >
>>
>> I'm not familiar with that data set.  If you have a gridded data
file
>> that MET supports and have questions about extracting data from it,
just
>> post a sample data file to our anonymous ftp site
>> following these instructions:
>>     http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>
>> Then send us a met-help ticket about it.
>>
>> >
>> > 9:My last question is regarding the ascii to nc tool. My obs data
is not
>> > bufr nor the standard ascii format for MET. I then used both
Fortran and
>> > Matlab to transfer my data to the standard ascii format for MET.
To the
>> > fortran one, it showed a lot of such warnings:
>> > WARNING:
>> > WARNING: process_little_r_obs() -> the number of data lines
specified in
>> > the header (10) does not match the number found in the data (1)
on line
>> > number 4087.
>> > WARNING:
>> > WARNING:
>> > WARNING: process_little_r_obs() -> the number of data lines
specified in
>> > the header (10) does not match the number found in the data (1)
on line
>> > number 4091.
>> > WARNING:
>> > WARNING:
>> > WARNING: process_little_r_obs() -> the number of data lines
specified in
>> > the header (10) does not match the number found in the data (1)
on line
>> > number 4095.
>> >
>> > But at last, the nc file can be produced. To the Matlab one, the
>> process is
>> > correct, could you please tell me the reason. Is that related to
the
>> data
>> > type written onto the file, like the string or the float? But the
>> format I
>> > set is the same in both scripts. I have also attached the data
>> transformed
>> > by fortran and matlab to this email.
>>
>> I ran the two data files you sent through ascii2nc and both ran
fine
>> without any warnings.  The warnings about "little_r" you're seeing
are odd.
>>  ascii2nc supports multiple ascii file formats, one of
>> which is named little_r.  So for some reason, it was not
interpreting the
>> format of the ascii data you passed it correctly.  You can
explicitly tell
>> it the file format with the "-format" command line
>> option.  I'd suggest passing the "-format met_point" option to
ascii2nc
>> to explicitly tell it to interpret your data using the MET point
format.
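For example, with hypothetical file names, the command line would look like this:

```shell
# Force ascii2nc to read the input as MET's 11-column point format
# rather than auto-detecting the format (which can mis-fire as little_r):
ascii2nc my_obs_met_point.txt my_obs.nc -format met_point
```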
>>
>> >
>> > Also, since the data is not coming from bufr, to the Message_Type
I just
>> > write 'ADPUPA', whether this will influence the statistics
result? The
>> > height for different observation stations might be different, is
there
>> any
>> > method for me to compare the fcst and obs for different specific
heights
>> > instead of just setting a height value(e.g. 2m)?
>>
>> For surface data, you should set the message type to ADPSFC.  When
>> comparing 2-meter temperature to the ADPSFC message type, no
vertical
>> interpolation is done.  For upper-air verification at pressure
>> levels, vertical interpolation is done linear in the log of
pressure.
>>  When verifying a certain number of meters above/below ground (like
winds
>> at 30m or 40m), vertical interpolation is done linear in
>> height.
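As an illustration of interpolation that is linear in the log of pressure (a sketch, not MET's code): given values v_lo and v_hi at bounding pressure levels p_lo and p_hi, the value at an observation pressure p is

```python
import math

def interp_log_p(p, p_lo, v_lo, p_hi, v_hi):
    """Interpolate linearly in ln(p) between two bounding pressure
    levels, e.g. temperature between 1000 hPa and 850 hPa."""
    w = (math.log(p_lo) - math.log(p)) / (math.log(p_lo) - math.log(p_hi))
    return v_lo + w * (v_hi - v_lo)
```

At a pressure equal to the geometric mean of the two bounding levels, the interpolation weight is exactly 0.5.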
>>
>> >
>> >
>> > Sincerely,
>> >
>> > Jason
>> >
>>
>>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63639] Several questions regarding MET application
From: John Halley Gotway
Time: Thu Nov 07 12:01:32 2013

Jason,

I'm not exactly sure how to address this issue.  But let me tell you
how Point-Stat handles verification of "surface" variables.  It
depends on the observation message type being used.  The ADPSFC and
SFCSHP message types are special cases.  Basically, any point
observation with an ADPSFC or SFCSHP message type is assumed to be at
the surface - regardless of their actual elevation or height value.

When you're verifying forecasts with a vertical level type (such as 2-
meter temperature or 10-meter winds - any vertical level specified
using a "Z") and comparing it to a surface message type (ADPSFC
or SFCSHP), all point observations of those types will be used.  So
when verifying 2-m TMP and 0-m TMP against the ADPSFC message type, I
would expect that they would use the same set of point
observations.

This vertical level matching part can get a bit tricky.  It'd probably
be best to have you send me a sample forecast file, observation file,
and Point-Stat config file along with questions as to why
Point-Stat is producing the output that it is.  Usually working
through a specific example provides more answers than speaking more
generally.

As for your earlier pressure question, please include data illustrating
that in the test data you send as well.  I'm having a difficult time
understanding exactly what the issue is.  I could take a
look at your config file and your data and perhaps offer some
suggestions.

You can send me data by posting it to our anonymous ftp site:
http://www.dtcenter.org/met/users/support/met_help.php#ftp

Thanks,
John

On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>
> Dear John,
>
> I met another problem when I ran the MET. In my ascii observation
data, the
> height and elevation are the same. In the config file I set both
Z0(TMP)
> and Z2(TMP) and found that the RMSE of Z0 reached around 40 and Z2
only
> around 2. In theory, I think that my observation data should be the
> temperature near the ground(Not the soil temperature from wrf)
because
> elevation=height. So, I want to know if I set Z0(TMP), whether MET
will use
> the soil temperature from wrf to compare with the observation data?
>
> Also, if it is possible, hope that you can answer my question about
the
> pressure issue I asked one week ago at your convenience. Thank you
in
>
> Sincerely,
>
> Jason
>
>
> 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
>
>> Hi John,
>>
>>
>> I still not quite understand the neighborhood method, I know that
we first
>> need to set a threshold to enclose other points which are closed to
the
>> center point, but which factor decides whether the grid within the
>> searching radius is turn on or not?
>>
>> I ran the Ascii fortran one just now, and it worked! I don't know
why,
>> maybe it is due to my cluster issue. By the way, what kind of data
can I
>> use if I want to apply the little_r option?
>>
>> I just made a comparison for my observation data and forecast data
for Z0.
>> I made a test and found that for ADPUPA, only when the elevation is
zero
>> can the observation and forecast be matched. However, since the
observation
>> height and elevation is the same in my obs data, like if the
elevation is 5
>> meters, the observation height is also 5m. I don't know under such
>> condition whether the obs can be counted as  Z0? If yes, I don't
know why
>> it cannot be matched by MET. But if I set as ADPSFC, all the obs
can be
>> matched.
>>
>> My data has exact pressure value, and to the Z0, it ranges from
990-1014.
>> However, for both ADPUPA and ADPSFC, the results of P960-1013  and
Z0 are
>> not the same. This results seem like: The temperature related to
pressure
>> is not the same with that related to height at the same location. I
am
>> wondering whether there is any interpretation for the temp value
related to
>> the pressure?(I have attached one of my result to this email.)
>>
>> Also, I need to make a full comparison between point obs and
forecast on
>> surface, do you have any idea that which interpretation method is
more
>> reliable. Also, to the surface temperature, I wrote ADPSFC for the
first
>> column of obs-ascii, and set Z0 in the pointstat config file, am I
correct
>> or not? To the UW_Weight and DW_Weight method, I need to first set
the
>> width, any suggestion for that?
>>
>>
>> Regards,
>>
>> Jason
>>
>>
>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
>>
>>> Jason,
>>>
>>>
>>> Thanks,
>>> John
>>>
>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
>>>>
>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
>>>> Transaction: Ticket created by xingchenglu2011 at u.northwestern.edu
>>>>          Queue: met_help
>>>>        Subject: Several questions regarding MET application
>>>>          Owner: Nobody
>>>>     Requestors: xingchenglu2011 at u.northwestern.edu
>>>>         Status: new
>>>>    Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
>>>>
>>>>
>>>>
>>>> I have several questions regarding the application of MET:
>>>>
>>>> 1:The threshold setting for variable(e.g. >273) is frequent in
the
>>>> tutorial, whether the threshold will be invalid if I just
calculate and
>>>> compare the continuous statistics.(Like if MET will get rid of
the data
>>>> which is less than 273 for continuous verification?)
>>>
>>> The "cat_thresh" setting stands for "categorical threshold".  That
is
>>> used when computing contingency table counts and statistics (the
CTC and
>>> CTS output line types).  The "cat_thresh" is used to
>>> define what constitutes an "event" when computing a 2x2
contingency
>>> table.  It has no impact on the continuous statistics and partial
sums in
>>> the CNT and SL1L2 output line types.
>>>
>>> However, in the future we may add a parameter to filter the
matched pairs
>>> that go into the continuous statistics.  Some users have requested
the
>>> ability to do conditional verification like that -
>>> where you throw out some of the matched pairs before computing
continuous
>>> stats.  But that does not currently exist in the current METv4.1
release.
>>>
>>>>
>>>> 2:For the neighborhood method applied in gridded-gridded
comparison,
>>>> whether this method is just useful for the categorical variables?
Can
>>> it be
>>>> applied in the continuous statistics? I don't quite understand
that why
>>> the
>>>> width value for the square must be an odd integer. Also, in the
gridded
>>>> comparison, I don't quite understand why before comparison, fcst
and obs
>>>> fields needed to be smoothed first.
>>>
>>> To answer your second question first, they do not need to be
smoothed
>>> first.  Typically, grid_stat is run with no "interpolation", or
smoothing,
>>> done.  That's why the default looks like this:
>>> interp = {
>>>      field      = BOTH;
>>>      vld_thresh = 1.0;
>>>
>>>      type = [
>>>         {
>>>            method = UW_MEAN;
>>>            width  = 1;
>>>         }
>>>      ];
>>> };
>>>
>>> However, this provides an easy way to smooth the data before
computing
>>> statistics.  And that is called "upscaling".  So you could see how
the
>>> performance of your model improves the more you smooth it.
>>>    Typically, smoother forecast score much better than more
detailed ones.
>>>   But, as I mentioned, typically no smoothing it performed.
>>>
>>> The neighborhood methods implemented in Grid-Stat must be
performed using
>>> a threshold.  First, the raw fields are thresholded to create a
0/1 bitmap
>>> in each.  Then, for each neighborhood width, a
>>> "coverage" value is computed as the percentage of grid squares in
that
>>> box that are turned on.  The neighborhood stats are computed over
those
>>> coverage values.  The widths must be odd so that they're
>>> centered on each grid point.  A width of 5 means you have 2 grid
points
>>> to the left and right.  7 means there's 3 on each side.  A width
of 4
>>> wouldn't be centered on the grid box.
>>>
>>>>
>>>> 3:In both point-stat and grid-stat, the tutorial states that it
is not
>>>> recommended to use analysis field for comparison. I don't quite
get
>>>> the point what the analysis field means. If I compare two wrfout
by
>>> using
>>>> different physical schemes, is it counted as the situation the
tutorial
>>>> states?
>>>
>>> An analysis field is just the 0-hour forecast from a model.  Users
will
>>> often compare a 24-hour forecast from the previous day to the 0-
hour
>>> forecast of the current day.  They're assuming that the
>>> model analysis is "truth".  The problem is that the model analysis
is
>>> typically very far from truth.  The model analysis will contain
the same
>>> type of biases and errors that the forecast will.
>>> Verifying against a model analysis won't really tell you how good
your
>>> model is doing.
>>>
>>> However, we set up the MET tools in a general way to enable users
to
>>> perform whatever type of comparison they'd like.  As you mention,
you can
>>> compare the output of two different physical schemes.
>>> But the tough part will be interpreting the meaning of the
resulting
>>> statistics.
>>>
>>>>
>>>> 4: If I compare the grid fcst and grid obs for T2 in a specific
>>>> time(Setting beg/end=0),then I will get some statistics values,
such as
>>>> ME,MSE. I am not quite sure about the calculation process, for
example,
>>> in
>>>> the fcst field, whether MET first sum the T2 value from all grid
points
>>>> first, then compare with the obs? Or it compares the value
between fcst
>>> and
>>>> obs for each point and do the statistics calculation.
>>>
>>> For gridded verification, MET looks grid-point by grid-point.  For
each
>>> grid point, it considers the forecast value (f) and the
observation value
>>> (o).  If either of those contain bad data, it skips
>>> that point.  If both data values are good, it computes an error
value as
>>> f - o.  The mean error (ME) is the average error over all grid
points.  The
>>> mean squared error (MSE) is the average squared
>>> error over all grid points.
>>>
>>>>
>>>> 5: If I want to compare the variables value at the eta-level set
in the
>>> wrf
>>>> namelist, any method for me to do that instead of just setting
the
>>> specific
>>>> height?
>>>
>>> No.  MET assumes that you've post-processed your raw WRF output
for two
>>> reasons.  First, post-processing destaggers the data and puts it
on a
>>> regular grid.  MET doesn't support staggered grids.
>>> Second, post-processing interpolates the model output onto
pressure
>>> levels.  Point observations are defined at pressure levels, not
hybrid
>>> eta-levels.  In order to compare your model output to point
>>> data, it needs to be interpolated to pressure levels.
>>>
>>> For post-processing, we recommend using the Unified Post-Processor
which
>>> writes out GRIB files that MET supports very well.
>>>
>>>>
>>>> 6: For the MODE tool, I don't understand the convolution process.
The
>>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the same
with
>>>> C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first set the R
and H
>>>> value, but I don't know the true meaning for setting them. If H
is
>>> large,
>>>> then R would be small, vice and versa.  However, to the value of
>>> C(x,y), it
>>>> is hard to compare (large area* lower height) versus (small area
*large
>>>> height). Could you explain to me a little bit more under what
condition
>>>> should I set larger H or smaller R?
>>>
>>> I don't think it's very necessary to understand the convolution
process.
>>>   It's just a circular smoothing filter.  The convolution process
is
>>> config file).  That defines the convolution radius in grid units.
The
>>> value at each grid point is just replaced by the average value of
all grid
>>> points falling within the circle of that radius around
>>> the point.  I do suggest playing around with it.  Keep the
threshold set
>>> the same and see how the objects change as you increase/decrease
the
>>>
>>> Ultimately, you should play around with both the convolution
threshold
>>> and radius to define objects that capture the phenomenon of
interest.  For
>>> example, if you're interested in studying large MCS's,
>>> you'd set the convolution radius high and the convolution
threshold low
>>> (small number of large objects).  For small scale convection,
you'd set the
>>> convolution radius low and the threshold high (large
>>> number of small objects).
>>>
>>>>
>>>> 7: If I want to verify the grid data from CMAQ output, like the
NO2
>>>> concentration, can I do that with MET? How to set the 'field' in
the
>>> config
>>>> file?
>>>>
>>>
>>> I'm not familiar with that data set.  If you have a gridded data
file
>>> that MET supports and have questions about extracting data from
it, just
>>> post a sample data file to our anonymous ftp site
>>> following these instructions:
>>>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>
>>> Then send us a met-help ticket about it.
>>>
>>>>
>>>> 9:My last question is regarding the ascii to nc tool. My obs data
is not
>>>> bufr nor the standard ascii format for MET. I then used both
Fortran and
>>>> Matlab to transfer my data to the standard ascii format for MET.
To the
>>>> fortran one, it showed a lot of such warnings:
>>>> WARNING:
>>>> WARNING: process_little_r_obs() -> the number of data lines
specified in
>>>> the header (10) does not match the number found in the data (1)
on line
>>>> number 4087.
>>>> WARNING:
>>>> WARNING:
>>>> WARNING: process_little_r_obs() -> the number of data lines
specified in
>>>> the header (10) does not match the number found in the data (1)
on line
>>>> number 4091.
>>>> WARNING:
>>>> WARNING:
>>>> WARNING: process_little_r_obs() -> the number of data lines
specified in
>>>> the header (10) does not match the number found in the data (1)
on line
>>>> number 4095.
>>>>
>>>> But at last, the nc file can be produced. To the Matlab one, the
>>> process is
>>>> correct, could you please tell me the reason. Is that related to
the
>>> data
>>>> type written onto the file, like the string or the float? But the
>>> format I
>>>> set is the same in both scripts. I have also attached the data
>>> transformed
>>>> by fortran and matlab to this email.
>>>
>>> I ran the two data files you sent through ascii2nc and both ran
fine
>>> without any warnings.  The warnings about "little_r" you're seeing
are odd.
>>>   ascii2nc supports multiple ascii file formats, one of
>>> which is named little_r.  So for some reason, it was not
interpreting the
>>> format of the ascii data you passed it correctly.  You can
explicitly tell
>>> it the file format with the "-format" command line
>>> option.  I'd suggest passing the "-format met_point" option to
ascii2nc
>>> to explicitly tell it to interpret your data using the MET point
format.
>>>
>>>>
>>>> Also, since the data is not coming from bufr, to the Message_Type
I just
>>>> write 'ADPUPA', whether this will influence the statistics
result? The
>>>> height for different observation stations might be different, is
there
>>> any
>>>> method for me to compare the fcst and obs for different specific
heights
>>>> instead of just setting a height value(e.g. 2m)?
>>>
>>> For surface data, you should set the message type to ADPSFC.  When
>>> comparing 2-meter temperature to the ADPSFC message type, no
vertical
>>> interpolation is done.  For upper-air verification at pressure
>>> levels, vertical interpolation is done linear in the log of
pressure.
>>>   When verifying a certain number of meters above/below ground
(like winds
>>> at 30m or 40m), vertical interpolation is done linear in
>>> height.
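
That linear-in-log-pressure vertical interpolation can be sketched in a few lines (plain Python; `interp_log_p` is a hypothetical helper for illustration, not MET source code):

```python
import math

def interp_log_p(p_levels, values, p_target):
    # Interpolate a profile to p_target, linear in log(pressure).
    # p_levels must run from high pressure (low level) to low pressure.
    for k in range(len(p_levels) - 1):
        p1, p2 = p_levels[k], p_levels[k + 1]
        if p2 <= p_target <= p1:
            w = (math.log(p_target) - math.log(p1)) / (math.log(p2) - math.log(p1))
            return values[k] + w * (values[k + 1] - values[k])
    raise ValueError("target pressure outside the profile")

# Temperature at 925 hPa from values at 1000 and 850 hPa:
t925 = interp_log_p([1000.0, 850.0], [288.0, 280.0], 925.0)
```

Because the weight is computed in log(p) rather than p, the 925 hPa value lands slightly closer to the 1000 hPa value than plain linear interpolation in pressure would put it.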
>>>
>>>>
>>>>
>>>> Sincerely,
>>>>
>>>> Jason
>>>>
>>>
>>>
>>

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Wed Nov 13 05:44:25 2013

Dear John,

Thank you for your response, I tried to drop my files to the FTP,
however,
while I put my files, error message showed up:

227 Entering Passive Mode (128,117,192,211,192,15)
553 Could not determine cwdir: No such file or directory.

Any method to solve this? Thank you!

Sincerely,

Jason

2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>

> Jason,
>
> I'm not exactly sure how to address this issue.  But let me tell you
how
> Point-Stat handles verification of "surface" variables.  It depends
on the
> observation message type being used.  The ADPSFC and
> SFCSHP message types are special cases.  Basically, any point observation
> with an ADPSFC or SFCSHP message type is assumed to be at the surface,
> regardless of its actual elevation or height value.
>
> When you're verifying forecasts with a vertical level type (such as
> 2-meter temperature or 10-meter winds - any vertical level specified
using
> a "Z") and comparing it to a surface message type (ADPSFC
> or SFCSHP), all point observations of those types will be used.  So
when
> verifying 2-m TMP and 0-m TMP against the ADPSFC message type, I
would
> expect that they would use the same set of point
> observations.
>
> This vertical level matching part can get a bit tricky.  It'd
probably be
> best to have you send me a sample forecast file, observation file,
and
> Point-Stat config file along with questions as to why
> Point-Stat is producing the output that it is.  Usually working
through a
> specific example provides more answers than speaking more generally.
>
> Please also make sure you've included
> that in the test data you send as well.  I'm having a difficult time
> understanding exactly what the issue is.  I could take a
> look at your config file and your data and perhaps offer some
suggestions.
>
> You can send me data by posting it to our anonymous ftp site:
>     http://www.dtcenter.org/met/users/support/met_help.php#ftp
>
> Thanks,
> John
>
> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >
> > Dear John,
> >
> > I met another problem when I ran the MET. In my ascii observation
data,
> the
> > height and elevation are the same. In the config file I set both
Z0(TMP)
> > and Z2(TMP) and found that the RMSE of Z0 reached around 40 and Z2
only
> > around 2. In theory, I think that my observation data should be
the
> > temperature near the ground(Not the soil temperature from wrf)
because
> > elevation=height. So, I want to know if I set Z0(TMP), whether MET
will
> use
> > the soil temperature from wrf to compare with the observation
data?
> >
> > Also, if it is possible, I hope that you can answer my question about the
> > pressure issue I asked one week ago at your convenience. Thank you in
> > advance.
> >
> > Sincerely,
> >
> > Jason
> >
> >
> > 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
> >
> >> Hi John,
> >>
> >>
> >> I still do not quite understand the neighborhood method. I know that we
> >> first need to set a threshold to enclose other points close to the
> >> center point, but which factor decides whether a grid point within the
> >> searching radius is turned on or not?
> >>
> >> I ran the Ascii fortran one just now, and it worked! I don't know
why,
> >> maybe it is due to my cluster issue. By the way, what kind of
data can I
> >> use if I want to apply the little_r option?
> >>
> >> I just made a comparison for my observation data and forecast
data for
> Z0.
> >> I made a test and found that for ADPUPA, only when the elevation
is zero
> >> can the observation and forecast be matched. However, since the
> observation
> >> height and elevation is the same in my obs data, like if the
elevation
> is 5
> >> meters, the observation height is also 5m. I don't know under
such
> >> condition whether the obs can be counted as  Z0? If yes, I don't
know
> why
> >> it cannot be matched by MET. But if I set as ADPSFC, all the obs
can be
> >> matched.
> >>
> >> My data has exact pressure value, and to the Z0, it ranges from
> 990-1014.
> >> However, for both ADPUPA and ADPSFC, the results of P960-1013
and Z0
> are
> >> not the same. These results seem to suggest that the temperature
> >> matched by pressure is not the same as that matched by height at the
> >> same location. I am wondering whether there is any interpretation for
> >> the temp value related to the pressure? (I have attached one of my
> >> results to this email.)
> >>
> >> Also, I need to make a full comparison between point obs and forecast
> >> at the surface; do you have any idea which interpolation method is more
> >> reliable? Also, for the surface temperature, I wrote ADPSFC in the
> >> first column of the obs ascii file and set Z0 in the point_stat config
> >> file; am I correct or not? For the UW_MEAN and DW_MEAN methods, I need
> >> to first set the width; any suggestion for that?
> >>
> >>
> >> Regards,
> >>
> >> Jason
> >>
> >>
> >> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
> >>
> >>> Jason,
> >>>
> >>>
> >>> Thanks,
> >>> John
> >>>
> >>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
> >>>>
> >>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
> >>>> Transaction: Ticket created by
xingchenglu2011 at u.northwestern.edu
> >>>>          Queue: met_help
> >>>>        Subject: Several questions regarding MET application
> >>>>          Owner: Nobody
> >>>>     Requestors: xingchenglu2011 at u.northwestern.edu
> >>>>         Status: new
> >>>>    Ticket <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
> >>>>
> >>>>
> >>>>
> >>>> I have several questions regarding the application of MET:
> >>>>
> >>>> 1:The threshold setting for variable(e.g. >273) is frequent in
the
> >>>> tutorial, whether the threshold will be invalid if I just
calculate
> and
> >>>> compare the continuous statistics.(Like if MET will get rid of
the
> data
> >>>> which is less than 273 for continuous verification?)
> >>>
> >>> The "cat_thresh" setting stands for "categorical threshold".
That is
> >>> used when computing contingency table counts and statistics (the
CTC
> and
> >>> CTS output line types).  The "cat_thresh" is used to
> >>> define what constitutes an "event" when computing a 2x2
contingency
> >>> table.  It has no impact on the continuous statistics and
partial sums
> in
> >>> the CNT and SL1L2 output line types.
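
To illustrate what the cat_thresh does, here is a minimal plain-Python sketch (hypothetical helper, not MET source code) of the 2x2 contingency counts for the event "value > threshold":

```python
def contingency_counts(fcst, obs, thresh):
    # 2x2 contingency counts for the event "value > thresh",
    # which is the role the cat_thresh setting plays.
    hits = misses = false_alarms = correct_negs = 0
    for f, o in zip(fcst, obs):
        f_event, o_event = f > thresh, o > thresh
        if f_event and o_event:
            hits += 1
        elif f_event:
            false_alarms += 1
        elif o_event:
            misses += 1
        else:
            correct_negs += 1
    return hits, false_alarms, misses, correct_negs

counts = contingency_counts([274.0, 272.0, 275.0, 270.0],
                            [275.0, 274.0, 272.0, 270.0], 273.0)
# one hit, one false alarm, one miss, one correct negative
```

Note that the raw matched pairs are untouched; only the event/non-event classification depends on the threshold, which is why the continuous statistics are unaffected.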
> >>>
> >>> However, in the future we may add a parameter to filter the
matched
> pairs
> >>> that go into the continuous statistics.  Some users have
requested the
> >>> ability to do conditional verification like that -
> >>> where you throw out some of the matched pairs before computing
> continuous
> >>> stats.  But that capability does not exist in the current METv4.1
> release.
> >>>
> >>>>
> >>>> 2:For the neighborhood method applied in gridded-gridded
comparison,
> >>>> whether this method is just useful for the categorical
variables? Can
> >>> it be
> >>>> applied in the continuous statistics? I don't quite understand
that
> why
> >>> the
> >>>> width value for the square must be an odd integer. Also, in the
> gridded
> >>>> comparison, I don't quite understand why before comparison,
fcst and
> obs
> >>>> fields needed to be smoothed first.
> >>>
> >>> To answer your second question first, they do not need to be
smoothed
> >>> first.  Typically, grid_stat is run with no "interpolation", or
> smoothing,
> >>> done.  That's why the default looks like this:
> >>> interp = {
> >>>      field      = BOTH;
> >>>      vld_thresh = 1.0;
> >>>
> >>>      type = [
> >>>         {
> >>>            method = UW_MEAN;
> >>>            width  = 1;
> >>>         }
> >>>      ];
> >>> };
> >>>
> >>> However, this provides an easy way to smooth the data before
computing
> >>> statistics.  And that is called "upscaling".  So you could see
how the
> >>> performance of your model improves the more you smooth it.
> >>> Typically, smoother forecasts score much better than more detailed
> >>> ones.  But, as I mentioned, typically no smoothing is performed.
> >>>
> >>> The neighborhood methods implemented in Grid-Stat must be
performed
> using
> >>> a threshold.  First, the raw fields are thresholded to create a
0/1
> bitmap
> >>> in each.  Then, for each neighborhood width, a
> >>> "coverage" value is computed as the percentage of grid squares
in that
> >>> box that are turned on.  The neighborhood stats are computed
over those
> >>> coverage values.  The widths must be odd so that they're
> >>> centered on each grid point.  A width of 5 means you have 2 grid
points
> >>> to the left and right.  7 means there's 3 on each side.  A width
of 4
> >>> wouldn't be centered on the grid box.
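
The coverage computation described above can be sketched in plain Python (hypothetical helper, not MET source code):

```python
def coverage(field, threshold, width, x, y):
    # Fraction of grid squares in the width x width box centered on (x, y)
    # whose value exceeds `threshold`.  `width` must be odd so the box can
    # be centered on the grid point.
    assert width % 2 == 1, "neighborhood width must be odd"
    half = width // 2
    hits = total = 0
    for j in range(y - half, y + half + 1):
        for i in range(x - half, x + half + 1):
            if 0 <= j < len(field) and 0 <= i < len(field[0]):
                total += 1
                hits += field[j][i] > threshold
    return hits / total

field = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
frac = coverage(field, 0.5, 3, 1, 1)  # 5 of the 9 squares exceed 0.5
```

The neighborhood statistics are then computed over these fractional coverage values rather than over the raw 0/1 bitmap.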
> >>>
> >>>>
> >>>> 3:In both point-stat and grid-stat, the tutorial states that it
is not
> >>>> recommended to use analysis field for comparison. I don't quite
get
> >>>> the point what the analysis field means. If I compare two
wrfout by
> >>> using
> >>>> different physical schemes, is it counted as the situation the
> tutorial
> >>>> states?
> >>>
> >>> An analysis field is just the 0-hour forecast from a model.
Users will
> >>> often compare a 24-hour forecast from the previous day to the 0-hour
> >>> forecast of the current day.  They're assuming that the
> >>> model analysis is "truth".  The problem is that the model
analysis is
> >>> typically very far from truth.  The model analysis will contain
the
> same
> >>> type of biases and errors that the forecast will.
> >>> Verifying against a model analysis won't really tell you how
good your
> >>> model is doing.
> >>>
> >>> However, we set up the MET tools in a general way to enable
users to
> >>> perform whatever type of comparison they'd like.  As you
mention, you
> can
> >>> compare the output of two different physical schemes.
> >>> But the tough part will be interpreting the meaning of the
resulting
> >>> statistics.
> >>>
> >>>>
> >>>> 4: If I compare the grid fcst and grid obs for T2 in a specific
> >>>> time(Setting beg/end=0),then I will get some statistics values,
such
> as
> >>>> ME,MSE. I am not quite sure about the calculation process, for
> example,
> >>> in
> >>>> the fcst field, whether MET first sum the T2 value from all
grid
> points
> >>>> first, then compare with the obs? Or it compares the value
between
> fcst
> >>> and
> >>>> obs for each point and do the statistics calculation.
> >>>
> >>> For gridded verification, MET looks grid-point by grid-point.
For each
> >>> grid point, it considers the forecast value (f) and the
observation
> value
> >>> (o).  If either of those contains bad data, it skips
> >>> that point.  If both data values are good, it computes an error
value
> as
> >>> f - o.  The mean error (ME) is the average error over all grid
points.
>  The
> >>> mean squared error (MSE) is the average squared
> >>> error over all grid points.
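
A minimal sketch of that grid-point-by-grid-point ME/MSE computation (plain Python; the bad-data flag value is a hypothetical choice for this sketch, not MET source code):

```python
BAD_DATA = -9999.0  # hypothetical bad-data flag used in this sketch

def me_mse(fcst, obs):
    # Walk the grids point by point, skip any pair containing bad data,
    # then average the errors (f - o) and the squared errors.
    errors = [f - o for f, o in zip(fcst, obs)
              if f != BAD_DATA and o != BAD_DATA]
    n = len(errors)
    me = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    return me, mse

me, mse = me_mse([273.0, 275.0, BAD_DATA], [272.0, 276.0, 274.0])
# errors are +1 and -1, so ME = 0.0 and MSE = 1.0
```

Note how ME = 0 can hide compensating errors that MSE still exposes, which is why both statistics are reported.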
> >>>
> >>>>
> >>>> 5: If I want to compare the variables value at the eta-level
set in
> the
> >>> wrf
> >>>> namelist, any method for me to do that instead of just setting
the
> >>> specific
> >>>> height?
> >>>
> >>> No.  MET assumes that you've post-processed your raw WRF output
for two
> >>> reasons.  First, post-processing destaggers the data and puts it
on a
> >>> regular grid.  MET doesn't support staggered grids.
> >>> Second, post-processing interpolates the model output onto
pressure
> >>> levels.  Point observations are defined at pressure levels, not
hybrid
> >>> eta-levels.  In order to compare your model output to point
> >>> data, it needs to be interpolated to pressure levels.
> >>>
> >>> For post-processing, we recommend using the Unified Post-Processor,
> >>> which writes out GRIB files that MET supports very well.
> >>>
> >>>>
> >>>> 6: For the MODE tool, I don't understand the convolution
process. The
> >>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the
same with
> >>>> C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first set the
R and
> H
> >>>> value, but I don't know the true meaning for setting them. If H
is
> >>> large,
> >>>> then R would be small, vice and versa.  However, to the value
of
> >>> C(x,y), it
> >>>> is hard to compare (large area* lower height) versus (small
area
> *large
> >>>> height). Could you explain to me a little bit more under what
> condition
> >>>> should I set larger H or smaller R?
> >>>
> >>> I don't think it's very necessary to understand the convolution
> >>> process.  It's just a circular smoothing filter.  The convolution
> >>> process is controlled by the convolution radius (the "conv_radius"
> >>> setting in the config file).  That defines the convolution radius in
> >>> grid units.  The value at each grid point is just replaced by the
> >>> average value of all grid points falling within the circle of that
> >>> radius around the point.  I do suggest playing around with it.  Keep
> >>> the threshold set the same and see how the objects change as you
> >>> increase/decrease the radius.
> >>>
> >>> Ultimately, you should play around with both the convolution
threshold
> >>> and radius to define objects that capture the phenomenon of
interest.
>  For
> >>> example, if you're interested in studying large MCS's,
> >>> you'd set the convolution radius high and the convolution
threshold low
> >>> (small number of large objects).  For small scale convection,
you'd
> set the
> >>> convolution radius low and the threshold high (large
> >>> number of small objects).
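
The circular smoothing filter described above can be sketched as follows (plain Python, illustration only, not MET source code):

```python
def convolve_circular(field, radius):
    # Replace each grid point with the mean of all points lying within
    # `radius` grid units of it (a circular smoothing filter).
    ny, nx = len(field), len(field[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            total = count = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if (dx * dx + dy * dy <= radius * radius
                            and 0 <= y + dy < ny and 0 <= x + dx < nx):
                        total += field[y + dy][x + dx]
                        count += 1
            out[y][x] = total / count
    return out

spike = [[0.0] * 3 for _ in range(3)]
spike[1][1] = 9.0
smoothed = convolve_circular(spike, 1)  # center becomes 9 / 5 = 1.8
```

A larger radius spreads each point over more neighbors, so thresholding the smoothed field afterwards yields fewer, larger objects; a smaller radius preserves detail and yields more, smaller objects.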
> >>>
> >>>>
> >>>> 7: If I want to verify the grid data from CMAQ output, like the
NO2
> >>>> concentration, can I do that with MET? How to set the 'field'
in the
> >>> config
> >>>> file?
> >>>>
> >>>
> >>> I'm not familiar with that data set.  If you have a gridded data
file
> >>> that MET supports and have questions about extracting data from
it,
> just
> >>> post a sample data file to our anonymous ftp site
> >>> following these instructions:
> >>>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>
> >>> Then send us a met-help ticket about it.
> >>>
> >>>>
> >>>> 9:My last question is regarding the ascii to nc tool. My obs
data is
> not
> >>>> bufr nor the standard ascii format for MET. I then used both
Fortran
> and
> >>>> Matlab to transfer my data to the standard ascii format for
MET. To
> the
> >>>> fortran one, it showed a lot of such warnings:
> >>>> WARNING:
> >>>> WARNING: process_little_r_obs() -> the number of data lines
specified
> in
> >>>> the header (10) does not match the number found in the data (1)
on
> line
> >>>> number 4087.
> >>>> WARNING:
> >>>> WARNING:
> >>>> WARNING: process_little_r_obs() -> the number of data lines
specified
> in
> >>>> the header (10) does not match the number found in the data (1)
on
> line
> >>>> number 4091.
> >>>> WARNING:
> >>>> WARNING:
> >>>> WARNING: process_little_r_obs() -> the number of data lines
specified
> in
> >>>> the header (10) does not match the number found in the data (1)
on
> line
> >>>> number 4095.
> >>>>
> >>>> But at last, the nc file can be produced. To the Matlab one,
the
> >>> process is
> >>>> correct, could you please tell me the reason. Is that related
to the
> >>> data
> >>>> type written onto the file, like the string or the float? But
the
> >>> format I
> >>>> set is the same in both scripts. I have also attached the data
> >>> transformed
> >>>> by fortran and matlab to this email.
> >>>
> >>> I ran the two data files you sent through ascii2nc and both ran
fine
> >>> without any warnings.  The warnings about "little_r" you're
seeing are
> odd.
> >>>   ascii2nc supports multiple ascii file formats, one of
> >>> which is named little_r.  So for some reason, it was not
interpreting
> the
> >>> format of the ascii data you passed it correctly.  You can
explicitly
> tell
> >>> it the file format with the "-format" command line
> >>> option.  I'd suggest passing the "-format met_point" option to
ascii2nc
> >>> to explicitly tell it to interpret your data using the MET point
> format.
> >>>
> >>>>
> >>>> Also, since the data is not coming from bufr, to the
Message_Type I
> just
> >>>> write 'ADPUPA', whether this will influence the statistics
result? The
> >>>> height for different observation stations might be different,
is there
> >>> any
> >>>> method for me to compare the fcst and obs for different
specific
> heights
> >>>> instead of just setting a height value(e.g. 2m)?
> >>>
> >>> For surface data, you should set the message type to ADPSFC.
When
> >>> comparing 2-meter temperature to the ADPSFC message type, no
vertical
> >>> interpolation is done.  For upper-air verification at pressure
> >>> levels, vertical interpolation is done linear in the log of
pressure.
> >>>   When verifying a certain number of meters above/below ground
(like
> winds
> >>> at 30m or 40m), vertical interpolation is done linear in
> >>> height.
> >>>
> >>>>
> >>>>
> >>>> Sincerely,
> >>>>
> >>>> Jason
> >>>>
> >>>
> >>>
> >>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63639] Several questions regarding MET application
From: John Halley Gotway
Time: Wed Nov 13 09:56:21 2013

Jason,

Try these commands:

cd <directory containing the files you want to post>
ftp -p ftp.rap.ucar.edu
cd incoming/irap/met_help
mkdir xingcheng_data_20131113
cd xingcheng_data_20131113
put <file1>
put <file2>
...
bye

Do you still have problems?

Thanks,
John

explicitly
>> tell
>>>>> it the file format with the "-format" command line
>>>>> option.  I'd suggest passing the "-format met_point" option to
ascii2nc
>>>>> to explicitly tell it to interpret your data using the MET point
>> format.
>>>>>
>>>>>>
>>>>>> Also, since the data is not coming from bufr, to the
Message_Type I
>> just
>>>>>> write 'ADPUPA', whether this will influence the statistics
result? The
>>>>>> height for different observation stations might be different,
is there
>>>>> any
>>>>>> method for me to compare the fcst and obs for different
specific
>> heights
>>>>>> instead of just setting a height value(e.g. 2m)?
>>>>>
>>>>> For surface data, you should set the message type to ADPSFC.
When
>>>>> comparing 2-meter temperature to the ADPSFC message type, no
vertical
>>>>> interpolation is done.  For upper-air verification at pressure
>>>>> levels, vertical interpolation is done linear in the log of
pressure.
>>>>>    When verifying a certain number of meters above/below ground
(like
>> winds
>>>>> at 30m or 40m), vertical interpolation is done linear in
>>>>> height.
>>>>>
>>>>>>
>>>>>>
>>>>>> Sincerely,
>>>>>>
>>>>>> Jason
>>>>>>
>>>>>
>>>>>
>>>>
>>
>>

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Thu Nov 14 07:38:03 2013

Dear John,

Thank you for your help and it works now. I have uploaded a file
called
Jason.zip to the ftp. Inside it, there are two folders called pressure
and
height respectively which include observation file, wrfout, config and
the
result I got. The pressure folder is related to the pressure issue I
mentioned to you before and height folder is related to the T0 and T2
issues. Thank you!

Sincerely,

Jason

2013/11/14 John Halley Gotway via RT <met_help at ucar.edu>

> Jason,
>
> Try these commands:
>
>    cd <directory containing the files you want to post>
>    ftp -p ftp.rap.ucar.edu
>    cd incoming/irap/met_help
>    mkdir xingcheng_data_20131113
>    cd xingcheng_data_20131113
>    put <file1>
>    put <file2>
>    ...
>    bye
>
> Do you still have problems?
>
> Thanks,
> John
>
>
> On 11/13/2013 05:44 AM, Xingcheng Lu via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >
> > Dear John,
> >
> > Thank you for your response, I tried to drop my files to the FTP,
> however,
> > while I put my files, error message showed up:
> >
> > 227 Entering Passive Mode (128,117,192,211,192,15)
> > 553 Could not determine cwdir: No such file or directory.
> >
> > Any method to solve this? Thank you!
> >
> > Sincerely,
> >
> > Jason
> >
> >
> > 2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>
> >
> >> Jason,
> >>
> >> I'm not exactly sure how to address this issue.  But let me tell
you how
> >> Point-Stat handles verification of "surface" variables.  It
depends on
> the
> >> observation message type being used.  The ADPSFC and
> >> SFCSHP message types are special cases.  Basically, any point
> observation
> >> with an ADPSFC or SFCSHP message type is assumed to be at the
surface -
> >> regardless of their actual elevation or height value.
> >>
> >> When you're verifying forecasts with a vertical level type (such
as
> >> 2-meter temperature or 10-meter winds - any vertical level
specified
> using
> >> a "Z") and comparing it to a surface message type (ADPSFC
> >> or SFCSHP), all point observations of those types will be used.
So when
> >> verifying 2-m TMP and 0-m TMP against the ADPSFC message type, I
would
> >> expect that they would use the same set of point
> >> observations.
> >>
> >> This vertical level matching part can get a bit tricky.  It'd
probably
> be
> >> best to have you send me a sample forecast file, observation
file, and
> >> Point-Stat config file along with questions as to why
> >> Point-Stat is producing the output that it is.  Usually working
through
> a
> >> specific example provides more answers than speaking more
generally.
> >>
> >> You also asked a question about pressure.  Perhaps, you could
include
> >> that in the test data you send as well.  I'm having a difficult
time
> >> understanding exactly what the issue is.  I could take a
> >> look at your config file and your data and perhaps offer some
> suggestions.
> >>
> >> You can send me data by posting it to our anonymous ftp site:
> >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>
> >> Thanks,
> >> John
> >>
> >> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >>>
> >>> Dear John,
> >>>
> >>> I met another problem when I ran the MET. In my ascii
observation data,
> >> the
> >>> height and elevation are the same. In the config file I set both
> Z0(TMP)
> >>> and Z2(TMP) and found that the RMSE of Z0 reached around 40 and
Z2 only
> >>> around 2. In theory, I think that my observation data should be
the
> >>> temperature near the ground(Not the soil temperature from wrf)
because
> >>> elevation=height. So, I want to know if I set Z0(TMP), whether
MET will
> >> use
> >>> the soil temperature from wrf to compare with the observation
data?
> >>>
> >>> Also, if it is possible, I hope that you can answer my question
> >>> about the pressure issue I asked one week ago, at your convenience.
> >>> Thank you in advance.
> >>>
> >>> Sincerely,
> >>>
> >>> Jason
> >>>
> >>>
> >>> 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
> >>>
> >>>> Hi John,
> >>>>
> >>>>
> >>>> I still not quite understand the neighborhood method, I know
that we
> >> first
> >>>> need to set a threshold to enclose other points which are
closed to
> the
> >>>> center point, but which factor decides whether the grid within
the
> >>>> searching radius is turn on or not?
> >>>>
> >>>> I ran the Ascii fortran one just now, and it worked! I don't
know why,
> >>>> maybe it is due to my cluster issue. By the way, what kind of
data
> can I
> >>>> use if I want to apply the little_r option?
> >>>>
> >>>> I just made a comparison for my observation data and forecast
data for
> >> Z0.
> >>>> I made a test and found that for ADPUPA, only when the
elevation is
> zero
> >>>> can the observation and forecast be matched. However, since the
> >> observation
> >>>> height and elevation is the same in my obs data, like if the
elevation
> >> is 5
> >>>> meters, the observation height is also 5m. I don't know under
such
> >>>> condition whether the obs can be counted as  Z0? If yes, I
don't know
> >> why
> >>>> it cannot be matched by MET. But if I set as ADPSFC, all the
obs can
> be
> >>>> matched.
> >>>>
> >>>> My data has exact pressure value, and to the Z0, it ranges from
> >> 990-1014.
> >>>> However, for both ADPUPA and ADPSFC, the results of P960-1013
and Z0
> >> are
> >>>> not the same. This results seem like: The temperature related
to
> >> pressure
> >>>> is not the same with that related to height at the same
location. I am
> >>>> wondering whether there is any interpretation for the temp
value
> >> related to
> >>>> the pressure?(I have attached one of my result to this email.)
> >>>>
> >>>> Also, I need to make a full comparison between point obs and
forecast
> on
> >>>> surface, do you have any idea that which interpretation method
is more
> >>>> reliable. Also, to the surface temperature, I wrote ADPSFC for
the
> first
> >>>> column of obs-ascii, and set Z0 in the pointstat config file,
am I
> >> correct
> >>>> or not? To the UW_Weight and DW_Weight method, I need to first
set the
> >>>> width, any suggestion for that?
> >>>>
> >>>>
> >>>> Regards,
> >>>>
> >>>> Jason
> >>>>
> >>>>
> >>>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
> >>>>
> >>>>> Jason,
> >>>>>
> >>>>>
> >>>>> Thanks,
> >>>>> John
> >>>>>
> >>>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
> >>>>>>
> >>>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
> >>>>>> Transaction: Ticket created by
xingchenglu2011 at u.northwestern.edu
> >>>>>>           Queue: met_help
> >>>>>>         Subject: Several questions regarding MET application
> >>>>>>           Owner: Nobody
> >>>>>>      Requestors: xingchenglu2011 at u.northwestern.edu
> >>>>>>          Status: new
> >>>>>>     Ticket <URL:
> >> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> I have several questions regarding the application of MET:
> >>>>>>
> >>>>>> 1:The threshold setting for variable(e.g. >273) is frequent
in the
> >>>>>> tutorial, whether the threshold will be invalid if I just
calculate
> >> and
> >>>>>> compare the continuous statistics.(Like if MET will get rid
of the
> >> data
> >>>>>> which is less than 273 for continuous verification?)
> >>>>>
> >>>>> The "cat_thresh" setting stands for "categorical threshold".
That is
> >>>>> used when computing contingency table counts and statistics
(the CTC
> >> and
> >>>>> CTS output line types).  The "cat_thresh" is used to
> >>>>> define what constitutes an "event" when computing a 2x2
contingency
> >>>>> table.  It has no impact on the continuous statistics and
partial
> sums
> >> in
> >>>>> the CNT and SL1L2 output line types.
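As a concrete illustration of the categorical threshold (made-up pair values, not MET's code), a >273 threshold turns each matched fcst/obs pair into one of the four cells of the 2x2 contingency table:

```python
# Hypothetical matched (forecast, observation) pairs, in Kelvin.
pairs = [(275.0, 274.0), (272.0, 274.5), (270.1, 269.0), (280.0, 271.0)]
thresh = 273.0  # the categorical threshold: an "event" is a value > 273

fy_oy = sum(f > thresh and o > thresh for f, o in pairs)    # hits
fy_on = sum(f > thresh and o <= thresh for f, o in pairs)   # false alarms
fn_oy = sum(f <= thresh and o > thresh for f, o in pairs)   # misses
fn_on = sum(f <= thresh and o <= thresh for f, o in pairs)  # correct negatives

pod = fy_oy / (fy_oy + fn_oy)  # e.g. probability of detection
```

The continuous statistics, by contrast, would be computed from all four pairs regardless of the threshold.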
> >>>>>
> >>>>> However, in the future we may add a parameter to filter the
matched
> >> pairs
> >>>>> that go into the continuous statistics.  Some users have
requested
> the
> >>>>> ability to do conditional verification like that -
> >>>>> where you throw out some of the matched pairs before computing
> >> continuous
> >>>>> stats.  But that does not currently exist in the current
METv4.1
> >> release.
> >>>>>
> >>>>>>
> >>>>>> 2:For the neighborhood method applied in gridded-gridded
comparison,
> >>>>>> whether this method is just useful for the categorical
variables?
> Can
> >>>>> it be
> >>>>>> applied in the continuous statistics? I don't quite
understand that
> >> why
> >>>>> the
> >>>>>> width value for the square must be an odd integer. Also, in
the
> >> gridded
> >>>>>> comparison, I don't quite understand why before comparison,
fcst and
> >> obs
> >>>>>> fields needed to be smoothed first.
> >>>>>
> >>>>> To answer your second question first, they do not need to be
smoothed
> >>>>> first.  Typically, grid_stat is run with no "interpolation",
or
> >> smoothing,
> >>>>> done.  That's why the default looks like this:
> >>>>> interp = {
> >>>>>       field      = BOTH;
> >>>>>       vld_thresh = 1.0;
> >>>>>
> >>>>>       type = [
> >>>>>          {
> >>>>>             method = UW_MEAN;
> >>>>>             width  = 1;
> >>>>>          }
> >>>>>       ];
> >>>>> };
> >>>>>
> >>>>> However, this provides an easy way to smooth the data before
> computing
> >>>>> statistics.  And that is called "upscaling".  So you could see
how
> the
> >>>>> performance of your model improves the more you smooth it.
> >>>>>     Typically, smoother forecasts score much better than more
> >>>>> detailed ones.
> >>>>>    But, as I mentioned, typically no smoothing is performed.
> >>>>>
> >>>>> The neighborhood methods implemented in Grid-Stat must be
performed
> >> using
> >>>>> a threshold.  First, the raw fields are thresholded to create
a 0/1
> >> bitmap
> >>>>> in each.  Then, for each neighborhood width, a
> >>>>> "coverage" value is computed as the percentage of grid squares
in
> that
> >>>>> box that are turned on.  The neighborhood stats are computed
over
> those
> >>>>> coverage values.  The widths must be odd so that they're
> >>>>> centered on each grid point.  A width of 5 means you have 2
grid
> points
> >>>>> to the left and right.  7 means there's 3 on each side.  A
width of 4
> >>>>> wouldn't be centered on the grid box.
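The threshold-then-coverage steps described above can be sketched like this (an illustrative sketch, not MET's code; the field values and threshold are invented):

```python
import numpy as np

def coverage(bitmap, width):
    """Fraction of points 'on' in the width x width box centered on each
    grid point (boxes are clipped at the domain edges).  The width must
    be odd so the box can center cleanly on a point."""
    assert width % 2 == 1, "neighborhood width must be an odd integer"
    h = width // 2
    ny, nx = bitmap.shape
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            box = bitmap[max(j - h, 0):j + h + 1, max(i - h, 0):i + h + 1]
            out[j, i] = box.mean()
    return out

raw = np.array([[274.0, 272.0],
                [275.0, 276.0]])
bitmap = (raw > 273.0).astype(float)  # step 1: threshold to a 0/1 field
cov = coverage(bitmap, width=3)       # step 2: coverage per neighborhood
```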
> >>>>>
> >>>>>>
> >>>>>> 3:In both point-stat and grid-stat, the tutorial states that
it is
> not
> >>>>>> recommended to use analysis field for comparison. I don't
quite get
> >>>>>> the point what the analysis field means. If I compare two
wrfout by
> >>>>> using
> >>>>>> different physical schemes, is it counted as the situation
the
> >> tutorial
> >>>>>> states?
> >>>>>
> >>>>> An analysis field is just the 0-hour forecast from a model.
Users
> will
> >>>>> often compare a 24-hour forecast from the previous day to the
0-hour
> >>>>> forecast of the current day.  They're assuming that the
> >>>>> model analysis is "truth".  The problem is that the model
analysis is
> >>>>> typically very far from truth.  The model analysis will
contain the
> >> same
> >>>>> type of biases and errors that the forecast will.
> >>>>> Verifying against a model analysis won't really tell you how
good
> your
> >>>>> model is doing.
> >>>>>
> >>>>> However, we set up the MET tools in a general way to enable
users to
> >>>>> perform whatever type of comparison they'd like.  As you
mention, you
> >> can
> >>>>> compare the output of two different physical schemes.
> >>>>> But the tough part will be interpreting the meaning of the
resulting
> >>>>> statistics.
> >>>>>
> >>>>>>
> >>>>>> 4: If I compare the grid fcst and grid obs for T2 in a
specific
> >>>>>> time(Setting beg/end=0),then I will get some statistics
values, such
> >> as
> >>>>>> ME,MSE. I am not quite sure about the calculation process,
for
> >> example,
> >>>>> in
> >>>>>> the fcst field, whether MET first sum the T2 value from all
grid
> >> points
> >>>>>> first, then compare with the obs? Or it compares the value
between
> >> fcst
> >>>>> and
> >>>>>> obs for each point and do the statistics calculation.
> >>>>>
> >>>>> For gridded verification, MET looks grid-point by grid-point.
For
> each
> >>>>> grid point, it considers the forecast value (f) and the
observation
> >> value
> >>>>> (o).  If either of those contain bad data, it skips
> >>>>> that point.  If both data values are good, it computes an
error value
> >> as
> >>>>> f - o.  The mean error (ME) is the average error over all grid
> points.
> >>   The
> >>>>> mean squared error (MSE) is the average squared
> >>>>> error over all grid points.
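That grid-point-by-grid-point procedure amounts to the following (a minimal sketch with invented values; the bad-data flag is hypothetical, and this is not MET's code):

```python
import numpy as np

BAD = -9999.0  # hypothetical bad-data flag, for illustration only

fcst = np.array([274.0, 276.0, BAD, 270.0])
obs = np.array([273.0, 275.0, 280.0, BAD])

ok = (fcst != BAD) & (obs != BAD)  # skip points where either side is bad
err = fcst[ok] - obs[ok]           # f - o at each remaining grid point

me = err.mean()          # mean error (ME)
mse = (err ** 2).mean()  # mean squared error (MSE)
```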
> >>>>>
> >>>>>>
> >>>>>> 5: If I want to compare the variables value at the eta-level
set in
> >> the
> >>>>> wrf
> >>>>>> namelist, any method for me to do that instead of just
setting the
> >>>>> specific
> >>>>>> height?
> >>>>>
> >>>>> No.  MET assumes that you've post-processed your raw WRF
output for
> two
> >>>>> reasons.  First, post-processing destaggers the data and puts
it on a
> >>>>> regular grid.  MET doesn't support staggered grids.
> >>>>> Second, post-processing interpolates the model output onto
pressure
> >>>>> levels.  Point observations are defined at pressure levels,
not
> hybrid
> >>>>> eta-levels.  In order to compare your model output to point
> >>>>> data, it needs to be interpolated to pressure levels.
> >>>>>
> >>>>> For post-processing, we recommend using the Unified Post-
Processor
> >> which
> >>>>> writes out GRIB files that MET supports very well.
> >>>>>
> >>>>>>
> >>>>>> 6: For the MODE tool, I don't understand the convolution
process.
> The
> >>>>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the
same
> with
> >>>>>> C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first set
the R
> and
> >> H
> >>>>>> value, but I don't know the true meaning for setting them. If
H is
> >>>>> large,
> >>>>>> then R would be small, vice and versa.  However, to the value
of
> >>>>> C(x,y), it
> >>>>>> is hard to compare (large area* lower height) versus (small
area
> >> *large
> >>>>>> height). Could you explain to me a little bit more under what
> >> condition
> >>>>>> should I set larger H or smaller R?
> >>>>>
> >>>>> I don't think it's very necessary to understand the convolution
> >>>>> process.  It's just a circular smoothing filter.  The convolution
> >>>>> process is controlled by the convolution radius (set in the
> >>>>> config file).  That defines the convolution radius in grid
> >>>>> units.  The value at each grid point is just replaced by the
> >>>>> average value of all grid points falling within the circle of
> >>>>> that radius around the point.  I do suggest playing around with
> >>>>> it.  Keep the threshold set the same and see how the objects
> >>>>> change as you increase/decrease the radius.
> >>>>>
> >>>>> Ultimately, you should play around with both the convolution
> threshold
> >>>>> and radius to define objects that capture the phenomenon of
interest.
> >>   For
> >>>>> example, if you're interested in studying large MCS's,
> >>>>> you'd set the convolution radius high and the convolution
threshold
> low
> >>>>> (small number of large objects).  For small scale convection,
you'd
> >> set the
> >>>>> convolution radius low and the threshold high (large
> >>>>> number of small objects).
> >>>>>
> >>>>>>
> >>>>>> 7: If I want to verify the grid data from CMAQ output, like
the NO2
> >>>>>> concentration, can I do that with MET? How to set the 'field'
in the
> >>>>> config
> >>>>>> file?
> >>>>>>
> >>>>>
> >>>>> I'm not familiar with that data set.  If you have a gridded
data file
> >>>>> that MET supports and have questions about extracting data
from it,
> >> just
> >>>>> post a sample data file to our anonymous ftp site
> >>>>> following these instructions:
> >>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>>
> >>>>> Then send us a met-help ticket about it.
> >>>>>
> >>>>>>
> >>>>>> 9:My last question is regarding the ascii to nc tool. My obs
data is
> >> not
> >>>>>> bufr nor the standard ascii format for MET. I then used both
Fortran
> >> and
> >>>>>> Matlab to transfer my data to the standard ascii format for
MET. To
> >> the
> >>>>>> fortran one, it showed a lot of such warnings:
> >>>>>> WARNING:
> >>>>>> WARNING: process_little_r_obs() -> the number of data lines
> specified
> >> in
> >>>>>> the header (10) does not match the number found in the data
(1) on
> >> line
> >>>>>> number 4087.
> >>>>>> WARNING:
> >>>>>> WARNING:
> >>>>>> WARNING: process_little_r_obs() -> the number of data lines
> specified
> >> in
> >>>>>> the header (10) does not match the number found in the data
(1) on
> >> line
> >>>>>> number 4091.
> >>>>>> WARNING:
> >>>>>> WARNING:
> >>>>>> WARNING: process_little_r_obs() -> the number of data lines
> specified
> >> in
> >>>>>> the header (10) does not match the number found in the data
(1) on
> >> line
> >>>>>> number 4095.
> >>>>>>
> >>>>>> But at last, the nc file can be produced. To the Matlab one,
the
> >>>>> process is
> >>>>>> correct, could you please tell me the reason. Is that related
to the
> >>>>> data
> >>>>>> type written onto the file, like the string or the float? But
the
> >>>>> format I
> >>>>>> set is the same in both scripts. I have also attached the
data
> >>>>> transformed
> >>>>>> by fortran and matlab to this email.
> >>>>>
> >>>>> I ran the two data files you sent through ascii2nc and both
ran fine
> >>>>> without any warnings.  The warnings about "little_r" you're
seeing
> are
> >> odd.
> >>>>>    ascii2nc supports multiple ascii file formats, one of
> >>>>> which is named little_r.  So for some reason, it was not
interpreting
> >> the
> >>>>> format of the ascii data you passed it correctly.  You can
explicitly
> >> tell
> >>>>> it the file format with the "-format" command line
> >>>>> option.  I'd suggest passing the "-format met_point" option to
> ascii2nc
> >>>>> to explicitly tell it to interpret your data using the MET
point
> >> format.
> >>>>>
> >>>>>>
> >>>>>> Also, since the data is not coming from bufr, to the
Message_Type I
> >> just
> >>>>>> write 'ADPUPA', whether this will influence the statistics
result?
> The
> >>>>>> height for different observation stations might be different,
is
> there
> >>>>> any
> >>>>>> method for me to compare the fcst and obs for different
specific
> >> heights
> >>>>>> instead of just setting a height value(e.g. 2m)?
> >>>>>
> >>>>> For surface data, you should set the message type to ADPSFC.
When
> >>>>> comparing 2-meter temperature to the ADPSFC message type, no
vertical
> >>>>> interpolation is done.  For upper-air verification at pressure
> >>>>> levels, vertical interpolation is done linear in the log of
pressure.
> >>>>>    When verifying a certain number of meters above/below
ground (like
> >> winds
> >>>>> at 30m or 40m), vertical interpolation is done linear in
> >>>>> height.
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> Sincerely,
> >>>>>>
> >>>>>> Jason
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>
> >>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63639] Several questions regarding MET application
From: John Halley Gotway
Time: Mon Nov 18 11:25:37 2013

Jason,

Sorry for the delay in getting back to you.  I ran Point-Stat using
the data you sent me (for Height) and a verbosity level of 4 (-v 4),
and I see the following...

For TMP/Z0, Point-Stat is using GRIB record 251 from your forecast
file:
251:8577024:d=11070100:TMP:kpds5=11:kpds6=1:kpds7=0:TR=10:P1=1:P2=180:TimeU=1:sfc:436hr
fcst:NAve=0

For TMP/Z2, Point-Stat is using GRIB record 271 from your forecast
file:
271:9000782:d=11070100:TMP:kpds5=11:kpds6=105:kpds7=2:TR=10:P1=1:P2=180:TimeU=1:2

Since these are both vertical level forecast types being compared to
the ADPSFC message type, all of the point observations are being used
for both comparisons.  Notice that the OBAR (or mean
observation value) is the same for Z0 and Z2 comparisons: 301.04693.
That's because the same set of observations (all 914 of them) are
being used for both comparisons.  Now, what sort of behavior
were you expecting from Point-Stat?  Were you expecting it to take the
height of the observation minus the elevation of the station to
determine the height above ground level?  And then only use the
point observation if its height above ground level matches the
forecast level?

As I believe I mentioned before, vertical level matching for
Point-Stat is rather simple.  It is not doing the checking I just
described.  Instead, it is all controlled by the "message type".
When verifying vertical level forecast fields (like Z0, Z2, or Z10)
against "surface" message types (like ADPSFC or SFCSHP), all point
observations will be used regardless of their height.  So really
it's up to you to decide if these point observations of temperature
should be compared to a 2-meter temperature forecast or a surface
temperature forecast.

Next, I ran Point-Stat using the data in the "Pressure" directory.
There you're again using the ADPSFC message type.  And you're
verifying TMP/Z0 and TMP/P1014-990.
Again Point-Stat finds TMP/Z0 in GRIB record number 251.  For
TMP/P1014-990, it only finds a single GRIB record in that range;
record 238 contains temperature at 1000mb.
Again, all of the point observations are used for the verification
tasks.  But this time the reason is different.  When comparing TMP/Z0
to the ADPSFC message type, all point observations are used
because of my explanation above.  When comparing TMP/P1014-990, Point-
Stat checks the pressure level for each point observation and only
uses it if it falls between 1014 and 990.  All of your point
observations do fall in that range, so they are all used.

Next, I tried running Point-Stat to verify TMP/P900-1000.  This
results in only 19 matched pairs being found.  Point-Stat searches
your forecast file for TMP records falling between 900 and 1000mb,
and it finds 5 of them:
203:10841294:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=900:TR=0:P1=92:P2=0:TimeU=1:900
mb:92hr fcst:NAve=0
212:11358896:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=925:TR=0:P1=92:P2=0:TimeU=1:925
mb:92hr fcst:NAve=0
221:11885126:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=950:TR=0:P1=92:P2=0:TimeU=1:950
mb:92hr fcst:NAve=0
230:12394008:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=975:TR=0:P1=92:P2=0:TimeU=1:975
mb:92hr fcst:NAve=0
238:12820842:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=1000:TR=0:P1=92:P2=0:TimeU=1:1000
mb:92hr fcst:NAve=0

For each point observation that falls in that pressure range, it
computes a forecast value by doing vertical interpolation between the
forecast levels above and below the observation.  So for a
temperature observation at 994mb, it takes the forecast values at
1000mb and 975mb and interpolates between them to the observation
level.
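Interpolation that is linear in the log of pressure can be sketched as follows (the 994 mb observation and the bracketing 1000 mb and 975 mb levels come from the example above; the temperature values are invented, and this is not MET's code):

```python
import math

def interp_logp(p_obs, p_lo, v_lo, p_hi, v_hi):
    """Interpolate between two pressure levels, linear in log(p)."""
    w = (math.log(p_obs) - math.log(p_lo)) / (math.log(p_hi) - math.log(p_lo))
    return v_lo + w * (v_hi - v_lo)

# Observation at 994 mb, bracketed by forecasts at 1000 mb and 975 mb.
t_fcst = interp_logp(994.0, 1000.0, 301.0, 975.0, 299.5)
```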

Hope that helps clarify.

Thanks,
John Halley Gotway
met_help at ucar.edu

On 11/14/2013 07:38 AM, Xingcheng Lu via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>
> Dear John,
>
> Thank you for your help and it works now. I have uploaded a file
called
> Jason.zip to the ftp. Inside it, there are two folders called
pressure and
> height respectively which include observation file, wrfout, config
and the
> result I got. The pressure folder is related to the pressure issue I
> mentioned to you before and height folder is related to the T0 and
T2
> issues. Thank you!
>
> Sincerely,
>
> Jason
>
>
> 2013/11/14 John Halley Gotway via RT <met_help at ucar.edu>
>
>> Jason,
>>
>> Try these commands:
>>
>>     cd <directory containing the files you want to post>
>>     ftp -p ftp.rap.ucar.edu
>>     cd incoming/irap/met_help
>>     mkdir xingcheng_data_20131113
>>     cd xingcheng_data_20131113
>>     put <file1>
>>     put <file2>
>>     ...
>>     bye
>>
>> Do you still have problems?
>>
>> Thanks,
>> John
>>
>>
>> On 11/13/2013 05:44 AM, Xingcheng Lu via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>>>
>>> Dear John,
>>>
>>> Thank you for your response, I tried to drop my files to the FTP,
>> however,
>>> while I put my files, error message showed up:
>>>
>>> 227 Entering Passive Mode (128,117,192,211,192,15)
>>> 553 Could not determine cwdir: No such file or directory.
>>>
>>> Any method to solve this? Thank you!
>>>
>>> Sincerely,
>>>
>>> Jason
>>>
>>>
>>> 2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>
>>>
>>>> Jason,
>>>>
>>>> I'm not exactly sure how to address this issue.  But let me tell
you how
>>>> Point-Stat handles verification of "surface" variables.  It
depends on
>> the
>>>> observation message type being used.  The ADPSFC and
>>>> SFCSHP message types are special cases.  Basically, any point
>> observation
>>>> with an ADPSFC or SFCSHP message type is assumed to be at the
surface -
>>>> regardless of their actual elevation or height value.
>>>>
>>>> When you're verifying forecasts with a vertical level type (such
as
>>>> 2-meter temperature or 10-meter winds - any vertical level
specified
>> using
>>>> a "Z") and comparing it to a surface message type (ADPSFC
>>>> or SFCSHP), all point observations of those types will be used.
So when
>>>> verifying 2-m TMP and 0-m TMP against the ADPSFC message type, I
would
>>>> expect that they would use the same set of point
>>>> observations.
>>>>
>>>> This vertical level matching part can get a bit tricky.  It'd
probably
>> be
>>>> best to have you send me a sample forecast file, observation
file, and
>>>> Point-Stat config file along with questions as to why
>>>> Point-Stat is producing the output that it is.  Usually working
through
>> a
>>>> specific example provides more answers than speaking more
generally.
>>>>
>>>> You also asked a question about pressure.  Perhaps you could include
>>>> that in the test data you send as well.  I'm having a difficult
time
>>>> understanding exactly what the issue is.  I could take a
>>>> look at your config file and your data and perhaps offer some
>> suggestions.
>>>>
>>>> You can send me data by posting it to our anonymous ftp site:
>>>>       http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>
>>>> Thanks,
>>>> John
>>>>
>>>> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>>>>>
>>>>> Dear John,
>>>>>
>>>>> I met another problem when I ran the MET. In my ascii
observation data,
>>>> the
>>>>> height and elevation are the same. In the config file I set both
>> Z0(TMP)
>>>>> and Z2(TMP) and found that the RMSE of Z0 reached around 40 and
Z2 only
>>>>> around 2. In theory, I think that my observation data should be
the
>>>>> temperature near the ground(Not the soil temperature from wrf)
because
>>>>> elevation=height. So, I want to know if I set Z0(TMP), whether
MET will
>>>> use
>>>>> the soil temperature from wrf to compare with the observation
data?
>>>>>
>>>>> Also, if it is possible, I hope that you can answer my question
>>>>> about the pressure issue I asked one week ago, at your convenience.
>>>>> Thank you in advance.
>>>>>
>>>>> Sincerely,
>>>>>
>>>>> Jason
>>>>>
>>>>>
>>>>> 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
>>>>>
>>>>>> Hi John,
>>>>>>
>>>>>>
>>>>>> I still not quite understand the neighborhood method, I know
that we
>>>> first
>>>>>> need to set a threshold to enclose other points which are
closed to
>> the
>>>>>> center point, but which factor decides whether the grid within
the
>>>>>> searching radius is turn on or not?
>>>>>>
>>>>>> I ran the Ascii fortran one just now, and it worked! I don't
know why,
>>>>>> maybe it is due to my cluster issue. By the way, what kind of
data
>> can I
>>>>>> use if I want to apply the little_r option?
>>>>>>
>>>>>> I just made a comparison for my observation data and forecast
data for
>>>> Z0.
>>>>>> I made a test and found that for ADPUPA, only when the
elevation is
>> zero
>>>>>> can the observation and forecast be matched. However, since the
>>>> observation
>>>>>> height and elevation is the same in my obs data, like if the
elevation
>>>> is 5
>>>>>> meters, the observation height is also 5m. I don't know under
such
>>>>>> condition whether the obs can be counted as  Z0? If yes, I
don't know
>>>> why
>>>>>> it cannot be matched by MET. But if I set as ADPSFC, all the
obs can
>> be
>>>>>> matched.
>>>>>>
>>>>>> My data has exact pressure value, and to the Z0, it ranges from
>>>> 990-1014.
>>>>>> However, for both ADPUPA and ADPSFC, the results of P960-1013 and Z0
>>>> are
>>>>>> not the same. This results seem like: The temperature related
to
>>>> pressure
>>>>>> is not the same with that related to height at the same
location. I am
>>>>>> wondering whether there is any interpretation for the temp
value
>>>> related to
>>>>>> the pressure?(I have attached one of my result to this email.)
>>>>>>
>>>>>> Also, I need to make a full comparison between point obs and
forecast
>> on
>>>>>> surface, do you have any idea that which interpretation method
is more
>>>>>> reliable. Also, to the surface temperature, I wrote ADPSFC for
the
>> first
>>>>>> column of obs-ascii, and set Z0 in the pointstat config file,
am I
>>>> correct
>>>>>> or not? To the UW_Weight and DW_Weight method, I need to first
set the
>>>>>> width, any suggestion for that?
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Jason
>>>>>>
>>>>>>
>>>>>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
>>>>>>
>>>>>>> Jason,
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> John
>>>>>>>
>>>>>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
>>>>>>>>
>>>>>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
>>>>>>>> Transaction: Ticket created by
xingchenglu2011 at u.northwestern.edu
>>>>>>>>            Queue: met_help
>>>>>>>>          Subject: Several questions regarding MET application
>>>>>>>>            Owner: Nobody
>>>>>>>>       Requestors: xingchenglu2011 at u.northwestern.edu
>>>>>>>>           Status: new
>>>>>>>>      Ticket <URL:
>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I have several questions regarding the application of MET:
>>>>>>>>
>>>>>>>> 1:The threshold setting for variable(e.g. >273) is frequent
in the
>>>>>>>> tutorial, whether the threshold will be invalid if I just
calculate
>>>> and
>>>>>>>> compare the continuous statistics.(Like if MET will get rid
of the
>>>> data
>>>>>>>> which is less than 273 for continuous verification?)
>>>>>>>
>>>>>>> The "cat_thresh" setting stands for "categorical threshold".
That is
>>>>>>> used when computing contingency table counts and statistics
(the CTC
>>>> and
>>>>>>> CTS output line types).  The "cat_thresh" is used to
>>>>>>> define what constitutes an "event" when computing a 2x2
contingency
>>>>>>> table.  It has no impact on the continuous statistics and
partial
>> sums
>>>> in
>>>>>>> the CNT and SL1L2 output line types.
>>>>>>>
>>>>>>> However, in the future we may add a parameter to filter the
matched
>>>> pairs
>>>>>>> that go into the continuous statistics.  Some users have
requested
>> the
>>>>>>> ability to do conditional verification like that -
>>>>>>> where you throw out some of the matched pairs before computing
>>>> continuous
>>>>>>> stats.  But that does not currently exist in the current
METv4.1
>>>> release.
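The 2x2 contingency table logic described above can be sketched in a few
lines of Python. This is an illustration only, not MET's code; the
function name and argument layout here are made up for the example.

```python
# Sketch (not MET code): how a categorical threshold such as ">273"
# turns matched forecast/observation pairs into 2x2 contingency counts.
def contingency_counts(pairs, thresh=273.0):
    """pairs: list of (forecast, observation) tuples."""
    fy_oy = fy_on = fn_oy = fn_on = 0
    for f, o in pairs:
        f_event = f > thresh          # forecast "event"
        o_event = o > thresh          # observed "event"
        if f_event and o_event:
            fy_oy += 1                # hit
        elif f_event and not o_event:
            fy_on += 1                # false alarm
        elif not f_event and o_event:
            fn_oy += 1                # miss
        else:
            fn_on += 1                # correct rejection
    return fy_oy, fy_on, fn_oy, fn_on

print(contingency_counts([(275, 276), (275, 270), (270, 274), (269, 268)]))
# -> (1, 1, 1, 1)
```

The key point from the answer above: the threshold only partitions pairs
into these four counts; it does not discard any pairs from the CNT or
SL1L2 continuous statistics.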
>>>>>>>
>>>>>>>>
>>>>>>>> 2:For the neighborhood method applied in gridded-gridded
comparison,
>>>>>>>> whether this method is just useful for the categorical
variables?
>> Can
>>>>>>> it be
>>>>>>>> applied in the continuous statistics? I don't quite
understand that
>>>> why
>>>>>>> the
>>>>>>>> width value for the square must be an odd integer. Also, in
the
>>>> gridded
>>>>>>>> comparison, I don't quite understand why before comparison,
fcst and
>>>> obs
>>>>>>>> fields needed to be smoothed first.
>>>>>>>
>>>>>>> To answer your second question first, they do not need to be
smoothed
>>>>>>> first.  Typically, grid_stat is run with no "interpolation",
or
>>>> smoothing,
>>>>>>> done.  That's why the default looks like this:
>>>>>>> interp = {
>>>>>>>        field      = BOTH;
>>>>>>>        vld_thresh = 1.0;
>>>>>>>
>>>>>>>        type = [
>>>>>>>           {
>>>>>>>              method = UW_MEAN;
>>>>>>>              width  = 1;
>>>>>>>           }
>>>>>>>        ];
>>>>>>> };
>>>>>>>
>>>>>>> However, this provides an easy way to smooth the data before
>> computing
>>>>>>> statistics.  And that is called "upscaling".  So you could see
how
>> the
>>>>>>> performance of your model improves the more you smooth it.
>>>>>>>      Typically, smoother forecasts score much better than more
detailed
>>>> ones.
>>>>>>>     But, as I mentioned, typically no smoothing is performed.
>>>>>>>
>>>>>>> The neighborhood methods implemented in Grid-Stat must be
performed
>>>> using
>>>>>>> a threshold.  First, the raw fields are thresholded to create
a 0/1
>>>> bitmap
>>>>>>> in each.  Then, for each neighborhood width, a
>>>>>>> "coverage" value is computed as the percentage of grid squares
in
>> that
>>>>>>> box that are turned on.  The neighborhood stats are computed
over
>> those
>>>>>>> coverage values.  The widths must be odd so that they're
>>>>>>> centered on each grid point.  A width of 5 means you have 2
grid
>> points
>>>>>>> to the left and right.  7 means there's 3 on each side.  A
width of 4
>>>>>>> wouldn't be centered on the grid box.
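The neighborhood "coverage" computation described above can be sketched
as follows. This is illustrative Python, not Grid-Stat's implementation;
the function name is invented for the example.

```python
# Sketch (not MET code): neighborhood "coverage" at one grid point.
# The raw field is first thresholded to a 0/1 bitmap; coverage at (i, j)
# is the fraction of points turned on in the width x width box centered
# there.  The width must be odd so the box can be centered on (i, j).
def coverage(bitmap, i, j, width):
    assert width % 2 == 1, "width must be odd so the box is centered"
    half = width // 2
    on = total = 0
    for di in range(-half, half + 1):
        for dj in range(-half, half + 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < len(bitmap) and 0 <= jj < len(bitmap[0]):
                total += 1
                on += bitmap[ii][jj]
    return on / total

bitmap = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
print(coverage(bitmap, 1, 1, 3))   # 2 of 9 points on -> 0.2222...
```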
>>>>>>>
>>>>>>>>
>>>>>>>> 3:In both point-stat and grid-stat, the tutorial states that
it is
>> not
>>>>>>>> recommended to use analysis field for comparison. I don't
quite get
>>>>>>>> the point what the analysis field means. If I compare two
wrfout by
>>>>>>> using
>>>>>>>> different physical schemes, is it counted as the situation
the
>>>> tutorial
>>>>>>>> states?
>>>>>>>
>>>>>>> An analysis field is just the 0-hour forecast from a model.
Users
>> will
>>>>>>> often compare a 24-hour forecast from the previous day to the
0-hour
>>>>>>> forecast of the current day.  They're assuming that the
>>>>>>> model analysis is "truth".  The problem is that the model
analysis is
>>>>>>> typically very far from truth.  The model analysis will
contain the
>>>> same
>>>>>>> type of biases and errors that the forecast will.
>>>>>>> Verifying against a model analysis won't really tell you how
good
>> your
>>>>>>> model is doing.
>>>>>>>
>>>>>>> However, we set up the MET tools in a general way to enable
users to
>>>>>>> perform whatever type of comparison they'd like.  As you
mention, you
>>>> can
>>>>>>> compare the output of two different physical schemes.
>>>>>>> But the tough part will be interpreting the meaning of the
resulting
>>>>>>> statistics.
>>>>>>>
>>>>>>>>
>>>>>>>> 4: If I compare the grid fcst and grid obs for T2 in a
specific
>>>>>>>> time(Setting beg/end=0),then I will get some statistics
values, such
>>>> as
>>>>>>>> ME,MSE. I am not quite sure about the calculation process,
for
>>>> example,
>>>>>>> in
>>>>>>>> the fcst field, whether MET first sum the T2 value from all
grid
>>>> points
>>>>>>>> first, then compare with the obs? Or it compares the value
between
>>>> fcst
>>>>>>> and
>>>>>>>> obs for each point and do the statistics calculation.
>>>>>>>
>>>>>>> For gridded verification, MET looks grid-point by grid-point.
For
>> each
>>>>>>> grid point, it considers the forecast value (f) and the
observation
>>>> value
>>>>>>> (o).  If either of those contain bad data, it skips
>>>>>>> that point.  If both data values are good, it computes an
error value
>>>> as
>>>>>>> f - o.  The mean error (ME) is the average error over all grid
>> points.
>>>>    The
>>>>>>> mean squared error (MSE) is the average squared
>>>>>>> error over all grid points.
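The point-by-point procedure described above (skip bad data, compute
f - o, then average) can be sketched like this. Illustrative Python only,
not MET's code; `None` stands in for MET's bad-data flag.

```python
# Sketch (not MET code): ME and MSE computed grid point by grid point,
# skipping any pair where either value is "bad data" (None here).
def me_mse(fcst, obs):
    errors = [f - o for f, o in zip(fcst, obs)
              if f is not None and o is not None]
    me = sum(errors) / len(errors)                 # mean error
    mse = sum(e * e for e in errors) / len(errors)  # mean squared error
    return me, mse

fcst = [274.0, 280.0, None, 290.0]
obs  = [273.0, 282.0, 275.0, 288.0]
print(me_mse(fcst, obs))   # errors are 1, -2, 2 -> ME = 1/3, MSE = 3.0
```

So nothing is summed field-wide before comparison; every statistic is
built from the per-point error values.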
>>>>>>>
>>>>>>>>
>>>>>>>> 5: If I want to compare the variables value at the eta-level
set in
>>>> the
>>>>>>> wrf
>>>>>>>> namelist, any method for me to do that instead of just
setting the
>>>>>>> specific
>>>>>>>> height?
>>>>>>>
>>>>>>> No.  MET assumes that you've post-processed your raw WRF
output for
>> two
>>>>>>> reasons.  First, post-processing destaggers the data and puts
it on a
>>>>>>> regular grid.  MET doesn't support staggered grids.
>>>>>>> Second, post-processing interpolates the model output onto
pressure
>>>>>>> levels.  Point observations are defined at pressure levels,
not
>> hybrid
>>>>>>> eta-levels.  In order to compare your model output to point
>>>>>>> data, it needs to be interpolated to pressure levels.
>>>>>>>
>>>>>>> For post-processing, we recommend using the Unified Post-
Processor
>>>> which
>>>>>>> writes out GRIB files that MET supports very well.
>>>>>>>
>>>>>>>>
>>>>>>>> 6: For the MODE tool, I don't understand the convolution
process.
>> The
>>>>>>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the
same
>> with
>>>>>>>> C(x,y)=∑a(u,v)f(x-u,y-v)?  I know that we need to first set
the R
>> and
>>>> H
>>>>>>>> value, but I don't know the true meaning for setting them. If
H is
>>>>>>> large,
>>>>>>>> then R would be small, vice and versa.  However, to the value
of
>>>>>>> C(x,y), it
>>>>>>>> is hard to compare (large area* lower height) versus (small
area
>>>> *large
>>>>>>>> height). Could you explain to me a little bit more under what
>>>> condition
>>>>>>>> should I set larger H or smaller R?
>>>>>>>
>>>>>>> I don't think it's very necessary to understand the
convolution
>>>> process.
>>>>>>>     It's just a circular smoothing filter.  The convolution
process is
>>>>>>> controlled by a radius setting (in the config file).  That
>>>>>>> defines the convolution radius in grid
units.
>>   The
>>>>>>> value at each grid point is just replaced by the average value
of all
>>>> grid
>>>>>>> points falling within the circle of that radius around
>>>>>>> the point.  I do suggest playing around with it.  Keep the
threshold
>>>> set
>>>>>>> the same and see how the objects change as you
increase/decrease the radius.
>>>>>>>
>>>>>>> Ultimately, you should play around with both the convolution
>> threshold
>>>>>>> and radius to define objects that capture the phenomenon of
interest.
>>>>    For
>>>>>>> example, if you're interested in studying large MCS's,
>>>>>>> you'd set the convolution radius high and the convolution
threshold
>> low
>>>>>>> (small number of large objects).  For small scale convection,
you'd
>>>> set the
>>>>>>> convolution radius low and the threshold high (large
>>>>>>> number of small objects).
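The circular smoothing filter John describes can be sketched directly: a
brute-force version where each grid point is replaced by the mean of all
points within radius R. Illustrative Python only, not MODE's
implementation; the function name is invented.

```python
# Sketch (not MODE's code): circular smoothing filter.  Each value is
# replaced by the mean of all grid points within radius R of the point.
def circular_smooth(field, radius):
    rows, cols = len(field), len(field[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total = count = 0.0
            for u in range(rows):
                for v in range(cols):
                    if (u - i) ** 2 + (v - j) ** 2 <= radius ** 2:
                        total += field[u][v]
                        count += 1
            out[i][j] = total / count
    return out

field = [[0.0, 0.0, 0.0],
         [0.0, 9.0, 0.0],
         [0.0, 0.0, 0.0]]
print(circular_smooth(field, 1)[1][1])   # mean of 5 points -> 1.8
```

A larger radius averages over more points, so isolated peaks get spread
out and flattened; thresholding the smoothed field is what then defines
the objects.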
>>>>>>>
>>>>>>>>
>>>>>>>> 7: If I want to verify the grid data from CMAQ output, like
the NO2
>>>>>>>> concentration, can I do that with MET? How to set the 'field'
in the
>>>>>>> config
>>>>>>>> file?
>>>>>>>>
>>>>>>>
>>>>>>> I'm not familiar with that data set.  If you have a gridded
data file
>>>>>>> that MET supports and have questions about extracting data
from it,
>>>> just
>>>>>>> post a sample data file to our anonymous ftp site
>>>>>>> following these instructions:
>>>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>>>>
>>>>>>> Then send us a met-help ticket about it.
>>>>>>>
>>>>>>>>
>>>>>>>> 9:My last question is regarding the ascii to nc tool. My obs
data is
>>>> not
>>>>>>>> bufr nor the standard ascii format for MET. I then used both
Fortran
>>>> and
>>>>>>>> Matlab to transfer my data to the standard ascii format for
MET. To
>>>> the
>>>>>>>> fortran one, it showed a lot of such warnings:
>>>>>>>> WARNING:
>>>>>>>> WARNING: process_little_r_obs() -> the number of data lines
>> specified
>>>> in
>>>>>>>> the header (10) does not match the number found in the data
(1) on
>>>> line
>>>>>>>> number 4087.
>>>>>>>> WARNING:
>>>>>>>> WARNING:
>>>>>>>> WARNING: process_little_r_obs() -> the number of data lines
>> specified
>>>> in
>>>>>>>> the header (10) does not match the number found in the data
(1) on
>>>> line
>>>>>>>> number 4091.
>>>>>>>> WARNING:
>>>>>>>> WARNING:
>>>>>>>> WARNING: process_little_r_obs() -> the number of data lines
>> specified
>>>> in
>>>>>>>> the header (10) does not match the number found in the data
(1) on
>>>> line
>>>>>>>> number 4095.
>>>>>>>>
>>>>>>>> But at last, the nc file can be produced. To the Matlab one,
the
>>>>>>> process is
>>>>>>>> correct, could you please tell me the reason. Is that related
to the
>>>>>>> data
>>>>>>>> type written onto the file, like the string or the float? But
the
>>>>>>> format I
>>>>>>>> set is the same in both scripts. I have also attached the
data
>>>>>>> transformed
>>>>>>>> by fortran and matlab to this email.
>>>>>>>
>>>>>>> I ran the two data files you sent through ascii2nc and both
ran fine
>>>>>>> without any warnings.  The warnings about "little_r" you're
seeing
>> are
>>>> odd.
>>>>>>>     ascii2nc supports multiple ascii file formats, one of
>>>>>>> which is named little_r.  So for some reason, it was not
interpreting
>>>> the
>>>>>>> format of the ascii data you passed it correctly.  You can
explicitly
>>>> tell
>>>>>>> it the file format with the "-format" command line
>>>>>>> option.  I'd suggest passing the "-format met_point" option to
>> ascii2nc
>>>>>>> to explicitly tell it to interpret your data using the MET
point
>>>> format.
>>>>>>>
>>>>>>>>
>>>>>>>> Also, since the data is not coming from bufr, to the
Message_Type I
>>>> just
>>>>>>>> write 'ADPUPA', whether this will influence the statistics
result?
>> The
>>>>>>>> height for different observation stations might be different,
is
>> there
>>>>>>> any
>>>>>>>> method for me to compare the fcst and obs for different
specific
>>>> heights
>>>>>>>> instead of just setting a height value(e.g. 2m)?
>>>>>>>
>>>>>>> For surface data, you should set the message type to ADPSFC.
When
>>>>>>> comparing 2-meter temperature to the ADPSFC message type, no
vertical
>>>>>>> interpolation is done.  For upper-air verification at pressure
>>>>>>> levels, vertical interpolation is done linear in the log of
pressure.
>>>>>>>     When verifying a certain number of meters above/below
ground (like
>>>> winds
>>>>>>> at 30m or 40m), vertical interpolation is done linear in
>>>>>>> height.
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Sincerely,
>>>>>>>>
>>>>>>>> Jason
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>>
>>
>>

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Wed Nov 20 07:29:03 2013

Dear John,

Thank you for your help and detailed explanation. To the pressure
part, now
I understand, interpolation will be done for the FCST. However, what
I am
still confused about is the Z0 and Z2. According to your explanation,
I
know that the FCST will be compared to the OBS directly without doing
any
interpolation. However, I don't understand why the error between OBS
and
Z0 will be larger than Z2, since my OBS data should be at height
0(Height-Elevation). So, I am wondering, if I set Z0, whether MET will
extract the soil temperature from the WRF output? Thank you again for your
help!

Sincerely,

Jason

2013/11/19 John Halley Gotway via RT <met_help at ucar.edu>

> Jason,
>
> Sorry for the delay in getting back to you.  I ran Point-Stat using
the
> data you sent me (for Height) and a verbosity level of 4 (-v 4), and
I see
> the following...
>
> For TMP/Z0, Point-Stat is using GRIB record 251 from your forecast
file:
>
>
251:8577024:d=11070100:TMP:kpds5=11:kpds6=1:kpds7=0:TR=10:P1=1:P2=180:TimeU=1:sfc:436hr
> fcst:NAve=0
>
> For TMP/Z2, Point-Stat is using GRIB record 271 from your forecast
file:
>
>
271:9000782:d=11070100:TMP:kpds5=11:kpds6=105:kpds7=2:TR=10:P1=1:P2=180:TimeU=1:2
>
> Since these are both vertical level forecast types being compared to
the
> ADPSFC message type, all of the point observations are being used
for both
> comparisons.  Notice that the OBAR (or mean
> observation value) is the same for Z0 and Z2 comparisons: 301.04693.
>  That's because the same set of observations (all 914 of them) are
being
> used for both comparisons.  Now, what sort of behavior
> were you expecting from Point-Stat?  Were you expecting it to take
the
> height of the observation minus the elevation of the station to
determine
> the height above ground level?  And then only use the
> point observation if its height above ground level matches the
forecast
> level?
>
> As I mentioned in the past I believe, vertical level matching for
> Point-Stat is rather simple.  It is not doing the checking I just
> described.  Instead, it is all controlled by the "message type".
> When verifying vertical level forecast fields (like Z0, Z2, or Z10)
> against "surface" message type (like ADPSFC or SFCSHP), all point
> observations will be used regardless of their height.  So really
> it's up to you to decide if these point observations of temperature
should be
> compared to a 2-meter temperature forecast or a surface temperature
> forecast.
>
> Next, I ran Point-Stat using the data in the "Pressure" directory.
All of
> your point observations use the ADPSFC message
type.
>  And you're verifying TMP/Z0 and TMP/P1014-990.
> Again Point-Stat finds TMP/Z0 in GRIB record number 251.  For
> TMP/P1014-990, it only finds a single GRIB record in that range;
record 238
> contains temperature of 1000mb.
> Again, all of the point observations are used for the verification.
>  But this time the reason is different.  When comparing TMP/Z0 to
the
> ADPSFC message type, all point observations are used
> because of my explanation above.  When comparing TMP/P1014-990,
Point-Stat
> checks the pressure level for each point observation and only uses
it if it
> falls between 1014 and 990.  All of your point
> observation do fall in that range, so they are all used.
>
> Next, I tried running Point-Stat to verify TMP/P900-1000.  This
results in
> only 19 matched pairs being found.  Point-Stat searches your
forecast file
> for TMP records falling between 900 and 1000mb,
> and it finds 5 of them:
>
>
203:10841294:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=900:TR=0:P1=92:P2=0:TimeU=1:900
> mb:92hr fcst:NAve=0
>
>
212:11358896:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=925:TR=0:P1=92:P2=0:TimeU=1:925
> mb:92hr fcst:NAve=0
>
>
221:11885126:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=950:TR=0:P1=92:P2=0:TimeU=1:950
> mb:92hr fcst:NAve=0
>
>
230:12394008:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=975:TR=0:P1=92:P2=0:TimeU=1:975
> mb:92hr fcst:NAve=0
>
>
238:12820842:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=1000:TR=0:P1=92:P2=0:TimeU=1:1000
> mb:92hr fcst:NAve=0
>
> For each point observation that falls in that pressure range, it
computes
> a forecast value by doing vertical interpolation between the
forecast
> levels above and below the observation.  So for a
> temperature observation at 994mb, it takes the forecast values at
1000mb
> and 975mb and interpolates between them to the observation level.
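The vertical interpolation step in that example (linear in the log of
pressure, as described earlier in this thread) can be sketched as
follows. Illustrative Python only, not Point-Stat's code; the function
name and the temperature values are made up for the example.

```python
import math

# Sketch (not MET code): vertical interpolation linear in log(pressure),
# e.g. a 994 mb observation matched between 1000 mb and 975 mb forecasts.
def interp_log_p(p_lo, f_lo, p_hi, f_hi, p_obs):
    w = (math.log(p_obs) - math.log(p_lo)) / (math.log(p_hi) - math.log(p_lo))
    return f_lo + w * (f_hi - f_lo)

# Hypothetical values: 300 K at 1000 mb, 298 K at 975 mb, obs at 994 mb.
print(interp_log_p(1000.0, 300.0, 975.0, 298.0, 994.0))   # -> about 299.52
```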
>
> Hope that helps clarify.
>
> Thanks,
> John Halley Gotway
> met_help at ucar.edu
>
> On 11/14/2013 07:38 AM, Xingcheng Lu via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >
> > Dear John,
> >
> > Thank you for your help and it works now. I have uploaded a file
called
> > Jason.zip to the ftp. Inside it, there are two folders called
pressure
> and
> > height respectively which include observation file, wrfout, config
and
> the
> > result I got. The pressure folder is related to the pressure issue
I
> > mentioned to you before and height folder is related to the T0 and
T2
> > issues. Thank you!
> >
> > Sincerely,
> >
> > Jason
> >
> >
> > 2013/11/14 John Halley Gotway via RT <met_help at ucar.edu>
> >
> >> Jason,
> >>
> >> Try these commands:
> >>
> >>     cd <directory containing the files you want to post>
> >>     ftp -p ftp.rap.ucar.edu
> >>     cd incoming/irap/met_help
> >>     mkdir xingcheng_data_20131113
> >>     cd xingcheng_data_20131113
> >>     put <file1>
> >>     put <file2>
> >>     ...
> >>     bye
> >>
> >> Do you still have problems?
> >>
> >> Thanks,
> >> John
> >>
> >>
> >> On 11/13/2013 05:44 AM, Xingcheng Lu via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >>>
> >>> Dear John,
> >>>
> >>> Thank you for your response, I tried to drop my files to the
FTP,
> >> however,
> >>> while I put my files, an error message showed up:
> >>>
> >>> 227 Entering Passive Mode (128,117,192,211,192,15)
> >>> 553 Could not determine cwdir: No such file or directory.
> >>>
> >>> Any method to solve this? Thank you!
> >>>
> >>> Sincerely,
> >>>
> >>> Jason
> >>>
> >>>
> >>> 2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>
> >>>
> >>>> Jason,
> >>>>
> >>>> I'm not exactly sure how to address this issue.  But let me
tell you
> how
> >>>> Point-Stat handles verification of "surface" variables.  It
depends on
> >> the
> >>>> observation message type being used.  The ADPSFC and
> >>>> SFCSHP message types are special cases.  Basically, any point
> >> observation
> >>>> with an ADPSFC or SFCSHP message type are assumed to be at the
> surface -
> >>>> regardless of their actual elevation or height value.
> >>>>
> >>>> When you're verifying forecasts with a vertical level type
(such as
> >>>> 2-meter temperature or 10-meter winds - any vertical level
specified
> >> using
> >>>> a "Z") and comparing it to a surface message type (ADPSFC
> >>>> or SFCSHP), all point observations of those types will be used.
So
> when
> >>>> verifying 2-m TMP and 0-m TMP against the ADPSFC message type,
I would
> >>>> expect that they would use the same set of point
> >>>> observations.
> >>>>
> >>>> This vertical level matching part can get a bit tricky.  It'd
probably
> >> be
> >>>> best to have you send me a sample forecast file, observation
file, and
> >>>> Point-Stat config file along with questions as to why
> >>>> Point-Stat is producing the output that it is.  Usually working
> through
> >> a
> >>>> specific example provides more answers than speaking more
generally.
> >>>>
> >>>> You also asked a question about pressure.  Perhaps you could
include
> >>>> that in the test data you send as well.  I'm having a difficult
time
> >>>> understanding exactly what the issue is.  I could take a
> >>>> look at your config file and your data and perhaps offer some
> >> suggestions.
> >>>>
> >>>> You can send me data by posting it to our anonymous ftp site:
> >>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639
>
> >>>>>
> >>>>> Dear John,
> >>>>>
> >>>>> I met another problem when I ran the MET. In my ascii
observation
> data,
> >>>> the
> >>>>> height and elevation are the same. In the config file I set
both
> >> Z0(TMP)
> >>>>> and Z2(TMP) and found that the RMSE of Z0 reached around 40
and Z2
> only
> >>>>> around 2. In theory, I think that my observation data should
be the
> >>>>> temperature near the ground(Not the soil temperature from wrf)
> because
> >>>>> elevation=height. So, I want to know if I set Z0(TMP), whether
MET
> will
> >>>> use
> >>>>> the soil temperature from wrf to compare with the observation
data?
> >>>>>
> >>>>> Also, if it is possible, hope that you can answer my question
> regarding the
> >>>>> pressure issue I asked one week ago at your convenience. Thank
you in advance!
> >>>>>
> >>>>> Sincerely,
> >>>>>
> >>>>> Jason
> >>>>>
> >>>>>
> >>>>> 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
> >>>>>
> >>>>>> Hi John,
> >>>>>>
> >>>>>>
> >>>>>> I still do not quite understand the neighborhood method. I know
that we
> >>>> first
> >>>>>> need to set a threshold to enclose other points which are
close to
> >> the
> >>>>>> center point, but which factor decides whether the grid
within the
> >>>>>> searching radius is turned on or not?
> >>>>>>
> >>>>>> I ran the Ascii fortran one just now, and it worked! I don't
know
> why,
> >>>>>> maybe it is due to my cluster issue. By the way, what kind of
data
> >> can I
> >>>>>> use if I want to apply the little_r option?
> >>>>>>
> >>>>>> I just made a comparison for my observation data and forecast
data
> for
> >>>> Z0.
> >>>>>> I made a test and found that for ADPUPA, only when the
elevation is
> >> zero
> >>>>>> can the observation and forecast be matched. However, since
the
> >>>> observation
> >>>>>> height and elevation is the same in my obs data, like if the
> elevation
> >>>> is 5
> >>>>>> meters, the observation height is also 5m. I don't know under
such
> >>>>>> condition whether the obs can be counted as  Z0? If yes, I
don't
> know
> >>>> why
> >>>>>> it cannot be matched by MET. But if I set as ADPSFC, all the
obs can
> >> be
> >>>>>> matched.
> >>>>>>
> >>>>>> My data has exact pressure value, and to the Z0, it ranges
from
> >>>> 990-1014.
> >>>>>> However, for both ADPUPA and ADPSFC, the results of P960-1013
and
> Z0
> >>>> are
> >>>>>> not the same. This result seems like: The temperature related
to
> >>>> pressure
> >>>>>> is not the same with that related to height at the same
location. I
> am
> >>>>>> wondering whether there is any interpolation for the temp
value
> >>>> related to
> >>>>>> the pressure?(I have attached one of my result to this
email.)
> >>>>>>
> >>>>>> Also, I need to make a full comparison between point obs and
> forecast
> >> on
> >>>>>> surface, do you have any idea which interpolation
method is
> more
> >>>>>> reliable. Also, to the surface temperature, I wrote ADPSFC
for the
> >> first
> >>>>>> column of obs-ascii, and set Z0 in the pointstat config file,
am I
> >>>> correct
> >>>>>> or not? To the UW_Weight and DW_Weight method, I need to
first set
> the
> >>>>>> width, any suggestion for that?
> >>>>>>
> >>>>>>
> >>>>>> Regards,
> >>>>>>
> >>>>>> Jason
> >>>>>>
> >>>>>>
> >>>>>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
> >>>>>>
> >>>>>>> Jason,
> >>>>>>>
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> John
> >>>>>>>
> >>>>>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
> >>>>>>>>
> >>>>>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
> >>>>>>>> Transaction: Ticket created by
xingchenglu2011 at u.northwestern.edu
> >>>>>>>>            Queue: met_help
> >>>>>>>>          Subject: Several questions regarding MET
application
> >>>>>>>>            Owner: Nobody
> >>>>>>>>       Requestors: xingchenglu2011 at u.northwestern.edu
> >>>>>>>>           Status: new
> >>>>>>>>      Ticket <URL:
> >>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> I have several questions regarding the application of MET:
> >>>>>>>>
> >>>>>>>> 1:The threshold setting for variable(e.g. >273) is frequent
in the
> >>>>>>>> tutorial, whether the threshold will be invalid if I just
> calculate
> >>>> and
> >>>>>>>> compare the continuous statistics.(Like if MET will get rid
of the
> >>>> data
> >>>>>>>> which is less than 273 for continuous verification?)
> >>>>>>>
> >>>>>>> The "cat_thresh" setting stands for "categorical threshold".
That
> is
> >>>>>>> used when computing contingency table counts and statistics
(the
> CTC
> >>>> and
> >>>>>>> CTS output line types).  The "cat_thresh" is used to
> >>>>>>> define what constitutes an "event" when computing a 2x2
contingency
> >>>>>>> table.  It has no impact on the continuous statistics and
partial
> >> sums
> >>>> in
> >>>>>>> the CNT and SL1L2 output line types.
> >>>>>>>
> >>>>>>> However, in the future we may add a parameter to filter the
matched
> >>>> pairs
> >>>>>>> that go into the continuous statistics.  Some users have
requested
> >> the
> >>>>>>> ability to do conditional verification like that -
> >>>>>>> where you throw out some of the matched pairs before
computing
> >>>> continuous
> >>>>>>> stats.  But that does not currently exist in the current
METv4.1
> >>>> release.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> 2:For the neighborhood method applied in gridded-gridded
> comparison,
> >>>>>>>> whether this method is just useful for the categorical
variables?
> >> Can
> >>>>>>> it be
> >>>>>>>> applied in the continuous statistics? I don't quite
understand
> that
> >>>> why
> >>>>>>> the
> >>>>>>>> width value for the square must be an odd integer. Also, in
the
> >>>> gridded
> >>>>>>>> comparison, I don't quite understand why before comparison,
fcst
> and
> >>>> obs
> >>>>>>>> fields needed to be smoothed first.
> >>>>>>>
> >>>>>>> To answer your second question first, they do not need to be
> smoothed
> >>>>>>> first.  Typically, grid_stat is run with no "interpolation",
or
> >>>> smoothing,
> >>>>>>> done.  That's why the default looks like this:
> >>>>>>> interp = {
> >>>>>>>        field      = BOTH;
> >>>>>>>        vld_thresh = 1.0;
> >>>>>>>
> >>>>>>>        type = [
> >>>>>>>           {
> >>>>>>>              method = UW_MEAN;
> >>>>>>>              width  = 1;
> >>>>>>>           }
> >>>>>>>        ];
> >>>>>>> };
> >>>>>>>
> >>>>>>> However, this provides an easy way to smooth the data before
> >> computing
> >>>>>>> statistics.  And that is called "upscaling".  So you could
see how
> >> the
> >>>>>>> performance of your model improves the more you smooth it.
> >>>>>>>      Typically, smoother forecasts score much better than
more
> detailed
> >>>> ones.
> >>>>>>>     But, as I mentioned, typically no smoothing is
performed.
> >>>>>>>
> >>>>>>> The neighborhood methods implemented in Grid-Stat must be
performed
> >>>> using
> >>>>>>> a threshold.  First, the raw fields are thresholded to
create a 0/1
> >>>> bitmap
> >>>>>>> in each.  Then, for each neighborhood width, a
> >>>>>>> "coverage" value is computed as the percentage of grid
squares in
> >> that
> >>>>>>> box that are turned on.  The neighborhood stats are computed
over
> >> those
> >>>>>>> coverage values.  The widths must be odd so that they're
> >>>>>>> centered on each grid point.  A width of 5 means you have 2
grid
> >> points
> >>>>>>> to the left and right.  7 means there's 3 on each side.  A
width
> of 4
> >>>>>>> wouldn't be centered on the grid box.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> 3:In both point-stat and grid-stat, the tutorial states
that it is
> >> not
> >>>>>>>> recommended to use analysis field for comparison. I don't
quite
> get
> >>>>>>>> the point what the analysis field means. If I compare two
wrfout
> by
> >>>>>>> using
> >>>>>>>> different physical schemes, is it counted as the situation
the
> >>>> tutorial
> >>>>>>>> states?
> >>>>>>>
> >>>>>>> An analysis field is just the 0-hour forecast from a model.
Users
> >> will
> >>>>>>> often compare a 24-hour forecast from the previous day to
the
> 0-hour
> >>>>>>> forecast of the current day.  They're assuming that the
> >>>>>>> model analysis is "truth".  The problem is that the model
analysis
> is
> >>>>>>> typically very far from truth.  The model analysis will
contain the
> >>>> same
> >>>>>>> type of biases and errors that the forecast will.
> >>>>>>> Verifying against a model analysis won't really tell you how
good
> >> your
> >>>>>>> model is doing.
> >>>>>>>
> >>>>>>> However, we set up the MET tools in a general way to enable
users
> to
> >>>>>>> perform whatever type of comparison they'd like.  As you
mention,
> you
> >>>> can
> >>>>>>> compare the output of two different physical schemes.
> >>>>>>> But the tough part will be interpreting the meaning of the
> resulting
> >>>>>>> statistics.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> 4: If I compare the grid fcst and grid obs for T2 in a
specific
> >>>>>>>> time(Setting beg/end=0),then I will get some statistics
values,
> such
> >>>> as
> >>>>>>>> ME,MSE. I am not quite sure about the calculation process,
for
> >>>> example,
> >>>>>>> in
> >>>>>>>> the fcst field, whether MET first sum the T2 value from all
grid
> >>>> points
> >>>>>>>> first, then compare with the obs? Or it compares the value
between
> >>>> fcst
> >>>>>>> and
> >>>>>>>> obs for each point and do the statistics calculation.
> >>>>>>>
> >>>>>>> For gridded verification, MET looks grid-point by grid-
point.  For
> >> each
> >>>>>>> grid point, it considers the forecast value (f) and the
observation
> >>>> value
> >>>>>>> (o).  If either of those contain bad data, it skips
> >>>>>>> that point.  If both data values are good, it computes an
error
> value
> >>>> as
> >>>>>>> f - o.  The mean error (ME) is the average error over all
grid
> >> points.
> >>>>    The
> >>>>>>> mean squared error (MSE) is the average squared
> >>>>>>> error over all grid points.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> 5: If I want to compare the variables value at the eta-
level set
> in
> >>>> the
> >>>>>>> wrf
> >>>>>>>> namelist, any method for me to do that instead of just
setting the
> >>>>>>> specific
> >>>>>>>> height?
> >>>>>>>
> >>>>>>> No.  MET assumes that you've post-processed your raw WRF
output for
> >> two
> >>>>>>> reasons.  First, post-processing destaggers the data and
puts it
> on a
> >>>>>>> regular grid.  MET doesn't support staggered grids.
> >>>>>>> Second, post-processing interpolates the model output onto
pressure
> >>>>>>> levels.  Point observations are defined at pressure levels,
not
> >> hybrid
> >>>>>>> eta-levels.  In order to compare your model output to point
> >>>>>>> data, it needs to be interpolated to pressure levels.
> >>>>>>>
> >>>>>>> For post-processing, we recommend using the Unified Post-
Processor
> >>>> which
> >>>>>>> writes out GRIB files that MET supports very well.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> 6: For the MODE tool, I don't understand the convolution
process.
> >> The
> >>>>>>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is it the
same
> >> with
> >>>>>>>> C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first set
the R
> >> and
> >>>> H
> >>>>>>>> value, but I don't know the true meaning for setting them.
If H is
> >>>>>>> large,
> >>>>>>>> then R would be small, vice and versa.  However, to the
value of
> >>>>>>> C(x,y), it
> >>>>>>>> is hard to compare (large area* lower height) versus (small
area
> >>>> *large
> >>>>>>>> height). Could you explain to me a little bit more under
what
> >>>> condition
> >>>>>>>> should I set larger H or smaller R?
> >>>>>>>
> >>>>>>> I don't think it's very necessary to understand the
convolution
> >>>> process.
> >>>>>>>     It's just a circular smoothing filter.  The convolution
> >>>>>>> process is controlled by the convolution radius setting in the
> >>>>>>> config file.  That defines the convolution radius in grid units.
> >>   The
> >>>>>>> value at each grid point is just replaced by the average
value of
> all
> >>>> grid
> >>>>>>> points falling within the circle of that radius around
> >>>>>>> the point.  I do suggest playing around with it.  Keep the
> >>>>>>> threshold set the same and see how the objects change as you
> >>>>>>> increase/decrease the radius.
> >>>>>>>
> >>>>>>> Ultimately, you should play around with both the convolution
> >> threshold
> >>>>>>> and radius to define objects that capture the phenomenon of
> interest.
> >>>>    For
> >>>>>>> example, if you're interested in studying large MCS's,
> >>>>>>> you'd set the convolution radius high and the convolution
threshold
> >> low
> >>>>>>> (small number of large objects).  For small scale
convection, you'd
> >>>> set the
> >>>>>>> convolution radius low and the threshold high (large
> >>>>>>> number of small objects).
> >>>>>>>
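[Editor's note: the circular smoothing described above can be sketched roughly like this. A simplified Python illustration, not MODE's implementation; the boundary handling at grid edges is an assumption.]

```python
import math

def circular_smooth(field, radius):
    """Replace each grid point by the mean of all in-bounds points
    within `radius` grid units (a circular smoothing filter)."""
    ny, nx = len(field), len(field[0])
    out = [[0.0] * nx for _ in range(ny)]
    r = int(math.ceil(radius))
    for y in range(ny):
        for x in range(nx):
            total, count = 0.0, 0
            # Scan the bounding square, keep points inside the circle.
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if (0 <= yy < ny and 0 <= xx < nx
                            and dx * dx + dy * dy <= radius * radius):
                        total += field[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out
```

Increasing `radius` spreads isolated maxima over a larger area, so with a fixed threshold a larger radius tends to merge small objects into fewer, larger ones.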
> >>>>>>>>
> >>>>>>>> 7: If I want to verify the grid data from CMAQ output, like
the
> NO2
> >>>>>>>> concentration, can I do that with MET? How to set the
'field' in
> the
> >>>>>>> config
> >>>>>>>> file?
> >>>>>>>>
> >>>>>>>
> >>>>>>> I'm not familiar with that data set.  If you have a gridded
data
> file
> >>>>>>> that MET supports and have questions about extracting data
from it,
> >>>> just
> >>>>>>> post a sample data file to our anonymous ftp site
> >>>>>>> following these instructions:
> >>>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>>>>
> >>>>>>> Then send us a met-help ticket about it.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> 9:My last question is regarding the ascii to nc tool. My
obs data
> is
> >>>> not
> >>>>>>>> bufr nor the standard ascii format for MET. I then used
both
> Fortran
> >>>> and
> >>>>>>>> Matlab to transfer my data to the standard ascii format for
MET.
> To
> >>>> the
> >>>>>>>> fortran one, it showed a lot of such warnings:
> >>>>>>>> WARNING:
> >>>>>>>> WARNING: process_little_r_obs() -> the number of data lines
> >> specified
> >>>> in
> >>>>>>>> the header (10) does not match the number found in the data
(1) on
> >>>> line
> >>>>>>>> number 4087.
> >>>>>>>> WARNING:
> >>>>>>>> WARNING:
> >>>>>>>> WARNING: process_little_r_obs() -> the number of data lines
> >> specified
> >>>> in
> >>>>>>>> the header (10) does not match the number found in the data
(1) on
> >>>> line
> >>>>>>>> number 4091.
> >>>>>>>> WARNING:
> >>>>>>>> WARNING:
> >>>>>>>> WARNING: process_little_r_obs() -> the number of data lines
> >> specified
> >>>> in
> >>>>>>>> the header (10) does not match the number found in the data
(1) on
> >>>> line
> >>>>>>>> number 4095.
> >>>>>>>>
> >>>>>>>> But at last, the nc file can be produced. To the Matlab
one, the
> >>>>>>> process is
> >>>>>>>> correct, could you please tell me the reason. Is that
related to
> the
> >>>>>>> data
> >>>>>>>> type written onto the file, like the string or the float?
But the
> >>>>>>> format I
> >>>>>>>> set is the same in both scripts. I have also attached the
data
> >>>>>>> transformed
> >>>>>>>> by fortran and matlab to this email.
> >>>>>>>
> >>>>>>> I ran the two data files you sent through ascii2nc and both
ran
> fine
> >>>>>>> without any warnings.  The warnings about "little_r" you're
seeing
> >> are
> >>>> odd.
> >>>>>>>     ascii2nc supports multiple ascii file formats, one of
> >>>>>>> which is named little_r.  So for some reason, it was not
> interpreting
> >>>> the
> >>>>>>> format of the ascii data you passed it correctly.  You can
> explicitly
> >>>> tell
> >>>>>>> it the file format with the "-format" command line
> >>>>>>> option.  I'd suggest passing the "-format met_point" option
to
> >> ascii2nc
> >>>>>>> to explicitly tell it to interpret your data using the MET
point
> >>>> format.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> Also, since the data is not coming from bufr, to the
Message_Type
> I
> >>>> just
> >>>>>>>> write 'ADPUPA', whether this will influence the statistics
result?
> >> The
> >>>>>>>> height for different observation stations might be
different, is
> >> there
> >>>>>>> any
> >>>>>>>> method for me to compare the fcst and obs for different
specific
> >>>> heights
> >>>>>>>> instead of just setting a height value(e.g. 2m)?
> >>>>>>>
> >>>>>>> For surface data, you should set the message type to ADPSFC.
When
> >>>>>>> comparing 2-meter temperature to the ADPSFC message type, no
> vertical
> >>>>>>> interpolation is done.  For upper-air verification at
pressure
> >>>>>>> levels, vertical interpolation is done linear in the log of
> pressure.
> >>>>>>>     When verifying a certain number of meters above/below
ground
> (like
> >>>> winds
> >>>>>>> at 30m or 40m), vertical interpolation is done linear in
> >>>>>>> height.
> >>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Sincerely,
> >>>>>>>>
> >>>>>>>> Jason
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>
> >>>>
> >>
> >>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63639] Several questions regarding MET application
From: John Halley Gotway
Time: Wed Nov 20 14:11:52 2013

Jason,

No, MET will not "extract the soil temperature" from the GRIB files
you've
passed it.  It's really pretty simple... your GRIB files contain
several
records.  How you set the "fcst" parameter in the config file tells
Point-Stat which record(s) to use.  Setting "fcst" to TMP at Z0 tells
Point-Stat to select GRIB record number 251 in the data you sent and
compare it to the observations.  Setting it to TMP at Z2 tells Point-
Stat
to select GRIB record number 271 in the data you sent instead.

TMP at Z0 should be surface temperature, and TMP at Z2 should be the
temperature at 2-meters.  There are separate GRIB records for soil
temperature and soil moisture, but we're not telling Point-Stat to use
them, so they are not involved here.  And you wouldn't compare
forecasts
of soil temperature to observations of temperature at the surface
anyway.

As for why your temperature errors are greater at Z0 than Z2, I really
don't know.  It all depends on the source of those observations.
Perhaps
they really are being taken at 2-meters?

Hope that helps clarify.

Thanks,
John

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>
> Dear John,
>
> Thank you for your help and detailed explanation. To the pressure
part,
> now
> I understand, interpolation will be done for the FCST. However, what I am
> still confused about is the Z0 and Z2. According to your explanation, I
> know that the FCST will be compared to the OBS directly without doing any
> interpolation. However, I don't understand why the error between
OBS and
> Z0 will be larger than Z2, since my OBS data should be at height
> 0 (Height-Elevation). So, I am wondering, if I set Z0, whether MET will
> extract the soil temperature from the GRIB file? Thank you again for your
> help!
>
> Sincerely,
>
> Jason
>
>
>
>
> 2013/11/19 John Halley Gotway via RT <met_help at ucar.edu>
>
>> Jason,
>>
>> Sorry for the delay in getting back to you.  I ran Point-Stat using
the
>> data you sent me (for Height) and a verbosity level of 4 (-v 4), and I
>> see
>> the following...
>>
>> For TMP/Z0, Point-Stat is using GRIB record 251 from your forecast
file:
>>
>>
251:8577024:d=11070100:TMP:kpds5=11:kpds6=1:kpds7=0:TR=10:P1=1:P2=180:TimeU=1:sfc:436hr
>> fcst:NAve=0
>>
>> For TMP/Z2, Point-Stat is using GRIB record 271 from your forecast
file:
>>
>>
271:9000782:d=11070100:TMP:kpds5=11:kpds6=105:kpds7=2:TR=10:P1=1:P2=180:TimeU=1:2
>>
>> Since these are both vertical level forecast types being compared
to the
>> ADPSFC message type, all of the point observations are being used
for
>> both
>> comparisons.  Notice that the OBAR (or mean
>> observation value) is the same for Z0 and Z2 comparisons:
301.04693.
>>  That's because the same set of observations (all 914 of them) are
being
>> used for both comparisons.  Now, what sort of behavior
>> were you expecting from Point-Stat?  Were you expecting it to take
the
>> height of the observation minus the elevation of the station to
>> determine
>> the height above ground level?  And then only use the
>> point observation if its height above ground level matches the
forecast
>> level?
>>
>> As I believe I mentioned in the past, vertical level matching for
>> Point-Stat is rather simple.  It is not doing the checking I just
>> described.  Instead, it is all controlled by the "message type".
>> When verifying vertical level forecast fields (like Z0, Z2, or Z10)
>> against "surface" message type (like ADPSFC or SFCSHP), all point
>> observations will be used regardless of their height.  So really
>> it's up to you to decide if these point observations of temperature
should
>> be
>> compared to a 2-meter temperature forecast or a surface temperature
>> forecast.
>>
>> Next, I ran Point-Stat using the data in the "Pressure" directory.
All
>> of the point observations use the ADPSFC message
>> type.
>>  And you're verifying TMP/Z0 and TMP/P1014-990.
>> Again Point-Stat finds TMP/Z0 in GRIB record number 251.  For
>> TMP/P1014-990, it only finds a single GRIB record in that range;
record
>> 238
>> contains temperature at 1000mb.
>> Again, all of the point observations are used for the verification.
>>  But this time the reason is different.  When comparing TMP/Z0 to
the
>> ADPSFC message type, all point observations are used
>> because of my explanation above.  When comparing TMP/P1014-990,
>> Point-Stat
>> checks the pressure level for each point observation and only uses
it if
>> it
>> falls between 1014 and 990.  All of your point
>> observations do fall in that range, so they are all used.
>>
>> Next, I tried running Point-Stat to verify TMP/P900-1000.  This
results
>> in
>> only 19 matched pairs being found.  Point-Stat searches your
forecast
>> file
>> for TMP records falling between 900 and 1000mb,
>> and it finds 5 of them:
>>
>>
203:10841294:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=900:TR=0:P1=92:P2=0:TimeU=1:900
>> mb:92hr fcst:NAve=0
>>
>>
212:11358896:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=925:TR=0:P1=92:P2=0:TimeU=1:925
>> mb:92hr fcst:NAve=0
>>
>>
221:11885126:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=950:TR=0:P1=92:P2=0:TimeU=1:950
>> mb:92hr fcst:NAve=0
>>
>>
230:12394008:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=975:TR=0:P1=92:P2=0:TimeU=1:975
>> mb:92hr fcst:NAve=0
>>
>>
238:12820842:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=1000:TR=0:P1=92:P2=0:TimeU=1:1000
>> mb:92hr fcst:NAve=0
>>
>> For each point observation that falls in that pressure range, it
>> computes
>> a forecast value by doing vertical interpolation between the forecast
>> levels above and below the observation.  So for a
>> temperature observation at 994mb, it takes the forecast values at
1000mb
>> and 975mb and interpolates between them to the observation level.
>>
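[Editor's note: the interpolation John describes, linear in the log of pressure, can be sketched as follows. Illustrative Python, not MET's code.]

```python
import math

def interp_log_p(p_obs, p_below, v_below, p_above, v_above):
    """Interpolate a forecast value to the observation pressure level,
    linear in log(pressure).  p_below is the level at higher pressure
    (e.g. 1000mb) and p_above the level at lower pressure (e.g. 975mb)."""
    w = ((math.log(p_below) - math.log(p_obs))
         / (math.log(p_below) - math.log(p_above)))
    return v_below + w * (v_above - v_below)
```

For a 994mb observation, `interp_log_p(994.0, 1000.0, t1000, 975.0, t975)` (with `t1000`, `t975` standing for the forecast temperatures at those levels) returns a value between the two forecasts, weighted by log pressure.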
>> Hope that helps clarify.
>>
>> Thanks,
>> John Halley Gotway
>> met_help at ucar.edu
>>
>> On 11/14/2013 07:38 AM, Xingcheng Lu via RT wrote:
>> >
>> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>> >
>> > Dear John,
>> >
>> > Thank you for your help and it works now. I have uploaded a file
>> called
>> > Jason.zip to the ftp. Inside it, there are two folders called
pressure
>> and
>> > height respectively which include observation file, wrfout,
config and
>> the
>> > result I got. The pressure folder is related to the pressure
issue I
>> > mentioned to you before and height folder is related to the T0
and T2
>> > issues. Thank you!
>> >
>> > Sincerely,
>> >
>> > Jason
>> >
>> >
>> > 2013/11/14 John Halley Gotway via RT <met_help at ucar.edu>
>> >
>> >> Jason,
>> >>
>> >> Try these commands:
>> >>
>> >>     cd <directory containing the files you want to post>
>> >>     ftp -p ftp.rap.ucar.edu
>> >>     cd incoming/irap/met_help
>> >>     mkdir xingcheng_data_20131113
>> >>     cd xingcheng_data_20131113
>> >>     put <file1>
>> >>     put <file2>
>> >>     ...
>> >>     bye
>> >>
>> >> Do you still have problems?
>> >>
>> >> Thanks,
>> >> John
>> >>
>> >>
>> >> On 11/13/2013 05:44 AM, Xingcheng Lu via RT wrote:
>> >>>
>> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>> >>>
>> >>> Dear John,
>> >>>
>> >>> Thank you for your response, I tried to drop my files to the
FTP,
>> >> however,
>> >>> while I put my files, error message showed up:
>> >>>
>> >>> 227 Entering Passive Mode (128,117,192,211,192,15)
>> >>> 553 Could not determine cwdir: No such file or directory.
>> >>>
>> >>> Any method to solve this? Thank you!
>> >>>
>> >>> Sincerely,
>> >>>
>> >>> Jason
>> >>>
>> >>>
>> >>> 2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>
>> >>>
>> >>>> Jason,
>> >>>>
>> >>>> I'm not exactly sure how to address this issue.  But let me
tell
>> you
>> how
>> >>>> Point-Stat handles verification of "surface" variables.  It
depends
>> on
>> >> the
>> >>>> observation message type being used.  The ADPSFC and
>> >>>> SFCSHP message types are special cases.  Basically, any point
>> >> observation
>> >>>> with an ADPSFC or SFCSHP message type is assumed to be at the
>> surface -
>> >>>> regardless of their actual elevation or height value.
>> >>>>
>> >>>> When you're verifying forecasts with a vertical level type
(such as
>> >>>> 2-meter temperature or 10-meter winds - any vertical level
>> specified
>> >> using
>> >>>> a "Z") and comparing it to a surface message type (ADPSFC
>> >>>> or SFCSHP), all point observations of those types will be
used.  So
>> when
>> >>>> verifying 2-m TMP and 0-m TMP against the ADPSFC message type,
I
>> would
>> >>>> expect that they would use the same set of point
>> >>>> observations.
>> >>>>
>> >>>> This vertical level matching part can get a bit tricky.  It'd
>> probably
>> >> be
>> >>>> best to have you send me a sample forecast file, observation
file,
>> and
>> >>>> Point-Stat config file along with questions as to why
>> >>>> Point-Stat is producing the output that it is.  Usually
working
>> through
>> >> a
>> >>>> specific example provides more answers than speaking more
>> generally.
>> >>>>
>> >>>> You also asked a question about pressure.  Perhaps, you could
>> included
>> >>>> that in the test data you send as well.  I'm having a
difficult
>> time
>> >>>> understanding exactly what the issue is.  I could take a
>> >>>> look at your config file and your data and perhaps offer some
>> >> suggestions.
>> >>>>
>> >>>> You can send me data by posting it to our anonymous ftp site:
>> >>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
>> >>>>
>> >>>> Thanks,
>> >>>> John
>> >>>>
>> >>>> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
>> >>>>>
>> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639
>
>> >>>>>
>> >>>>> Dear John,
>> >>>>>
>> >>>>> I met another problem when I ran the MET. In my ascii
observation
>> data,
>> >>>> the
>> >>>>> height and elevation are the same. In the config file I set
both
>> >> Z0(TMP)
>> >>>>> and Z2(TMP) and found that the RMSE of Z0 reached around 40
and Z2
>> only
>> >>>>> around 2. In theory, I think that my observation data should
be
>> the
>> >>>>> temperature near the ground(Not the soil temperature from
wrf)
>> because
>> >>>>> elevation=height. So, I want to know if I set Z0(TMP),
whether MET
>> will
>> >>>> use
>> >>>>> the soil temperature from wrf to compare with the observation
>> data?
>> >>>>>
>> >>>>> Also, if it is possible, hope that you can answer my question
>> regarding the
>> >>>>> pressure issue I asked one week ago at your convenience.
Thank you
>> in advance!
>> >>>>>
>> >>>>> Sincerely,
>> >>>>>
>> >>>>> Jason
>> >>>>>
>> >>>>>
>> >>>>> 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
>> >>>>>
>> >>>>>> Hi John,
>> >>>>>>
>> >>>>>>
>> >>>>>> I still do not quite understand the neighborhood method, I know
that
>> we
>> >>>> first
>> >>>>>> need to set a threshold to enclose other points which are
close
>> to
>> >> the
>> >>>>>> center point, but which factor decides whether the grid
within
>> the
>> >>>>>> searching radius is turned on or not?
>> >>>>>>
>> >>>>>> I ran the Ascii fortran one just now, and it worked! I don't
know
>> why,
>> >>>>>> maybe it is due to my cluster issue. By the way, what kind
of
>> data
>> >> can I
>> >>>>>> use if I want to apply the little_r option?
>> >>>>>>
>> >>>>>> I just made a comparison for my observation data and
forecast
>> data
>> for
>> >>>> Z0.
>> >>>>>> I made a test and found that for ADPUPA, only when the
elevation
>> is
>> >> zero
>> >>>>>> can the observation and forecast be matched. However, since
the
>> >>>> observation
>> >>>>>> height and elevation is the same in my obs data, like if the
>> elevation
>> >>>> is 5
>> >>>>>> meters, the observation height is also 5m. I don't know
under
>> such
>> >>>>>> condition whether the obs can be counted as  Z0? If yes, I
don't
>> know
>> >>>> why
>> >>>>>> it cannot be matched by MET. But if I set as ADPSFC, all the
obs
>> can
>> >> be
>> >>>>>> matched.
>> >>>>>>
>> >>>>>> My data has exact pressure value, and to the Z0, it ranges
from
>> >>>> 990-1014.
>> >>>>>> However, for both ADPUPA and ADPSFC, the results of P960-
1013
>> and
>> Z0
>> >>>> are
>> >>>>>> not the same. This result seems like: The temperature
related to
>> >>>> pressure
>> >>>>>> is not the same with that related to height at the same
location.
>> I
>> am
>> >>>>>> wondering whether there is any interpolation for the temp
value
>> >>>> related to
>> >>>>>> the pressure?(I have attached one of my result to this
email.)
>> >>>>>>
>> >>>>>> Also, I need to make a full comparison between point obs and
>> forecast
>> >> on
>> >>>>>> surface, do you have any idea which interpolation
method is
>> more
>> >>>>>> reliable. Also, to the surface temperature, I wrote ADPSFC
for
>> the
>> >> first
>> >>>>>> column of obs-ascii, and set Z0 in the pointstat config
file, am
>> I
>> >>>> correct
>> >>>>>> or not? To the UW_MEAN and DW_MEAN methods, I need to first set
>> >>>>>> the width, any suggestion for that?
>> >>>>>>
>> >>>>>>
>> >>>>>> Regards,
>> >>>>>>
>> >>>>>> Jason
>> >>>>>>
>> >>>>>>
>> >>>>>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
>> >>>>>>
>> >>>>>>> Jason,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Thanks,
>> >>>>>>> John
>> >>>>>>>
>> >>>>>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
>> >>>>>>>>
>> >>>>>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
>> >>>>>>>> Transaction: Ticket created by
>> xingchenglu2011 at u.northwestern.edu
>> >>>>>>>>            Queue: met_help
>> >>>>>>>>          Subject: Several questions regarding MET
application
>> >>>>>>>>            Owner: Nobody
>> >>>>>>>>       Requestors: xingchenglu2011 at u.northwestern.edu
>> >>>>>>>>           Status: new
>> >>>>>>>>      Ticket <URL:
>> >>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> I have several questions regarding the application of MET:
>> >>>>>>>>
>> >>>>>>>> 1:The threshold setting for variable(e.g. >273) is
frequent in
>> the
>> >>>>>>>> tutorial, whether the threshold will be invalid if I just
>> calculate
>> >>>> and
>> >>>>>>>> compare the continuous statistics.(Like if MET will get
rid of
>> the
>> >>>> data
>> >>>>>>>> which is less than 273 for continuous verification?)
>> >>>>>>>
>> >>>>>>> The "cat_thresh" setting stands for "categorical
threshold".
>> That
>> is
>> >>>>>>> used when computing contingency table counts and statistics
(the
>> CTC
>> >>>> and
>> >>>>>>> CTS output line types).  The "cat_thresh" is used to
>> >>>>>>> define what constitutes an "event" when computing a 2x2
>> contingency
>> >>>>>>> table.  It has no impact on the continuous statistics and
>> partial
>> >> sums
>> >>>> in
>> >>>>>>> the CNT and SL1L2 output line types.
>> >>>>>>>
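[Editor's note: to make the role of "cat_thresh" concrete, here is a sketch of how a 2x2 contingency table is counted from matched pairs. Illustrative Python, not MET's code; an "event" here is defined as value > thresh.]

```python
def contingency_counts(pairs, thresh):
    """Count the four cells of a 2x2 contingency table, where an
    'event' means the value exceeds `thresh` (e.g. TMP > 273)."""
    fy_oy = fy_on = fn_oy = fn_on = 0
    for f, o in pairs:
        f_event, o_event = f > thresh, o > thresh
        if f_event and o_event:
            fy_oy += 1    # forecast yes, observed yes (hit)
        elif f_event:
            fy_on += 1    # forecast yes, observed no (false alarm)
        elif o_event:
            fn_oy += 1    # forecast no, observed yes (miss)
        else:
            fn_on += 1    # forecast no, observed no (correct negative)
    return fy_oy, fy_on, fn_oy, fn_on
```

As noted above, continuous statistics (CNT, SL1L2) use all matched pairs regardless of this threshold; the threshold only defines events for the CTC/CTS line types.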
>> >>>>>>> However, in the future we may add a parameter to filter the
>> matched
>> >>>> pairs
>> >>>>>>> that go into the continuous statistics.  Some users have
>> requested
>> >> the
>> >>>>>>> ability to do conditional verification like that -
>> >>>>>>> where you throw out some of the matched pairs before
computing
>> >>>> continuous
>> >>>>>>> stats.  But that capability does not exist in the current METv4.1
>> >>>> release.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> 2:For the neighborhood method applied in gridded-gridded
>> comparison,
>> >>>>>>>> whether this method is just useful for the categorical
>> variables?
>> >> Can
>> >>>>>>> it be
>> >>>>>>>> applied in the continuous statistics? I don't quite
understand
>> that
>> >>>> why
>> >>>>>>> the
>> >>>>>>>> width value for the square must be an odd integer. Also,
in the
>> >>>> gridded
>> >>>>>>>> comparison, I don't quite understand why before
comparison,
>> fcst
>> and
>> >>>> obs
>> >>>>>>>> fields needed to be smoothed first.
>> >>>>>>>
>> >>>>>>> To answer your second question first, they do not need to
be
>> smoothed
>> >>>>>>> first.  Typically, grid_stat is run with no
"interpolation", or
>> >>>> smoothing,
>> >>>>>>> done.  That's why the default looks like this:
>> >>>>>>> interp = {
>> >>>>>>>        field      = BOTH;
>> >>>>>>>        vld_thresh = 1.0;
>> >>>>>>>
>> >>>>>>>        type = [
>> >>>>>>>           {
>> >>>>>>>              method = UW_MEAN;
>> >>>>>>>              width  = 1;
>> >>>>>>>           }
>> >>>>>>>        ];
>> >>>>>>> };
>> >>>>>>>
>> >>>>>>> However, this provides an easy way to smooth the data
before
>> >> computing
>> >>>>>>> statistics.  And that is called "upscaling".  So you could
see
>> how
>> >> the
>> >>>>>>> performance of your model improves the more you smooth it.
>> >>>>>>>      Typically, smoother forecasts score much better than more
>> >>>>>>> detailed ones.
>> >>>>>>>     But, as I mentioned, typically no smoothing is performed.
>> >>>>>>>
>> >>>>>>> The neighborhood methods implemented in Grid-Stat must be
>> performed
>> >>>> using
>> >>>>>>> a threshold.  First, the raw fields are thresholded to
create a
>> 0/1
>> >>>> bitmap
>> >>>>>>> in each.  Then, for each neighborhood width, a
>> >>>>>>> "coverage" value is computed as the percentage of grid
squares
>> in
>> >> that
>> >>>>>>> box that are turned on.  The neighborhood stats are
computed
>> over
>> >> those
>> >>>>>>> coverage values.  The widths must be odd so that they're
>> >>>>>>> centered on each grid point.  A width of 5 means you have 2
grid
>> >> points
>> >>>>>>> to the left and right.  7 means there's 3 on each side.  A
width
>> of 4
>> >>>>>>> wouldn't be centered on the grid box.
>> >>>>>>>
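[Editor's note: the neighborhood "coverage" computation described above can be sketched like this. Illustrative Python, not Grid-Stat's code; the handling of neighborhoods clipped at the grid edge is an assumption.]

```python
def coverage(bitmap, y, x, width):
    """Fraction of 'on' points in the width x width neighborhood
    centered on (y, x).  The width must be odd so the box is centered
    on a grid point."""
    assert width % 2 == 1, "neighborhood width must be odd"
    half = width // 2
    ny, nx = len(bitmap), len(bitmap[0])
    on = total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx:
                total += 1
                on += bitmap[yy][xx]
    return on / total
```

With width = 5 the box extends 2 grid points on each side of the center; an even width would have no central grid point, which is why odd widths are required.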
>> >>>>>>>>
>> >>>>>>>> 3:In both point-stat and grid-stat, the tutorial states
that it
>> is
>> >> not
>> >>>>>>>> recommended to use analysis field for comparison. I don't
quite
>> get
>> >>>>>>>> the point what the analysis field means. If I compare two
>> wrfout
>> by
>> >>>>>>> using
>> >>>>>>>> different physical schemes, is it counted as the situation
the
>> >>>> tutorial
>> >>>>>>>> states?
>> >>>>>>>
>> >>>>>>> An analysis field is just the 0-hour forecast from a model.
>> Users
>> >> will
>> >>>>>>> often compare a 24-hour forecast from the previous day to
the
>> 0-hour
>> >>>>>>> forecast of the current day.  They're assuming that the
>> >>>>>>> model analysis is "truth".  The problem is that the model
>> analysis
>> is
>> >>>>>>> typically very far from truth.  The model analysis will
contain
>> the
>> >>>> same
>> >>>>>>> type of biases and errors that the forecast will.
>> >>>>>>> Verifying against a model analysis won't really tell you
how
>> good
>> >> your
>> >>>>>>> model is doing.
>> >>>>>>>
>> >>>>>>> However, we set up the MET tools in a general way to enable
>> users
>> to
>> >>>>>>> perform whatever type of comparison they'd like.  As you
>> mention,
>> you
>> >>>> can
>> >>>>>>> compare the output of two different physical schemes.
>> >>>>>>> But the tough part will be interpreting the meaning of the
>> resulting
>> >>>>>>> statistics.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> 4: If I compare the grid fcst and grid obs for T2 in a
specific
>> >>>>>>>> time(Setting beg/end=0),then I will get some statistics
values,
>> such
>> >>>> as
>> >>>>>>>> ME,MSE. I am not quite sure about the calculation process,
for
>> >>>> example,
>> >>>>>>> in
>> >>>>>>>> the fcst field, whether MET first sum the T2 value from
all
>> grid
>> >>>> points
>> >>>>>>>> first, then compare with the obs? Or it compares the value
>> between
>> >>>> fcst
>> >>>>>>> and
>> >>>>>>>> obs for each point and do the statistics calculation.
>> >>>>>>>
>> >>>>>>> For gridded verification, MET looks grid-point by grid-
point.
>> For
>> >> each
>> >>>>>>> grid point, it considers the forecast value (f) and the
>> observation
>> >>>> value
>> >>>>>>> (o).  If either of those contain bad data, it skips
>> >>>>>>> that point.  If both data values are good, it computes an
error
>> value
>> >>>> as
>> >>>>>>> f - o.  The mean error (ME) is the average error over all
grid
>> >> points.
>> >>>>    The
>> >>>>>>> mean squared error (MSE) is the average squared
>> >>>>>>> error over all grid points.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> 5: If I want to compare the variables value at the eta-
level
>> set
>> in
>> >>>> the
>> >>>>>>> wrf
>> >>>>>>>> namelist, any method for me to do that instead of just
setting
>> the
>> >>>>>>> specific
>> >>>>>>>> height?
>> >>>>>>>
>> >>>>>>> No.  MET assumes that you've post-processed your raw WRF
output
>> for
>> >> two
>> >>>>>>> reasons.  First, post-processing destaggers the data and
puts it
>> on a
>> >>>>>>> regular grid.  MET doesn't support staggered grids.
>> >>>>>>> Second, post-processing interpolates the model output onto
>> pressure
>> >>>>>>> levels.  Point observations are defined at pressure levels,
not
>> >> hybrid
>> >>>>>>> eta-levels.  In order to compare your model output to point
>> >>>>>>> data, it needs to be interpolated to pressure levels.
>> >>>>>>>
>> >>>>>>> For post-processing, we recommend using the Unified
>> Post-Processor
>> >>>> which
>> >>>>>>> writes out GRIB files that MET supports very well.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> 6: For the MODE tool, I don't understand the convolution
>> process.
>> >> The
>> >>>>>>>> expression written as: C(x,y)=âˆ‘a(u,v)f(x-u)(x-v), is it
the
>> same
>> >> with
>> >>>>>>>> C(x,y)=âˆ‘a(u,v)f(x-u,x-v)?  I know that we need to first
set
>> the R
>> >> and
>> >>>> H
>> >>>>>>>> value, but I don't know the true meaning for setting them.
If H
>> is
>> >>>>>>> large,
>> >>>>>>>> then R would be small, vice and versa.  However, to the
value
>> of
>> >>>>>>> C(x,y), it
>> >>>>>>>> is hard to compare (large area* lower height) versus
(small
>> area
>> >>>> *large
>> >>>>>>>> height). Could you explain to me a little bit more under
what
>> >>>> condition
>> >>>>>>>> should I set larger H or smaller R?
>> >>>>>>>
>> >>>>>>> I don't think it's very necessary to understand the
convolution
>> >>>> process.
>> >>>>>>>     It's just a circular smoothing filter.  The convolution
>> process is
>> >>>>>>> config file).  That defines the convolution radius in grid
>> units.
>> >>   The
>> >>>>>>> value at each grid point is just replaced by the average
value
>> of
>> all
>> >>>> grid
>> >>>>>>> points falling within the circle of that radius around
>> >>>>>>> the point.  I do suggest playing around with it.  Keep the
>> threshold
>> >>>> set
>> >>>>>>> the same and see how the objects change as you
increase/decrease
>> the
>> >>>>>>>
>> >>>>>>> Ultimately, you should play around with both the
convolution
>> >> threshold
>> >>>>>>> and radius to define objects that capture the phenomenon of
>> interest.
>> >>>>    For
>> >>>>>>> example, if you're interested in studying large MCS's,
>> >>>>>>> you'd set the convolution radius high and the convolution
>> threshold
>> >> low
>> >>>>>>> (small number of large objects).  For small scale
convection,
>> you'd
>> >>>> set the
>> >>>>>>> convolution radius low and the threshold high (large
>> >>>>>>> number of small objects).
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> 7: If I want to verify the grid data from CMAQ output,
like the
>> NO2
>> >>>>>>>> concentration, can I do that with MET? How to set the
'field'
>> in
>> the
>> >>>>>>> config
>> >>>>>>>> file?
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>> I'm not familiar with that data set.  If you have a gridded
data
>> file
>> >>>>>>> that MET supports and have questions about extracting data
from
>> it,
>> >>>> just
>> >>>>>>> post a sample data file to our anonymous ftp site
>> >>>>>>> following these instructions:
>> >>>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
>> >>>>>>>
>> >>>>>>> Then send us a met-help ticket about it.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> 9:My last question is regarding the ascii to nc tool. My
obs
>> data
>> is
>> >>>> not
>> >>>>>>>> bufr nor the standard ascii format for MET. I then used
both
>> Fortran
>> >>>> and
>> >>>>>>>> Matlab to transfer my data to the standard ascii format
for
>> MET.
>> To
>> >>>> the
>> >>>>>>>> fortran one, it showed a lot of such warnings:
>> >>>>>>>> WARNING:
>> >>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
>> >> specified
>> >>>> in
>> >>>>>>>> the header (10) does not match the number found in the
data (1)
>> on
>> >>>> line
>> >>>>>>>> number 4087.
>> >>>>>>>> WARNING:
>> >>>>>>>> WARNING:
>> >>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
>> >> specified
>> >>>> in
>> >>>>>>>> the header (10) does not match the number found in the
data (1)
>> on
>> >>>> line
>> >>>>>>>> number 4091.
>> >>>>>>>> WARNING:
>> >>>>>>>> WARNING:
>> >>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
>> >> specified
>> >>>> in
>> >>>>>>>> the header (10) does not match the number found in the
data (1)
>> on
>> >>>> line
>> >>>>>>>> number 4095.
>> >>>>>>>>
>> >>>>>>>> But at last, the nc file can be produced. To the Matlab
one,
>> the
>> >>>>>>> process is
>> >>>>>>>> correct, could you please tell me the reason. Is that
related
>> to
>> the
>> >>>>>>> data
>> >>>>>>>> type written onto the file, like the string or the float?
But
>> the
>> >>>>>>> format I
>> >>>>>>>> set is the same in both scripts. I have also attached the
data
>> >>>>>>> transformed
>> >>>>>>>> by fortran and matlab to this email.
>> >>>>>>>
>> >>>>>>> I ran the two data files you sent through ascii2nc and both
ran
>> fine
>> >>>>>>> without any warnings.  The warnings about "little_r" you're
>> seeing
>> >> are
>> >>>> odd.
>> >>>>>>>     ascii2nc supports multiple ascii file formats, one of
>> >>>>>>> which is named little_r.  So for some reason, it was not
>> interpreting
>> >>>> the
>> >>>>>>> format of the ascii data you passed it correctly.  You can
>> explicitly
>> >>>> tell
>> >>>>>>> it the file format with the "-format" command line
>> >>>>>>> option.  I'd suggest passing the "-format met_point" option
to
>> >> ascii2nc
>> >>>>>>> to explicitly tell it to interpret your data using the MET
point
>> >>>> format.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>> Also, since the data is not coming from bufr, to the
>> Message_Type
>> I
>> >>>> just
>> >>>>>>>> write 'ADPUPA', whether this will influence the statistics
>> result?
>> >> The
>> >>>>>>>> height for different observation stations might be
different,
>> is
>> >> there
>> >>>>>>> any
>> >>>>>>>> method for me to compare the fcst and obs for different
>> specific
>> >>>> heights
>> >>>>>>>> instead of just setting a height value(e.g. 2m)?
>> >>>>>>>
>> >>>>>>> For surface data, you should set the message type to ADPSFC.  When
>> >>>>>>> comparing 2-meter temperature to the ADPSFC message type,
no
>> vertical
>> >>>>>>> interpolation is done.  For upper-air verification at
pressure
>> >>>>>>> levels, vertical interpolation is done linear in the log of
>> pressure.
>> >>>>>>>     When verifying a certain number of meters above/below
ground
>> (like
>> >>>> winds
>> >>>>>>> at 30m or 40m), vertical interpolation is done linear in
>> >>>>>>> height.
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> Sincerely,
>> >>>>>>>>
>> >>>>>>>> Jason
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>
>> >>>>
>> >>
>> >>
>>
>>
>

------------------------------------------------
Subject: Several questions regarding MET application
From: Xingcheng Lu
Time: Thu Nov 21 08:00:40 2013

Dear John,

Thank you for your response, yes, I agree with you now and I doubt a
little
bit for my obs data. By the way, I have another question regarding the
obs
points: if I have a lot of observation points within a small area, is
there any tool in MET that can help me interpolate them into grid
format, which can be used to do the grid-stat? Thanks!

Sincerely,

Jason

2013/11/21 John Halley Gotway via RT <met_help at ucar.edu>

> Jason,
>
> No, MET will not "extract the soil temperature" from the GRIB files
you've
> passed it.  It's really pretty simple... your GRIB files contain
several
> records.  How you set the "fcst" parameter in the config file tells
> Point-Stat which record(s) to use.  Setting "fcst" to TMP at Z0
tells
> Point-Stat to select GRIB record number 251 in the data you sent and
> compare it to the observations.  Setting it to TMP at Z2 tells
Point-Stat
> to select GRIB record number 271 in the data you sent instead.
>
> TMP at Z0 should be surface temperature, and TMP at Z2 should be the
> temperature at 2-meters.  There are separate GRIB records for soil
> temperature and soil moisture, but we're not telling Point-Stat to
use
> them, so they are not involved here.  And you wouldn't compare
forecasts
> of soil temperature to observations of temperature at the surface
anyway.
>
> As for why your temperature errors are greater at Z0 than Z2, I
really
> don't know.  It all depends on the source of those observations.
Perhaps
> they really are being taken at 2-meters?
>
> Hope that helps clarify.
>
> Thanks,
> John
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >
> > Dear John,
> >
> > Thank you for your help and detailed explanation. To the pressure
part,
> > now
> > I understand, interpolation will be done for the FCST. However,
what I
> am
> > still confused about is the Z0 and Z2. According to your
explanation, I
> > know that the FCST will be compared to the OBS directly without
doing any
> > interpolation. However, I don't understand why the error between
OBS and
> > Z0 will be larger than Z2, since my OBS data should be at height
> > 0(Height-Elevation). So, I am wondering, if I set Z0, whether MET
will
> > extract the soil temperature from the wrfout? Thank you again for your help!
> >
> > Sincerely,
> >
> > Jason
> >
> >
> >
> >
> > 2013/11/19 John Halley Gotway via RT <met_help at ucar.edu>
> >
> >> Jason,
> >>
> >> Sorry for the delay in getting back to you.  I ran Point-Stat
using the
> >> data you sent me (for Height) and a verbosity level of 4 (-v 4),
and I
> >> see
> >> the following...
> >>
> >> For TMP/Z0, Point-Stat is using GRIB record 251 from your
forecast file:
> >>
> >>
>
251:8577024:d=11070100:TMP:kpds5=11:kpds6=1:kpds7=0:TR=10:P1=1:P2=180:TimeU=1:sfc:436hr
> >> fcst:NAve=0
> >>
> >> For TMP/Z2, Point-Stat is using GRIB record 271 from your
forecast file:
> >>
> >>
>
271:9000782:d=11070100:TMP:kpds5=11:kpds6=105:kpds7=2:TR=10:P1=1:P2=180:TimeU=1:2
> >>
> >> Since these are both vertical level forecast types being compared
to the
> >> ADPSFC message type, all of the point observations are being used
for
> >> both
> >> comparisons.  Notice that the OBAR (or mean
> >> observation value) is the same for Z0 and Z2 comparisons:
301.04693.
> >>  That's because the same set of observations (all 914 of them)
are being
> >> used for both comparisons.  Now, what sort of behavior
> >> were you expecting from Point-Stat?  Were you expecting it to
take the
> >> height of the observation minus the elevation of the station to
> >> determine
> >> the height above ground level?  And then only use the
> >> point observation if it's height above ground level matches the
forecast
> >> level?
> >>
> >> As I mentioned in the past I believe, vertical level matching for
> >> Point-Stat is rather simple.  It is not doing the checking I just
> >> described.  Instead, it is all controlled by the "message type".
> >> When verifying vertical level forecast fields (like Z0, Z2, or
Z10)
> >> against "surface" message type (like ADPSFC or SFCSHP), all point
> >> observations will be used regardless of their height.  So really
> >> it's up to you decide if these point observations of temperature
should
> >> be
> >> compared to a 2-meter temperature forecast or a surface
temperature
> >> forecast.
> >>
> >> Next, I ran Point-Stat using the data in the "Pressure" directory.  All
> >> of the point observations are of the ADPSFC message type.
> >>  And you're verifying TMP/Z0 and TMP/P1014-990.
> >> Again Point-Stat finds TMP/Z0 in GRIB record number 251.  For
> >> TMP/P1014-990, it only finds a single GRIB record in that range;
record
> >> 238
> >> contains temperature of 1000mb.
> >> Again, all of the point observations are used for the
verification.
> >>  But this time the reason is different.  When comparing TMP/Z0 to
the
> >> ADPSFC message type, all point observations are used
> >> because of my explanation above.  When comparing TMP/P1014-990,
> >> Point-Stat
> >> checks the pressure level for each point observation and only
uses it if
> >> it
> >> falls between 1014 and 990.  All of your point
> >> observation do fall in that range, so they are all used.
> >>
> >> Next, I tried running Point-Stat to verify TMP/P900-1000.  This
results
> >> in
> >> only 19 matched pairs being found.  Point-Stat searches your
forecast
> >> file
> >> for TMP records falling between 900 and 1000mb,
> >> and it finds 5 of them:
> >>
> >>
>
203:10841294:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=900:TR=0:P1=92:P2=0:TimeU=1:900
> >> mb:92hr fcst:NAve=0
> >>
> >>
>
212:11358896:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=925:TR=0:P1=92:P2=0:TimeU=1:925
> >> mb:92hr fcst:NAve=0
> >>
> >>
>
221:11885126:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=950:TR=0:P1=92:P2=0:TimeU=1:950
> >> mb:92hr fcst:NAve=0
> >>
> >>
>
230:12394008:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=975:TR=0:P1=92:P2=0:TimeU=1:975
> >> mb:92hr fcst:NAve=0
> >>
> >>
>
238:12820842:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=1000:TR=0:P1=92:P2=0:TimeU=1:1000
> >> mb:92hr fcst:NAve=0
> >>
> >> For each point observation that falls in that pressure range, it
> >> computes
> >> a forecast value by doing vertical interpolation between the forecast
> >> levels above and below the observation.  So for a
> >> temperature observation at 994mb, it takes the forecast values at
1000mb
> >> and 975mb and interpolates between them to the observation level.
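[Editor's note: the interpolation "linear in the log of pressure" described above can be sketched in a few lines of Python. This is an illustrative sketch only, not MET's actual code, and the temperature values are made up; the 994mb/1000mb/975mb levels follow the example in the text.]

```python
import math

def interp_log_p(p_lo, v_lo, p_hi, v_hi, p_obs):
    """Interpolate between two forecast levels, linear in ln(pressure)."""
    w = (math.log(p_obs) - math.log(p_lo)) / (math.log(p_hi) - math.log(p_lo))
    return v_lo + w * (v_hi - v_lo)

# Temperature observation at 994mb, forecast levels at 1000mb and 975mb.
# The 285.0/284.0 forecast values are invented for illustration.
t_interp = interp_log_p(1000.0, 285.0, 975.0, 284.0, 994.0)
```

Because 994mb is much closer to 1000mb than to 975mb in log-pressure, the result lands close to the 1000mb value.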
> >>
> >> Hope that helps clarify.
> >>
> >> Thanks,
> >> John Halley Gotway
> >> met_help at ucar.edu
> >>
> >> On 11/14/2013 07:38 AM, Xingcheng Lu via RT wrote:
> >> >
> >> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >> >
> >> > Dear John,
> >> >
> >> > Thank you for your help and it works now. I have uploaded a
file
> >> called
> >> > Jason.zip to the ftp. Inside it, there are two folders called
pressure
> >> and
> >> > height respectively which include observation file, wrfout,
config and
> >> the
> >> > result I got. The pressure folder is related to the pressure
issue I
> >> > mentioned to you before and height folder is related to the T0
and T2
> >> > issues. Thank you!
> >> >
> >> > Sincerely,
> >> >
> >> > Jason
> >> >
> >> >
> >> > 2013/11/14 John Halley Gotway via RT <met_help at ucar.edu>
> >> >
> >> >> Jason,
> >> >>
> >> >> Try these commands:
> >> >>
> >> >>     cd <directory containing the files you want to post>
> >> >>     ftp -p ftp.rap.ucar.edu
> >> >>     username = anonymous
> >> >>     cd incoming/irap/met_help
> >> >>     mkdir xingcheng_data_20131113
> >> >>     cd xingcheng_data_20131113
> >> >>     put <file1>
> >> >>     put <file2>
> >> >>     ...
> >> >>     bye
> >> >>
> >> >> Do you still have problems?
> >> >>
> >> >> Thanks,
> >> >> John
> >> >>
> >> >>
> >> >> On 11/13/2013 05:44 AM, Xingcheng Lu via RT wrote:
> >> >>>
> >> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639
>
> >> >>>
> >> >>> Dear John,
> >> >>>
> >> >>> Thank you for your response, I tried to drop my files to the
FTP,
> >> >> however,
> >> >>> while I put my files, error message showed up:
> >> >>>
> >> >>> 227 Entering Passive Mode (128,117,192,211,192,15)
> >> >>> 553 Could not determine cwdir: No such file or directory.
> >> >>>
> >> >>> Any method to solve this? Thank you!
> >> >>>
> >> >>> Sincerely,
> >> >>>
> >> >>> Jason
> >> >>>
> >> >>>
> >> >>> 2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>
> >> >>>
> >> >>>> Jason,
> >> >>>>
> >> >>>> I'm not exactly sure how to address this issue.  But let me
tell
> >> you
> >> how
> >> >>>> Point-Stat handles verification of "surface" variables.  It
depends
> >> on
> >> >> the
> >> >>>> observation message type being used.  The ADPSFC and
> >> >>>> SFCSHP message types are special cases.  Basically, any
point
> >> >> observation
> >> >>>> with an APDSFC or SFCSHP message type are assumed to be at
the
> >> surface -
> >> >>>> regardless of their actual elevation or height value.
> >> >>>>
> >> >>>> When you're verifying forecasts with a vertical level type
(such as
> >> >>>> 2-meter temperature or 10-meter winds - any vertical level
> >> specified
> >> >> using
> >> >>>> a "Z") and comparing it to a surface message type (ADPSFC
> >> >>>> or SFCSHP), all point observations of those types will be
used.  So
> >> when
> >> >>>> verifying 2-m TMP and 0-m TMP against the ADPSFC message
type, I
> >> would
> >> >>>> expect that they would use the same set of point
> >> >>>> observations.
> >> >>>>
> >> >>>> This vertical level matching part can get a bit tricky.
It'd
> >> probably
> >> >> be
> >> >>>> best to have you send me a sample forecast file, observation
file,
> >> and
> >> >>>> Point-Stat config file along with questions as to why
> >> >>>> Point-Stat is producing the output that it is.  Usually
working
> >> through
> >> >> a
> >> >>>> specific example provides more answers than speaking more
> >> generally.
> >> >>>>
> >> >>>> You also asked a question about pressure.  Perhaps, you
could
> >> included
> >> >>>> that in the test data you send as well.  I'm having a
difficult
> >> time
> >> >>>> understanding exactly what the issue is.  I could take a
> >> >>>> look at your config file and your data and perhaps offer
some
> >> >> suggestions.
> >> >>>>
> >> >>>> You can send me data by posting it to our anonymous ftp
site:
> >> >>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >> >>>>
> >> >>>> Thanks,
> >> >>>> John
> >> >>>>
> >> >>>> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
> >> >>>>>
> >> >>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
> >> >>>>>
> >> >>>>> Dear John,
> >> >>>>>
> >> >>>>> I met another problem when I ran the MET. In my ascii
observation
> >> data,
> >> >>>> the
> >> >>>>> height and elevation are the same. In the config file I set
both
> >> >> Z0(TMP)
> >> >>>>> and Z2(TMP) and found that the RMSE of Z0 reached around 40
and Z2
> >> only
> >> >>>>> around 2. In theory, I think that my observation data
should be
> >> the
> >> >>>>> temperature near the ground(Not the soil temperature from
wrf)
> >> because
> >> >>>>> elevation=height. So, I want to know if I set Z0(TMP),
whether MET
> >> will
> >> >>>> use
> >> >>>>> the soil temperature from wrf to compare with the
observation
> >> data?
> >> >>>>>
> >> >>>>> Also, if it is possible, hope that you can answer my question
> >> >>>>> regarding the pressure issue I asked one week ago at your
> >> >>>>> convenience.  Thank you in advance!
> >> >>>>>
> >> >>>>> Sincerely,
> >> >>>>>
> >> >>>>> Jason
> >> >>>>>
> >> >>>>>
> >> >>>>> 2013/10/31 Xingcheng Lu
<xingchenglu2011 at u.northwestern.edu>
> >> >>>>>
> >> >>>>>> Hi John,
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> I still not quite understand the neighborhood method, I
know that
> >> we
> >> >>>> first
> >> >>>>>> need to set a threshold to enclose other points which are
closed
> >> to
> >> >> the
> >> >>>>>> center point, but which factor decides whether the grid
within
> >> the
> >> >>>>>> searching radius is turn on or not?
> >> >>>>>>
> >> >>>>>> I ran the Ascii fortran one just now, and it worked! I
don't know
> >> why,
> >> >>>>>> maybe it is due to my cluster issue. By the way, what kind
of
> >> data
> >> >> can I
> >> >>>>>> use if I want to apply the little_r option?
> >> >>>>>>
> >> >>>>>> I just made a comparison for my observation data and
forecast
> >> data
> >> for
> >> >>>> Z0.
> >> >>>>>> I made a test and found that for ADPUPA, only when the
elevation
> >> is
> >> >> zero
> >> >>>>>> can the observation and forecast be matched. However,
since the
> >> >>>> observation
> >> >>>>>> height and elevation is the same in my obs data, like if
the
> >> elevation
> >> >>>> is 5
> >> >>>>>> meters, the observation height is also 5m. I don't know
under
> >> such
> >> >>>>>> condition whether the obs can be counted as  Z0? If yes, I
don't
> >> know
> >> >>>> why
> >> >>>>>> it cannot be matched by MET. But if I set as ADPSFC, all
the obs
> >> can
> >> >> be
> >> >>>>>> matched.
> >> >>>>>>
> >> >>>>>> My data has exact pressure value, and to the Z0, it ranges
from
> >> >>>> 990-1014.
> >> >>>>>> However, for both ADPUPA and ADPSFC, the results of P960-
1013
> >> and
> >> Z0
> >> >>>> are
> >> >>>>>> not the same. This results seem like: The temperature
related to
> >> >>>> pressure
> >> >>>>>> is not the same with that related to height at the same
location.
> >> I
> >> am
> >> >>>>>> wondering whether there is any interpretation for the temp
value
> >> >>>> related to
> >> >>>>>> the pressure?(I have attached one of my result to this
email.)
> >> >>>>>>
> >> >>>>>> Also, I need to make a full comparison between point obs
and
> >> forecast
> >> >> on
> >> >>>>>> surface, do you have any idea which interpolation
method is
> >> more
> >> >>>>>> reliable. Also, to the surface temperature, I wrote ADPSFC
for
> >> the
> >> >> first
> >> >>>>>> column of obs-ascii, and set Z0 in the pointstat config
file, am
> >> I
> >> >>>> correct
> >> >>>>>> or not? To the UW_Weight and DW_Weight method, I need to
first
> >> set
> >> the
> >> >>>>>> width, any suggestion for that?
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Regards,
> >> >>>>>>
> >> >>>>>> Jason
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
> >> >>>>>>
> >> >>>>>>> Jason,
> >> >>>>>>>
> >> >>>>>>> Answers are inline...
> >> >>>>>>>
> >> >>>>>>> Thanks,
> >> >>>>>>> John
> >> >>>>>>>
> >> >>>>>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
> >> >>>>>>>>
> >> >>>>>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
> >> >>>>>>>> Transaction: Ticket created by
> >> xingchenglu2011 at u.northwestern.edu
> >> >>>>>>>>            Queue: met_help
> >> >>>>>>>>          Subject: Several questions regarding MET
application
> >> >>>>>>>>            Owner: Nobody
> >> >>>>>>>>       Requestors: xingchenglu2011 at u.northwestern.edu
> >> >>>>>>>>           Status: new
> >> >>>>>>>>      Ticket <URL:
> >> >>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
> >> >>>>>>>>
> >> >>>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> I have several questions regarding the application of
MET:
> >> >>>>>>>>
> >> >>>>>>>> 1:The threshold setting for variable(e.g. >273) is
frequent in
> >> the
> >> >>>>>>>> tutorial, whether the threshold will be invalid if I
just
> >> calculate
> >> >>>> and
> >> >>>>>>>> compare the continuous statistics.(Like if MET will get
rid of
> >> the
> >> >>>> data
> >> >>>>>>>> which is less than 273 for continuous verification?)
> >> >>>>>>>
> >> >>>>>>> The "cat_thresh" setting stands for "categorical
threshold".
> >> That
> >> is
> >> >>>>>>> used when computing contingency table counts and
statistics (the
> >> CTC
> >> >>>> and
> >> >>>>>>> CTS output line types).  The "cat_thresh" is used to
> >> >>>>>>> define what constitutes an "event" when computing a 2x2
> >> contingency
> >> >>>>>>> table.  It has no impact on the continuous statistics and
> >> partial
> >> >> sums
> >> >>>> in
> >> >>>>>>> the CNT and SL1L2 output line types.
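[Editor's note: the 2x2 contingency table counting that a categorical threshold drives can be sketched as below. This is a toy Python illustration, not MET code; the function and variable names are made up. The four counts correspond roughly to MET's FY_OY, FY_ON, FN_OY, FN_ON columns in the CTC line type.]

```python
# Count hits / false alarms / misses / correct negatives over matched
# forecast-observation pairs, with an "event" defined as value > thresh.
def contingency_counts(pairs, thresh=273.0):
    hits = false_alarms = misses = correct_negs = 0
    for f, o in pairs:
        f_event, o_event = f > thresh, o > thresh
        if f_event and o_event:
            hits += 1
        elif f_event:
            false_alarms += 1
        elif o_event:
            misses += 1
        else:
            correct_negs += 1
    return hits, false_alarms, misses, correct_negs

pairs = [(275.0, 274.0), (270.0, 274.0), (276.0, 272.0), (269.0, 268.0)]
counts = contingency_counts(pairs)  # -> (1, 1, 1, 1)
```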
> >> >>>>>>>
> >> >>>>>>> However, in the future we may add a parameter to filter
the
> >> matched
> >> >>>> pairs
> >> >>>>>>> that go into the continuous statistics.  Some users have
> >> requested
> >> >> the
> >> >>>>>>> ability to do conditional verification like that -
> >> >>>>>>> where you throw out some of the matched pairs before
computing
> >> >>>> continuous
> >> >>>>>>> stats.  But that does not currently exist in the current
METv4.1
> >> >>>> release.
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 2:For the neighborhood method applied in gridded-gridded
> >> comparison,
> >> >>>>>>>> whether this method is just useful for the categorical
> >> variables?
> >> >> Can
> >> >>>>>>> it be
> >> >>>>>>>> applied in the continuous statistics? I don't quite
understand
> >> that
> >> >>>> why
> >> >>>>>>> the
> >> >>>>>>>> width value for the square must be an odd integer. Also,
in the
> >> >>>> gridded
> >> >>>>>>>> comparison, I don't quite understand why before
comparison,
> >> fcst
> >> and
> >> >>>> obs
> >> >>>>>>>> fields needed to be smoothed first.
> >> >>>>>>>
> >> >>>>>>> To answer your second question first, they do not need to
be
> >> smoothed
> >> >>>>>>> first.  Typically, grid_stat is run with no
"interpolation", or
> >> >>>> smoothing,
> >> >>>>>>> done.  That's why the default looks like this:
> >> >>>>>>> interp = {
> >> >>>>>>>        field      = BOTH;
> >> >>>>>>>        vld_thresh = 1.0;
> >> >>>>>>>
> >> >>>>>>>        type = [
> >> >>>>>>>           {
> >> >>>>>>>              method = UW_MEAN;
> >> >>>>>>>              width  = 1;
> >> >>>>>>>           }
> >> >>>>>>>        ];
> >> >>>>>>> };
> >> >>>>>>>
> >> >>>>>>> However, this provides an easy way to smooth the data
before
> >> >> computing
> >> >>>>>>> statistics.  And that is called "upscaling".  So you
could see
> >> how
> >> >> the
> >> >>>>>>> performance of your model improves the more you smooth
it.
> >> >>>>>>>      Typically, smoother forecasts score much better than
more
> >> detailed
> >> >>>> ones.
> >> >>>>>>>     But, as I mentioned, typically no smoothing is
performed.
> >> >>>>>>>
> >> >>>>>>> The neighborhood methods implemented in Grid-Stat must be
> >> performed
> >> >>>> using
> >> >>>>>>> a threshold.  First, the raw fields are thresholded to
create a
> >> 0/1
> >> >>>> bitmap
> >> >>>>>>> in each.  Then, for each neighborhood width, a
> >> >>>>>>> "coverage" value is computed as the percentage of grid
squares
> >> in
> >> >> that
> >> >>>>>>> box that are turned on.  The neighborhood stats are
computed
> >> over
> >> >> those
> >> >>>>>>> coverage values.  The widths must be odd so that they're
> >> >>>>>>> centered on each grid point.  A width of 5 means you have
2 grid
> >> >> points
> >> >>>>>>> to the left and right.  7 means there's 3 on each side.
A width
> >> of 4
> >> >>>>>>> wouldn't be centered on the grid box.
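[Editor's note: the thresholding and "coverage" computation described above can be sketched as follows. This is an illustrative Python sketch, not MET's implementation; the function name is made up. It shows why the width must be odd: `width // 2` points sit on each side of the center.]

```python
# Fractional coverage in an odd-width neighborhood centered on (i, j),
# after thresholding the field to a 0/1 bitmap with "value > thresh".
def coverage(field, i, j, width, thresh):
    half = width // 2  # odd width -> 'half' grid points on each side
    on = total = 0
    for di in range(-half, half + 1):
        for dj in range(-half, half + 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < len(field) and 0 <= jj < len(field[0]):
                total += 1
                on += field[ii][jj] > thresh
    return on / total

field = [[0, 5, 0],
         [5, 5, 5],
         [0, 5, 0]]
cov = coverage(field, 1, 1, 3, 1)  # 5 of the 9 points exceed 1 -> 5/9
```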
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 3:In both point-stat and grid-stat, the tutorial states
that it
> >> is
> >> >> not
> >> >>>>>>>> recommended to use analysis field for comparison. I
don't quite
> >> get
> >> >>>>>>>> the point what the analysis field means. If I compare
two
> >> wrfout
> >> by
> >> >>>>>>> using
> >> >>>>>>>> different physical schemes, is it counted as the
situation the
> >> >>>> tutorial
> >> >>>>>>>> states?
> >> >>>>>>>
> >> >>>>>>> An analysis field is just the 0-hour forecast from a
model.
> >> Users
> >> >> will
> >> >>>>>>> often compare a 24-hour forecast from the previous day to
the
> >> 0-hour
> >> >>>>>>> forecast of the current day.  They're assuming that the
> >> >>>>>>> model analysis is "truth".  The problem is that the model
> >> analysis
> >> is
> >> >>>>>>> typically very far from truth.  The model analysis will
contain
> >> the
> >> >>>> same
> >> >>>>>>> type of biases and errors that the forecast will.
> >> >>>>>>> Verifying against a model analysis won't really tell you
how
> >> good
> >> >> your
> >> >>>>>>> model is doing.
> >> >>>>>>>
> >> >>>>>>> However, we set up the MET tools in a general way to
enable
> >> users
> >> to
> >> >>>>>>> perform whatever type of comparison they'd like.  As you
> >> mention,
> >> you
> >> >>>> can
> >> >>>>>>> compare the output of two different physical schemes.
> >> >>>>>>> But the tough part will be interpreting the meaning of
the
> >> resulting
> >> >>>>>>> statistics.
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 4: If I compare the grid fcst and grid obs for T2 in a
specific
> >> >>>>>>>> time(Setting beg/end=0),then I will get some statistics
values,
> >> such
> >> >>>> as
> >> >>>>>>>> ME,MSE. I am not quite sure about the calculation
process, for
> >> >>>> example,
> >> >>>>>>> in
> >> >>>>>>>> the fcst field, whether MET first sum the T2 value from
all
> >> grid
> >> >>>> points
> >> >>>>>>>> first, then compare with the obs? Or it compares the
value
> >> between
> >> >>>> fcst
> >> >>>>>>> and
> >> >>>>>>>> obs for each point and do the statistics calculation.
> >> >>>>>>>
> >> >>>>>>> For gridded verification, MET looks grid-point by grid-
point.
> >> For
> >> >> each
> >> >>>>>>> grid point, it considers the forecast value (f) and the
> >> observation
> >> >>>> value
> >> >>>>>>> (o).  If either of those contain bad data, it skips
> >> >>>>>>> that point.  If both data values are good, it computes an
error
> >> value
> >> >>>> as
> >> >>>>>>> f - o.  The mean error (ME) is the average error over all
grid
> >> >> points.
> >> >>>>    The
> >> >>>>>>> mean squared error (MSE) is the average squared
> >> >>>>>>> error over all grid points.
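[Editor's note: that grid-point-by-grid-point calculation amounts to the sketch below. Toy Python with made-up values, not MET code; bad data is marked here with None and skipped, as described above.]

```python
# ME and MSE over matched grid points, skipping any point where either
# the forecast or the observation is bad data (None here).
def me_mse(fcst, obs):
    errors = [f - o for f, o in zip(fcst, obs)
              if f is not None and o is not None]
    me = sum(errors) / len(errors)                   # mean error
    mse = sum(e * e for e in errors) / len(errors)   # mean squared error
    return me, mse

fcst = [274.0, 276.0, None, 280.0]
obs  = [273.0, 277.0, 275.0, 278.0]
me, mse = me_mse(fcst, obs)  # errors: 1, -1, 2 -> ME = 2/3, MSE = 2.0
```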
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 5: If I want to compare the variables value at the eta-
level
> >> set
> >> in
> >> >>>> the
> >> >>>>>>> wrf
> >> >>>>>>>> namelist, any method for me to do that instead of just
setting
> >> the
> >> >>>>>>> specific
> >> >>>>>>>> height?
> >> >>>>>>>
> >> >>>>>>> No.  MET assumes that you've post-processed your raw WRF
output
> >> for
> >> >> two
> >> >>>>>>> reasons.  First, post-processing destaggers the data and
puts it
> >> on a
> >> >>>>>>> regular grid.  MET doesn't support staggered grids.
> >> >>>>>>> Second, post-processing interpolates the model output
onto
> >> pressure
> >> >>>>>>> levels.  Point observations are defined at pressure
levels, not
> >> >> hybrid
> >> >>>>>>> eta-levels.  In order to compare your model output to
point
> >> >>>>>>> data, it needs to be interpolated to pressure levels.
> >> >>>>>>>
> >> >>>>>>> For post-processing, we recommend using the Unified
> >> Post-Processor
> >> >>>> which
> >> >>>>>>> writes out GRIB files that MET supports very well.
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 6: For the MODE tool, I don't understand the convolution
> >> process.
> >> >> The
> >> >>>>>>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v), is
it the
> >> same
> >> >> with
> >> >>>>>>>> C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to
first set
> >> the R
> >> >> and
> >> >>>> H
> >> >>>>>>>> value, but I don't know the true meaning for setting
them. If H
> >> is
> >> >>>>>>> large,
> >> >>>>>>>> then R would be small, vice and versa.  However, to the
value
> >> of
> >> >>>>>>> C(x,y), it
> >> >>>>>>>> is hard to compare (large area* lower height) versus
(small
> >> area
> >> >>>> *large
> >> >>>>>>>> height). Could you explain to me a little bit more under
what
> >> >>>> condition
> >> >>>>>>>> should I set larger H or smaller R?
> >> >>>>>>>
> >> >>>>>>> I don't think it's very necessary to understand the
convolution
> >> >>>> process.
> >> >>>>>>>     It's just a circular smoothing filter.  The convolution
> >> >>>>>>> process is controlled by the radius you set ("conv_radius" in the
> >> >>>>>>> config file).  That defines the convolution radius in grid units.  The
> >> >>>>>>> value at each grid point is just replaced by the average
value
> >> of
> >> all
> >> >>>> grid
> >> >>>>>>> points falling within the circle of that radius around
> >> >>>>>>> the point.  I do suggest playing around with it.  Keep
the
> >> threshold
> >> >>>> set
> >> >>>>>>> the same and see how the objects change as you increase/decrease
> >> >>>>>>> the convolution radius.
> >> >>>>>>>
> >> >>>>>>> Ultimately, you should play around with both the
convolution
> >> >> threshold
> >> >>>>>>> and radius to define objects that capture the phenomenon
of
> >> interest.
> >> >>>>    For
> >> >>>>>>> example, if you're interested in studying large MCS's,
> >> >>>>>>> you'd set the convolution radius high and the convolution
> >> threshold
> >> >> low
> >> >>>>>>> (small number of large objects).  For small scale
convection,
> >> you'd
> >> >>>> set the
> >> >>>>>>> convolution radius low and the threshold high (large
> >> >>>>>>> number of small objects).
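[Editor's note: the circular smoothing filter described above can be sketched as below. This is a brute-force illustrative version in Python, not MODE's implementation; each grid point is replaced by the mean of all points within the given radius.]

```python
# Replace each point with the mean of all points within 'radius' grid
# units of it (a simple circular smoothing filter).
def circular_smooth(field, radius):
    nrow, ncol = len(field), len(field[0])
    out = [[0.0] * ncol for _ in range(nrow)]
    r2 = radius * radius
    for i in range(nrow):
        for j in range(ncol):
            total = count = 0
            for u in range(nrow):
                for v in range(ncol):
                    if (u - i) ** 2 + (v - j) ** 2 <= r2:
                        total += field[u][v]
                        count += 1
            out[i][j] = total / count
    return out

field = [[0.0, 0.0, 0.0],
         [0.0, 9.0, 0.0],
         [0.0, 0.0, 0.0]]
smoothed = circular_smooth(field, 1)  # center averages a 5-point plus shape
```

Thresholding the smoothed field (rather than the raw one) is what turns the radius/threshold pair into object definitions: a larger radius spreads the 9.0 over more points, so a fixed threshold captures a broader, flatter object.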
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 7: If I want to verify the grid data from CMAQ output,
like the
> >> NO2
> >> >>>>>>>> concentration, can I do that with MET? How to set the
'field'
> >> in
> >> the
> >> >>>>>>> config
> >> >>>>>>>> file?
> >> >>>>>>>>
> >> >>>>>>>
> >> >>>>>>> I'm not familiar with that data set.  If you have a
gridded data
> >> file
> >> >>>>>>> that MET supports and have questions about extracting
data from
> >> it,
> >> >>>> just
> >> >>>>>>> post a sample data file to our anonymous ftp site
> >> >>>>>>> following these instructions:
> >> >>>>>>>
> http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >> >>>>>>>
> >> >>>>>>> Then send us a met-help ticket about it.
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> 9:My last question is regarding the ascii to nc tool. My
obs
> >> data
> >> is
> >> >>>> not
> >> >>>>>>>> bufr nor the standard ascii format for MET. I then used
both
> >> Fortran
> >> >>>> and
> >> >>>>>>>> Matlab to transfer my data to the standard ascii format
for
> >> MET.
> >> To
> >> >>>> the
> >> >>>>>>>> fortran one, it showed a lot of such warnings:
> >> >>>>>>>> WARNING:
> >> >>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
> >> >> specified
> >> >>>> in
> >> >>>>>>>> the header (10) does not match the number found in the
data (1)
> >> on
> >> >>>> line
> >> >>>>>>>> number 4087.
> >> >>>>>>>> WARNING:
> >> >>>>>>>> WARNING:
> >> >>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
> >> >> specified
> >> >>>> in
> >> >>>>>>>> the header (10) does not match the number found in the
data (1)
> >> on
> >> >>>> line
> >> >>>>>>>> number 4091.
> >> >>>>>>>> WARNING:
> >> >>>>>>>> WARNING:
> >> >>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
> >> >> specified
> >> >>>> in
> >> >>>>>>>> the header (10) does not match the number found in the
data (1)
> >> on
> >> >>>> line
> >> >>>>>>>> number 4095.
> >> >>>>>>>>
> >> >>>>>>>> But at last, the nc file can be produced. To the Matlab
one,
> >> the
> >> >>>>>>> process is
> >> >>>>>>>> correct, could you please tell me the reason. Is that
related
> >> to
> >> the
> >> >>>>>>> data
> >> >>>>>>>> type written onto the file, like the string or the
float? But
> >> the
> >> >>>>>>> format I
> >> >>>>>>>> set is the same in both scripts. I have also attached
the data
> >> >>>>>>> transformed
> >> >>>>>>>> by fortran and matlab to this email.
> >> >>>>>>>
> >> >>>>>>> I ran the two data files you sent through ascii2nc and
both ran
> >> fine
> >> >>>>>>> without any warnings.  The warnings about "little_r"
you're
> >> seeing
> >> >> are
> >> >>>> odd.
> >> >>>>>>>     ascii2nc supports multiple ascii file formats, one of
> >> >>>>>>> which is named little_r.  So for some reason, it was not
> >> interpreting
> >> >>>> the
> >> >>>>>>> format of the ascii data you passed it correctly.  You
can
> >> explicitly
> >> >>>> tell
> >> >>>>>>> it the file format with the "-format" command line
> >> >>>>>>> option.  I'd suggest passing the "-format met_point"
option to
> >> >> ascii2nc
> >> >>>>>>> to explicitly tell it to interpret your data using the
MET point
> >> >>>> format.
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> Also, since the data is not coming from bufr, to the
> >> Message_Type
> >> I
> >> >>>> just
> >> >>>>>>>> write 'ADPUPA', whether this will influence the
statistics
> >> result?
> >> >> The
> >> >>>>>>>> height for different observation stations might be
different,
> >> is
> >> >> there
> >> >>>>>>> any
> >> >>>>>>>> method for me to compare the fcst and obs for different
> >> specific
> >> >>>> heights
> >> >>>>>>>> instead of just setting a height value(e.g. 2m)?
> >> >>>>>>>
> >> >>>>>>> For surface data, you should set the message type to ADPSFC.  When
> >> >>>>>>> comparing 2-meter temperature to the ADPSFC message type,
no
> >> vertical
> >> >>>>>>> interpolation is done.  For upper-air verification at
pressure
> >> >>>>>>> levels, vertical interpolation is done linear in the log
of
> >> pressure.
> >> >>>>>>>     When verifying a certain number of meters above/below
ground
> >> (like
> >> >>>> winds
> >> >>>>>>> at 30m or 40m), vertical interpolation is done linear in
> >> >>>>>>> height.
> >> >>>>>>>
> >> >>>>>>>>
> >> >>>>>>>> Thank you in advance for your time and help!
> >> >>>>>>>>
> >> >>>>>>>> Sincerely,
> >> >>>>>>>>
> >> >>>>>>>> Jason
> >> >>>>>>>>
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>
> >> >>>>
> >> >>>>
> >> >>
> >> >>
> >>
> >>
> >
>
>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63639] Several questions regarding MET application
From: John Halley Gotway
Time: Thu Nov 21 10:33:31 2013

Jason,

No, there is no tool within MET that would facilitate the conversion
of point data to gridded data.  I believe we've only gotten this
request once in the past, and it seems like the method of gridding
the data would be pretty specific to the dataset you're using.  So it
would be difficult to provide a general-purpose tool that would
actually do a good job.

Thanks,
John
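
[Editor's note: since no such tool exists in MET, a common do-it-yourself workaround is simple bin-averaging of the point observations onto the target grid. The sketch below is illustrative Python only; the function name, coordinates, and grid geometry are all made up, and this is not part of MET.]

```python
# Average point obs (lat, lon, value) into the cells of a regular
# lat/lon grid; cells with no obs are left as None.
def bin_average(obs, lat0, lon0, dlat, dlon, nlat, nlon):
    sums = [[0.0] * nlon for _ in range(nlat)]
    counts = [[0] * nlon for _ in range(nlat)]
    for lat, lon, value in obs:
        i = int((lat - lat0) / dlat)
        j = int((lon - lon0) / dlon)
        if 0 <= i < nlat and 0 <= j < nlon:
            sums[i][j] += value
            counts[i][j] += 1
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(nlon)] for i in range(nlat)]

# Made-up observations over a made-up 2x2 grid with 0.5-degree spacing.
obs = [(22.05, 114.02, 300.0), (22.07, 114.08, 302.0), (22.55, 114.55, 290.0)]
grid = bin_average(obs, lat0=22.0, lon0=114.0, dlat=0.5, dlon=0.5,
                   nlat=2, nlon=2)
# cell (0,0) averages the first two obs -> 301.0; cell (1,1) -> 290.0
```

As John notes, how best to grid the data is dataset-specific; bin-averaging is only the simplest option, and the resulting "observation grid" inherits whatever sampling biases the stations have.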

On 11/21/2013 08:00 AM, Xingcheng Lu via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>
> Dear John,
>
> Thank you for your response, yes, I agree with you now and I doubt a
little
> bit for my obs data. By the way, I have another question regarding
the  obs
> points, if I have a lot of observation point within a small area,
whether
> there is any tool in MET can help me to interpolate them into grid
format,
> which can be used to do the grid-stat? Thanks-
>
> Sincerely,
>
> Jason
>
>
> 2013/11/21 John Halley Gotway via RT <met_help at ucar.edu>
>
>> Jason,
>>
>> No, MET will not "extract the soil temperature" from the GRIB files
you've
>> passed it.  It's really pretty simple... your GRIB files contain
several
>> records.  How you set the "fcst" parameter in the config file tells
>> Point-Stat which record(s) to use.  Setting "fcst" to TMP at Z0
tells
>> Point-Stat to select GRIB record number 251 in the data you sent
and
>> compare it to the observations.  Setting it to TMP at Z2 tells
Point-Stat
>> to select GRIB record number 271 in the data you sent instead.
>>
>> TMP at Z0 should be surface temperature, and TMP at Z2 should be
the
>> temperature at 2-meters.  There are separate GRIB records for soil
>> temperature and soil moisture, but we're not telling Point-Stat to
use
>> them, so they are not involved here.  And you wouldn't compare
forecasts
>> of soil temperature to observations of temperature at the surface
anyway.
>>
>> As for why your temperature errors are greater at Z0 than Z2, I
really
>> don't know.  It all depends on the source of those observations.
Perhaps
>> they really are being taken at 2-meters?
>>
>> Hope that helps clarify.
>>
>> Thanks,
>> John
>>
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>>>
>>> Dear John,
>>>
>>> Thank you for your help and detailed explanation. To the pressure
part,
>>> now
>>> I understand, interpolation will be done for the FCST. However,
what I
>> am
>>> still confused about is the Z0 and Z2. According to your
explanation, I
>>> know that the FCST will be compared to the OBS directly without
doing any
>>> interpolation. However, I don't understand why the error between
OBS and
>>> Z0 will be larger than Z2, since my OBS data should be at height
>>> 0 (Height = Elevation). So, I am wondering, if I set Z0, whether MET
>>> will extract the soil temperature from the WRF output? Thank you again!
>>>
>>> Sincerely,
>>>
>>> Jason
>>>
>>>
>>>
>>>
>>> 2013/11/19 John Halley Gotway via RT <met_help at ucar.edu>
>>>
>>>> Jason,
>>>>
>>>> Sorry for the delay in getting back to you.  I ran Point-Stat
using the
>>>> data you sent me (for Height) and a verbosity level of 4 (-v 4),
>>>> and I see the following...
>>>>
>>>> For TMP/Z0, Point-Stat is using GRIB record 251 from your
forecast file:
>>>>
>>>>
>>
251:8577024:d=11070100:TMP:kpds5=11:kpds6=1:kpds7=0:TR=10:P1=1:P2=180:TimeU=1:sfc:436hr
>>>> fcst:NAve=0
>>>>
>>>> For TMP/Z2, Point-Stat is using GRIB record 271 from your
forecast file:
>>>>
>>>>
>>
271:9000782:d=11070100:TMP:kpds5=11:kpds6=105:kpds7=2:TR=10:P1=1:P2=180:TimeU=1:2
>>>>
>>>> Since these are both vertical level forecast types being compared
to the
>>>> ADPSFC message type, all of the point observations are being used
for
>>>> both
>>>> comparisons.  Notice that the OBAR (or mean
>>>> observation value) is the same for Z0 and Z2 comparisons:
301.04693.
>>>>   That's because the same set of observations (all 914 of them)
are being
>>>> used for both comparisons.  Now, what sort of behavior
>>>> were you expecting from Point-Stat?  Were you expecting it to
take the
>>>> height of the observation minus the elevation of the station to
>>>> determine
>>>> the height above ground level?  And then only use the
>>>> point observation if it's height above ground level matches the
forecast
>>>> level?
>>>>
>>>> As I mentioned in the past I believe, vertical level matching for
>>>> Point-Stat is rather simple.  It is not doing the checking I just
>>>> described.  Instead, it is all controlled by the "message type".
>>>> When verifying vertical level forecast fields (like Z0, Z2, or
Z10)
>>>> against "surface" message type (like ADPSFC or SFCSHP), all point
>>>> observations will be used regardless of their height.  So really
>>>> it's up to you decide if these point observations of temperature
should
>>>> be
>>>> compared to a 2-meter temperature forecast or a surface
temperature
>>>> forecast.
>>>>
>>>> Next, I ran Point-Stat using the data in the "Pressure" directory.
>>>> All of your point observations use the ADPSFC message type.
>>>>   And you're verifying TMP/Z0 and TMP/P1014-990.
>>>> Again Point-Stat finds TMP/Z0 in GRIB record number 251.  For
>>>> TMP/P1014-990, it only finds a single GRIB record in that range;
record
>>>> 238
>>>> contains temperature of 1000mb.
>>>> Again, all of the point observations are used for the verification.
>>>>   But this time the reason is different.  When comparing TMP/Z0
to the
>>>> ADPSFC message type, all point observations are used
>>>> because of my explanation above.  When comparing TMP/P1014-990,
>>>> Point-Stat
>>>> checks the pressure level for each point observation and only
uses it if
>>>> it
>>>> falls between 1014 and 990.  All of your point
>>>> observation do fall in that range, so they are all used.
>>>>
>>>> Next, I tried running Point-Stat to verify TMP/P900-1000.  This
results
>>>> in
>>>> only 19 matched pairs being found.  Point-Stat searches your
forecast
>>>> file
>>>> for TMP records falling between 900 and 1000mb,
>>>> and it finds 5 of them:
>>>>
>>>>
>>
203:10841294:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=900:TR=0:P1=92:P2=0:TimeU=1:900
>>>> mb:92hr fcst:NAve=0
>>>>
>>>>
>>
212:11358896:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=925:TR=0:P1=92:P2=0:TimeU=1:925
>>>> mb:92hr fcst:NAve=0
>>>>
>>>>
>>
221:11885126:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=950:TR=0:P1=92:P2=0:TimeU=1:950
>>>> mb:92hr fcst:NAve=0
>>>>
>>>>
>>
230:12394008:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=975:TR=0:P1=92:P2=0:TimeU=1:975
>>>> mb:92hr fcst:NAve=0
>>>>
>>>>
>>
238:12820842:d=11070100:TMP:kpds5=11:kpds6=100:kpds7=1000:TR=0:P1=92:P2=0:TimeU=1:1000
>>>> mb:92hr fcst:NAve=0
>>>>
>>>> For each point observation that falls in that pressure range, it
>>>> computes
>>>> a forecast value by doing vertical interpolation between the
>>>> forecast levels above and below the observation.  So for a
>>>> temperature observation at 994mb, it takes the forecast values at
1000mb
>>>> and 975mb and interpolates between them to the observation level.
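The vertical interpolation described above (linear in the log of pressure, as noted later in this thread) can be sketched in Python; the function name and numbers are illustrative, not MET code:

```python
import math

def interp_log_p(p_below, f_below, p_above, f_above, p_obs):
    """Interpolate a forecast value to an observation pressure level,
    linearly in the log of pressure (a sketch, not MET's implementation)."""
    w = (math.log(p_obs) - math.log(p_below)) / \
        (math.log(p_above) - math.log(p_below))
    return f_below + w * (f_above - f_below)

# A temperature observation at 994mb, with forecasts at 1000mb and 975mb:
t_994 = interp_log_p(1000.0, 300.0, 975.0, 298.0, 994.0)
```

The interpolated value falls between the two bracketing forecast values, weighted toward the nearer pressure level.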
>>>>
>>>> Hope that helps clarify.
>>>>
>>>> Thanks,
>>>> John Halley Gotway
>>>> met_help at ucar.edu
>>>>
>>>> On 11/14/2013 07:38 AM, Xingcheng Lu via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>>>>>
>>>>> Dear John,
>>>>>
>>>>> Thank you for your help and it works now. I have uploaded a file
>>>> called
>>>>> Jason.zip to the ftp. Inside it, there are two folders called
pressure
>>>> and
>>>>> height respectively which include observation file, wrfout,
config and
>>>> the
>>>>> result I got. The pressure folder is related to the pressure
issue I
>>>>> mentioned to you before and height folder is related to the T0
and T2
>>>>> issues. Thank you!
>>>>>
>>>>> Sincerely,
>>>>>
>>>>> Jason
>>>>>
>>>>>
>>>>> 2013/11/14 John Halley Gotway via RT <met_help at ucar.edu>
>>>>>
>>>>>> Jason,
>>>>>>
>>>>>> Try these commands:
>>>>>>
>>>>>>      cd <directory containing the files you want to post>
>>>>>>      ftp -p ftp.rap.ucar.edu
>>>>>>      cd incoming/irap/met_help
>>>>>>      mkdir xingcheng_data_20131113
>>>>>>      cd xingcheng_data_20131113
>>>>>>      put <file1>
>>>>>>      put <file2>
>>>>>>      ...
>>>>>>      bye
>>>>>>
>>>>>> Do you still have problems?
>>>>>>
>>>>>> Thanks,
>>>>>> John
>>>>>>
>>>>>>
>>>>>> On 11/13/2013 05:44 AM, Xingcheng Lu via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639
>
>>>>>>>
>>>>>>> Dear John,
>>>>>>>
>>>>>>> Thank you for your response, I tried to drop my files to the
FTP,
>>>>>> however,
>>>>>>> while I put my files, error message showed up:
>>>>>>>
>>>>>>> 227 Entering Passive Mode (128,117,192,211,192,15)
>>>>>>> 553 Could not determine cwdir: No such file or directory.
>>>>>>>
>>>>>>> Any method to solve this? Thank you!
>>>>>>>
>>>>>>> Sincerely,
>>>>>>>
>>>>>>> Jason
>>>>>>>
>>>>>>>
>>>>>>> 2013/11/8 John Halley Gotway via RT <met_help at ucar.edu>
>>>>>>>
>>>>>>>> Jason,
>>>>>>>>
>>>>>>>> I'm not exactly sure how to address this issue.  But let me
tell
>>>> you
>>>> how
>>>>>>>> Point-Stat handles verification of "surface" variables.  It
depends
>>>> on
>>>>>> the
>>>>>>>> observation message type being used.  The ADPSFC and
>>>>>>>> SFCSHP message types are special cases.  Basically, any point
>>>>>> observation
>>>>>>>> with an APDSFC or SFCSHP message type are assumed to be at
the
>>>> surface -
>>>>>>>> regardless of their actual elevation or height value.
>>>>>>>>
>>>>>>>> When you're verifying forecasts with a vertical level type
(such as
>>>>>>>> 2-meter temperature or 10-meter winds - any vertical level
>>>> specified
>>>>>> using
>>>>>>>> a "Z") and comparing it to a surface message type (ADPSFC
>>>>>>>> or SFCSHP), all point observations of those types will be
used.  So
>>>> when
>>>>>>>> verifying 2-m TMP and 0-m TMP against the ADPSFC message
type, I
>>>> would
>>>>>>>> expect that they would use the same set of point
>>>>>>>> observations.
>>>>>>>>
>>>>>>>> This vertical level matching part can get a bit tricky.  It'd
>>>> probably
>>>>>> be
>>>>>>>> best to have you send me a sample forecast file, observation
file,
>>>> and
>>>>>>>> Point-Stat config file along with questions as to why
>>>>>>>> Point-Stat is producing the output that it is.  Usually
working
>>>> through
>>>>>> a
>>>>>>>> specific example provides more answers than speaking more
>>>> generally.
>>>>>>>>
>>>>>>>> If you have any other relevant files, please include them in the
>>>>>>>> test data you send as well.  I'm having a
difficult
>>>> time
>>>>>>>> understanding exactly what the issue is.  I could take a
>>>>>>>> look at your config file and your data and perhaps offer some
>>>>>> suggestions.
>>>>>>>>
>>>>>>>> You can send me data by posting it to our anonymous ftp site:
>>>>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> John
>>>>>>>>
>>>>>>>> On 11/06/2013 07:07 AM, Xingcheng Lu via RT wrote:
>>>>>>>>>
>>>>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639 >
>>>>>>>>>
>>>>>>>>> Dear John,
>>>>>>>>>
>>>>>>>>> I met another problem when I ran the MET. In my ascii
observation
>>>> data,
>>>>>>>> the
>>>>>>>>> height and elevation are the same. In the config file I set
both
>>>>>> Z0(TMP)
>>>>>>>>> and Z2(TMP) and found that the RMSE of Z0 reached around 40
and Z2
>>>> only
>>>>>>>>> around 2. In theory, I think that my observation data should
be
>>>> the
>>>>>>>>> temperature near the ground(Not the soil temperature from
wrf)
>>>> because
>>>>>>>>> elevation=height. So, I want to know if I set Z0(TMP),
whether MET
>>>> will
>>>>>>>> use
>>>>>>>>> the soil temperature from wrf to compare with the
observation
>>>> data?
>>>>>>>>>
>>>>>>>>> Also, if it is possible, hope that you can answer my questions
>>>>>>>>> in the last email.  Thank you in advance!
>>>>>>>>>
>>>>>>>>> Sincerely,
>>>>>>>>>
>>>>>>>>> Jason
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2013/10/31 Xingcheng Lu <xingchenglu2011 at u.northwestern.edu>
>>>>>>>>>
>>>>>>>>>> Hi John,
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I still don't quite understand the neighborhood method.  I
know that
>>>> we
>>>>>>>> first
>>>>>>>>>> need to set a threshold to enclose other points which are
closed
>>>> to
>>>>>> the
>>>>>>>>>> center point, but which factor decides whether the grid
within
>>>> the
>>>>>>>>>> searching radius is turn on or not?
>>>>>>>>>>
>>>>>>>>>> I ran the Ascii fortran one just now, and it worked! I
don't know
>>>> why,
>>>>>>>>>> maybe it is due to my cluster issue. By the way, what kind
of
>>>> data
>>>>>> can I
>>>>>>>>>> use if I want to apply the little_r option?
>>>>>>>>>>
>>>>>>>>>> I just made a comparison for my observation data and
forecast
>>>> data
>>>> for
>>>>>>>> Z0.
>>>>>>>>>> I made a test and found that for ADPUPA, only when the
elevation
>>>> is
>>>>>> zero
>>>>>>>>>> can the observation and forecast be matched. However, since
the
>>>>>>>> observation
>>>>>>>>>> height and elevation is the same in my obs data, like if
the
>>>> elevation
>>>>>>>> is 5
>>>>>>>>>> meters, the observation height is also 5m. I don't know
under
>>>> such
>>>>>>>>>> condition whether the obs can be counted as  Z0? If yes, I
don't
>>>> know
>>>>>>>> why
>>>>>>>>>> it cannot be matched by MET. But if I set as ADPSFC, all
the obs
>>>> can
>>>>>> be
>>>>>>>>>> matched.
>>>>>>>>>>
>>>>>>>>>> My data has exact pressure values, and for Z0 they range from
>>>>>>>>>> 990-1014.  I found that the results for P1013 and Z0 are
>>>>>>>>>> not the same.  These results seem to say that the temperature
>>>>>>>>>> matched at a pressure level is not the same as that matched at
>>>>>>>>>> a height at the same location.  I am wondering whether there is
>>>>>>>>>> any interpolation for the temp value at the pressure level?  (I
>>>>>>>>>> have attached one of my results to this email.)
>>>>>>>>>>
>>>>>>>>>> Also, I need to make a full comparison between point obs
and
>>>> forecast
>>>>>> on
>>>>>>>>>> surface, do you have any idea which interpolation method is
>>>> more
>>>>>>>>>> reliable. Also, to the surface temperature, I wrote ADPSFC
for
>>>> the
>>>>>> first
>>>>>>>>>> column of obs-ascii, and set Z0 in the pointstat config
file, am
>>>> I
>>>>>>>> correct
>>>>>>>>>> or not? To the UW_Weight and DW_Weight method, I need to
first
>>>> set
>>>> the
>>>>>>>>>> width, any suggestion for that?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> Jason
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2013/10/30 John Halley Gotway via RT <met_help at ucar.edu>
>>>>>>>>>>
>>>>>>>>>>> Jason,
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> John
>>>>>>>>>>>
>>>>>>>>>>> On 10/29/2013 10:11 AM, Xingcheng Lu via RT wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Tue Oct 29 10:11:07 2013: Request 63639 was acted upon.
>>>>>>>>>>>> Transaction: Ticket created by
>>>> xingchenglu2011 at u.northwestern.edu
>>>>>>>>>>>>             Queue: met_help
>>>>>>>>>>>>           Subject: Several questions regarding MET
application
>>>>>>>>>>>>             Owner: Nobody
>>>>>>>>>>>>        Requestors: xingchenglu2011 at u.northwestern.edu
>>>>>>>>>>>>            Status: new
>>>>>>>>>>>>       Ticket <URL:
>>>>>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=63639>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I have several questions regarding the application of
MET:
>>>>>>>>>>>>
>>>>>>>>>>>> 1:The threshold setting for variable(e.g. >273) is
frequent in
>>>> the
>>>>>>>>>>>> tutorial, whether the threshold will be invalid if I just
>>>> calculate
>>>>>>>> and
>>>>>>>>>>>> compare the continuous statistics.(Like if MET will get
rid of
>>>> the
>>>>>>>> data
>>>>>>>>>>>> which is less than 273 for continuous verification?)
>>>>>>>>>>>
>>>>>>>>>>> The "cat_thresh" setting stands for "categorical
threshold".
>>>> That
>>>> is
>>>>>>>>>>> used when computing contingency table counts and
statistics (the
>>>> CTC
>>>>>>>> and
>>>>>>>>>>> CTS output line types).  The "cat_thresh" is used to
>>>>>>>>>>> define what constitutes an "event" when computing a 2x2
>>>> contingency
>>>>>>>>>>> table.  It has no impact on the continuous statistics and
>>>> partial
>>>>>> sums
>>>>>>>> in
>>>>>>>>>>> the CNT and SL1L2 output line types.
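As a rough illustration of the role of "cat_thresh", here is how matched pairs become 2x2 contingency table counts when an "event" is defined as exceeding the threshold (a sketch, not MET's implementation):

```python
def contingency_counts(pairs, thresh):
    """Count hits, misses, false alarms, and correct negatives from
    matched (forecast, observation) pairs; an "event" means value > thresh."""
    hits = misses = false_alarms = correct_negs = 0
    for f, o in pairs:
        f_event, o_event = f > thresh, o > thresh
        if f_event and o_event:
            hits += 1
        elif not f_event and o_event:
            misses += 1
        elif f_event and not o_event:
            false_alarms += 1
        else:
            correct_negs += 1
    return hits, misses, false_alarms, correct_negs
```

Categorical statistics are derived from these four counts; as noted above, the continuous statistics use all matched pairs and ignore the threshold entirely.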
>>>>>>>>>>>
>>>>>>>>>>> However, in the future we may add a parameter to filter
the
>>>> matched
>>>>>>>> pairs
>>>>>>>>>>> that go into the continuous statistics.  Some users have
>>>> requested
>>>>>> the
>>>>>>>>>>> ability to do conditional verification like that -
>>>>>>>>>>> where you throw out some of the matched pairs before
computing
>>>>>>>> continuous
>>>>>>>>>>> stats.  But that does not currently exist in the current
METv4.1
>>>>>>>> release.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 2:For the neighborhood method applied in gridded-gridded
>>>> comparison,
>>>>>>>>>>>> whether this method is just useful for the categorical
>>>> variables?
>>>>>> Can
>>>>>>>>>>> it be
>>>>>>>>>>>> applied in the continuous statistics? I don't quite
understand
>>>> that
>>>>>>>> why
>>>>>>>>>>> the
>>>>>>>>>>>> width value for the square must be an odd integer. Also,
in the
>>>>>>>> gridded
>>>>>>>>>>>> comparison, I don't quite understand why before
comparison,
>>>> fcst
>>>> and
>>>>>>>> obs
>>>>>>>>>>>> fields needed to be smoothed first.
>>>>>>>>>>>
>>>>>>>>>>> To answer your second question first, they do not need to
be
>>>> smoothed
>>>>>>>>>>> first.  Typically, grid_stat is run with no
"interpolation", or
>>>>>>>> smoothing,
>>>>>>>>>>> done.  That's why the default looks like this:
>>>>>>>>>>> interp = {
>>>>>>>>>>>         field      = BOTH;
>>>>>>>>>>>         vld_thresh = 1.0;
>>>>>>>>>>>
>>>>>>>>>>>         type = [
>>>>>>>>>>>            {
>>>>>>>>>>>               method = UW_MEAN;
>>>>>>>>>>>               width  = 1;
>>>>>>>>>>>            }
>>>>>>>>>>>         ];
>>>>>>>>>>> };
>>>>>>>>>>>
>>>>>>>>>>> However, this provides an easy way to smooth the data
before
>>>>>> computing
>>>>>>>>>>> statistics.  And that is called "upscaling".  So you could
see
>>>> how
>>>>>> the
>>>>>>>>>>> performance of your model improves the more you smooth it.
>>>>>>>>>>>       Typically, smoother forecasts score much better than
>>>>>>>>>>> more detailed ones.  But, as I mentioned, typically no
>>>>>>>>>>> smoothing is performed.
>>>>>>>>>>>
>>>>>>>>>>> The neighborhood methods implemented in Grid-Stat must be
>>>> performed
>>>>>>>> using
>>>>>>>>>>> a threshold.  First, the raw fields are thresholded to
create a
>>>> 0/1
>>>>>>>> bitmap
>>>>>>>>>>> in each.  Then, for each neighborhood width, a
>>>>>>>>>>> "coverage" value is computed as the percentage of grid
squares
>>>> in
>>>>>> that
>>>>>>>>>>> box that are turned on.  The neighborhood stats are
computed
>>>> over
>>>>>> those
>>>>>>>>>>> coverage values.  The widths must be odd so that they're
>>>>>>>>>>> centered on each grid point.  A width of 5 means you have
2 grid
>>>>>> points
>>>>>>>>>>> to the left and right.  7 means there's 3 on each side.  A
width
>>>> of 4
>>>>>>>>>>> wouldn't be centered on the grid box.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 3:In both point-stat and grid-stat, the tutorial states
that it
>>>> is
>>>>>> not
>>>>>>>>>>>> recommended to use analysis field for comparison. I don't
quite
>>>> get
>>>>>>>>>>>> the point what the analysis field means. If I compare two
>>>> wrfout
>>>> by
>>>>>>>>>>> using
>>>>>>>>>>>> different physical schemes, is it counted as the
situation the
>>>>>>>> tutorial
>>>>>>>>>>>> states?
>>>>>>>>>>>
>>>>>>>>>>> An analysis field is just the 0-hour forecast from a
model.
>>>> Users
>>>>>> will
>>>>>>>>>>> often compare a 24-hour forecast from the previous day to
the
>>>> 0-hour
>>>>>>>>>>> forecast of the current day.  They're assuming that the
>>>>>>>>>>> model analysis is "truth".  The problem is that the model
>>>> analysis
>>>> is
>>>>>>>>>>> typically very far from truth.  The model analysis will
contain
>>>> the
>>>>>>>> same
>>>>>>>>>>> type of biases and errors that the forecast will.
>>>>>>>>>>> Verifying against a model analysis won't really tell you
how
>>>> good
>>>>>> your
>>>>>>>>>>> model is doing.
>>>>>>>>>>>
>>>>>>>>>>> However, we set up the MET tools in a general way to
enable
>>>> users
>>>> to
>>>>>>>>>>> perform whatever type of comparison they'd like.  As you
>>>> mention,
>>>> you
>>>>>>>> can
>>>>>>>>>>> compare the output of two different physical schemes.
>>>>>>>>>>> But the tough part will be interpreting the meaning of the
>>>> resulting
>>>>>>>>>>> statistics.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 4: If I compare the grid fcst and grid obs for T2 in a
specific
>>>>>>>>>>>> time(Setting beg/end=0),then I will get some statistics
values,
>>>> such
>>>>>>>> as
>>>>>>>>>>>> ME,MSE. I am not quite sure about the calculation
process, for
>>>>>>>> example,
>>>>>>>>>>> in
>>>>>>>>>>>> the fcst field, whether MET first sum the T2 value from
all
>>>> grid
>>>>>>>> points
>>>>>>>>>>>> first, then compare with the obs? Or it compares the
value
>>>> between
>>>>>>>> fcst
>>>>>>>>>>> and
>>>>>>>>>>>> obs for each point and do the statistics calculation.
>>>>>>>>>>>
>>>>>>>>>>> For gridded verification, MET works grid-point by grid-point.
>>>>>>>>>>> For each
>>>>>>>>>>> grid point, it considers the forecast value (f) and the
>>>> observation
>>>>>>>> value
>>>>>>>>>>> (o).  If either of those contain bad data, it skips
>>>>>>>>>>> that point.  If both data values are good, it computes an
error
>>>> value
>>>>>>>> as
>>>>>>>>>>> f - o.  The mean error (ME) is the average error over all
grid
>>>>>> points.
>>>>>>>>     The
>>>>>>>>>>> mean squared error (MSE) is the average squared
>>>>>>>>>>> error over all grid points.
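A minimal sketch of that grid-point by grid-point calculation; the bad-data flag value here is a placeholder, not MET's actual encoding:

```python
import numpy as np

BAD_DATA = -9999.0  # placeholder flag; MET uses its own bad-data value

def me_mse(fcst, obs):
    """Compute ME and MSE over matched grid points: skip any point where
    either value is bad, then average f - o and (f - o)**2."""
    ok = (fcst != BAD_DATA) & (obs != BAD_DATA)
    err = fcst[ok] - obs[ok]
    return err.mean(), (err ** 2).mean()
```

So there is no summing of the fields before comparison; the error is formed point by point and only then averaged.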
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 5: If I want to compare the variables value at the eta-
level
>>>> set
>>>> in
>>>>>>>> the
>>>>>>>>>>> wrf
>>>>>>>>>>>> namelist, any method for me to do that instead of just
setting
>>>> the
>>>>>>>>>>> specific
>>>>>>>>>>>> height?
>>>>>>>>>>>
>>>>>>>>>>> No.  MET assumes that you've post-processed your raw WRF
output
>>>> for
>>>>>> two
>>>>>>>>>>> reasons.  First, post-processing destaggers the data and
puts it
>>>> on a
>>>>>>>>>>> regular grid.  MET doesn't support staggered grids.
>>>>>>>>>>> Second, post-processing interpolates the model output onto
>>>> pressure
>>>>>>>>>>> levels.  Point observations are defined at pressure
levels, not
>>>>>> hybrid
>>>>>>>>>>> eta-levels.  In order to compare your model output to
point
>>>>>>>>>>> data, it needs to be interpolated to pressure levels.
>>>>>>>>>>>
>>>>>>>>>>> For post-processing, we recommend using the Unified
>>>> Post-Processor
>>>>>>>> which
>>>>>>>>>>> writes out GRIB files that MET supports very well.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 6: For the MODE tool, I don't understand the convolution
>>>> process.
>>>>>> The
>>>>>>>>>>>> expression written as: C(x,y)=∑a(u,v)f(x-u)(x-v); is it the
>>>>>>>>>>>> same as C(x,y)=∑a(u,v)f(x-u,x-v)?  I know that we need to first
set
>>>> the R
>>>>>> and
>>>>>>>> H
>>>>>>>>>>>> value, but I don't know the true meaning for setting
them. If H
>>>> is
>>>>>>>>>>> large,
>>>>>>>>>>>> then R would be small, vice and versa.  However, to the
value
>>>> of
>>>>>>>>>>> C(x,y), it
>>>>>>>>>>>> is hard to compare (large area * lower height) versus
>>>>>>>>>>>> (small area * large height). Could you explain to me a little
>>>>>>>>>>>> bit more under
what
>>>>>>>> condition
>>>>>>>>>>>> should I set larger H or smaller R?
>>>>>>>>>>>
>>>>>>>>>>> I don't think it's very necessary to understand the
convolution
>>>>>>>> process.
>>>>>>>>>>>      It's just a circular smoothing filter.  The convolution
>>>>>>>>>>> process is controlled by the convolution radius (set in the
>>>>>>>>>>> config file), which is defined in grid units.  The
>>>>>>>>>>> value at each grid point is just replaced by the average
value
>>>> of
>>>> all
>>>>>>>> grid
>>>>>>>>>>> points falling within the circle of that radius around
>>>>>>>>>>> the point.  I do suggest playing around with it.  Keep the
>>>> threshold
>>>>>>>> set
>>>>>>>>>>> the same and see how the objects change as you
>>>>>>>>>>> increase/decrease the radius.
>>>>>>>>>>>
>>>>>>>>>>> Ultimately, you should play around with both the
convolution
>>>>>> threshold
>>>>>>>>>>> and radius to define objects that capture the phenomenon
of
>>>> interest.
>>>>>>>>     For
>>>>>>>>>>> example, if you're interested in studying large MCS's,
>>>>>>>>>>> you'd set the convolution radius high and the convolution
>>>> threshold
>>>>>> low
>>>>>>>>>>> (small number of large objects).  For small scale
convection,
>>>> you'd
>>>>>>>> set the
>>>>>>>>>>> convolution radius low and the threshold high (large
>>>>>>>>>>> number of small objects).
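A rough stand-in for that circular averaging (illustrative only; MODE's actual convolution and its treatment of grid edges are more involved):

```python
import numpy as np

def circular_smooth(field, radius):
    """Replace each grid point with the mean of all in-bounds points
    within `radius` grid units -- a simple circular smoothing filter."""
    n = int(radius)
    ys, xs = np.mgrid[-n:n + 1, -n:n + 1]
    mask = (xs ** 2 + ys ** 2) <= radius ** 2   # circular footprint
    nrow, ncol = field.shape
    out = np.full(field.shape, np.nan)
    for i in range(nrow):
        for j in range(ncol):
            vals = []
            for di, dj in zip(ys[mask], xs[mask]):
                ii, jj = i + di, j + dj
                if 0 <= ii < nrow and 0 <= jj < ncol:
                    vals.append(field[ii, jj])
            out[i, j] = np.mean(vals)
    return out

# Objects are then defined by thresholding the smoothed field, e.g.:
# objects = circular_smooth(precip, radius=5) >= conv_thresh
```

As described above: a larger radius with a lower threshold yields a small number of large objects; a smaller radius with a higher threshold yields many small objects.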
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 7: If I want to verify the grid data from CMAQ output,
like the
>>>> NO2
>>>>>>>>>>>> concentration, can I do that with MET? How to set the
'field'
>>>> in
>>>> the
>>>>>>>>>>> config
>>>>>>>>>>>> file?
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I'm not familiar with that data set.  If you have a
gridded data
>>>> file
>>>>>>>>>>> that MET supports and have questions about extracting data
from
>>>> it,
>>>>>>>> just
>>>>>>>>>>> post a sample data file to our anonymous ftp site
>>>>>>>>>>> following these instructions:
>>>>>>>>>>>
>> http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>>>>>>>>
>>>>>>>>>>> Then send us a met-help ticket about it.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 9:My last question is regarding the ascii to nc tool. My
obs
>>>> data
>>>> is
>>>>>>>> not
>>>>>>>>>>>> bufr nor the standard ascii format for MET. I then used
both
>>>> Fortran
>>>>>>>> and
>>>>>>>>>>>> Matlab to transfer my data to the standard ascii format
for
>>>> MET.
>>>> To
>>>>>>>> the
>>>>>>>>>>>> fortran one, it showed a lot of such warnings:
>>>>>>>>>>>> WARNING:
>>>>>>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
>>>>>> specified
>>>>>>>> in
>>>>>>>>>>>> the header (10) does not match the number found in the
data (1)
>>>> on
>>>>>>>> line
>>>>>>>>>>>> number 4087.
>>>>>>>>>>>> WARNING:
>>>>>>>>>>>> WARNING:
>>>>>>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
>>>>>> specified
>>>>>>>> in
>>>>>>>>>>>> the header (10) does not match the number found in the
data (1)
>>>> on
>>>>>>>> line
>>>>>>>>>>>> number 4091.
>>>>>>>>>>>> WARNING:
>>>>>>>>>>>> WARNING:
>>>>>>>>>>>> WARNING: process_little_r_obs() -> the number of data
lines
>>>>>> specified
>>>>>>>> in
>>>>>>>>>>>> the header (10) does not match the number found in the
data (1)
>>>> on
>>>>>>>> line
>>>>>>>>>>>> number 4095.
>>>>>>>>>>>>
>>>>>>>>>>>> But at last, the nc file can be produced. To the Matlab
one,
>>>> the
>>>>>>>>>>> process is
>>>>>>>>>>>> correct, could you please tell me the reason. Is that
related
>>>> to
>>>> the
>>>>>>>>>>> data
>>>>>>>>>>>> type written onto the file, like the string or the float?
But
>>>> the
>>>>>>>>>>> format I
>>>>>>>>>>>> set is the same in both scripts. I have also attached the
data
>>>>>>>>>>> transformed
>>>>>>>>>>>> by fortran and matlab to this email.
>>>>>>>>>>>
>>>>>>>>>>> I ran the two data files you sent through ascii2nc and
both ran
>>>> fine
>>>>>>>>>>> without any warnings.  The warnings about "little_r"
you're
>>>> seeing
>>>>>> are
>>>>>>>> odd.
>>>>>>>>>>>      ascii2nc supports multiple ascii file formats, one of
>>>>>>>>>>> which is named little_r.  So for some reason, it was not
>>>> interpreting
>>>>>>>> the
>>>>>>>>>>> format of the ascii data you passed it correctly.  You can
>>>> explicitly
>>>>>>>> tell
>>>>>>>>>>> it the file format with the "-format" command line
>>>>>>>>>>> option.  I'd suggest passing the "-format met_point"
option to
>>>>>> ascii2nc
>>>>>>>>>>> to explicitly tell it to interpret your data using the MET
point
>>>>>>>> format.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Also, since the data is not coming from bufr, to the
>>>> Message_Type
>>>> I
>>>>>>>> just
>>>>>>>>>>>> write 'ADPUPA', whether this will influence the
statistics
>>>> result?
>>>>>> The
>>>>>>>>>>>> height for different observation stations might be
different,
>>>> is
>>>>>> there
>>>>>>>>>>> any
>>>>>>>>>>>> method for me to compare the fcst and obs for different
>>>> specific
>>>>>>>> heights
>>>>>>>>>>>> instead of just setting a height value(e.g. 2m)?
>>>>>>>>>>>
>>>>>>>>>>> For surface data, you should set the message type to ADPSFC.
>>>>>>>>>>> When
>>>>>>>>>>> comparing 2-meter temperature to the ADPSFC message type,
no
>>>> vertical
>>>>>>>>>>> interpolation is done.  For upper-air verification at
pressure
>>>>>>>>>>> levels, vertical interpolation is done linear in the log
of
>>>> pressure.
>>>>>>>>>>>      When verifying a certain number of meters above/below
ground
>>>> (like
>>>>>>>> winds
>>>>>>>>>>> at 30m or 40m), vertical interpolation is done linear in
>>>>>>>>>>> height.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Sincerely,
>>>>>>>>>>>>
>>>>>>>>>>>> Jason
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>
>>
>>
>>
>>

------------------------------------------------
```