[Met_help] [rt.rap.ucar.edu #81887] History for question on masking
John Halley Gotway via RT
met_help at ucar.edu
Wed Sep 6 11:50:17 MDT 2017
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
Hi,
Is there a way to mask a region on the fly, after you've created point_stat_* files? For example, I have a cron job that uses PB2NC to create netCDF files and then uses POINT_STAT to create files that cover several domains (global, North Atlantic, and North Pacific). Is there a way to take the output from POINT_STAT and zoom in on a region, say, the Caribbean or Gulf of Mexico, and get specific stats for a smaller lat/lon box on the fly? Or do you have to rerun POINT_STAT with a separate mask called from your config file?
Roz
--
Rosalyn MacCracken
Support Scientist
Ocean Applications Branch
NOAA/NWS Ocean Prediction Center
NCWCP
5830 University Research Ct
College Park, MD 20740-3818
(p) 301-683-1551
rosalyn.maccracken at noaa.gov
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: question on masking
From: John Halley Gotway
Time: Wed Sep 06 10:02:40 2017
Roz,
The answer to your question is yes and no. It all depends on how you've configured Point-Stat. One of the output line types from point_stat is named MPR, for matched pairs. When you turn on MPR output, you get one output line for every single matched pair that is included in your statistics. If you have MPR output lines, you can run a stat_analysis job to subset them however you'd like (including applying another polyline region) and re-compute whatever output line type you request. For example, the following stat_analysis job would subset the MPR lines over the LMV region (Lower Mississippi Valley) and compute continuous statistics for them:

stat_analysis -lookin point_stat_output_MPR.stat -job aggregate_stat \
  -line_type MPR -out_line_type CNT -by FCST_VAR,FCST_LEV \
  -mask_poly $MET_BASE/poly/LMV.poly

Notice that I used "-by FCST_VAR,FCST_LEV" to run this job separately for each unique combination of values from those columns.
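If it helps, here is a rough sketch of looping that same job over a few regions (CARIBBEAN.poly and GULF_OF_MEXICO.poly are hypothetical lat/lon polyline files you would create yourself; they do not ship with MET):

# Run the same aggregate_stat job once per masking region, writing each
# region's continuous statistics to its own output file.
for poly in $MET_BASE/poly/LMV.poly CARIBBEAN.poly GULF_OF_MEXICO.poly; do
  stat_analysis -lookin point_stat_output_MPR.stat \
    -job aggregate_stat -line_type MPR -out_line_type CNT \
    -by FCST_VAR,FCST_LEV \
    -mask_poly "$poly" \
    -out "cnt_$(basename "$poly" .poly).stat"
done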
Practically speaking, though, writing MPR output lines is not a very good idea. It's a horribly inefficient way of storing the matched pair data: the same 22 header columns get repeated for each row! If you're using a small number of observation locations, running a limited case study, or just want to debug the code, writing them is fine. But it just isn't feasible to do so operationally.

That's the whole motivation for the partial sums line types (SL1L2 and VL1L2) and contingency table counts (CTC). They take up much less space. They are computed over pre-defined regions, and once you've run point_stat there's no way of breaking them down further.

So unless you have MPR output lines, the answer is no: you can't subdivide results further after running point_stat.
Thanks,
John
------------------------------------------------
Subject: question on masking
From: Rosalyn MacCracken - NOAA Affiliate
Time: Wed Sep 06 10:38:53 2017
Hi John,
Oh, good. It sounds like I should be able to further subdivide my output, since I write out the MPR file, even though the files are big and it is somewhat inefficient. I actually use that file to create plots of the locations of GFS, ASCAT, and differences. Right now we have the disk space, and processing times aren't too bad.

Oh, so, while I have your ear, I was having a slight issue with creating matchups for previous forecast cycles. I wanted to match up the 00Z ASCAT data with yesterday's 00Z GFS cycle, then the GFS from two days prior, 3 days prior, and 4 days prior. In other words, I was trying to verify the 24 hour forecast from yesterday, the 48 hour forecast from 2 days ago, the 72 hour forecast from 3 days ago, and the 96 hour forecast from 4 days ago. So, matchups are just created from how close the lat/lons are, correct? And in a gridded field, such as the GFS, shouldn't there always be some sort of corresponding point to the ASCAT point data? I'm just wondering why I have matched pairs out to ~48 hours, but sometimes no matches after that. I was wondering if there was some reason why the points might diverge and not match up in space (lat/lon) when going back in time.

I may have to do some other digging, like maybe there is limited ASCAT data? But if there is ASCAT data, there should always be a match, no?
Roz
--
Rosalyn MacCracken
Support Scientist
Ocean Applications Branch
NOAA/NWS Ocean Prediction Center
NCWCP
5830 University Research Ct
College Park, MD 20740-3818
(p) 301-683-1551
rosalyn.maccracken at noaa.gov
------------------------------------------------
Subject: question on masking
From: John Halley Gotway
Time: Wed Sep 06 11:31:26 2017
Roz,
I read through your email but don't understand exactly what you're asking. Yes, I agree with you that observation locations tend to stay at the same lat/lon through time. There's no reason to think that the lat/lons would change over time.

You're wondering why sometimes you get 0 matched pairs from point_stat. Let me mention 2 things that might help:

(1) Please increase the verbosity level when you run point_stat. Run with "-v 3" or higher and you'll see the following type of info printed to the screen:

DEBUG 2: Processing TMP/P900-750 versus TMP/P900-750, for observation type ADPUPA, over region DTC165, for interpolation method NEAREST(1), using 155 pairs.
DEBUG 3: Number of matched pairs = 155
DEBUG 3: Observations processed = 89893
DEBUG 3: Rejected: SID exclusion = 0
DEBUG 3: Rejected: GRIB code = 79360
DEBUG 3: Rejected: valid time = 0
DEBUG 3: Rejected: bad obs value = 0
DEBUG 3: Rejected: off the grid = 5
DEBUG 3: Rejected: level mismatch = 9607
DEBUG 3: Rejected: quality marker = 0
DEBUG 3: Rejected: message type = 344
DEBUG 3: Rejected: masking region = 422
DEBUG 3: Rejected: bad fcst value = 0
DEBUG 3: Rejected: duplicates = 0
This output tells me that when verifying temperature from 750 to 900 mb against the ADPUPA message type over a particular region, we found 155 matched pairs. The code considered 89,893 observations and discarded 79,360 because of the GRIB code, 9,607 because of the vertical level, and 344 because of the message type.

If you are getting 0 matched pairs, this information is critical in figuring out why.

(2) The "valid time" rejection count listed in the output above indicates how many observations were not close enough in time to the forecast valid time to be included. When you run point_stat, it determines the observation valid time window to be applied in one of two ways:
  - First, it gets the valid time of the forecast data and determines the time window from the "obs_window" entry in the config file. The window runs from valid_time + beg to valid_time + end.
  - If the -obs_valid_beg and -obs_valid_end command line options are used, they override the config file settings.

You mention that the number of matched pairs decreases the further out in time you look. Perhaps that's due to a problem in your valid time window logic?
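For reference, here is a rough sketch of those command line overrides (the file names, config name, and the +/- 30 minute window below are placeholders, not taken from your setup):

# Verify yesterday's 00Z GFS 24-hour forecast against ASCAT point obs
# valid near 00Z today. File and config names here are hypothetical.
point_stat \
  gfs.t00z.pgrb2f24 \
  ascat_pb2nc_20170906.nc \
  PointStatConfig_ascat \
  -obs_valid_beg 20170905_233000 \
  -obs_valid_end 20170906_003000 \
  -outdir out/f024 \
  -v 3

Equivalently, you can adjust the "obs_window" beg and end values (in seconds, relative to the forecast valid time) in the Point-Stat config file instead of using the command line options.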
Thanks,
John
------------------------------------------------
Subject: question on masking
From: Rosalyn MacCracken - NOAA Affiliate
Time: Wed Sep 06 11:49:33 2017
Hi John,
You know, I bet it's the valid time window logic. I bet I need to either have a separate config file for the previous forecasts or override the begin and end times. I'm running PB2NC and POINT_STAT on some recent forecast days to fill in some missing data from when our computer had issues (OK, I crashed the computer by running too much at one time, DOH!), but when I finish running those days, I'll try using -v 3 and see what happens.

OK, well, you can close this ticket, and if I have other questions about that time window, I'll open a new ticket.

The main thing is that I can use stat_analysis to make my analysis window smaller and do some case studies. Lots of great ocean surface wind verification these past 2 weeks and into next week, and maybe even the week after. Very cool!

Thanks for your help!
Roz
--
Rosalyn MacCracken
Support Scientist
Ocean Applications Branch
NOAA/NWS Ocean Prediction Center
NCWCP
5830 University Research Ct
College Park, MD 20740-3818
(p) 301-683-1551
rosalyn.maccracken at noaa.gov
------------------------------------------------
More information about the Met_help
mailing list