[Met_help] [rt.rap.ucar.edu #57709] History for Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
John Halley Gotway via RT
met_help at ucar.edu
Tue Sep 25 08:07:44 MDT 2012
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
Hello again John,
I have come across a posting that might be relevant to my very slow runs
on the NASA Ames supercomputer Pleiades. That machine uses the Lustre
file system mentioned in the posting below. Below is a fragment of an
email regarding this matter that I sent to my contact for Pleiades. I
was wondering if you have any comments.
'...I noticed a posting entitled 'Very slow I/O on large blocksize
filesystems'
(http://sourceforge.net/projects/nco/forums/forum/9829/topic/4898620)
that seemed somewhat relevant, especially this part:
"Problem: This issue may be related to the NOFILL issue with netCDF
4.1.2; in any case, *on filesystems with large blocksizes (2M, for
example, 'lustre' *and NCAR's GLADE system) *the I/O performance* of
even simple 'ncks' operations is horrible - *time-to-completion ratios
(compared to smaller blocksize filesystems) of 300:1 or even 1500:1 are
not uncommon. *
Investigation with NCAR CISL staff showed that *a simple variable
extraction that takes about 20 seconds on a small blocksize filesystem
takes about 40 minutes on the GLADE filesystem (120:1 ratio)*..."
Note that, of course, I'm not simply using the NCO operators. I am
reading a single grib file of NWP data and two ncdf observation files
to pair up forecasts with observations. There's a lot of spatial
interpolation being computed after variable extraction. I can verify
that the slowness occurs during the processing of the two ncdf files,
which happen to have the mentioned unlimited dimension.'
I was wondering if MET has ever been run on a machine with a large
blocksize.
John
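For context on why the unlimited dimension matters: reads along a
record (UNLIMITED) dimension are typically issued one record at a
time, which on a large-blocksize filesystem means many tiny I/O
operations. Below is a minimal sketch of that access pattern using the
NetCDF C library; the file and variable names are hypothetical, and
this is not MET's actual code.

    #include <netcdf.h>

    /* Sketch of the suspect access pattern: one tiny read per record
     * along the UNLIMITED dimension.  Error checking omitted; the file
     * and variable names are hypothetical. */
    void read_one_by_one(void)
    {
        int ncid, varid, unlimdimid;
        size_t nrecs, i;
        float val;

        nc_open("obs.nc", NC_NOWRITE, &ncid);
        nc_inq_unlimdim(ncid, &unlimdimid);       /* id of the UNLIMITED dim */
        nc_inq_dimlen(ncid, unlimdimid, &nrecs);  /* current record count */
        nc_inq_varid(ncid, "obs_val", &varid);

        for (i = 0; i < nrecs; i++) {
            size_t start[1] = { i }, count[1] = { 1 };
            nc_get_vara_float(ncid, varid, start, count, &val);  /* one record */
        }
        nc_close(ncid);
    }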
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #57709] Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: John Halley Gotway
Time: Fri Aug 03 11:02:30 2012
John,
I took the exact same test case you sent me and ran it on glade. What
took 24 seconds on my desktop machine takes about 14 minutes up on
glade!
We do run MET on bluefire/glade for some of our DTC experiments. I
spoke with the staff who do those runs, and they have found the
runtimes to be significantly longer there.
I suspect that the blocksize issue you uncovered is the likely
culprit. It may be possible to adjust the logic in MET to do a much
smaller number of larger reads from the NetCDF file. Perhaps that
would improve the blocksize issue. I do not have time right now to
work on this, but I will create an issue in our MET enhancements
tracking tool to investigate this further when time and funding allow.
Thanks for pointing this out!
John
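For illustration, the change described above - a single bulk read into
memory instead of many one-record reads - might look something like
the following sketch with the NetCDF C library. The names here are
hypothetical, not the actual MET code.

    #include <netcdf.h>
    #include <stdlib.h>

    /* Sketch: a single bulk read of all records, processed from memory
     * afterwards.  Names are hypothetical; error checking omitted. */
    void read_all_at_once(int ncid, int varid, size_t nrecs)
    {
        float *buf = malloc(nrecs * sizeof(float));
        size_t start[1] = { 0 }, count[1] = { nrecs };

        /* One large, contiguous read instead of nrecs tiny ones. */
        nc_get_vara_float(ncid, varid, start, count, buf);

        /* ... process buf[0 .. nrecs-1] in memory ... */
        free(buf);
    }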
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #57709] Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: jhenders at aer.com
Time: Fri Aug 03 11:35:02 2012
John,
I'm glad you could reproduce my problem. Reading deeper into the
posting uncovers some potential workarounds. The simplest, from my
perspective, would be to apply the following command to the file:
'nccopy -u infile outfile'. This converts the 'unlimited' dimension to
a fixed size. I would try this on Pleiades right now, but it is down
for maintenance today. If you have access to the netCDF utilities on
glade (nccopy being one of them, distributed with netCDF rather than
NCO), presenting MET with an obs file that has been 'nccopied' might
speed things up.
John
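As a side note, whether a file still has an unlimited dimension is
easy to check: 'ncdump -h' flags it as UNLIMITED in the dimensions
section, or it can be tested programmatically. A small sketch with the
NetCDF C API (error handling omitted):

    #include <netcdf.h>

    /* Return 1 if the file has an UNLIMITED (record) dimension, i.e. if
     * 'nccopy -u infile outfile' would actually change anything. */
    int has_unlimited_dim(const char *path)
    {
        int ncid, unlimdimid;
        nc_open(path, NC_NOWRITE, &ncid);
        nc_inq_unlimdim(ncid, &unlimdimid);   /* sets -1 if none */
        nc_close(ncid);
        return unlimdimid != -1;
    }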
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #57709] Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: John Halley Gotway
Time: Mon Aug 06 09:57:20 2012
John,
Thanks a lot for the suggestion. I ran your test case this morning on
bluefire with and without the UNLIMITED dimension in the NetCDF files.
Here are the timing results:
With UNLIMITED: 13:48.52; without: 6:46.39
That's 828.5 versus 406.4 seconds, so it's about twice as fast - or,
more appropriately, half as slow. This seems like a good workaround
for the time being. But as I mentioned, I've created a ticket in our
issue tracking system to investigate ways to improve runtime
performance on large blocksize systems. Clearly avoiding the use of
UNLIMITED dimensions is one option to consider!
Thanks,
John
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #57709] Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: jhenders at aer.com
Time: Mon Aug 06 10:05:18 2012
Hi John,
Thanks for your timing results. Unfortunately, on Pleiades there is no
speed-up when removing the UNLIMITED dimension. I'm surprised and
disappointed. If I hear anything from their system administrators, I'll
pass it along.
John
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #57709] Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: John Halley Gotway
Time: Tue Aug 07 12:07:18 2012
John,
I wanted to let you know that, upon closer review, the result I told
you about using a FIXED dimension, as opposed to an UNLIMITED
dimension, is incorrect.
The mistake was due to an error in my script that made the FIXED
dimension run *appear* to be faster. After fixing the problem and
rerunning, I've actually found that the FIXED dimension run
consistently takes longer than the UNLIMITED dimension one.
So that's not the magic bullet!
John
------------------------------------------------
Subject: Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: John Halley Gotway
Time: Thu Aug 23 08:37:01 2012
John,
I'm looking through old MET-Help tickets and see our conversation
about slow runtimes for Point-Stat on large blocksize systems.
Coincidentally, one of our testing groups was having the same issue
that same week up on NCAR's bluefire machine.
In the released version of the code, Point-Stat reads the point
observations one-by-one from the NetCDF file. I tried running a
modified version of Point-Stat that does a single read of all the
observations and then processes them from memory. That change
resulted in Point-Stat running twice as fast on bluefire for my test
case, but slowed Point-Stat down by about 10% on my local machine.
It's still not blazing fast on bluefire, but it's an improvement.
So I've created a development task for the next release (METv4.1,
around January 2013) to enhance Point-Stat to allow the user to
configure the number of observations Point-Stat reads at a time from
the NetCDF point observation file. The default will be one, but
that'll give us a knob to turn on large blocksize systems to improve
the runtime.
For now, I'll resolve this MET-Help ticket.
Thanks,
John
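To make the planned enhancement concrete, a configurable read size
might look roughly like the sketch below (hypothetical names and
structure; the actual METv4.1 implementation may differ). With
block_size = 1 it reproduces the released one-by-one behavior; with
block_size = nrecs it approximates the single-bulk-read experiment
described above.

    #include <netcdf.h>
    #include <stdlib.h>

    /* Sketch: read 'block_size' records per NetCDF call instead of one.
     * Hypothetical; not the actual MET code.  Error checking omitted. */
    void process_obs(int ncid, int varid, size_t nrecs, size_t block_size)
    {
        float *buf = malloc(block_size * sizeof(float));
        size_t start;

        for (start = 0; start < nrecs; start += block_size) {
            size_t n = (nrecs - start < block_size) ? (nrecs - start)
                                                    : block_size;
            size_t s[1] = { start }, c[1] = { n };
            nc_get_vara_float(ncid, varid, s, c, buf);  /* one read, n records */
            /* ... process the n observations in buf ... */
        }
        free(buf);
    }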
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #57709] Follow-on to 'Processing time for point_stat with NCAR ds337 ncdf files'
From: jhenders at aer.com
Time: Thu Aug 23 08:48:51 2012
Hello John,
Thanks for continuing to test workarounds with the goal of releasing
them. I've been out of the office for a while (currently at the
Foothills Lab attending a GSI tutorial) and haven't yet been able to
replicate the solution on Pleiades that was proposed by their IT
specialist. This involved putting the obs files on /tmp space on the
compute node. I will continue to tackle this when I return to the
office on Monday.
If you feel a further discussion is in order, please let me know and I
can pop over to your office. It would be nice to put a face to your
name!
Thanks.
Regards,
John
------------------------------------------------
More information about the Met_help
mailing list