[ncl-talk] memory intensive script
Marston Johnston
shejo284 at gmail.com
Sat Jul 23 08:18:39 MDT 2016
Hi Adam,
There are a couple of things I can see right off the bat.
1.) I always average large files by reading the dimension I want to
average over one step at a time, summing as I go, keeping counts, and then
dividing. It is a bit tricky to get right, but it is faster than dim_avg.
Use 0 to hold places where the data is not valid, so you will need a value
array and a count array. Before dividing, set the count positions that are
0 to a fill value. This will save memory and time. This method would affect
parts of the code similar to the following (a sketch of the sum/count
approach comes after the snippet):
in = addfiles (files0,"r")
ListSetType(in,"cat")
lat0 = in[0]->lat
lon0 = in[0]->lon
hyam = in[0]->hyam ; read from a file the mid-layer coef
hybm = in[0]->hybm ; read from a file
hyai = in[0]->hyai ; read from a file the interface-layer coef
hybi = in[0]->hybi ; read from a file
nlat = dimsizes(lat0)
nlon = dimsizes(lon0)
nlevs = dimsizes(hyam)
pc = in[:]->PRECC ; convective precipitation rate
pl = in[:]->PRECL ; large-scale precipitation rate
delete(in)
LHl0 = pl*L*rhofw ; convert precip rate to latent heat flux
LHc0 = pc*L*rhofw
delete(pc)
delete(pl)
zlhl0 = dim_avg(dim_avg_n(LHl0,0)) ; average over time, then longitude
zlhc0 = dim_avg(dim_avg_n(LHc0,0))
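Here is a minimal sketch of that sum/count idea for one of the precip
variables, assuming each file has a time coordinate and PRECC is
(time,lat,lon); nfiles, vsum, cnt, tavg, and zavg are my own placeholder
names, not from your script:
nfiles = dimsizes(files0)
vsum = new((/nlat,nlon/), float)   ; new() attaches a default _FillValue
cnt  = new((/nlat,nlon/), float)
vsum = 0.0
cnt  = 0.0
do nf = 0, nfiles-1
  in = addfile(files0(nf), "r")
  nt = dimsizes(in->time)
  do t = 0, nt-1
    x    = in->PRECC(t,:,:)                     ; one time slice at a time
    vsum = vsum + where(ismissing(x), 0.0, x)   ; 0 holds invalid points
    cnt  = cnt  + where(ismissing(x), 0.0, 1.0) ; count valid points only
    delete(x)
  end do
end do
cnt  = where(cnt.gt.0, cnt, cnt@_FillValue)     ; zero counts -> fill value
tavg = vsum/cnt                                 ; missing propagates here
zavg = dim_avg(tavg)                            ; then the zonal mean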
2.) You only need to call addfile(file,"r") once and then loop over the
history records, reading them in one at a time. So:
in = addfile(files0(np),"r")
ps0 = in->PS ; surface pressure [Pa]
becomes
in = addfile(ifile,"r")
do np = 0, nh-1
ps0 = in->PS(np,:,:) ; surface pressure [Pa]
end do
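To sketch how that loop might actually be used, here is one way to
accumulate inside it instead of holding every record; ifile, nh, pssum, and
psavg are placeholders for illustration:
in = addfile(ifile, "r")
nh = dimsizes(in->time)            ; number of history records in the file
pssum = new((/nlat,nlon/), float)
pssum = 0.0
do np = 0, nh-1
  ps0 = in->PS(np,:,:)             ; surface pressure [Pa], one record
  pssum = pssum + ps0              ; accumulate rather than storing all nh
  delete(ps0)
end do
psavg = pssum/nh                   ; time-mean surface pressure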
Hope this helps,
/Marston
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Marston S. Johnston, PhD
Department of Earth Sciences
University of Gothenburg, Sweden
Email: marston.johnston at gu.se
Phone: +46-31-7864901
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Only the fruitful thing is true!
On Sat, Jul 23, 2016 at 1:11 AM, Adam Herrington <adam.herrington at stonybrook.edu> wrote:
> Hi all,
>
> I'm getting a core dump from an NCL script that I'm running on Yellowstone.
> This is probably due to memory issues. The core dump occurs after I load
> the very costly 28-km global simulations. My usual remedy is to loop through
> each history file individually (there are ~60 history files for each model
> run) instead of using "addfiles", which reduces the size of the variables.
>
> Unfortunately, I'm still getting the core-dump. I've attached my script,
> and would appreciate any suggestions on ways to cut down on memory.
>
> I've never tried splitting the variables into smaller chunks, but I guess
> I will start trying to do this. Unless someone has a better suggestion?
>
> Thanks!
>
> Adam