[ncl-talk] Reading and computation efficiency
Guido Cioni
guidocioni at gmail.com
Mon Jun 6 02:38:59 MDT 2016
Hi,
the question is very simple, and I believe I already know the answer, but it is still worth asking.
When working with large files in NCL I always have to come up with new tricks in the debugging phase to avoid long waiting times. Today I was trying to read a dataset of 401x401x150 points (approx. 48 GB):
data = addfile("./complete_remap.nc", "r")
p = data->pres ; pressure [Pa]
t = data->temp ; temperature [K]
qv = data->qv ; qv [ kg/kg]
z = data->z_mc ; geopotential [m]
print("FILEs READ in "+get_cpu_time()+"s")
rh = relhum(t, qv, p)
td = dewtemp_trh(t, rh)
print("COMPUTATION "+get_cpu_time()+"s")
and I got the following printout:
(0) FILEs READ in 47.4748s
(0) COMPUTATION 499.424s
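(Side note: get_cpu_time() returns the cumulative CPU time used by the script, so the second number includes the read time, and the computation alone took roughly 452 s. Timing each stage separately just needs differencing the values, e.g.:

t0 = get_cpu_time()
p = data->pres ; pressure [Pa]
; ... other reads ...
t1 = get_cpu_time()
print("FILEs READ in "+(t1-t0)+"s")
rh = relhum(t, qv, p)
td = dewtemp_trh(t, rh)
print("COMPUTATION "+(get_cpu_time()-t1)+"s")
)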
Is there any way to speed up the process? I tried to use as few variable definitions as possible and only built-in functions.
Why is the computation part taking so long? Could it be something that depends on the system's RAM?
In the meantime, the best workaround I could think of was to subset a region from the data and test the code only on that smaller file.
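Something like this (a rough sketch, assuming the variables are dimensioned level x lat x lon; the index bounds here are arbitrary) uses standard subscripting on the file variables to read only a sub-region, so the test runs on a small fraction of the data:

data = addfile("./complete_remap.nc", "r")
p = data->pres(:, 0:49, 0:49) ; read only a 50x50 horizontal box
t = data->temp(:, 0:49, 0:49)
qv = data->qv(:, 0:49, 0:49)
rh = relhum(t, qv, p)
td = dewtemp_trh(t, rh)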
Cheers
Guido Cioni
http://guidocioni.altervista.org