Hi Hauss,

That is correct: NCL uses a single, consistent data type for every element of an array, regardless of whether individual values actually need that much precision. If an array held a mix of data types, then accessing a specific index would require determining the size of every element before it; in a single-type array (say, "float"), the memory address of a given index is easily computed as an offset from the beginning of the array equal to "sizeof(float) * index", where a float is 4 bytes. If just determining the memory address of an index in a mixed-type array is that awkward, performing an actual computation across a multi-dimensional array of mixed types would likely be very slow compared to a homogeneous array.

If you are certain that losing several decimal places of precision is acceptable, you could try NCL's pack_values function (http://www.ncl.ucar.edu/Document/Functions/Contributed/pack_values.shtml). pack_values can "pack" a float (4 bytes per value) or double (8 bytes per value) array into either a "short" (2 bytes) or "byte" (1 byte) array, attaching a scale_factor (multiplier) and add_offset attribute that are later used to "unpack" an approximation of the original float/double data.

Please note that "packing" data into a smaller data type is a form of "lossy" compression, meaning it may not be possible to recover the exact original values from the packed data.

If you have a float array "a_float" that you want to compress by a factor of 2 (4 bytes down to 2 bytes per value), you could pack_values() it into a short array:

    a_short = pack_values(a_float, "short", False)
    a_unpacked = short2flt(a_short)    ; essentially the same as (a_short * a_short@scale_factor) + a_short@add_offset

You will likely want to compare your original array with the packed-then-unpacked array to evaluate whether the lost precision is acceptable for your use case (see the quick check sketched below).

It is also possible to pack values into a "byte" array (4 bytes down to 1 byte in this case), although the loss of precision will be even more apparent:

    a_byte = pack_values(a_float, "byte", False)
    a_unpacked = byte2flt(a_byte)
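As a quick sanity check, here is a minimal sketch (reusing the "a_float" and "a_unpacked" variables from the snippets above) that prints the largest and average absolute differences introduced by packing:

    err = abs(a_float - a_unpacked)                     ; element-wise packing error
    print("max abs packing error: " + max(err))
    print("avg abs packing error: " + avg(err))

If those errors are small relative to the precision your application needs, the packed file should be safe to use.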
Alternatively, for a netCDF file that already exists, you can do this outside of NCL with the NetCDF Operators (NCO, http://nco.sourceforge.net/). In particular, the ncpdq operator (http://nco.sourceforge.net/nco.html#ncpdq) can pack data as follows:

    ncpdq infile.nc outfile.nc

By default, ncpdq packs floating-point variables into shorts using the same scale_factor/add_offset scheme described above.

I hope this helps,
Kevin

> On Jun 25, 2018, at 7:47 PM, Hauss Reinbold <Hauss.Reinbold@dri.edu> wrote:
>
> Hi all,
>
> I'm creating a large netcdf dataset via NCL and I was looking to reduce the file size by reducing the number of decimal places the float values were holding, but it doesn't look like it worked. In looking into it further, it seems like NCL allocates space in the file by data type, regardless of what value each individual index of an array might have. Is that correct?
>
> I did some looking and couldn't see a way to reduce file size explicitly other than by changing data type, which I don't think I can do. Is there a way to reduce the file size of the netcdf file by limiting the number of decimal places? Or is compression or changing the data type my only alternative here?
>
> Thanks for any help on this.
>
> Hauss Reinbold