[Go-essp-tech] repeated md5 chksum failures

Jennifer Adams jma at cola.iges.org
Thu Apr 12 15:49:25 MDT 2012


On Apr 12, 2012, at 4:17 AM, Estanislao Gonzalez wrote:

> Hi Jennifer,
> 
> please be aware that this is not only taking up your bandwidth but everyone else's as well. That's why I'm not entirely comfortable with automatic systems that can't detect a problem (in this case downloading a 1 GB file 21 times, which might have run over the weekend or even longer). But that's just to give some information on the implications of such procedures.
Getting a large volume of CMIP5 data without some automation is not feasible. I do strive to optimize my workflow by avoiding endless loops of trying to download files that cannot be wgot -- I'm sure there is room for improvement in this area. However, in this case, it is the wget script itself that is doing the automatic repeating. 
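
For what it's worth, the kind of bookkeeping I have in mind looks roughly like the sketch below. The file list, the bad-file list, and the retry cap are all placeholders for my own wrapper, not part of the ESGF wget script itself:

    # Read "filename url md5" triples, skip files already recorded as hopeless,
    # and cap the number of retries per file instead of looping forever.
    MAX_TRIES=3
    while read -r fname url hash; do
        grep -qxF "$fname" bad_files.txt 2>/dev/null && continue
        ok=no
        for i in $(seq 1 $MAX_TRIES); do
            wget -q -O "$fname" "$url" && \
              [ "$(md5sum "$fname" | awk '{print $1}')" = "$hash" ] && { ok=yes; break; }
        done
        [ "$ok" = yes ] || echo "$fname" >> bad_files.txt
    done < file_list.txt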

> 
> Now regarding the file corruption... I can't tell you for sure what's happening at your end. The wget script is recognizing the file is corrupt and downloading it again, which is the expected (and desired) behavior.
> Why you are getting corrupt files at such an extremely high rate is beyond me; I've never seen transfers fail this often.
I agree, this behavior can't be normal. 

> 
> The md5 hash you've shown is the correct one:
> $ md5sum /gpfs_750/projects/CMIP5/data/cmip5/output2/MPI-M/MPI-ESM-P/piControl/mon/ocean/Omon/r1i1p1/v20111028/rhopoto/rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc
> 036aabfc10caa76a8943f967bc10ad4d  /gpfs_750/projects/CMIP5/data/cmip5/output2/MPI-M/MPI-ESM-P/piControl/mon/ocean/Omon/r1i1p1/v20111028/rhopoto/rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc
> 
> So this has at least a couple of possible causes:
> 1) There's a network issue (might be, since you are seeing corruption across multiple files and datasets and the behavior is erratic)
> 2) wget is not working as expected (I think you are on MacOS and that system has not been properly tested, at least not by me). Perhaps it's behaving like curl and writing any errors it encounters into the file; that is not the case with the wget version I've been testing with (and it would make no sense for an uninterrupted download anyway)
I am doing all my CMIP5 work on unix systems running CentOS 5.8. But I do have a mac laptop for testing. 

> 3) You have a disk failure (data gets corrupted when written to disk)
> 4) You have other memory/network buffer errors (unlikely, as you would have seen this happening with other Internet connections)
> 
> My advice: 
> - Alter the wget script and remove all entries except one of the problematic files.
> - Use "-f" to leave the file even if its md5 doesn't match, and be sure this is the case (there will be an output to the console)
I believe you meant to write "-p", since that is the option to preserve data that failed the checksum.
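For what it's worth, that invocation would just be something like the following (the script name here is only a placeholder for whatever the data portal generated):

    bash wget-rhopoto.sh -p    # -p preserves files whose checksum did not match
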
> - Rename the file and get it again without the "-f" until it finally arrives intact.
> - Compare the files to see the difference (you'll see whether there's text in there, like an HTML page, or just raw bytes or a block of bytes). In bash:
>     diff <(hexdump -C file1) <(hexdump -C file2) | less
The diffs between two files that failed the checksum were quite numerous! 
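
A quick way to put a rough number on that (the file names below are just placeholders for two saved copies of the same download) would be:

    # count how many byte positions differ between two copies of the same file
    cmp -l copy1.nc copy2.nc | wc -l
    # show where the first difference occurs
    cmp copy1.nc copy2.nc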

> - Try getting the same file (3 times perhaps?) from a different machine (even better if the OS is different). If it always succeeds, the problem is definitely related to your machine. If the behavior is the same, then the issue is definitely in the network.

This has been a very helpful debugging strategy! I am working with my sysadmin on testing the single-file wget on different disks on our unix servers. All but one worked the first time -- the one bad disk was my CMIP5 sandbox. :-(  I've been working there to avoid spawning zombies on the gluster filesystem. There is no clear answer yet, but I think it may end up being a local issue, whether it is the network, the disk mount, or the underlying filesystem. I will keep you apprised. 
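
For reference, the single-file test amounts to something like the following, using the URL and expected md5 from the wget script entry quoted further down; the target directory is a placeholder for whichever disk is being checked:

    # fetch the same file a few times onto the disk under test and verify the md5 each time
    URL='http://bmbf-ipcc-ar5.dkrz.de/thredds/fileServer/cmip5/output2/MPI-M/MPI-ESM-P/piControl/mon/ocean/Omon/r1i1p1/v20111028/rhopoto/rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc'
    MD5='036aabfc10caa76a8943f967bc10ad4d'
    cd /path/to/disk/under/test
    for i in 1 2 3; do
        wget -q -O test_$i.nc "$URL"
        echo "$MD5  test_$i.nc" | md5sum -c -
    done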

> 
> Hope this helps,
Yes, it did. Thank you very much. 
--Jennifer
> Estani
> 
> Am 11.04.2012 20:51, schrieb Jennifer Adams:
>> 
>> Hi, Everyone -- 
>> I'm trying to download some fairly large files (~1 GB) from the piControl run (monthly ocean variables) and find that the checksum fails to match several times before it is finally ok. In some cases, it can take 10 or more retries before the checksum succeeds. 
>> 
>> The problem is not with a specific data node. Here are some of the dataset IDs for the troublesome downloads: 
>> cmip5.output1.CCCma.CanESM2.piControl.mon.ocean.Omon.r1i1p1.v20111028
>> cmip5.output1.INM.inmcm4.piControl.mon.ocean.Omon.r1i1p1.v20110323
>> cmip5.output1.MIROC.MIROC-ESM.piControl.mon.ocean.Omon.r1i1p1.v20110929
>> cmip5.output1.MRI.MRI-CGCM3.piControl.mon.ocean.Omon.r1i1p1.v20110831
>> cmip5.output1.NCAR.CCSM4.piControl.mon.ocean.Omon.r1i1p1.v20120220
>> cmip5.output1.NCC.NorESM1-M.piControl.mon.ocean.Omon.r1i1p1.v20110901
>> cmip5.output2.MRI.MRI-CGCM3.piControl.mon.ocean.Omon.r1i1p1.v20110831
>> cmip5.output2.NCC.NorESM1-M.piControl.mon.ocean.Omon.r1i1p1.v20110901
>> cmip5.output1.MPI-M.MPI-ESM-LR.piControl.mon.ocean.Omon.r1i1p1.v20120315
>> cmip5.output1.MPI-M.MPI-ESM-P.piControl.mon.ocean.Omon.r1i1p1.v20120315
>> cmip5.output2.MPI-M.MPI-ESM-P.piControl.mon.ocean.Omon.r1i1p1.v20111028
>> 
>> For example, from the final two datasets in the list, here is an entry from the wget script:
>> 'rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' 'http://bmbf-ipcc-ar5.dkrz.de/thredds/fileServer/cmip5/output2/MPI-M/MPI-ESM-P/piControl/mon/ocean/Omon/r1i1p1/v20111028/rhopoto/rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' 'MD5' '036aabfc10caa76a8943f967bc10ad4d'
>> 
>> Here are the 21 download attempts so far today, taking 5 hours; the "md5 failed!" message appears in the log file after each one: 
>> 2012-04-11 09:19:18 (2.19 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 09:35:05 (1.13 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 09:53:26 (1009 KB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 10:05:52 (1.49 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 10:17:03 (1.61 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 10:31:14 (1.30 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 10:48:50 (1.04 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 11:01:09 (1.46 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 11:14:01 (1.40 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 11:29:46 (1.15 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 11:42:39 (1.40 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 12:01:05 (1011 KB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 12:18:25 (1.03 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 12:35:30 (1.04 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 12:49:44 (1.35 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 13:08:38 ( 960 KB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 13:26:11 (1.01 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 13:36:21 (1.78 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 13:50:53 (1.25 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 14:06:26 (1.15 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 2012-04-11 14:19:43 (1.39 MB/s) - `rhopoto_Omon_MPI-ESM-P_piControl_r1i1p1_185001-185912.nc' saved [1083611268/1083611268]
>> 
>> This one failed 14 times before finally getting the "md5 ok" message -- it took 3 hrs 45 minutes to get this file:
>> 'so_Omon_MPI-ESM-P_piControl_r1i1p1_189001-189912.nc' 'http://bmbf-ipcc-ar5.dkrz.de/thredds/fileServer/cmip5/output1/MPI-M/MPI-ESM-P/piControl/mon/ocean/Omon/r1i1p1/v20120315/so/so_Omon_MPI-ESM-P_piControl_r1i1p1_189001-189912.nc' 'MD5' '175d6c9dd3ffea30186e6bc9c7e3dee1'
>> 
>> This problem is sucking up my bandwidth and my time, which are not unlimited. Is there any remedy?  
>> --Jennifer
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> GO-ESSP-TECH mailing list
>> GO-ESSP-TECH at ucar.edu
>> http://mailman.ucar.edu/mailman/listinfo/go-essp-tech
> 
> 
> -- 
> Estanislao Gonzalez
> 
> Max-Planck-Institut für Meteorologie (MPI-M)
> Deutsches Klimarechenzentrum (DKRZ) - German Climate Computing Centre
> Room 108 - Bundesstrasse 45a, D-20146 Hamburg, Germany
> 
> Phone:   +49 (40) 46 00 94-126
> E-Mail:  gonzalez at dkrz.de 

--
Jennifer M. Adams
IGES/COLA
4041 Powder Mill Road, Suite 302
Calverton, MD 20705
jma at cola.iges.org


