[Go-essp-tech] resolution on securing opendap aggregations via ESGF

martin.juckes at stfc.ac.uk martin.juckes at stfc.ac.uk
Thu May 26 06:40:51 MDT 2011


Hello All,

Careful users can keep track of the versions they have downloaded by keeping their wget scripts - but I agree that this is not a very good system. If every data node were using the agreed DRS directory structure, it would be easy to modify the wget scripts to preserve the version information in the directory structure on the user side. At present, however, it appears that many modelling groups are not using the DRS directory structure or any form of version control. So the archive managers can't keep track of version changes, let alone the users.
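
As a rough illustration of that user-side fix, here is a minimal Python sketch that preserves the DRS path (version directory included) when downloading; the "/data/" marker and the URL layout are hypothetical examples, not a real data node's layout:

import os
import urllib.request

def drs_download(url, local_root="cmip5_archive", marker="/data/"):
    # Keep everything after the (assumed) marker as the local relative
    # path, so the ".../v20110526/..." version directory survives locally.
    rel_path = url.split(marker, 1)[1]
    local_path = os.path.join(local_root, *rel_path.split("/"))
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    urllib.request.urlretrieve(url, local_path)
    return local_path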

I believe it was intentional to make access much less restricted than at the equivalent stage of CMIP3 in order to get as many groups as possible involved in the initial evaluation of the data.  As far as I can see, the only way of warning users about the volatility at present would be a prominent message on the gateway home page (and we should review what we are saying on the BADC ESGF gateway).

On the issue of where to put QC information, we have a clear plan to put it in the CIM metadata - and I think the work to link this up to the listing of datasets in the gateway is progressing. But, of course, the plan is to run the QC after the data has been moved to PCMDI, BADC or DKRZ, and the software to achieve that move in a reliable way is not ready - so getting QC information is not easy. Possibly this does not matter, as a message of the form "no quality assurance information available" should give most users the information they need.

Cheers,
Martin



From: go-essp-tech-bounces at ucar.edu [mailto:go-essp-tech-bounces at ucar.edu] On Behalf Of Estanislao Gonzalez
Sent: 26 May 2011 08:22
To: Sébastien Denvil
Cc: go-essp-tech at ucar.edu
Subject: Re: [Go-essp-tech] resolution on securing opendap aggregations via ESGF

Hi Sébastien,

I'm aware this is how it was intended to be. But among the increasing number of problems submitted to esg-support, there are a few regarding the retraction of datasets. This tells me that either some modelling groups are not aware of this possibility, or that there are non-modellers accessing the data at this stage. Add the fact that you cannot tell which version was downloaded (AFAIK it's only encoded in the DRS structure and lost once downloaded) and I expect it to cause problems even for modellers.

I'm just thinking how we can minimize the number of complaints we get.

Thanks,
Estani
On 25.05.2011 12:57, Sébastien Denvil wrote:
Hi all,

On 24/05/2011 09:59, Estanislao Gonzalez wrote:
Hi,

just to be more precise: indeed I think security and CIM data (meta-data from experiments and so on) should be kept away from the catalogs.

I also agree that access policy should be decoupled from the publication step and that CIM metadata should be kept away from the catalogs. CIM instances are exposed through Atom feeds and services; that's enough. I agree it's not entirely easy to match files with their associated CIM instances, but everything needed to do this mapping is available.


Still, Luca's idea of merging QC level info with the files themselves might be a valid one. The difference is that QC is pretty much like a "semantic" checksum for a file. AFAIK you cannot "downgrade" the QC without altering the file, i.e. if the file is QC L2 and an error is found while performing the QC L3 checks, the file will either be approved at QC L3 (the modeller declares the "oddity" to be "expected") or it won't be, which implies a "QC L2 passed; QC L3 failed" flag, or the retraction (and maybe re-publication) of the file altogether.

Well, that's at least why I think the QC flag is a little different and is *closely* related to the file. The only difference from the checksum, IMHO, is that it takes more time to determine (and requires other files for its computation) and is thus performed in an out-of-band fashion.
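
A minimal sketch of those semantics, under the stated assumption that a QC level can only advance for an unchanged file (names are illustrative, not the actual QC tooling):

from dataclasses import dataclass, field

@dataclass
class QCStatus:
    checksum: str                # identifies the exact file contents
    level_passed: int = 0        # highest QC level passed so far
    failed_levels: list = field(default_factory=list)

    def record(self, level, passed):
        # A level is never silently downgraded for an unchanged file; a
        # failure at a higher level is recorded next to the last level
        # passed, i.e. "QC L2 passed; QC L3 failed".
        if passed and level == self.level_passed + 1:
            self.level_passed = level
        elif not passed:
            self.failed_levels.append(level)

status = QCStatus(checksum="ab12cd", level_passed=2)
status.record(3, passed=False)
print(status)  # QCStatus(checksum='ab12cd', level_passed=2, failed_levels=[3])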

We need that QC flag somewhere... and it's far more important than the rest of the CIM metadata (getting back to Gavin's point about CIM issues, and differentiating them from this QC flag: yes, you can still get to the file and download it without CIM data... but without the QC flag you'll have no clue whether you *really* want to rely on this data).

Generally speaking, I don't think the QC flag is far more important than the rest of the CIM metadata. At least, climate modellers and climate scientists will be able to perform their own QC. If something is wrong with a file (bad units; a bad variable, for example precipitation claiming it's a temperature; or other discrepancies far more difficult to detect) they will most likely detect it and give feedback to the appropriate modelling group. This direct feedback is very important to the whole process. CIM metadata will shed some more light on those data and will help scientists decide up to which point they can rely on a dataset for a particular purpose.

Regarding WG2, WG3, and commercial use of this data, the QC flag will be important. But one must keep in mind that producing information from a multi-model, multi-experiment project like CMIP5 is a challenging and extremely difficult task. One needs an incredible amount of information to make the right decision. The QC flag won't be able to summarise all that as "use this data: yes or no".



To be honest, I can't understand why people download this data if they *know* it might get corrected. Would you start writing a paper on something that might be altogether wrong? I suspect they don't realize this.

The CMIP5 research groups know that; it's 100% part of the job. The process they will follow:

- download the variables & experiments they are interested in
- perform a first home-made analysis on those files
- it's very likely they will catch a lot of things the QC tools won't
- give feedback to the modelling groups that produced a dataset they found "strange"
- modelling groups will analyse the situation and decide to update, delete, or keep the files unchanged
- perform a more detailed analysis (catching maybe a few more errors)
- give feedback to the modelling groups that produced a dataset they found "strange"
- modelling groups will analyse the situation and decide to update, delete, or keep the files unchanged
- start to write their paper
- do a second round to check whether data has been updated (new version, erased version)
- download files that have been updated
- discard files that have been deleted on ESG
- rerun their analysis procedure
- update the figures and conclusions of the analysis
- publish a paper that includes the proper datasets
- this paper will clearly mention any dataset that remained strange.

This process has already started.
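
The "second round" step above could look roughly like this sketch, where get_published_version stands in for a hypothetical catalog or search-service query:

def second_round(local_manifest, get_published_version):
    # local_manifest maps dataset id -> version recorded at download time.
    for dataset_id, local_version in local_manifest.items():
        published = get_published_version(dataset_id)
        if published is None:
            print(dataset_id, "was erased on ESG -> discard local files")
        elif published != local_version:
            print(dataset_id, "has new version", published, "-> re-download")
        else:
            print(dataset_id, "is up to date")

manifest = {"cmip5.output1.MPI-M.MPI-ESM-LR.historical": "v20110526"}
second_round(manifest, lambda ds: "v20110601")  # simulates an updated version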

So as a modelling group you will pay extra attention to the feedback you receive (you don't want thousands of papers saying your data are strange). And you want to be sure that analysts use the latest version of your files, or don't use them at all if you decided to erase them from the ESG.

That worked like a charm for CMIP3. We want it to work even better for CMIP5.

Regards.
Sébastien


Anyway, my 2c...

Thanks,
Estani

On 24.05.2011 03:06, Gavin M. Bell wrote:
Hi Luca,

I think that the separation of concerns trumps the apparent "simplicity". Though it is apparently easy to republish (I am not sure I fully agree with that, at least not from the anecdotal information I hear from folks)... it is unnecessary to republish if we keep concerns separated.

As Estani said, the publisher publishes and does basic mechanical sanity checks on the data. That should be the full extent of its operation. As for what is easy... one could 'easily' set up an index over the CIM info and "join" on dataset id. This also provides loose coupling. If the CIM system has issues, that just means that when you look at your search results you won't see CIM info, but you will still see the dataset and be able to fetch and manipulate it and everything else. Also, if the CIM changes, it doesn't affect the publisher or publishing in any way. Catalogs should be viewed as "files" in the system... they essentially are logical files (containing pointers to physical files).
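
As a sketch of that loose coupling (illustrative names, not an actual ESGF API), a left join on dataset id degrades gracefully when CIM info is missing:

def join_results_with_cim(search_results, cim_index):
    # Left join: a dataset survives even if the CIM system is down or
    # has no record for it.
    for dataset in search_results:
        cim = cim_index.get(dataset["dataset_id"])
        yield {**dataset, "cim": cim}  # cim is None when unavailable

results = [{"dataset_id": "cmip5.output1.MPI-M.historical", "files": 42}]
cim_index = {}  # e.g. the CIM feed is unreachable: the listing still works
for row in join_results_with_cim(results, cim_index):
    print(row["dataset_id"], "| CIM:", row["cim"] or "no CIM info available")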

I am still not convinced by your arguments that fusing and coupling these two semantically different aspects of the system so tightly is the right long-term architectural solution. It may be good now, but it is not as flexible later. We should leave open the avenue for other meta-metadata to be imbued onto our system ex post facto without much ado.

my $0.02

On 5/23/11 2:08 AM, stephen.pascoe at stfc.ac.uk wrote:

I'm with Estani on this. Authorisation decisions are best decoupled from the application where possible. Phil is on leave today, but I'm sure he'd say the same thing and give much more detailed reasoning.

I think the catalogue already mixes slightly too much information together: location-independent file metadata and location-specific service information. If we add access control it becomes too tightly coupled.
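
A minimal sketch of that separation (field names are illustrative):

from dataclasses import dataclass

@dataclass
class FileMetadata:
    # Location-independent: identical wherever the file is replicated.
    tracking_id: str
    checksum: str
    size_bytes: int

@dataclass
class ServiceInfo:
    # Location-specific: differs per data node hosting a replica.
    opendap_url: str
    http_url: str

# Access control would be a third, separately managed concern (an
# authorization service keyed by dataset id), so a policy change
# touches neither record above.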



Stephen.

---
Stephen Pascoe  +44 (0)1235 445980
Centre of Environmental Data Archival
STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot OX11 0QX, UK





-----Original Message-----
From: go-essp-tech-bounces at ucar.edu [mailto:go-essp-tech-bounces at ucar.edu] On Behalf Of Estanislao Gonzalez
Sent: 21 May 2011 09:30
To: Cinquini, Luca (3880)
Cc: go-essp-tech at ucar.edu
Subject: Re: [Go-essp-tech] resolution on securing opendap aggregations via ESGF



Hi,

In my opinion we shouldn't encode the access restriction in the catalog, for these reasons:

1) Changing the access would involve re-publishing the files (this will be done, for instance, when QC L2 is reached: CMIP5 Research -> CMIP5 Commercial). And think about what would happen if we wanted to change the access restriction in a couple of years... we would have to publish everything again, and that would involve quite some effort just to understand the procedure again...

2) I'm not sure of this, but I fear TDS security cannot handle multiple roles. Right now you can publish to as many roles as required, and read and write access is kept separate. This would involve extending the TDS capabilities.

3) There could be potential inconsistencies if the authorization service is detached from the data node (as with the gateway right now) and the publisher alters the role but forgets to cascade the change to the authorizing service (which would proceed according to the last harvested info).

4) And last but not least, I'm not sure we want to mix administration with publication. The publisher should only care about making data available; the administrator should organize this and be responsible for the security.
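
A minimal sketch of the decoupling implied by points 1) and 4), with illustrative names rather than the actual ESGF machinery - the catalog keeps only a generic marker, while the concrete role lives in a separately editable policy store:

ACCESS_POLICY = {
    # dataset-id prefix -> required role; editable without re-publishing
    "cmip5.": "CMIP5 Research",
}

def required_role(dataset_id, default="esgf-controlled"):
    for prefix, role in ACCESS_POLICY.items():
        if dataset_id.startswith(prefix):
            return role
    return default

# After QC L2 is reached, promoting the data from "CMIP5 Research" to
# "CMIP5 Commercial" is a one-line policy change here, instead of
# re-publishing every catalog:
ACCESS_POLICY["cmip5."] = "CMIP5 Commercial"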



So basically I don't agree :-) Although I do think, if required, we could change "esg-user" to "esgf-controlled" if that's more intuitive.

My 2c anyways,
Estani



On 20.05.2011 19:17, Cinquini, Luca (3880) wrote:

Hi,

a few points again on the issue of securing OPeNDAP aggregations served by the TDS with ESGF filters:

o There's a new release of the ESGF security filters (esg-orp 1.1.2) that maps the TDS request URI to the dataset ID, and should solve this problem. You can experiment with the JPL test TDS server:

http://test-datanode.jpl.nasa.gov/thredds/catalog.html

where the AIRS dataset (and its aggregations) is secured and the MLS dataset is not.

o Now the data node authorization filter will correctly identify the aggregation as secured and call the configured authorization service. Currently, the p2p node authorization service can be configured to allow authorization based on URL matching, so it will work. The gateway authorization service will have to implement its own logic to establish authorization.
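
Such URL-based matching could look roughly like the following sketch (patterns and roles are illustrative):

import re

# Ordered rules: request-URI pattern -> required role (None = open access).
URL_RULES = [
    (re.compile(r"^/thredds/dodsC/AIRS/"), "esg-user"),  # secured
    (re.compile(r"^/thredds/dodsC/MLS/"), None),         # open
]

def role_for(request_uri):
    for pattern, role in URL_RULES:
        if pattern.match(request_uri):
            return role
    return "esg-user"  # default-deny: unknown paths stay protected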



o Finally, I am wondering if we shouldn't change the way we encode authorization in THREDDS catalogs. Right now we use restrictAccess="esg-user" for ALL collections, but should we consider encoding the proper required access control attribute instead, for example restrictAccess="CMIP5 Research"? Something to think about - there are pros and cons to this - it's all a question of whether the access control belongs in the catalog (and can be harvested for searching...) or not.
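
For illustration, a toy sketch of what switching to the concrete attribute would mean for a catalog fragment, using Python's xml.etree (real catalogs are of course larger):

import xml.etree.ElementTree as ET

catalog = ET.fromstring(
    '<catalog>'
    '<dataset name="cmip5.example.dataset" restrictAccess="esg-user"/>'
    '</catalog>'
)
# Replace the blanket marker with the concrete access control attribute:
for ds in catalog.iter("dataset"):
    ds.set("restrictAccess", "CMIP5 Research")

print(ET.tostring(catalog, encoding="unicode"))
# -> the dataset element now carries restrictAccess="CMIP5 Research"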



thanks, Luca




--
Gavin M. Bell

 "Never mistake a clear view for a short distance."
                -Paul Saffo






--
Estanislao Gonzalez

Max-Planck-Institut für Meteorologie (MPI-M)
Deutsches Klimarechenzentrum (DKRZ) - German Climate Computing Centre
Room 108 - Bundesstrasse 45a, D-20146 Hamburg, Germany

Phone:   +49 (40) 46 00 94-126
E-Mail:  gonzalez at dkrz.de







--
Sébastien Denvil
IPSL, Pôle de modélisation du climat
UPMC, Case 101, 4 place Jussieu,
75252 Paris Cedex 5

Tour 45-55 2ème étage Bureau 209
Tel: 33 1 44 27 21 10
Fax: 33 1 44 27 39 02









--
Estanislao Gonzalez

Max-Planck-Institut für Meteorologie (MPI-M)
Deutsches Klimarechenzentrum (DKRZ) - German Climate Computing Centre
Room 108 - Bundesstrasse 45a, D-20146 Hamburg, Germany

Phone:   +49 (40) 46 00 94-126
E-Mail:  gonzalez at dkrz.de



