Summary

The following pages highlight the key discussion topics (in note form) arising from the workshop.  In many cases the discussions are ongoing, and the community (workshop attendees and others) is encouraged to continue the debate using the wiki forum that has been established.  Please note that other discussion themes can be established; debate is not limited to the themes initiated here.  In particular, not all themes are covered, and whilst it is anticipated that some will take place within formal IVOS meetings, readers are encouraged to introduce them here.

Priority Actions

[A-1] Set up small working groups (in conjunction with GSICS, building on and generalising its current working groups) to draft a CEOS-endorsable best-practice "procedure" for the various vicarious calibration/validation methodologies.

[A-2]  It was agreed that a first step would be to establish a fully sortable database (all fields) on the Cal/Val portal to collate results of past comparisons between "optical" sensors, or between sensors and a "reference" (methodology or test-site).

[A-3]  It was noted that if we are to regularly compare between sensors and between methodologies, we need to agree on some protocols to aid data exchange and interoperability.

Summary

1.) Establishing "best practice" for the use and characterisation of test-sites

1.1) Differences observed from instrument/reference standard artefacts particularly when both described as traceable to a National Metrology Institute (NMI)

1.2)  Significant differences exist in the value of the Reflectance Factor of reference panels used for the calibration of spectrometers used in the characterisation of test-sites – depending on whether they are calibrated using hemispherical radiation or at a fixed illumination/view angle or goniometrically (multiple illumination angles).

1.3)   All aspects of an instrument (field and satellite) need to be evaluated to avoid potentially unaccounted-for sources of uncertainty which might impact other measurements, e.g. stray-light, field of view, MTF, etc.

1.4)  Establishing best practice for test-site characterisation.  What is achievable in terms of overall uncertainty for sensor vicarious calibration?  Effects of temporal and spatial variance.

1.5)   Use of a fully parameterised model, such as that proposed by JRC, is considered a target to work towards.

1.6)   Access to results of test site characterisations - considered to be controversial.

2.) Establishing a Network of Automated test sites: "LANDNET"

2.1)   It was confirmed in principle that a sub-set of CEOS sites serving as a focus for the community was a good thing.  It was noted that the first priority was radiometric gain, but that activities were, and should be, pursued to cover other sensor characteristics.

2.2) The key issue that needs to be addressed by the community concerns data access policy.    

3) "Non-test-site" vicarious calibration methodologies: Deep convective clouds, Rayleigh scattering, Sun-glint, water clouds, Moon

3.1)  Following the presentations and discussion, it was considered that most methodologies were suitable for cross-calibration and, in many cases, absolute calibration in their own right.  In particular, it was considered that the uncertainty capabilities (with some research effort) of all methodologies were comparable to each other and to those obtainable from the test-sites (~1%).

3.2) PRIORITY ACTION:   Set up small working groups (in conjunction with GSICS) to draft a CEOS-endorsable best-practice "procedure" for the various methodologies.

4) Session 2: Identify biases between sensors

4.1)   Debate on the relative merits of performing sensor cross-comparisons in terms of ToA radiance or reflectance.

4.2)   Key conclusion was that comparisons were critical to a successful internationally harmonised and interoperable EO system.  It was important to ensure that this was fully recognised by the agencies and as part of this to have a clear and consistent understanding/summary of our current knowledge.

4.3)  PRIORITY ACTION:   It was agreed that a first step would be to establish a database on the Cal/Val portal to collate results of past comparisons between "optical" sensors, or between sensors and a "reference" (methodology or test-site).

4.4)   A second stage to this comparison process was proposed, using "products" as opposed to just L1 (radiances/reflectances).

5.) Session 3: Vicarious cal/val methods for OC imagers 

5.1)   Key debate on the relative merits of calibrating OC sensors based on vicarious "surface-measured" water-leaving radiances (i.e. the sensor including algorithm, atmosphere, etc.) versus calibrating the sensor independently of the OC retrieval algorithm (including pre- and post-launch techniques).

5.2)   How many "primary reference buoys" (MOBY, BOUSSOLE) are needed?

5.3) Some discussion on AERONET-OC: its status as cal or val, and its role in correcting sensor algorithms to account for class 2 waters and associated atmospheres.

5.4) Following the recent CEOS pilot comparison of OC vicarious measurements/instruments carried out in Europe, and the similar exercise in the US, a full CEOS comparison should be organised.

6.)  Session 4: The way forward/Generic issues

6.1)  PRIORITY ACTION:  It was noted that if we are to regularly compare between sensors and between methodologies, we need to agree on some protocols to aid data exchange and interoperability.

6.2)    Concern was raised that the profile of the Cal/Val portal as a core community communication tool needed to be higher, as did the profile of CEOS WGCV and IVOS as the international coordination point for satellite (optical) cal/val, particularly in the emergent climate community.

6.3)   An IVOS workshop should be organised to consider "direct radiometric calibration and traceability of sensors and associated systems: pre- and in-flight".

6.4)   Funding for international infrastructure, comparisons, databases, test-sites, etc. is difficult to rely upon for planning, generally ad hoc, and not shared in any "fair" manner.


Session 1:

1. Establishing "best practice" for the use and characterisation of test-sites

1.1 Differences observed from instrument/reference standard artefacts particularly when both described as traceable to a National Metrology Institute (NMI) 

It was noted that care needs to be taken when interpreting supplier statements of "traceability": these do not necessarily comply with the international metrological definition of traceability (which, for information, is detailed in the QA4EO guidelines).  Some suppliers use the phrase "traceable" when all they have done is purchase an artefact or a calibration from an NMI, without necessarily having appropriate procedures for its use or a rigorous evaluation of the uncertainty in their subsequent measurements.

The example quoted indicated an observed difference of ~10% in the near-IR spectral region between secondary laboratory/company-supplied lamps of spectral irradiance traceable to NIST and to NPL, whereas the real difference between direct NPL and NIST calibrations is <1%.

Users are encouraged to ask suppliers for documentary evidence to support any uncertainty they quote.

1.2  Significant differences exist in the value of the Reflectance Factor of reference panels used for the calibration of spectrometers used in the characterisation of test-sites – depending on whether they are calibrated using hemispherical radiation or at a fixed illumination/view angle or goniometrically (multiple illumination angles).  

In terms of the use of test-sites for vicarious calibration/validation of satellite sensors, what is the preferred value to use, or should it be something different entirely?
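To make the panel question concrete, the sketch below (a hypothetical illustration, not from the workshop; the function name and all numbers are invented) shows how the chosen reflectance factor propagates directly into the derived site reflectance:

```python
# Illustrative sketch only: how a reference panel's reflectance factor
# enters a field-spectrometer measurement, and why its calibration
# geometry matters.  All values are invented for illustration.

def surface_reflectance(dn_target, dn_panel, panel_rf):
    """Transfer a panel's reflectance factor to the target surface.

    dn_target: spectrometer signal viewing the test-site surface
    dn_panel:  signal viewing the reference panel under the same sun
    panel_rf:  panel reflectance factor for the geometry actually used
    """
    return (dn_target / dn_panel) * panel_rf

# The same raw signals, with panel values from two different calibration
# geometries (e.g. hemispherical vs fixed-angle; numbers are made up),
# give different site reflectances:
rho_hemi = surface_reflectance(0.52, 0.98, panel_rf=0.990)
rho_fixed = surface_reflectance(0.52, 0.98, panel_rf=0.975)
print(rho_hemi, rho_fixed)  # the panel difference propagates directly
```

Any difference between the panel values from the two calibration geometries appears one-for-one in the derived surface reflectance, which is why an agreed choice matters.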

1.3   All aspects of an instrument (field and satellite) need to be evaluated to avoid potentially unaccounted-for sources of uncertainty which might impact other measurements, e.g. stray-light, field of view, MTF, etc.

Clearly an essential aspect that should be encouraged and included within any best-practice guidance, and which is of course contained within the QA4EO process.  It was noted that the use of test-sites with different spectral and brightness characteristics may help to identify some of these issues, particularly for cross-calibrations between sensors, e.g. differences in relative biases between the use of snow and deserts.

Any proposals for other priorities:  IVOS currently intends to look at sites for MTF and some darker sites for linearity as the next priorities.  

1.4  Establishing best practice for test-site characterisation.  What is achievable in terms of overall uncertainty for sensor vicarious calibration?  Effects of temporal and spatial variance.

IVOS has a draft best-practice document, which needs to be completed.  It should be kept sufficiently generic to allow innovation and optimisation for particular conditions, but should also scope key measurement requirements, considerations and instrument characteristics, so as to serve as educational material and to ensure sufficient consistency in the interpretation of the resultant data sets and their usability.

Current uncertainties for vicarious calibration are at the ~3 to 5% level, but could, with sufficient effort and monitoring, be pushed to ~0.5%, particularly if high-accuracy satellite-based reference sensors such as CLARREO/TRUTHS can be used to support ground instrumentation.
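As a rough illustration of how such a budget reaches the ~3 to 5% level, independent uncertainty components combine in quadrature, as in the GUM/QA4EO approach; the component names and values below are invented purely for illustration:

```python
import math

# Hypothetical vicarious-calibration uncertainty budget (component
# names and values are invented for illustration only).  Independent
# components combine in quadrature (root-sum-square).
components_pct = {
    "surface reflectance characterisation": 2.0,
    "atmospheric correction": 2.5,
    "BRDF / geometry": 1.5,
    "spectral interpolation": 1.0,
}

combined = math.sqrt(sum(u ** 2 for u in components_pct.values()))
print(f"combined standard uncertainty ~ {combined:.1f}%")
```

Halving the dominant components would bring the combined figure down correspondingly, which is the sense in which sustained effort on the largest terms could push the total towards the sub-percent level.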

Regular, but not necessarily frequent, comparisons are needed between ground measurement techniques and instrumentation, with an emphasis on the instrumentation and reference standards as opposed to major field exercises.

1.5   Use of a fully parameterised model, such as that proposed by JRC, is considered a target to work towards.

1.6   Access to results of test site characterisations - considered to be controversial.  

Whilst it was recognised that open access for CEOS members to site characterisation data is in principle a good thing, it was noted that there are many potential issues, some of which are listed below:

1)  Interpreting ground results for different sensors (characteristics) needs to be done with care and requires significant additional information beyond simple surface reflectance data; this ancillary information may need interpretation by the data collector.  If only data sets are provided, this could lead to wrong analyses and results, and confusing messages to the satellite users.

2)  Data collection is expensive, and some robust funding mechanism needs to be established to allow any prospect of open data access.

3)  Some countries consider that some data sets need to be restricted access.

4)  Temporal stability needs to be considered or monitored.

5)  Information on when a site campaign is taking place could be made available via the Cal/Val portal, which may lead to site information becoming more widely available.

2. Establishing a Network of Automated test sites: "LANDNET"

2.1   It was confirmed in principle that a sub-set of CEOS sites serving as a focus for the community was a good thing.  It was noted that the first priority was radiometric gain, but that activities were, and should be, pursued to cover other sensor characteristics.

1) Care needs to be taken, as not all sites are good for all tasks (spatial resolution, brightness, temporal coverage, ...).

2) More discrimination is needed in the criteria for sites to receive CEOS endorsement (this may result in some current sites not remaining and other new ones being encouraged, but the aim is still for relatively few in overall number).  Input on quantitative criteria is welcomed.

3) A set of baseline specifications for core instrumentation needs to be established, and for a network this needs to be automated.  It was recognised that, to be generic enough to meet the potential needs of a large number of sensors, surface spectral information ideally needs to be spectrally continuous; where stability is only monitored in bands, information on the intra-band temporal stability/variability may also be necessary.  However, it was also clear that the technology for automation exists and is not necessarily expensive to install, but that long-term maintenance/development needs a financial commitment to ensure skills and continuity can be properly developed and planned into the future.

4) Some agencies (as recommended by CEOS) are now building regular observations over the CEOS Landnet sites into their acquisition programmes.  This needs to be encouraged for all, and the community also needs to establish a means of collecting and archiving the data for comparison purposes.  CNES agreed to look at extending the SADE database to allow collection of the data over the Landnet sites.  The community considered that, with the opening of the database, this provides an ideal framework upon which to build for the satellite-measured information, and that this could be linked with a secondary database(s) (to be created and evolved) for the ground-measured data.

2.2 The key issue that needs to be addressed by the community concerns data access policy. 

It was clear that there are different views on, and potential barriers to, "free and open" access to data from test-sites, which would need to be resolved to enable any potential network to be established.

1)   Data from test-sites may be subject to US ITAR restrictions.

2)   Some interpretation of results may be needed for each individual sensor.

3)  Maintenance of a site, its infrastructure, regular characterisation, etc. is expensive, and some mechanism needs to be established to ensure that this is "guaranteed" and also "fair" in an international sense.

3. "Non-test-site" vicarious calibration methodologies: Deep convective clouds, Rayleigh scattering, Sun-glint, water clouds, Moon 

3.1  Following the presentations and discussion, it was considered that most methodologies were suitable for cross-calibration and, in many cases, absolute calibration in their own right.  In particular, it was considered that the uncertainty capabilities (with some research effort) of all methodologies were comparable to each other and to those obtainable from the test-sites (~1%).

The key output was that there need to be well-documented procedures to enable the whole community to understand the methodologies, their uncertainties and their scope of applicability.  These procedures should be written following the QA4EO guidelines and, as a consequence, be at a level that would allow implementation by others.  It was noted that there is an existing effort from GSICS to do some of this activity for operational meteorology-focused sensors (and the individuals currently leading some of these efforts perceived no problem in opening up this activity to the wider CEOS community).  An action was taken by the IVOS chair and the GSICS representative to explore how to combine efforts to address the wider, more generic need, including the prospect of expanding/generalising existing GSICS comparison databases and tools so that they can become a CEOS resource.

3.2 Priority Action A1:   Set up small working groups (in conjunction with GSICS) to draft a CEOS-endorsable best-practice "procedure" for the various methodologies.

ESA has established a small contract, through a company called ARGANS, to provide some resource to support some of these working groups.  This will complement the effort already being expended by agencies such as NOAA, NASA, CNES and EUMETSAT within the GSICS framework.


Session 2

4. Identify biases between sensors.

4.1   Debate on the relative merits of performing sensor cross-comparisons in terms of ToA radiance or reflectance.

The consensus was ToA reflectance, in general normalised to nadir view.  However, it was noted that, provided the details of the approach and the conversion factors used are well documented, radiance is also reasonable, and that the choice should really reflect the sensor's normal mode of operation and user requirements.
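The radiance/reflectance conversion underlying this debate is the standard one; the sketch below (illustrative only, with invented band values) shows why the conversion factors (band solar irradiance, solar geometry, Earth-Sun distance) must be documented for a comparison to be reproducible:

```python
import math

def toa_reflectance(L, esun, sza_deg, d_au=1.0):
    """Convert ToA band radiance to ToA reflectance.

    L:       band radiance (W m-2 sr-1 um-1)
    esun:    band-averaged solar irradiance at 1 AU (W m-2 um-1)
    sza_deg: solar zenith angle in degrees
    d_au:    Earth-Sun distance in astronomical units
    """
    return math.pi * L * d_au ** 2 / (esun * math.cos(math.radians(sza_deg)))

# Hypothetical band values, for illustration only:
rho = toa_reflectance(L=80.0, esun=1500.0, sza_deg=30.0)
print(f"{rho:.3f}")
```

A comparison quoted in reflectance can only be converted back to radiance (or vice versa) if `esun`, the geometry and the distance correction used are all stated, which is the documentation requirement noted above.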

4.2   Key conclusion was that comparisons were critical to a successful internationally harmonised and interoperable EO system.  It was important to ensure that this was fully recognised by the agencies and as part of this to have a clear and consistent understanding/summary of our current knowledge.

4.3 Priority Action A2:   It was agreed that a first step would be to establish a database on the Cal/Val portal to collate results of past comparisons between "optical" sensors, or between sensors and a "reference" (methodology or test-site).

The database would then be populated with results (summaries of exercises as opposed to individual measurements) from the community.  The results will be open to analysis by any CEOS IVOS member (although any publication needs to be by agreement); a small sub-group will be formed to establish an agreed protocol for this, as a baseline for discussion amongst all IVOS members.
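Purely as a strawman for the sub-group's discussion (the field names below are the editor's invention, not an agreed schema), a summary record in such a database might carry fields like:

```python
from dataclasses import dataclass, asdict

# Hypothetical record layout for the proposed comparison database.
# Field names and the example values are invented for illustration.
@dataclass
class ComparisonRecord:
    sensor_a: str
    sensor_b: str           # or a "reference" methodology / test-site
    method: str             # e.g. "desert site", "Rayleigh", "DCC", "Moon"
    band_nm: float          # band centre wavelength
    bias_pct: float         # summary result, not individual measurements
    uncertainty_pct: float  # stated uncertainty of the comparison
    period: str             # time span of the exercise
    reference: str          # report/publication describing the exercise

rec = ComparisonRecord("SensorX", "SensorY", "desert site",
                       865.0, 1.2, 0.8, "2008-2010", "report-id")
print(asdict(rec))
```

Flat records of this kind are what would make the "fully sortable (all fields)" requirement of priority action A-2 straightforward to meet.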

The results of this exercise would also be used to inform those developing best practise guidance for the various methodologies.

4.4   A second stage to this comparison process was proposed using "products" as opposed to just L1 (radiances/reflectances). 

Although some care needs to be taken regarding consistent definitions, it was considered that the user community would find this very valuable.


Session 3

5. Vicarious cal/val methods for OC imagers

5.1   Key debate on the relative merits of calibrating OC sensors based on vicarious "surface-measured" water-leaving radiances (i.e. the sensor including algorithm, atmosphere, etc.) versus calibrating the sensor independently of the OC retrieval algorithm (including pre- and post-launch techniques).

Although it is not clear that the debate was fully resolved, there seemed to be broad consensus that, at least over class 1 waters and where the aerosols are reasonably well defined, the two approaches are, and should be, largely consistent, because our knowledge of atmospheric correction has improved in recent times.

5.2   How many "primary reference buoys" (MOBY, BOUSSOLE) are needed?

The consensus appears to be that only one is strictly needed, but that ideally two or three are preferred to ensure some redundancy.  These would allow firm anchoring of orbital sensors in class 1 waters, but not, of course, of any geostationary systems.

5.3    Some discussion on AERONET-OC: its status as cal or val, and its role in correcting sensor algorithms to account for class 2 waters and associated atmospheres.

At present it was considered that, even if the uncertainty of the AERONET-OC systems could be reduced to be relatively close to those of BOUSSOLE and MOBY (the two are not incomparable), the nature of the class 2 waters and atmospheric models where they are deployed is such that their use should still be considered validation and not calibration.  They are nevertheless essential to enable proper correction of sensor measurements in these regions.  For these systems to really have the potential of being cal sites, they would need to be hyperspectral and not just multi-spectral.

5.4   Following the recent CEOS pilot comparison of OC vicarious measurements/instruments carried out in Europe, and the similar exercise in the US, a full CEOS comparison should be organised.

A recommendation has been passed from CEOS IVOS to the CEOS plenary to look at organising a full CEOS comparison, perhaps as a linked series of regional comparisons (e.g. Europe, America, Asia), if possible linked to the sensors.  Volunteer agencies willing to support this should contact the CEOS IVOS Chair; community members interested in participating should indicate this on the wiki.


Session 4

6. The way forward/Generic issues

6.1  Priority Action A3:  It was noted that if we are to regularly compare between sensors and between methodologies, we need to agree on some protocols to aid data exchange and interoperability.

A discussion document has been produced by D. Smith of RAL to initiate this.
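As a strawman only (this is not the content of the RAL discussion document, and every field name below is invented), an interoperable exchange record for such a protocol might look like:

```python
import json

# Hypothetical shape for a single data-exchange record, purely to make
# the protocol discussion concrete.  All field names and values are the
# editor's invention, not an agreed or proposed standard.
record = {
    "sensor": "SensorX",
    "site": "LandnetSite1",
    "quantity": "toa_reflectance",
    "value": 0.193,
    "uncertainty_k1": 0.004,
    "geometry": {"sza_deg": 30.0, "vza_deg": 0.0, "raa_deg": 90.0},
    "acquisition_time_utc": "2011-06-01T10:30:00Z",
    "traceability_note": "uncertainty statement per QA4EO guidelines",
}

print(json.dumps(record, indent=2))
```

The key interoperability points such a protocol would need to pin down are the quantity and its units, the uncertainty convention (coverage factor), the geometry definitions, and the time reference, so that records from different agencies can be compared without per-case interpretation.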

6.2    Concern was raised that the profile of the Cal/Val portal as a core community communication tool needed to be higher, as did the profile of CEOS WGCV and IVOS as the international coordination point for satellite (optical) cal/val, particularly in the emergent climate community.

Views welcomed.

6.3   An IVOS workshop should be organised to consider "direct radiometric calibration and traceability of sensors and associated systems: pre- and in-flight".

The aim is for 2012/13.  Suggestions for priority themes/topics that should be covered are welcomed.

6.4   Funding for international infrastructure, comparisons, databases, test-sites, etc. is difficult to rely upon for planning, generally ad hoc, and not shared in any "fair" manner.

Views on how to improve welcomed.