
François Massonnet is working as a F.R.S.-FNRS Research Associate at the Université Catholique de Louvain (UCLouvain). His research focuses on the use and assessment of climate general circulation models for prediction at time scales from months to decades. Together with Irina Sandu, he co-leads work package 4 of the APPLICATE project. The work package aims to use state-of-the-art numerical weather prediction systems and climate models to guide the future development of Arctic observing systems over the next decade. The work package addresses two broad questions: (1) How can we make better use of the existing observational Arctic data in order to improve sub-seasonal to seasonal predictions at high- and mid-latitudes? and (2) How would new, hypothetical observations enhance the skill of predictions further?


1. How do climate general circulation models (GCM) work and how are they evaluated/validated?

Answering the first question is rather straightforward, but answering the second one is more complicated.

To cut a long story short, GCMs are basically a translation of our understanding of how the climate works into a programming language. GCMs embody the basic laws of physics (conservation of mass, momentum and energy), applied to the relevant components: the ocean, the atmosphere, the cryosphere. In recent years, GCMs have also attempted to simulate biogeochemical cycles, which is why you may come across the term Earth System Model (ESM). State-of-the-art GCMs cannot run on simple computers though: they comprise millions of lines of code, need massive amounts of memory, and require significant storage space to host their results. Running GCMs on even the world's best supercomputers is a challenge! Because computational resources are finite, most GCMs cannot resolve all physical processes, especially those occurring at small scales (e.g., cloud convection). Yet such small-scale processes can have a large-scale influence. The way these small-scale processes are taken into account is one significant source of uncertainty in GCM-based projections.
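
To make the idea more concrete, here is a deliberately crude sketch (not taken from any actual GCM, and with purely illustrative numbers) of what a model does at its core: it steps a conservation law forward in time on a grid, while unresolved small-scale processes are represented by a tuned parameterization term.

```python
# A minimal sketch (not any real GCM): integrate a 1-D conservation law for
# temperature on a coarse grid, with a crude stand-in "parameterization" for
# unresolved (sub-grid) mixing. All numbers are illustrative only.
import numpy as np

nx, dx, dt, nsteps = 100, 100e3, 600.0, 1000   # grid cells, spacing [m], time step [s]
u = 10.0                                        # resolved wind [m/s]
kappa_subgrid = 500.0                           # tuned sub-grid diffusivity [m^2/s]

# Initial temperature field: a warm anomaly on a 250 K background.
T = 250.0 + 30.0 * np.exp(-((np.arange(nx) - nx / 2) ** 2) / 50.0)

for _ in range(nsteps):
    # Resolved dynamics: upwind advection (conservative transport on a periodic grid).
    adv = -u * (T - np.roll(T, 1)) / dx
    # Parameterization: sub-grid processes represented as diffusion, because the
    # grid cannot resolve them explicitly. The value of kappa_subgrid is a tuning
    # choice -- one source of the uncertainty mentioned above.
    par = kappa_subgrid * (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    T = T + dt * (adv + par)

print("Domain-mean temperature is conserved up to numerics:", T.mean())
```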

Regarding evaluation, things become more complicated because evaluation inevitably implies, at some point, subjective choices. While climate scientists will generally not agree on exactly how a GCM should be evaluated, they would probably agree on the general purpose of GCM evaluation. The goal of evaluation is not to assess whether a GCM is good or bad in an absolute sense (we know it cannot be a realistic representation of all aspects of nature), but rather to assess whether a GCM can be useful in a particular context. But even so, how does one proceed in practice? One needs a reference (a verification dataset), but such references are themselves tainted with errors. Then, one needs to summarize the behavior of the GCM using simple diagnostics (maps, time series, ...), which are certainly easy to visualize but come at the price of losing potentially useful information. Finally, one needs to perform statistical analyses to measure the agreement with the reference dataset; but plenty of measures exist, and it is often possible to find one that works better than the others.
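
As a purely hypothetical illustration of that workflow, the sketch below compares a made-up model time series against a made-up verification dataset using a few common summary metrics; the point is that each metric captures only one aspect of agreement.

```python
# A minimal sketch of the evaluation workflow described above: compare a model
# time series against a (hypothetical) reference dataset with a few common
# summary metrics. The data are synthetic, not from any actual GCM or
# observational product.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 120)
reference = np.sin(t) + 0.1 * rng.normal(size=120)      # "verification dataset"
model = 0.9 * np.sin(t + 0.2) + 0.3                      # biased, phase-shifted model

bias = np.mean(model - reference)                        # systematic offset
rmse = np.sqrt(np.mean((model - reference) ** 2))        # overall mismatch
corr = np.corrcoef(model, reference)[0, 1]               # covariation only

# Different metrics emphasize different aspects of agreement: a model can have
# a high correlation and a large bias at the same time, which is why the choice
# of metric is itself a subjective step in evaluation.
print(f"bias={bias:.2f}, rmse={rmse:.2f}, corr={corr:.2f}")
```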

My experience with GCM evaluation is that we should be very careful not to over-interpret our results. I'm quite strict on this, personally. When I'm using a model to study a particular scientific question, I always start with the prior hypothesis that the model is not suitable for answering my question and try to convince myself otherwise using the available evidence. If the GCM does not prove bad enough to be discarded, I use it as additional evidence to make my point.

2. How did you contribute to the 5th assessment report of the Intergovernmental Panel on Climate Change (IPCC)?

I had the chance to be involved as a contributing author of Chapter 12 of the IPCC Working Group 1 Fifth Assessment Report, dealing with long-term projections. I was not expected to work on this during my thesis, but it came as a fantastic opportunity. Together with my then-supervisor Thierry Fichefet and a great team of other scientists, we spent days, if not months, shaping up the sea ice section of the Chapter. This involved reading dozens of papers, carefully choosing the wording of each sentence and making sure the content reflected the actual state of knowledge. In this IPCC report, the summer Arctic sea ice projections received particular attention. I had introduced a method for attempting to reduce uncertainty in summer sea ice extent projections based on current model performance, and this method was chosen to narrow down uncertainties in sea ice projections. We came to the conclusion that the Arctic could be summer ice-free by mid-century, with the possibility of earlier occurrences if internal climate variability enhances the forced contribution to the negative trend.
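
For readers curious about the general idea, the sketch below is a schematic illustration only, not the exact method used for the AR5 assessment, and every number in it is invented: constraining projections with present-day performance can be as simple as retaining only those models whose simulated present-day state falls within an assumed observational range, and reading the projection spread off the retained subset.

```python
# Schematic, made-up illustration of performance-based model selection.
# The model names, extents, years and observational range are all hypothetical.
present_day_extent = {   # simulated present-day September extent [million km^2]
    "model_A": 6.1, "model_B": 4.2, "model_C": 7.9, "model_D": 5.8,
}
ice_free_year = {        # projected first ice-free summer in each model
    "model_A": 2047, "model_B": 2035, "model_C": 2082, "model_D": 2051,
}
obs_range = (5.0, 7.0)   # assumed observational uncertainty range

# Keep only models whose present-day extent is consistent with observations,
# then look at the spread of projections within that subset.
selected = [m for m, ext in present_day_extent.items()
            if obs_range[0] <= ext <= obs_range[1]]
years = sorted(ice_free_year[m] for m in selected)

print("Models retained:", selected)
print("Constrained range of ice-free years:", years[0], "-", years[-1])
```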

I would recommend this, or any other experience with the IPCC, to anyone. The IPCC seeks reviewers to comment on the successive drafts of its reports, so it is everyone's chance to get involved. The IPCC is criticized by some as being a non-transparent, politically oriented institution. I think, by contrast, that the IPCC adheres to very strict internal rules that guarantee maximal transparency and a balanced and critical assessment of the most recent literature. The best way to convince oneself of this is to be involved in the process, be it as an author or as a reviewer.

3. What do you think will be the main contribution of the APPLICATE project in advancing our understanding of changing Arctic sea-ice conditions?

The project has already delivered very concrete findings and recommendations, such as a concerted protocol to evaluate models, the demonstrated importance of using Arctic observations for weather prediction at lower latitudes, and the fact that the available observational network could be better exploited for prediction purposes. Regarding sea ice, I would like to emphasize a recent result obtained by Ed Blockley (Met Office) and Drew Peterson. They showed that the assimilation of sea ice thickness information drastically improved the seasonal prediction skill of summer sea ice at the spatial scale. This is very encouraging, as new sea ice thickness products are becoming available (e.g., ICESat-2). Another important finding about sea ice in APPLICATE is the existence of very few degrees of freedom in the sea ice thickness field: that is, the typical time and length scales of variability of thickness are large (~several months and several hundreds of km at least, respectively, according to modern reanalyses). What does that mean? If the real world has similar scales, then only a few point measurements would be enough to describe the thickness variability with sufficient accuracy. Potentially, this means that placing a few stations or moorings at strategic locations could already bring interesting insights into the state of sea ice.
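
As a purely synthetic illustration of the "few degrees of freedom" argument (the length scale, transect and anomalies below are invented, not taken from any reanalysis), one can generate thickness anomalies with a long prescribed decorrelation length and then check how far apart two points can be while still co-varying.

```python
# Synthetic sketch: when the decorrelation length of thickness anomalies is
# long, nearby "stations" carry largely redundant information, so a few
# well-placed measurements already describe much of the variability.
import numpy as np

rng = np.random.default_rng(1)
npts, ntime, L = 50, 200, 400.0           # stations, time steps, true length scale [km]
x = np.linspace(0, 2000, npts)            # station positions along a 2000 km transect

# Build spatially correlated anomalies: covariance decays as exp(-distance / L).
dist = np.abs(x[:, None] - x[None, :])
cov = np.exp(-dist / L)
anoms = rng.multivariate_normal(np.zeros(npts), cov, size=ntime)

# Empirical correlation between station time series, and the largest separation
# at which it still exceeds 1/e (a rough estimate of the decorrelation length).
corr = np.corrcoef(anoms.T)
mask = ~np.eye(npts, dtype=bool)
efold = dist[mask][corr[mask] >= np.exp(-1)].max()
print(f"Estimated decorrelation length: ~{efold:.0f} km")
```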