The following is a review of “Quantifying the potential future contribution to global mean sea level from the Filchner-Ronne basin, Antarctica” by E. A. Hill et al.
This manuscript describes a set of experiments using the ice flow model Úa, aimed at characterizing the spread of possible future sea level contribution from the Filchner-Ronne (FR) drainage basin. The authors create a set of ensemble simulations, under various RCP emission scenarios, that extend to the year 2300, and use these simulations to build a surrogate model. To create their ensemble, the authors sample parameters related to ocean forcing, atmospheric forcing, and ice dynamics within bounds they derive from the literature, climate model ensembles of future projections, and Bayesian analysis (for the ocean forcing parameters). The authors show that the surrogate model exhibits skill in predicting the sea level contribution projected by Úa in the FR region by year 2300, and they then use the surrogate to create a million-sample distribution representing the possible spread of sea level contribution from FR within the same period. Surrogates are also created to estimate sea-level contribution at years 2100 and 2200, for analysis of projected sea level contribution through time. Results suggest that the FR region is not likely to contribute positively to sea level in the future, largely due to the modeled increase in accumulation over time in response to warming atmospheric conditions. However, a significant contribution to sea level from this region is found to be possible (more than 30 cm by year 2300), though this outcome is not probable. The authors’ analysis also allows them to isolate the contribution to uncertainty from each of the model parameters sampled, and they find that atmospheric and ocean forcing account for the majority of the sea-level projection uncertainty, in agreement with past studies. The authors specifically highlight these model boundary conditions as the most important to improve in order to increase confidence in ice sheet model projections of sea level contribution.
Here, the authors present a novel approach to the challenge of quantifying uncertainties in ice sheet model projections. Running the number of model simulations required for robust assessment of uncertainties is, in many cases, not computationally feasible, so the design of an adequate surrogate model for this purpose is highly advantageous. Overall, I find that the authors thoroughly describe their methods, experiments, reasoning, and caveats. The discussion, in particular, highlights the care that must be taken when considering results from a single ice sheet model where specific assumptions are necessary to produce realistic model ensembles. The manuscript is well-written and the figures are highly illustrative of the methods and results. The workflow diagram (Fig. 2) is especially helpful in describing the investigation’s strategy. In addition, the results are thorough and well-organized. I therefore find this manuscript highly appropriate for publication in The Cryosphere, with revisions and some supporting analysis.
I have a number of questions and comments for the authors, as listed below, for author response and discussion.
General comments:
Discussion section – The discussion is quite thorough, and you cover many important points and caveats. However, I think it would be improved if you also expanded upon some interesting topics that are brought up in the results section, specifically the advantages and disadvantages of using a surrogate model for this analysis. For example, in the results section you note an advantage of using the surrogate; can you expand upon this in the discussion, with respect to what you learned in that exercise about the importance of including extremes in your training set? Could you also expand upon the disadvantages or pitfalls that others using your methods could encounter? (For example, is there a danger of not capturing threshold behavior or runaway retreat, as you observe in some of your extreme forcing simulations?) Is it possible that runaway retreat is more likely than your training set suggests, or do you think that your sampling space and final PDF capture the spread of possible scenarios accurately?
In addition, how important was it to use a surrogate to capture the full sample space? That is, do your final PDFs differ from those your ensembles alone suggest? Perhaps you could show some training-run (ensemble) PDFs vs. the surrogate-sampling PDFs in the appendix to illustrate this point.
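To illustrate the kind of comparison I have in mind, here is a minimal sketch (using synthetic stand-in numbers, not the authors' actual ensemble or surrogate output) of overlaying the two distributions on common bins so tail coverage can be compared directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: a few hundred ensemble members vs. a large surrogate sample
# (distribution parameters here are purely illustrative).
ensemble_gmsl = rng.normal(loc=-0.02, scale=0.05, size=500)         # ice-flow model runs
surrogate_gmsl = rng.normal(loc=-0.02, scale=0.05, size=1_000_000)  # surrogate draws

# Histogram both on a common set of bins, normalised to densities,
# so differences in the tails are directly comparable.
bins = np.linspace(-0.3, 0.4, 71)
ens_pdf, _ = np.histogram(ensemble_gmsl, bins=bins, density=True)
sur_pdf, _ = np.histogram(surrogate_gmsl, bins=bins, density=True)

# A simple summary of how the two distributions differ in the upper tail:
print("ensemble 95th percentile:  %.3f m" % np.percentile(ensemble_gmsl, 95))
print("surrogate 95th percentile: %.3f m" % np.percentile(surrogate_gmsl, 95))
```

A side-by-side plot (or a table of tail percentiles) built this way would make clear whether the surrogate's million-sample PDF adds information beyond the raw ensemble.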
Specific questions and suggestions:
Line 1: This opening sentence is a bit awkward to read. Appending a noun after “The future …”, such as “change”, “behavior”, or “evolution”, would make your point clearer.
Line 34: … as the “combined area” of the two major drainage basins… (or something similar)
Line 55: I understand the point you are trying to make in this sentence, but it reads awkwardly. Please try rephrasing.
Line 159: Could you add a thin or dashed line to Fig. 1 or its small inset that shows where the divide between the two basins sits?
Line 162: An illustration of what your mesh looks like, perhaps in some key locations, would be very helpful in the supplement.
Line 170: Could you please specify the settings for your initial mesh adaptation, for instance: What is your minimum mesh size for the adaptation? Is it still 900 m? Is there a maximum mesh size near the grounding line? How close to the grounding line is the adaptation imposed (i.e., is there a set buffer length)?
Line 171: What is the minimum thickness value that is imposed?
Line 235: You discuss only changes to surface accumulation through time. Do your simulations also include a representation of surface melt (i.e., a PDD scheme or something similar)? Please specify this in the text.
Line 315: The term “point” is used a number of times in the text to refer to a location in your sampling space. This terminology is easy to confuse with a point in map space. Is it possible to use a more specific term, like “sample point” or something similar throughout the manuscript to distinguish between sampling space and actual 2d map space?
Line 322: Please quantify the model drift (or spread of model drift for all of your control runs) here.
Line 328: Throughout the manuscript, you refer to this set of simulations as your “ensemble”. It would be helpful for the reader if you explicitly call out here that these are the runs that you will hereafter call the “ensemble” (or create a name for this set of runs that you can refer to later in the manuscript).
Line 339: Please add a quantitative statement with respect to the surrogate model being “close” to the ice flow model response.
Line 345: What type of algorithm is used for the sampling of these 1 million simulations? Is it Latin hypercube as in the other ensemble sampling?
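For reference, if the authors did use Latin hypercube sampling here, the procedure can be sketched as below (parameter names, dimensionality, and bounds are hypothetical; this is not the authors' code, only an illustration of the stratified sampling I am asking about):

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over d illustrative parameters; each row is one
# "sample point" in parameter space. The seed is arbitrary.
sampler = qmc.LatinHypercube(d=4, seed=42)
unit_sample = sampler.random(n=1000)  # n=1_000_000 for the full surrogate draw

# Rescale the unit-cube sample to illustrative per-parameter bounds.
lower = [0.5e-5, 0.00, -0.5,  5.0]
upper = [4.0e-5, 0.10,  0.5, 35.0]
sample = qmc.scale(unit_sample, lower, upper)
```

The stratification property (exactly one point per marginal stratum in each dimension) is what distinguishes this from plain uniform Monte Carlo sampling, hence my question about which algorithm was used for the million-sample draw.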
Figure 5: Please note, in the plot or caption, the year that the GMSL value represents (i.e., 2300).
Line 440: Please note in the text the simulations used for this analysis (i.e., the ensemble).
Line 530: While Ritz et al. (2015) do not change surface mass balance through time, Schlegel et al. (2018) apply a step function to accumulation, so there is still a possibility for suppression due to accumulation. The difference is more likely due to your treatment of ocean forcing (e.g., PICO with Bayesian exploration, whose extreme melt rates may be lower but are considered more realistic, especially in view of the possible time lag between atmosphere and ocean warming), as well as your application of no melt on partially floating elements, the adaptive grounding line mesh, and even the repetition of the inversion procedure for each model simulation.
Line 535: Your results show a strong dependence on accumulation, and the discussion below gives a comprehensive overview of the challenges in forcing accumulation in ice sheet models in general. Could you offer a quantitative comparison between the spread (PDF) of the change (anomaly) in precipitation that your sampling imposes on your simulations and that predicted by CMIP5 models? For example, you present the spread in the ocean forcing time delay parameter from LARMIP-2; could you similarly show, maybe in the supplement, a PDF of regional accumulation from various CMIP GCMs, and compare it against a PDF representing accumulation sampling from the surrogate (or even just the ensemble)? I am curious to see this comparison since, while you sample p from a normal distribution (Fig. 3), this parameter is exponentially related to accumulation. Is it possible that this choice skews your sampling toward higher sensitivity to temperature change? Is the imposed sampling of total increased accumulation realistic compared to what GCMs predict? If not, this should be mentioned as a caveat in your discussion, because it has implications for the shape of your final GMSL PDFs (Fig. 5) and the total probabilities.
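To make the skewness concern concrete, consider the following sketch. It assumes an illustrative exponential dependence of the accumulation scaling on p (the manuscript's actual functional form and parameter values may differ; the mean, standard deviation, and warming used here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# p sampled from a normal distribution, as in Fig. 3 (illustrative mean/sd).
p = rng.normal(loc=0.05, scale=0.015, size=100_000)
delta_T = 8.0  # an illustrative end-of-simulation warming (K)

# If accumulation scales exponentially with p, e.g. a = a0 * exp(p * delta_T),
# then a normally distributed p maps to a lognormal (right-skewed) factor.
accum_factor = np.exp(p * delta_T)

mean_f = accum_factor.mean()
median_f = np.median(accum_factor)
print("mean factor %.3f > median factor %.3f -> right-skewed" % (mean_f, median_f))
```

Even a symmetric prior on p therefore produces a right-skewed distribution of the accumulation anomaly itself, and the skew grows with warming; this is why I would like to see the implied accumulation PDF compared against the CMIP spread.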
Line 764: Could you quantify at what percentile the Reese et al. (2018a) value sits in your distribution? Can you comment on the possible implications of your values being much lower in magnitude than suggested by this previous study? In Fig. S2 it appears that once the ϒ*T value approaches 2×10⁻⁵, the ΔGMSL contribution starts to become highly negative. Can you discuss why that is and how it influences your resulting PDF in Fig. B1?
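The percentile question could be answered directly from the sampled distribution, e.g. along these lines (the distribution and the published value below are synthetic stand-ins, not the authors' posterior or the actual Reese et al. value):

```python
import numpy as np
from scipy.stats import percentileofscore

rng = np.random.default_rng(2)

# Stand-in posterior sample for the melt-rate parameter (values hypothetical).
gamma_T_samples = rng.lognormal(mean=np.log(5e-6), sigma=0.6, size=50_000)

# Where a previously published value (here an illustrative 2e-5) sits:
pct = percentileofscore(gamma_T_samples, 2e-5)
print("published value sits near the %.1fth percentile" % pct)
```

Reporting this single number in the text would let readers judge how consistent the calibration is with the earlier study.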
Line 766: basal mass balance?
Figure S1: Please specify the simulation year of the GMSL values presented.
Figure S2: The results presented here are a bit puzzling. Are they, like the other results, represented as differences from the control simulation? If so, and if I am understanding your methods correctly, I would expect all graphs to cross zero ΔGMSL when the value of the sampled parameter equals the control-run value. Could you speak to this, and specifically to why most of the graphs show a negative contribution to sea level across all of the sampled values?
Figure S4: Please check the lettered labels on the right-hand column plots.
We extend our thanks to the reviewers for their positive feedback on our manuscript and constructive comments that will greatly help to improve the manuscript. We have addressed the comments and provide our responses in the attached document, where referee comments are in black and our replies are in blue. We have also included a tracked-changes version of the manuscript at the end of the document.
Review of "Quantifying the potential future contribution to global mean sea level from the Filchner-Ronne basin, Antarctica" by Emily A. Hill, Sebastian H. R. Rosier, G. Hilmar Gudmundsson, and Matthew Collins.
Recommendation: minor revision
One of the two nominated referees had to cancel his/her participation for personal reasons. I nonetheless received positive preliminary feedback on this work. To save time in the review process, I have therefore decided to write a review myself.
The paper reads well and the methods are robust and clearly described. The results are important for the ice-sheet and sea-level communities, and I only have a few minor comments that will hopefully improve the paper:
L. 136: effect -> effects
L. 165: “with a percentage deviation of only 3%” -> specify whether this is in 2100 or 2300.
Fig. 3: indicate the units of tuned coefficients.
Section 3.3: the increased surface mass balance for higher temperatures holds for moderate warming, but for RCP8.5 warming to 2300, there will likely be more ablation by surface melting and the surface mass balance may become negative at some locations (see Kittel et al. 2021, their Fig. 5 where the negative runoff contribution of the grounded-ice SMB starts to significantly increase towards 2100). This is likely somewhat captured by the lower bound of the p parameter which is well below what is expected from the Clausius-Clapeyron relationship, but this could be briefly discussed.
L. 215-221: please provide references, e.g. to IPCC-AR5, and indicate whether the provided warming values refer to the CMIP5 multi-model mean or to the MAGICC emulator.
L. 275: a better or additional reference here would be Favier et al. (2019), in which the box model (“PICO”) was evaluated as a relatively good parameterization: Favier et al. (2019). Assessment of sub-shelf melting parameterisations using the ocean–ice-sheet coupled model NEMO (v3.6)–Elmer/Ice (v8.3). Geosci. Model Dev., 12(6), 2255–2283.
L. 280: indicate that this is sea-floor temperature and salinity.
Discussion section: There is a deep uncertainty related to processes that are not represented, e.g. evolution of the position of the calving front, hydrofracturing due to higher surface melt in the future, evolution of ice damage. Some of these processes may play a key role in the future FRIS contribution to sea level, and this should be discussed.
I've received late external feedback on your paper. Please also provide responses to these four points:
- A single model realisation is used to create the 2300 projections. It would have been more rigorous to use several simulations, in order to test the extremes of plausible scenarios.
- A Latin hypercube is used to sample 500 parameter values. Why was 500 chosen as the design size? Has anything been done to check the plausibility of the values or to sample extremes?
- Several parameters are given uniform priors without much reasoning. Why was uniform the best choice?
- The long tail for RCP8.5 contributions at 2300 is mentioned as being due to certain parameter combinations – what combinations? Are they plausible? That would tell a lot about how much attention should be paid to these upper values.