This is my second review of this paper. The paper has improved considerably since the first round of review and is now in very good shape, almost ready for publication. It will constitute an interesting contribution to the existing literature on the subject. Nevertheless, I still have a few suggestions for clarification, some of which I consider mandatory before the paper can be published.
1) P. 2, L. 24-26: the potential impacts of the AMO are claimed to be not “consistently represented in the proxy data”, but the reader is left to guess what this means and how such a strong conclusion was reached. Furthermore, it is a bit odd to state this claim concerning a preliminary result already in the introduction. I would advise explaining clearly, in a paragraph or so, why the AMV is not consistently represented in the proxy data (which analyses and metrics were used, etc.) and moving this material to the results section.
2) P. 3, L. 32: “stationary manner”. I do not understand why the authors make this claim. Pseudo-proxy approaches allow a given statistical methodology to be applied to the output of climate models, where the true NAO is known, notably to check whether the reconstruction quality is in some sense stationary. They do not assume any stationarity hypothesis. This is mainly a way to test a statistical methodology and to see, within a model world where everything is known, whether it works properly, i.e. reconstructing an index from a few locations while the dynamical index is known, and this over a long timeframe (last-millennium simulations, for instance). Can you please clarify further what you have in mind here?
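To make the pseudo-proxy logic explicit, the kind of test described above could be sketched as follows. This is a minimal, purely synthetic illustration: the noise level, the number of proxy sites and the naive composite reconstruction are my own assumptions for the sake of the example, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model world: a known "true" NAO index over a long simulation
n_years = 1000
true_nao = rng.standard_normal(n_years)

# Pseudo-proxies: the true signal at a few locations, degraded by noise
n_sites = 5
pseudo_proxies = (true_nao[:, None]
                  + 1.0 * rng.standard_normal((n_years, n_sites)))

# Naive reconstruction: composite (mean) of the pseudo-proxies
recon = pseudo_proxies.mean(axis=1)

# Stationarity check: reconstruction skill vs. the known truth,
# evaluated separately in each sub-period of the simulation
rs = []
for start in range(0, n_years, 250):
    window = slice(start, start + 250)
    r = np.corrcoef(recon[window], true_nao[window])[0, 1]
    rs.append(r)
    print(f"years {start}-{start + 249}: r = {r:.2f}")
```

The point is that no stationarity is assumed: the skill is simply measured in each sub-period against a truth that is fully known in the model world.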
3) P. 12, L. 19: Usually the calibration/validation exercise is carried out with an ensemble approach (through random selection of different independent calibration and validation periods), yielding a distribution of r² that gives a better idea of the performance of the statistical model used.