The authors have done a good job of responding to the reviewers' comments; the manuscript has significantly improved. The minor revisions listed below should be addressed, at which point I believe the manuscript will be ready for publication.
Pg. 2, Ln 2: This is too narrow a definition of drought. This should either be specified as a definition of meteorological drought or this sentence should be further qualified.
Pg. 2, Ln. 10: Clause starting "to which" is awkward and should be restructured.
Pg. 2, Ln 30: There is no such thing as PET data. The authors mean that the data necessary to calculate PET are not available.
Pg. 4, Ln 3: "Evaporation is 10 to 20 times of precipitation during the dry season..." Despite this statement, I still think the authors downplay ET overall, and it limits an investigation that relies exclusively on precipitation data. I understand the data limitation, but more caveats and caution could be provided in light of the role that ET likely plays in the annual moisture budget of the region.
Pg. 4, Ln 8: Remove "be affected"
Pg. 13, Ln. 11: I missed this detail in the original manuscript, but this level of interpolation seems vastly unrealistic given only 29 stations across the domain. The authors frame their grid choice as a balance of grid-cell density and computation, but the real limitation is sampling density, and I would be very concerned about aliasing in this context. At the very least, the resolution of the grid far exceeds the sampling density and therefore overstates the amount of information the grid contains. This can be seen clearly, for instance, in the map plotted in Figure 3, where the patterns are smoothed significantly relative to the grid spacing.
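To make the sampling-density point concrete, a Nyquist-style back-of-envelope calculation may help; the domain size below is purely illustrative (the manuscript's actual domain area should be substituted):

```python
import math

# Hypothetical 1000 km x 1000 km domain (illustrative only; use the
# manuscript's actual domain area) sampled by 29 stations.
area_km2 = 1000 * 1000
n_stations = 29

mean_spacing = math.sqrt(area_km2 / n_stations)  # average station separation
resolvable_wavelength = 2 * mean_spacing         # Nyquist-style lower bound
print(f"mean spacing ~ {mean_spacing:.0f} km; "
      f"smallest resolvable wavelength ~ {resolvable_wavelength:.0f} km")
```

Any grid spacing much finer than the mean station separation cannot add information; features narrower than roughly twice that separation are unresolvable and risk being aliased.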
Pg. 16, Ln. 1: "can be extracted from Thornes and Stephenson (2001)" is too vague. The authors should include enough detail in the text to make clear how they have estimated the SE.
Pg. 18, Ln. 15: The claim that 13 events preclude cross validation is not well justified. Yes, it is a small number of events, but what quantitative measure demonstrates that this sample is too small for something like leave-one-out CV? I don't necessarily disagree with the authors, but as it stands their argument is subjective and needs clearer statistical reasoning.
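Leave-one-out CV over 13 events is computationally trivial, so feasibility alone is not the obstacle. A minimal sketch of what such an experiment could look like, using an invented threshold-based forecast and synthetic data as stand-ins for the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 13

# Synthetic stand-ins for the 13 events: a predictor index per event and a
# binary outcome (event verified or not). These are NOT the authors' data.
index = rng.normal(size=n)
observed = (index + rng.normal(scale=0.5, size=n)) > 0

hits = 0
for i in range(n):
    train = np.delete(np.arange(n), i)
    # "Train": choose the threshold that best separates the 12 held-in events
    candidates = index[train]
    best = max(candidates,
               key=lambda t: np.mean((index[train] > t) == observed[train]))
    # "Test" on the single held-out event
    hits += int((index[i] > best) == observed[i])

print(f"LOO-CV hit rate over {n} events: {hits / n:.2f}")
```

With so few events the LOO estimate would of course be noisy, but quantifying that noise (e.g. its spread under resampling) would be a more defensible argument than asserting that CV is impossible.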
Figure 5: The red oval in the figure is odd and not described. I don't think it is needed.
Pg. 19, Ln. 1: This is not the correct interpretation of the 95% CIs in this case. As I understand what the authors have done, the CIs here represent the range of model assessments, i.e. the spread across different choices of indices, etc. Only a CV or resampling experiment would yield an assessment of the skill scores in the context of sampling uncertainty. This description should be revised.
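If the authors do want sampling-based uncertainty on a skill score, a bootstrap over the 13 events would be one straightforward option. A minimal sketch with invented per-event outcomes (not the authors' results):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented outcomes for 13 events: 1 = forecast verified (hit), 0 = miss.
outcomes = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0])

# Bootstrap: resample the events with replacement, recompute the score.
scores = [rng.choice(outcomes, size=outcomes.size, replace=True).mean()
          for _ in range(10_000)]
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"hit rate {outcomes.mean():.2f}, "
      f"95% sampling CI [{lo:.2f}, {hi:.2f}]")
```

The resulting interval reflects sampling variability across events, which is a different quantity from the spread across index choices that the current CIs appear to describe.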