
Demonstrating the importance of streamflow observation uncertainty when evaluating and comparing hydrological models

Crossref (2023)

Abstract
Large-sample hydrology datasets provide an excellent test bed for evaluating and comparing hydrological models. The validity of results from studies that use large-sample hydrology datasets, however, can be undermined when observation uncertainty is not taken into account in the analyses: the differences between model simulations may well fall within the observation uncertainty bounds and are therefore inconclusive on model performance.

To this end, we highlight the importance of including streamflow observation uncertainty when conducting hydrological evaluation and model comparison experiments based on the CAMELS-GB dataset (Coxon et al., 2015). We introduce a generic, flexible workflow that accounts for streamflow observation uncertainty and is also applicable to other sources of observation uncertainty. This workflow is implemented in the 'FAIR by design' eWaterCycle platform (Hut et al., 2022).

Two experiments are conducted to demonstrate the effect that streamflow observation uncertainty has on conclusions drawn from large-sample datasets. The first is an inter-model comparison of the distributed PCR-GLOBWB and wflow_sbm hydrological models (Hoch et al., 2022; van Verseveld et al., 2022). The second is an intra-model evaluation of the impact of additional streamflow-based calibration on the results of the distributed wflow_sbm model. For the latter, we found that approximately one third of the catchment simulations resulted in model differences that fell within the bounds of streamflow observation uncertainty.
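The core idea, that two simulations are indistinguishable wherever both fall inside the observation uncertainty band, can be sketched as follows. This is a minimal illustration only, assuming a simple relative uncertainty band around the observed streamflow; the function name, the band construction, and the 10% default are hypothetical and not taken from the workflow described above.

```python
import numpy as np

def fraction_indistinguishable(sim_a, sim_b, obs, rel_unc=0.1):
    """Fraction of time steps where two model simulations cannot be
    distinguished, because both lie within the streamflow observation
    uncertainty band (here assumed to be obs * (1 +/- rel_unc)).

    sim_a, sim_b, obs : 1-D arrays of streamflow for one catchment.
    rel_unc           : assumed relative observation uncertainty.
    """
    lower = obs * (1.0 - rel_unc)
    upper = obs * (1.0 + rel_unc)
    # Both simulations inside the band -> the comparison is inconclusive
    # at that time step.
    both_inside = (
        (sim_a >= lower) & (sim_a <= upper)
        & (sim_b >= lower) & (sim_b <= upper)
    )
    return both_inside.mean()

# Toy example: observed flow of 10 with a +/-10% band, i.e. [9, 11].
obs = np.array([10.0, 10.0, 10.0, 10.0])
sim_a = np.array([10.5, 9.5, 12.0, 10.0])
sim_b = np.array([10.2, 10.8, 10.0, 8.0])
frac = fraction_indistinguishable(sim_a, sim_b, obs)  # 0.5
```

Applied per catchment over a large-sample dataset, a statistic like this makes explicit how often an apparent performance difference between models is inconclusive given observation uncertainty.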