Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/111730
Type: Journal article
Title: On lack of robustness in hydrological model development due to absence of guidelines for selecting calibration and evaluation data: demonstration for data-driven models
Author: Zheng, F.
Maier, H.
Wu, W.
Dandy, G.
Gupta, H.
Zhang, T.
Citation: Water Resources Research, 2018; 54(2):1013-1030
Publisher: American Geophysical Union
Issue Date: 2018
ISSN: 0043-1397; 1944-7973
Statement of Responsibility: Feifei Zheng, Holger R. Maier, Wenyan Wu, Graeme C. Dandy, Hoshin V. Gupta, and Tuqiao Zhang
Abstract: Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.
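The abstract's central point is that how available data are allocated to calibration and evaluation subsets can bias the reported evaluation performance. The paper's four formal data splitting methods are not reproduced here; as a minimal, hypothetical illustration of the underlying issue, the sketch below compares two naive allocation strategies (chronological and random) on a synthetic, positively skewed runoff-like series, showing that the resulting evaluation subsets can have quite different statistical properties. All names and parameters in this sketch are illustrative assumptions, not the authors' methodology.

```python
# Illustrative sketch only (NOT the paper's methods): compare how two naive
# calibration/evaluation splitting strategies change the statistics of the
# evaluation subset for a skewed streamflow-like series.
import random
import statistics


def chronological_split(data, calib_fraction=0.7):
    """Allocate the first calib_fraction of the record to calibration."""
    k = int(len(data) * calib_fraction)
    return data[:k], data[k:]


def random_split(data, calib_fraction=0.7, seed=0):
    """Allocate samples to calibration uniformly at random."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * calib_fraction)
    return shuffled[:k], shuffled[k:]


if __name__ == "__main__":
    random.seed(42)
    # Synthetic positively skewed data; a lognormal is a common stand-in
    # for daily runoff, which the abstract notes is often highly skewed.
    flows = [random.lognormvariate(1.0, 1.0) for _ in range(1000)]

    for name, split in [("chronological", chronological_split),
                        ("random", random_split)]:
        calib, evalu = split(flows)
        print(f"{name}: calibration mean = {statistics.mean(calib):.2f}, "
              f"evaluation mean = {statistics.mean(evalu):.2f}")
```

If high-flow events cluster in one part of the record, a chronological split can leave the evaluation subset unrepresentative of the calibration subset, which is one way the evaluation-performance bias discussed in the abstract can arise.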
Keywords: Model evaluation bias; hydrological models; calibration and evaluation; artificial neural networks (ANNs); data allocation; data splitting
Rights: © 2018. American Geophysical Union. All Rights Reserved.
RMID: 0030084910
DOI: 10.1002/2017WR021470
Grant ID: http://purl.org/au-research/grants/arc/CE1101028
Appears in Collections:Civil and Environmental Engineering publications

Files in This Item:
There are no files associated with this item.
