SWAT Literature Database for Peer-Reviewed Journal Articles

Title: Modifying goodness-of-fit indicators to incorporate both measurement and model uncertainty in model calibration and validation 
Authors: Harmel, R.D., D.K. Smith and K.W. Migliaccio 
Journal: Transactions of the ASABE 
Volume (Issue): 53(1) 
Article ID: 
URL (non-DOI journals): ftp://bellnetweb.brc.tamus.edu/pub/outgoing/bkomar/windows/DUET-H_WQ/TransModel+Meas2010.pdf 
Model: EPIC & SWAT 
Broad Application Category: hydrologic only 
Primary Application Category: calibration, sensitivity, and/or uncertainty analysis 
Secondary Application Category: hydrologic assessment 
Watershed Description: Riesel Field Y6 and Medicine River in Texas, Reynolds in Idaho, and South Fork in Iowa, U.S. 
Calibration Summary: 
Validation Summary: 
General Comments: This study does not report any original SWAT or EPIC results; rather, it uses previously reported results in its analysis of incorporating uncertainty into goodness-of-fit indicators. 
Abstract: Because of numerous practical implications of uncertainty in measured data and model predictions, improved techniques are needed to analyze and understand uncertainty and incorporate it into hydrologic and water quality evaluations. In the present study, a correction factor was developed to incorporate measurement uncertainty and model uncertainty in evaluations of model goodness-of-fit (predictive ability). The correction factor, which was developed for pairwise comparisons of measured and predicted values, modifies the typical error term calculation to consider both sources of uncertainty. The correction factor was applied with common distributions and levels of uncertainty (represented by coefficients of variation ranging from 0.026 to 0.256) for each measured value and each predicted value from five example data sets. The modifications resulted in inconsequential changes in goodness-of-fit conclusions for example data sets with very good and poor model simulations, which is both logical and appropriate because very good model performance should not improve greatly and poor model performance should not become satisfactory when uncertainty is considered. In contrast, incorporating uncertainty in example data sets with initially moderate goodness-of-fit resulted in important improvements in indicator values and in model performance ratings. A model evaluation matrix was developed to present appropriate model performance conclusions, considering both model accuracy and precision, based on various levels of measurement and model uncertainty. In cases with highly uncertain calibration/validation data, definitive "good" fit conclusions are cautioned against even with "good" indicator values because of the uncertain standard of comparison; however, in these cases, poor model accuracy can be confidently concluded from "unsatisfactory" indicator values. In contrast, model accuracy can be confidently concluded from goodness-of-fit indicator values in cases with low measurement uncertainty. It is hoped that the modified goodness-of-fit indicators and the model evaluation matrix contribute to improved goodness-of-fit conclusions and to more complete assessments of model performance. 
Keywords: Index of agreement, Model evaluation, Nash-Sutcliffe coefficient of efficiency, Watershed models
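
The abstract describes modifying the pairwise error term of standard goodness-of-fit indicators so that overlapping measurement and prediction uncertainty ranges reduce the penalized error. The sketch below is an illustrative assumption, not the paper's exact formulation: a Nash-Sutcliffe efficiency in which each pair's error is the gap between uncertainty intervals built from assumed coefficients of variation (the abstract cites CVs of 0.026 to 0.256), with overlapping intervals contributing zero error.

```python
def modified_nse(observed, predicted, cv_meas=0.1, cv_model=0.1):
    """Nash-Sutcliffe efficiency with an uncertainty-corrected error term.

    Hypothetical sketch: each measured value o gets an interval
    o +/- cv_meas*|o|, and each predicted value p gets p +/- cv_model*|p|.
    The pairwise error is the separation between the two intervals
    (zero when they overlap), so the corrected NSE is never lower
    than the standard NSE. With cv_meas = cv_model = 0 this reduces
    to the standard NSE.
    """
    mean_obs = sum(observed) / len(observed)
    num = 0.0  # sum of corrected squared errors
    den = 0.0  # sum of squared deviations of observations from their mean
    for o, p in zip(observed, predicted):
        half_o = cv_meas * abs(o)    # measurement interval half-width
        half_p = cv_model * abs(p)   # prediction interval half-width
        gap = abs(o - p) - (half_o + half_p)  # interval separation
        num += max(0.0, gap) ** 2
        den += (o - mean_obs) ** 2
    return 1.0 - num / den
```

This mirrors the qualitative behavior the abstract reports: a simulation that already fits well stays near NSE = 1, while a moderate fit improves as the assumed uncertainty grows.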