Sensitivity analysis as an ingredient of modeling.
Nineteen years after Science's 1989 review of sensitivity analysis methods ("Systems
analysis at the molecular scale", by H. Rabitz), we search Science Online to identify and
then review all recent articles on "sensitivity analysis". In spite of the considerable
developments that have taken place in this discipline, of the good practices that have
emerged, and of the existing guidelines for sensitivity analysis issued on both sides of the
Atlantic, we found only very primitive sensitivity analysis tools, based on
"one-factor-at-a-time" (OAT) approaches.
In the context of model corroboration or falsification, this use of OAT methods is illicit and
unjustified unless the model under analysis is proved to be linear: an OAT design explores
only a vanishingly small region of the input space and is blind to interactions among factors.
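To make the OAT blind spot concrete, consider the following toy model with a pure
interaction term (our own illustrative example, not one drawn from the reviewed articles):
perturbing each factor one at a time around the baseline registers no effect at all, while a
design that moves the factors jointly exposes the interaction immediately.

```python
import numpy as np

def f(x1, x2):
    # Pure-interaction model: the output moves only when BOTH factors move.
    return x1 * x2

baseline = np.zeros(2)

# OAT: vary one factor at a time around the baseline -> both effects are 0.
for i in range(2):
    x = baseline.copy()
    x[i] = 1.0
    print(f"OAT effect of x{i+1}: {f(*x) - f(*baseline):+.1f}")

# Moving the factors together exposes the interaction that OAT misses.
for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        print(f"f({x1:+.0f}, {x2:+.0f}) = {f(x1, x2):+.0f}")
```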
We show that available good practices, such as variance-based measures and others, are
able to overcome OAT shortcomings and are easy to implement. These methods also allow
the concept of factor importance to be defined rigorously, thus making the factor-importance
ranking univocal. We analyse the requirements of sensitivity analysis in the context of
modelling, and present best available practices.
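As a sketch of the variance-based measures mentioned above, the following estimates
first-order Sobol' indices S_i = V[E(Y|X_i)] / V(Y) with a standard pick-freeze Monte Carlo
estimator; the test model, input distributions, and sample size are illustrative assumptions,
not specifics from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Hypothetical test function: x1 enters additively, x2 and x3 only
    # through their interaction (analytic indices: S1 = 0.75, S2 = S3 = 0).
    return X[:, 0] + X[:, 1] * X[:, 2]

d, N = 3, 100_000
A = rng.uniform(-1.0, 1.0, (N, d))   # two independent sample matrices
B = rng.uniform(-1.0, 1.0, (N, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]               # "freeze" column i, taken from B
    # Pick-freeze estimator of the first-order index S_i
    S_i = np.mean(fB * (model(AB) - fA)) / var_y
    print(f"S_{i+1} ~ {S_i:.3f}")
```

For this model the estimates converge to S_1 = 0.75 and S_2 = S_3 = 0; the indices do not
sum to one because the x2·x3 interaction carries the remaining quarter of the variance,
which is precisely the contribution an OAT design cannot detect.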
In the second talk, we describe smoothing methods for emulation and sensitivity analysis,
including a summary of tensor-product cubic splines and reproducing kernel Hilbert spaces
(RKHS); a recursive algorithm for estimating tensor-product cubic splines; the equivalence
between recursive and en-bloc algorithms; and the problem of heteroskedasticity.
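As a minimal illustration of the emulation idea, the sketch below fits a one-dimensional cubic
smoothing spline to noisy model evaluations, a single building block of the tensor-product
and recursive machinery covered in the talk; the test response, noise level, and smoothing
parameter are assumptions made for the example, using SciPy's UnivariateSpline.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Noisy evaluations of an assumed test response (illustrative only).
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Cubic smoothing spline (k=3); s trades fidelity against smoothness.
# Under heteroskedastic noise, per-point weights w ~ 1/sigma_i in place
# of the uniform default would be the natural remedy.
emulator = UnivariateSpline(x, y, k=3, s=x.size * 0.2**2)

grid = np.linspace(0.0, 1.0, 5)
print(np.round(emulator(grid), 3))   # cheap surrogate evaluations
```

Once fitted, such an emulator can stand in for an expensive simulator, so that the Monte
Carlo sampling needed for variance-based sensitivity measures becomes affordable.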