
# Literature

- Baxt, W.G. and White, H. (1995), Bootstrapping confidence intervals for clinical input variable effects in a network trained to identify the presence of acute myocardial infarction, Neural Computation, 7, 624-638.
- Breiman, L. (1996), Heuristics of instability and stabilization in model selection, Annals of Statistics, 24, 2350-2383.
- Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984), Classification and Regression Trees, Belmont, CA: Wadsworth.
- Breiman, L., and Spector, P. (1992), Submodel selection and evaluation in regression: The X-random case, International Statistical Review, 60, 291-319.
- Dijkstra, T.K., ed. (1988), On Model Uncertainty and Its Statistical Implications, Proceedings of a workshop held in Groningen, The Netherlands, September 25-26, 1986, Berlin: Springer-Verlag.
- Efron, B. (1982), The Jackknife, the Bootstrap and Other Resampling Plans, Philadelphia: SIAM.
- Efron, B. (1983), Estimating the error rate of a prediction rule: Improvement on cross-validation, J. of the American Statistical Association, 78, 316-331.
- Efron, B. and Tibshirani, R.J. (1993), An Introduction to the Bootstrap, London: Chapman & Hall.
- Efron, B. and Tibshirani, R.J. (1997), Improvements on cross-validation: The .632+ bootstrap method, J. of the American Statistical Association, 92, 548-560.
- Goutte, C. (1997), Note on free lunches and cross-validation, Neural Computation, 9, 1211-1215, ftp://eivind.imm.dtu.dk/dist/1997/goutte.nflcv.ps.gz
- Hjorth, J.S.U. (1994), Computer Intensive Statistical Methods: Validation, Model Selection, and Bootstrap, London: Chapman & Hall.
- Hurvich, C.M., and Tsai, C.-L. (1989), Regression and time series model selection in small samples, Biometrika, 76, 297-307.
- Kearns, M. (1997), A bound on the error of cross validation using the approximation and estimation rates, with consequences for the training-test split, Neural Computation, 9, 1143-1161.
- Kohavi, R. (1995), A study of cross-validation and bootstrap for accuracy estimation and model selection, International Joint Conference on Artificial Intelligence (IJCAI), pp. ?, http://robotics.stanford.edu/users/ronnyk/
- Masters, T. (1995), Advanced Algorithms for Neural Networks: A C++ Sourcebook, New York: John Wiley and Sons, ISBN 0-471-10588-0.
- Plutowski, M., Sakata, S., and White, H. (1994), Cross-validation estimates IMSE, in Cowan, J.D., Tesauro, G., and Alspector, J. (eds.) Advances in Neural Information Processing Systems 6, San Mateo, CA: Morgan Kaufmann, pp. 391-398.
- Ripley, B.D. (1996), Pattern Recognition and Neural Networks, Cambridge: Cambridge University Press.
- Shao, J. (1993), Linear model selection by cross-validation, J. of the American Statistical Association, 88, 486-494.
- Shao, J. (1995), An asymptotic theory for linear model selection, Statistica Sinica ?.
- Shao, J. and Tu, D. (1995), The Jackknife and Bootstrap, New York: Springer-Verlag.
- Snijders, T.A.B. (1988), On cross-validation for predictor evaluation in time series, in Dijkstra (1988), pp. 56-69.
- Stone, M. (1977), Asymptotics for and against cross-validation, Biometrika, 64, 29-35.
- Stone, M. (1979), Comments on model selection criteria of Akaike and Schwarz, J. of the Royal Statistical Society, Series B, 41, 276-278.
- Tibshirani, R. (1996), A comparison of some error estimates for neural network models, Neural Computation, 8, 152-163.
- Weiss, S.M. and Kulikowski, C.A. (1991), Computer Systems That Learn, Morgan Kaufmann.
- Zhu, H., and Rohwer, R. (1996), No free lunch for cross-validation, Neural Computation, 8, 1421-1426.

notatki/ci/literatura.txt · Last modified: 2019/03/21 13:06 (external edit)

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Noncommercial-Share Alike 4.0 International