From 989e3131f809a38af3f1eb2c0b2f0ee5d785335b Mon Sep 17 00:00:00 2001
From: Yousuke Takada
Date: Sat, 7 Apr 2018 15:36:33 +0900
Subject: [PATCH] Edit comments on Bayesian model and overfitting (#10)

---
 prml_errata.tex | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/prml_errata.tex b/prml_errata.tex
index f1d1325..66db56c 100644
--- a/prml_errata.tex
+++ b/prml_errata.tex
@@ -1956,8 +1956,14 @@ \subsubsection*{#1}
 Bayesian methods, like any other machine learning methods, can overfit
 because the \emph{true} model
 from which the data set has been generated is unknown in general
 so that one could possibly assume an inappropriate (too expressive) model
-that would give a terribly wrong prediction very confidently.
-This is true even when we take a ``fully'' Bayesian approach as discussed in the following.
+that would give a terribly wrong prediction very confidently;
+this is true even when we take a ``fully'' Bayesian approach
+(i.e., \emph{not} maximum likelihood, MAP, or whatever) as discussed shortly.
+We also discuss in what follows
+the difference between the two criteria for assessing model complexity, namely,
+the \emph{generalization error} (see Section~3.2) and
+the \emph{marginal likelihood} (Section~3.4),
+which is not well recognized in PRML.
 \parhead{A Bayesian model that exhibits overfitting}
 Let us take a Bayesian linear regression model of Section~3.3 as an example and
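The distinction the patch draws between the marginal likelihood and the generalization error can be made concrete with a small numerical sketch (not part of the patch itself). The snippet below, a minimal illustration and not the author's code, fits PRML's Section 3.3 Bayesian linear regression model with polynomial bases of increasing degree to toy data from a sine curve, then prints both the log evidence (PRML Eq. 3.86) and a test-set RMS error of the predictive mean as a proxy for the generalization error; the hyperparameters `alpha`, `beta`, the sine target, and the data sizes are all arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the learner does not know the "true" model is a sine curve.
beta = 25.0    # noise precision (PRML's beta), assumed known
alpha = 5e-3   # isotropic Gaussian prior precision on the weights (PRML's alpha)
x_train = rng.uniform(0.0, 1.0, 10)
t_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, beta ** -0.5, x_train.size)
x_test = rng.uniform(0.0, 1.0, 200)
t_test = np.sin(2 * np.pi * x_test) + rng.normal(0.0, beta ** -0.5, x_test.size)

def design(x, degree):
    """Polynomial design matrix with columns x^0, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

def evidence_and_test_error(degree):
    Phi = design(x_train, degree)
    N, M = Phi.shape
    # Posterior over weights: A = S_N^{-1} = alpha I + beta Phi^T Phi (PRML 3.53-3.54)
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    m_N = beta * np.linalg.solve(A, Phi.T @ t_train)
    # Log marginal likelihood (PRML 3.86)
    E_mN = beta / 2 * np.sum((t_train - Phi @ m_N) ** 2) + alpha / 2 * (m_N @ m_N)
    _, logdetA = np.linalg.slogdet(A)
    log_ev = (M / 2 * np.log(alpha) + N / 2 * np.log(beta)
              - E_mN - logdetA / 2 - N / 2 * np.log(2 * np.pi))
    # Proxy for the generalization error: test RMS error of the predictive mean
    rms = np.sqrt(np.mean((t_test - design(x_test, degree) @ m_N) ** 2))
    return log_ev, rms

for d in range(10):
    log_ev, rms = evidence_and_test_error(d)
    print(f"degree {d}: log evidence = {log_ev:8.2f}, test RMS = {rms:.3f}")
```

Comparing the two printed columns across degrees shows they need not rank models identically, which is the point the added paragraph in the patch makes about the two criteria for assessing model complexity.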