Bayesian linear regression


Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled $y$) conditional on observed values of the regressors (usually $X$). The simplest and most widely used version of this model is the normal linear model, in which $y$ given $X$ is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters—so-called conjugate priors—the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated.

Model setup

Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the mean of the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:

$$y_i = \mathbf{x}_i^{\mathsf{T}} \boldsymbol{\beta} + \varepsilon_i,$$

where $\boldsymbol{\beta}$ is a $k \times 1$ vector, and the $\varepsilon_i$ are independent and identically normally distributed random variables:

$$\varepsilon_i \sim N(0, \sigma^2).$$

This corresponds to the following likelihood function:

$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})\right).$$

The ordinary least squares solution is used to estimate the coefficient vector using the Moore–Penrose pseudoinverse:

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y},$$

where $\mathbf{X}$ is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^{\mathsf{T}}$, and $\mathbf{y}$ is the column $n$-vector $[y_1 \; \cdots \; y_n]^{\mathsf{T}}$.
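As a minimal illustration of this estimator (a sketch with simulated data; the variable names and data-generating values are assumptions made for the example, not part of the article), the OLS estimate can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small regression problem (assumed example data).
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # n x k design matrix
beta_true = np.array([1.0, 2.0, -0.5])
sigma = 0.3
y = X @ beta_true + rng.normal(scale=sigma, size=n)

# Ordinary least squares estimate, via the normal equations and via the pseudoinverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
beta_hat_pinv = np.linalg.pinv(X) @ y          # Moore-Penrose pseudoinverse form

assert np.allclose(beta_hat, beta_hat_pinv)
print(beta_hat)
```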

This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol{\beta}$. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol{\beta}$ and $\sigma$. The prior can take different functional forms depending on the domain and the information that is available a priori.

Since the data comprise both $\mathbf{y}$ and $\mathbf{X}$, the focus only on the distribution of $\mathbf{y}$ conditional on $\mathbf{X}$ needs justification. In fact, a "full" Bayesian analysis would require a joint likelihood $\rho(\mathbf{y}, \mathbf{X} \mid \boldsymbol{\beta}, \sigma^2, \gamma)$ along with a prior $\rho(\boldsymbol{\beta}, \sigma^2, \gamma)$, where $\gamma$ symbolizes the parameters of the distribution for $\mathbf{X}$. Only under the assumption of (weak) exogeneity can the joint likelihood be factored into $\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma^2)\,\rho(\mathbf{X} \mid \gamma)$.[1] The latter factor is usually ignored under the assumption of disjoint parameter sets. Moreover, under classical assumptions $\mathbf{X}$ is considered to be chosen (for example, in a designed experiment) and therefore has a known probability distribution without parameters.[2]

With conjugate priors

Conjugate prior distribution

For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.

A prior $\rho(\boldsymbol{\beta}, \sigma^2)$ is conjugate to this likelihood function if it has the same functional form with respect to $\boldsymbol{\beta}$ and $\sigma$. Since the log-likelihood is quadratic in $\boldsymbol{\beta}$, the log-likelihood is re-written such that the likelihood becomes normal in $(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})$. Write

$$\begin{aligned}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) &= [(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}) + (\mathbf{X}\hat{\boldsymbol{\beta}} - \mathbf{X}\boldsymbol{\beta})]^{\mathsf{T}}[(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}) + (\mathbf{X}\hat{\boldsymbol{\beta}} - \mathbf{X}\boldsymbol{\beta})] \\ &= (\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}) + (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X})(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}}) + \underbrace{2(\mathbf{X}\hat{\boldsymbol{\beta}} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})}_{=\,0} \\ &= (\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}) + (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X})(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})\,. \end{aligned}$$

The likelihood is now re-written as

$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma^2) \propto (\sigma^2)^{-\frac{v}{2}} \exp\left(-\frac{v s^2}{2\sigma^2}\right) (\sigma^2)^{-\frac{n-v}{2}} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X})(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})\right),$$
where
$$v s^2 = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}) \quad\text{ and }\quad v = n - k,$$
with $k$ the number of regression coefficients.

This suggests a form for the prior:

$$\rho(\boldsymbol{\beta}, \sigma^2) = \rho(\sigma^2)\,\rho(\boldsymbol{\beta} \mid \sigma^2),$$
where $\rho(\sigma^2)$ is an inverse-gamma distribution
$$\rho(\sigma^2) \propto (\sigma^2)^{-\frac{v_0}{2} - 1} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).$$

In the notation introduced in the inverse-gamma distribution article, this is the density of an $\text{Inv-Gamma}(a_0, b_0)$ distribution with $a_0 = \tfrac{v_0}{2}$ and $b_0 = \tfrac{1}{2} v_0 s_0^2$, where $v_0$ and $s_0^2$ are the prior values of $v$ and $s^2$, respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, $\text{Scale-inv-}\chi^2(v_0, s_0^2)$.

Further, the conditional prior density $\rho(\boldsymbol{\beta} \mid \sigma^2)$ is a normal distribution,

$$\rho(\boldsymbol{\beta} \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol{\beta} - \boldsymbol{\mu}_0)^{\mathsf{T}}\boldsymbol{\Lambda}_0(\boldsymbol{\beta} - \boldsymbol{\mu}_0)\right).$$

In the notation of the normal distribution, the conditional prior distribution is $\mathcal{N}\left(\boldsymbol{\mu}_0, \sigma^2 \boldsymbol{\Lambda}_0^{-1}\right)$.
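As an illustrative sketch (not part of the article's derivation), drawing a sample $(\boldsymbol{\beta}, \sigma^2)$ from this normal-inverse-gamma prior can be done as below; the hyperparameter values are arbitrary assumptions chosen for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

k = 3
mu_0 = np.zeros(k)          # prior mean of beta (example choice)
Lambda_0 = 2.0 * np.eye(k)  # prior precision matrix (example choice)
a_0, b_0 = 3.0, 1.0         # Inv-Gamma(a_0, b_0) hyperparameters for sigma^2

# Draw sigma^2 ~ Inv-Gamma(a_0, b_0), then beta | sigma^2 ~ N(mu_0, sigma^2 * Lambda_0^{-1}).
sigma2 = stats.invgamma.rvs(a_0, scale=b_0, random_state=rng)
beta = rng.multivariate_normal(mu_0, sigma2 * np.linalg.inv(Lambda_0))

print(sigma2, beta)
```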

Posterior distribution

With the prior now specified, the posterior distribution can be expressed as

$$\begin{aligned}\rho(\boldsymbol{\beta}, \sigma^2 \mid \mathbf{y}, \mathbf{X}) &\propto \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma^2)\,\rho(\boldsymbol{\beta} \mid \sigma^2)\,\rho(\sigma^2) \\ &\propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})\right) (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol{\beta} - \boldsymbol{\mu}_0)^{\mathsf{T}}\boldsymbol{\Lambda}_0(\boldsymbol{\beta} - \boldsymbol{\mu}_0)\right) (\sigma^2)^{-(a_0 + 1)} \exp\left(-\frac{b_0}{\sigma^2}\right). \end{aligned}$$

With some re-arrangement,[3] the posterior can be re-written so that the posterior mean $\boldsymbol{\mu}_n$ of the parameter vector $\boldsymbol{\beta}$ can be expressed in terms of the least squares estimator $\hat{\boldsymbol{\beta}}$ and the prior mean $\boldsymbol{\mu}_0$, with the strength of the prior indicated by the prior precision matrix $\boldsymbol{\Lambda}_0$:

$$\boldsymbol{\mu}_n = (\mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol{\Lambda}_0)^{-1}(\mathbf{X}^{\mathsf{T}}\mathbf{X}\hat{\boldsymbol{\beta}} + \boldsymbol{\Lambda}_0 \boldsymbol{\mu}_0).$$

To justify that $\boldsymbol{\mu}_n$ is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in $\boldsymbol{\beta} - \boldsymbol{\mu}_n$.[4]

$$(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) + (\boldsymbol{\beta} - \boldsymbol{\mu}_0)^{\mathsf{T}}\boldsymbol{\Lambda}_0(\boldsymbol{\beta} - \boldsymbol{\mu}_0) = (\boldsymbol{\beta} - \boldsymbol{\mu}_n)^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol{\Lambda}_0)(\boldsymbol{\beta} - \boldsymbol{\mu}_n) + \mathbf{y}^{\mathsf{T}}\mathbf{y} - \boldsymbol{\mu}_n^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol{\Lambda}_0)\boldsymbol{\mu}_n + \boldsymbol{\mu}_0^{\mathsf{T}}\boldsymbol{\Lambda}_0 \boldsymbol{\mu}_0.$$
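As a quick sanity check of this rearrangement (a sketch using randomly generated data, which is purely an assumption for the check), the identity can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3

# Random data, coefficients, and hyperparameters (assumed for the check).
X = rng.normal(size=(n, k))
y = rng.normal(size=n)
beta = rng.normal(size=k)
mu_0 = rng.normal(size=k)
A = rng.normal(size=(k, k))
Lambda_0 = A @ A.T + k * np.eye(k)   # a positive-definite prior precision

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
mu_n = np.linalg.solve(X.T @ X + Lambda_0, X.T @ X @ beta_hat + Lambda_0 @ mu_0)

lhs = (y - X @ beta) @ (y - X @ beta) + (beta - mu_0) @ Lambda_0 @ (beta - mu_0)
rhs = ((beta - mu_n) @ (X.T @ X + Lambda_0) @ (beta - mu_n)
       + y @ y - mu_n @ (X.T @ X + Lambda_0) @ mu_n + mu_0 @ Lambda_0 @ mu_0)

assert np.isclose(lhs, rhs)
```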

Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:

$$\rho(\boldsymbol{\beta}, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol{\beta} - \boldsymbol{\mu}_n)^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol{\Lambda}_0)(\boldsymbol{\beta} - \boldsymbol{\mu}_n)\right) (\sigma^2)^{-\frac{n + 2a_0}{2} - 1} \exp\left(-\frac{2 b_0 + \mathbf{y}^{\mathsf{T}}\mathbf{y} - \boldsymbol{\mu}_n^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol{\Lambda}_0)\boldsymbol{\mu}_n + \boldsymbol{\mu}_0^{\mathsf{T}}\boldsymbol{\Lambda}_0 \boldsymbol{\mu}_0}{2\sigma^2}\right).$$

Therefore, the posterior distribution can be parametrized as follows.

$$\rho(\boldsymbol{\beta}, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\boldsymbol{\beta} \mid \sigma^2, \mathbf{y}, \mathbf{X})\,\rho(\sigma^2 \mid \mathbf{y}, \mathbf{X}),$$
where the two factors correspond to the densities of $\mathcal{N}\left(\boldsymbol{\mu}_n, \sigma^2 \boldsymbol{\Lambda}_n^{-1}\right)$ and $\text{Inv-Gamma}(a_n, b_n)$ distributions, with the parameters of these given by

$$\boldsymbol{\Lambda}_n = \mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol{\Lambda}_0, \qquad \boldsymbol{\mu}_n = \boldsymbol{\Lambda}_n^{-1}(\mathbf{X}^{\mathsf{T}}\mathbf{X}\hat{\boldsymbol{\beta}} + \boldsymbol{\Lambda}_0 \boldsymbol{\mu}_0),$$
$$a_n = a_0 + \frac{n}{2}, \qquad b_n = b_0 + \frac{1}{2}\left(\mathbf{y}^{\mathsf{T}}\mathbf{y} + \boldsymbol{\mu}_0^{\mathsf{T}}\boldsymbol{\Lambda}_0 \boldsymbol{\mu}_0 - \boldsymbol{\mu}_n^{\mathsf{T}}\boldsymbol{\Lambda}_n \boldsymbol{\mu}_n\right).$$

This illustrates that Bayesian inference is a compromise between the information contained in the prior and the information contained in the sample. A sketch of this conjugate update in code follows below.
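The following Python sketch implements the update formulas above; the function name, the simulated data, and the hyperparameter choices are assumptions made for illustration, not a canonical implementation:

```python
import numpy as np

def conjugate_posterior(X, y, mu_0, Lambda_0, a_0, b_0):
    """Return (mu_n, Lambda_n, a_n, b_n) of the normal-inverse-gamma posterior."""
    n = len(y)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate
    Lambda_n = X.T @ X + Lambda_0                   # posterior precision of beta
    mu_n = np.linalg.solve(Lambda_n, X.T @ X @ beta_hat + Lambda_0 @ mu_0)
    a_n = a_0 + n / 2
    b_n = b_0 + 0.5 * (y @ y + mu_0 @ Lambda_0 @ mu_0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n

# Example usage with simulated data (assumed values).
rng = np.random.default_rng(3)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.3 * rng.normal(size=n)

mu_0, Lambda_0, a_0, b_0 = np.zeros(k), np.eye(k), 1.0, 1.0
mu_n, Lambda_n, a_n, b_n = conjugate_posterior(X, y, mu_0, Lambda_0, a_0, b_0)
print(mu_n, a_n, b_n)
```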

Model evidence

The model evidence $p(\mathbf{y} \mid m)$ is the probability of the data given the model $m$. It is also known as the marginal likelihood, and as the prior predictive density. Here, the model is defined by the likelihood function $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma)$ and the prior distribution on the parameters, i.e. $p(\boldsymbol{\beta}, \sigma)$. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating $p(\mathbf{y}, \boldsymbol{\beta}, \sigma \mid \mathbf{X})$ over all possible values of $\boldsymbol{\beta}$ and $\sigma$.

$$p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma)\,p(\boldsymbol{\beta}, \sigma)\,d\boldsymbol{\beta}\,d\sigma.$$
This integral can be computed analytically and the solution is given in the following equation.[5]
$$p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\boldsymbol{\Lambda}_0)}{\det(\boldsymbol{\Lambda}_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}.$$

Here $\Gamma$ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of $\boldsymbol{\beta}$ and $\sigma$.

$$p(\mathbf{y} \mid m) = \frac{p(\boldsymbol{\beta}, \sigma \mid m)\,p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma, m)}{p(\boldsymbol{\beta}, \sigma \mid \mathbf{y}, \mathbf{X}, m)}.$$
Note that this equation is nothing but a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
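The closed-form evidence above can be computed directly; the sketch below works in log space for numerical stability. The function name, data, and hyperparameters are assumptions for illustration, and the comparison at the end merely shows how two candidate models might be scored:

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(X, y, mu_0, Lambda_0, a_0, b_0):
    """Log marginal likelihood log p(y | m) for the conjugate normal linear model."""
    n = len(y)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    Lambda_n = X.T @ X + Lambda_0
    mu_n = np.linalg.solve(Lambda_n, X.T @ X @ beta_hat + Lambda_0 @ mu_0)
    a_n = a_0 + n / 2
    b_n = b_0 + 0.5 * (y @ y + mu_0 @ Lambda_0 @ mu_0 - mu_n @ Lambda_n @ mu_n)
    _, logdet_0 = np.linalg.slogdet(Lambda_0)
    _, logdet_n = np.linalg.slogdet(Lambda_n)
    return (-0.5 * n * np.log(2.0 * np.pi)
            + 0.5 * (logdet_0 - logdet_n)
            + a_0 * np.log(b_0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a_0))

# Example: compare a model with k predictors against an intercept-only model.
rng = np.random.default_rng(5)
n, k = 100, 3
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X_full @ np.array([1.0, 2.0, -0.5]) + 0.3 * rng.normal(size=n)
X_null = X_full[:, :1]

print(log_evidence(X_full, y, np.zeros(k), np.eye(k), 1.0, 1.0))
print(log_evidence(X_null, y, np.zeros(1), np.eye(1), 1.0, 1.0))
```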

Other cases

In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling[6] or variational Bayes.

The special case $\boldsymbol{\mu}_0 = 0,\ \boldsymbol{\Lambda}_0 = c\mathbf{I}$ is called ridge regression.
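To make this connection concrete (a brief sketch with assumed random data, not taken from the article), with $\boldsymbol{\mu}_0 = 0$ and $\boldsymbol{\Lambda}_0 = c\mathbf{I}$ the posterior mean reduces to the ridge estimator $(\mathbf{X}^{\mathsf{T}}\mathbf{X} + c\mathbf{I})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, c = 100, 3, 5.0

X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# Posterior mean with mu_0 = 0, Lambda_0 = c * I ...
mu_n = np.linalg.solve(X.T @ X + c * np.eye(k),
                       X.T @ X @ np.linalg.solve(X.T @ X, X.T @ y))

# ... equals the ridge regression estimate with penalty c.
beta_ridge = np.linalg.solve(X.T @ X + c * np.eye(k), X.T @ y)

assert np.allclose(mu_n, beta_ridge)
```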

A similar analysis can be performed for the general case of the multivariate regression and part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.


Notes

  1. ^ See Jackman (2009), p. 101.
  2. ^ See Gelman et al. (2013), p. 354.
  3. ^ The intermediate steps of this computation can be found in O'Hagan (1994) at the beginning of the chapter on Linear models.
  4. ^ The intermediate steps are in Fahrmeir et al. (2009) on page 188.
  5. ^ The intermediate steps of this computation can be found in O'Hagan (1994) on page 257.
  6. ^ Carlin and Louis (2008) and Gelman, et al. (2003) explain how to use sampling methods for Bayesian linear regression.

References

  • Box, G. E. P.; Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Wiley. ISBN 0-471-57428-7.
  • Carlin, Bradley P.; Louis, Thomas A. (2008). Bayesian Methods for Data Analysis (Third ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-697-8.
  • Fahrmeir, L.; Kneib, T.; Lang, S. (2009). Regression. Modelle, Methoden und Anwendungen (Second ed.). Heidelberg: Springer. doi:10.1007/978-3-642-01837-4. ISBN 978-3-642-01836-7.
  • Gelman, Andrew; et al. (2013). "Introduction to regression models". Bayesian Data Analysis (Third ed.). Boca Raton, FL: Chapman and Hall/CRC. pp. 353–380. ISBN 978-1-4398-4095-5.
  • Jackman, Simon (2009). "Regression models". Bayesian Analysis for the Social Sciences. Wiley. pp. 99–124. ISBN 978-0-470-01154-6.
  • Rossi, Peter E.; Allenby, Greg M.; McCulloch, Robert (2006). Bayesian Statistics and Marketing. John Wiley & Sons. ISBN 0470863676.
  • O'Hagan, Anthony (1994). Bayesian Inference. Kendall's Advanced Theory of Statistics. Vol. 2B (First ed.). Halsted. ISBN 0-340-52922-9.

External links

  • Bayesian estimation of linear models (R programming wikibook). Bayesian linear regression as implemented in R.