Publications by Ken Wood
ANOVA - Bayesian Methods
Introduction ANOVA (analysis of variance) is used when we have categorical explanatory variables where the observations belong to groups, i.e., we compare the variability of responses between groups. If the variability between groups is large relative to the variability within groups, we conclude that there is a ‘grouping effect’. Data As an e...
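The between-versus-within comparison described above can be sketched in R with simulated data (the group labels, sample sizes, and means here are illustrative, not the data from the post):

```r
set.seed(42)
# Three groups with different true means, i.e. a real 'grouping effect'
y   <- c(rnorm(20, mean = 0), rnorm(20, mean = 1), rnorm(20, mean = 2))
grp <- factor(rep(c("A", "B", "C"), each = 20))

fit <- aov(y ~ grp)
summary(fit)  # F statistic = (between-group mean square) / (within-group mean square)
```

A large F (between-group variability large relative to within-group variability) gives a small p-value, leading us to conclude there is a grouping effect.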
Linear Regression Quizzes
library("car")   # load the 'car' package
## Loading required package: carData
data("Anscombe") # load the data set
?Anscombe        # read a description of the data
head(Anscombe)   # look at the first few lines of the data
##    education income young urban
## ME       189   2824 350.7   508
## NH       169   3259 345.9   564
## VT       230   3072 348...
Linear Regression using Bayesian Methods
If we have \(k\) different predictor variables \(x_1,x_2,...,x_k\), what is the primary advantage of fitting a joint linear model (multiple regression \(E(y)=\beta_0+\beta_1 x_1+...+\beta_k x_k\)) over fitting \(k\) simple linear regressions (\(E(y)=\beta_0+\beta_j x_j\)), one for each predictor? Answer: Each coefficient in the multiple regression ...
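The point that each multiple-regression coefficient adjusts for the other predictors can be illustrated with simulated correlated predictors (all variable names and effect sizes below are made up for illustration):

```r
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- 0.8 * x1 + rnorm(n, sd = 0.6)  # x2 is correlated with x1
y  <- 2 * x1 + rnorm(n)              # y truly depends only on x1

coef(lm(y ~ x2))       # marginal fit: x2 appears strongly predictive
coef(lm(y ~ x1 + x2))  # joint fit: x2's coefficient shrinks toward 0
```

The simple regression on x2 alone picks up x1's effect through their correlation; the joint model separates the two.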
Gibbs Sampling
Introduction So far, we have demonstrated MCMC for a single parameter. What if we seek the posterior distribution of multiple parameters, and that posterior distribution does not have a standard form? One option is to perform Metropolis-Hastings (M-H) by sampling candidates for all parameters at once, and accepting or rejecting all of those candida...
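Instead of updating all parameters at once, Gibbs sampling draws each parameter from its full conditional in turn. A minimal sketch for the normal model with unknown mean and variance, using conjugate full conditionals (the data and the vague prior hyperparameters here are illustrative choices):

```r
set.seed(7)
y <- rnorm(50, mean = 3, sd = 2)  # simulated data
n <- length(y); ybar <- mean(y)

# Vague priors: mu ~ N(m0, s20), sig2 ~ Inverse-Gamma(a0, b0)
m0 <- 0; s20 <- 1e6; a0 <- 0.01; b0 <- 0.01

iters <- 5000
mu <- numeric(iters); sig2 <- numeric(iters)
mu[1] <- ybar; sig2[1] <- var(y)

for (t in 2:iters) {
  # Full conditional for mu given sig2: normal
  prec  <- n / sig2[t - 1] + 1 / s20
  mu[t] <- rnorm(1, mean = (n * ybar / sig2[t - 1] + m0 / s20) / prec,
                 sd = sqrt(1 / prec))
  # Full conditional for sig2 given mu: inverse-gamma
  sig2[t] <- 1 / rgamma(1, shape = a0 + n / 2,
                        rate = b0 + sum((y - mu[t])^2) / 2)
}
mean(mu[-(1:1000)])  # posterior mean of mu after burn-in
```

With vague priors, the posterior mean of mu lands close to the sample mean, as expected.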
Introduction to JAGS
Introduction There are several software packages available that will handle the details of MCMC for us. See the supplementary material for a brief overview of options. The package we will use in this course is JAGS (Just Another Gibbs Sampler) by Martyn Plummer. The program is free, and runs on Mac OS, Windows, and Linux. Better yet, the program ca...
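A JAGS model is written in its own declarative model language rather than in R. A minimal sketch for a normal-mean model (variable names and the vague priors are illustrative, not taken from the post):

```
model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu, prec)    # JAGS parameterizes the normal by precision
  }
  mu   ~ dnorm(0.0, 1.0e-6)   # vague prior on the mean
  prec ~ dgamma(0.01, 0.01)   # vague prior on the precision
  sig2 <- 1.0 / prec          # derived variance
}
```

The model string is passed to JAGS from R (e.g. via the rjags package), which compiles it and runs the MCMC, handling the sampler details for us.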
Metropolis-Hastings Algorithm
Introduction Metropolis-Hastings is an algorithm that allows us to sample from a generic probability distribution (which we will call the target distribution), even if we do not know the normalizing constant. To do this, we construct and sample from a Markov chain whose stationary distribution is the target distribution. It consists of picking an a...
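A minimal random-walk Metropolis sketch: because the proposal is symmetric, the acceptance ratio only involves the unnormalized target, so the normalizing constant is never needed. The target here (a standard normal without its constant) and the proposal standard deviation are illustrative choices:

```r
set.seed(11)
# Unnormalized log-target: standard normal density without its constant
log_g <- function(x) -x^2 / 2

iters <- 10000
x <- numeric(iters)
x[1] <- 0; accepted <- 0

for (t in 2:iters) {
  cand <- rnorm(1, mean = x[t - 1], sd = 1)   # symmetric random-walk proposal
  log_alpha <- log_g(cand) - log_g(x[t - 1])  # acceptance ratio on the log scale
  if (log(runif(1)) < log_alpha) {
    x[t] <- cand; accepted <- accepted + 1
  } else {
    x[t] <- x[t - 1]                          # reject: repeat the current state
  }
}
accepted / iters              # acceptance rate
c(mean = mean(x), sd = sd(x)) # should be near 0 and 1
```

The chain's stationary distribution is the target, so the sample mean and standard deviation approximate 0 and 1.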
Monte Carlo Simulation of Bayesian Models
\(y_i=\mu+\epsilon_i,\ \ \epsilon_i \stackrel{iid}{\sim}N(0,\sigma^2),\ i=1,\dots,n\) \(\therefore\ y_i \stackrel{iid}{\sim}N(\mu,\sigma^2)\) Likelihood: \(P(y|\theta) = \frac{P(y,\theta)}{P(\theta)}\), where \(P(\theta)\) is the prior probability distribution of \(\theta\). Posterior: \(P(\theta|y) = \frac{P(y,\theta)}{P(y)} = \frac{...
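Once the posterior is available, Monte Carlo simulation approximates posterior quantities by averaging over draws. A sketch for the conjugate normal model with known variance, where the exact posterior is available for comparison (the data and prior hyperparameters are illustrative):

```r
set.seed(3)
y <- rnorm(30, mean = 5, sd = 1)  # simulated data
n <- length(y)

# Conjugate setup: known sigma^2 = 1, prior mu ~ N(m0, s20)
m0 <- 0; s20 <- 10
post_var  <- 1 / (n / 1 + 1 / s20)
post_mean <- post_var * (n * mean(y) / 1 + m0 / s20)

# Monte Carlo: approximate posterior quantities from simulated draws
draws <- rnorm(1e5, mean = post_mean, sd = sqrt(post_var))
c(mc_mean = mean(draws), exact = post_mean)
```

With 100,000 draws the Monte Carlo estimate agrees with the exact posterior mean to well under 0.01; the same averaging works for quantities with no closed form, which is the point of the method.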
Bayesian Models
\(y_i=\mu+\epsilon_i,\ \ \epsilon_i \stackrel{iid}{\sim}N(0,\sigma^2),\ i=1,\dots,n\) \(\therefore\ y_i \stackrel{iid}{\sim}N(\mu,\sigma^2)\) Likelihood: \(P(y|\theta) = \frac{P(y,\theta)}{P(\theta)}\), where \(P(\theta)\) is the prior probability distribution of \(\theta\). Posterior: \(P(\theta|y) = \frac{P(y,\theta)}{P(y)} = \frac{...
Bayesian Statistics - Linear Regression
Read in the data
golf = read.table('http://www.stat.ufl.edu/~winner/data/pgalpga2008.dat')
colnames(golf) <- c('drive_distance', 'accuracy', 'gender')
golf_female <- subset(golf, gender == 1)
golf_male <- subset(golf, gender == 2)
Plots We fit a linear regression model to the female golfer data.
golf_female_lm = lm(accuracy ~ drive_distance, data = golf_femal...
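The fitted model can then be used for prediction. A sketch with simulated stand-in data, since the golf file above requires a network connection (the slope, noise level, and distance range below are invented for illustration):

```r
set.seed(9)
# Stand-in for the golfer data: accuracy declining with drive distance
drive_distance <- runif(100, 230, 270)
accuracy <- 80 - 0.1 * drive_distance + rnorm(100, sd = 3)
dat <- data.frame(drive_distance, accuracy)

fit <- lm(accuracy ~ drive_distance, data = dat)
# Predicted accuracy (with a prediction interval) at a 260-yard average drive
predict(fit, newdata = data.frame(drive_distance = 260),
        interval = "prediction")
```

The prediction interval accounts for both the uncertainty in the fitted line and the residual noise, so it is wider than a confidence interval for the mean.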