Publications by Jake

Modelling Bayesian

30.09.2022

Modelling_Bayesian Jake 29/09/2022 Bayesian Networks can be queried for probabilities. Probability of Evidence A simple query is the probability of some variable instantiation. Consider evidence \(\mathbf{E}\) and network variables \(\mathbf{W}\): \[ P(\mathbf{e}),\quad\mathbf{E}\subseteq\mathbf{W}\] For example: \[ P(X=True, D=True)\] Prior and ...
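The probability-of-evidence query can be sketched by brute-force summation over a joint distribution. A minimal sketch, assuming a hypothetical two-variable joint over \(X\) and \(D\) with made-up probabilities:

```python
# Hypothetical joint distribution over worlds (X, D); numbers are illustrative only.
joint = {
    (True, True): 0.3,
    (True, False): 0.2,
    (False, True): 0.1,
    (False, False): 0.4,
}

def prob_of_evidence(joint, evidence):
    """P(e): sum the probabilities of all worlds consistent with the evidence.

    `evidence` maps a variable's position in the world tuple to its observed value.
    """
    return sum(p for world, p in joint.items()
               if all(world[i] == v for i, v in evidence.items()))

print(prob_of_evidence(joint, {0: True, 1: True}))  # P(X=True, D=True) = 0.3
```

Real Bayesian Network engines avoid materialising the full joint; this only illustrates what the query means.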


Propositional Logic

30.09.2022

Propositional Logic Jake 22/09/2022 Propositional Syntax Propositional variables can be sentences or sentences of sentences. Typically binary, however they can be extended to be multi-valued: \[ P_1,...,P_n\] Logical Connectives There are three primitive logical connectives: \[ \land\text{ - Logical conjunction (and)} \] \[ \lor\text{ - Logical d...
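The primitive connectives map directly onto Python's boolean operators; a small truth-table sketch (the sentence chosen here is just an example, not from the post):

```python
from itertools import product

# Example sentence using conjunction, disjunction, and negation:
# (P1 and P2) or (not P1)
def sentence(p1, p2):
    return (p1 and p2) or (not p1)

# Enumerate all assignments to the propositional variables.
for p1, p2 in product([True, False], repeat=2):
    print(p1, p2, "->", sentence(p1, p2))
```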


Bayesian Networks

30.09.2022

Bayesian Networks Jake 27/09/2022 Bayesian Networks are a modelling tool to specify joint probability distributions. They can be used to decrease the computational complexity of a joint probability distribution through independence modelling. Graphs Bayesian Networks employ directed acyclic graphs (DAGs). Nodes represent variables; edges can be use...
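The independence modelling shows up as a factorised joint: each node contributes one conditional probability table (CPT). A minimal sketch for a hypothetical two-node network \(A \to B\) with made-up CPTs:

```python
# Hypothetical CPTs for a two-node network A -> B.
P_A = {True: 0.6, False: 0.4}
P_B_given_A = {
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.2, False: 0.8},
}

def joint(a, b):
    # The DAG factorisation: P(a, b) = P(a) * P(b | a).
    return P_A[a] * P_B_given_A[a][b]

print(joint(True, True))  # 0.6 * 0.9 = 0.54
```

For n binary nodes the CPTs need far fewer entries than the 2^n-row joint table, which is the complexity saving the post refers to.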


Probability Calculus

30.09.2022

Probability_Calculus Jake 24/09/2022 Worlds and Degrees of Belief We can assign a ‘degree of belief’ (probability) to each world The probability of a sentence, \(\alpha\), is the sum of probabilities where \(\alpha\) is true. \[ P(\alpha)=\sum_{w_i\models\alpha}P(w_i)\] Can use this idea to create joint probability distribution tables ...
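The defining sum \(P(\alpha)=\sum_{w_i\models\alpha}P(w_i)\) can be sketched directly: represent each world as a tuple of truth values and a sentence as a predicate. The worlds and weights below are hypothetical:

```python
# Hypothetical degrees of belief over worlds of two propositional variables (A, B).
worlds = {
    (True, True): 0.25,
    (True, False): 0.35,
    (False, True): 0.15,
    (False, False): 0.25,
}

def prob(alpha):
    """P(alpha): sum of P(w) over the worlds w in which alpha is true."""
    return sum(p for w, p in worlds.items() if alpha(*w))

print(round(prob(lambda a, b: a or b), 2))   # 0.75
print(round(prob(lambda a, b: a and b), 2))  # 0.25
```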


BayesNet Classification

07.10.2022

Classification_BayesNet Jake 05/10/2022 Classification with Bayesian Networks We can divide a network variable set into class (query) variables, \(C\), and attributes (evidence variables), \(\mathbf{e}\). Attributes are typically all variables except the class variables. Given a set of evidence variables, we want to know the most likely clas...
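Picking the most likely class given evidence, \(\arg\max_c P(c\mid\mathbf{e})\), only needs scores proportional to \(P(\mathbf{e}\mid c)P(c)\). A sketch with a hypothetical two-class model whose attributes are assumed independent given the class (all numbers invented):

```python
# Hypothetical class priors and per-attribute likelihoods.
priors = {"spam": 0.4, "ham": 0.6}
likelihood = {
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham":  {"offer": 0.2, "meeting": 0.6},
}

def classify(evidence):
    def score(c):
        # Proportional to P(c | e): prior times product of attribute likelihoods.
        s = priors[c]
        for attr in evidence:
            s *= likelihood[c][attr]
        return s
    return max(priors, key=score)

print(classify(["offer"]))    # spam (score 0.28 vs 0.12)
print(classify(["meeting"]))  # ham  (score 0.36 vs 0.04)
```

Normalising by \(P(\mathbf{e})\) is unnecessary for the argmax, since it is the same for every class.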


Variable Elimination

07.10.2022

Variable Elimination Jake 06/10/2022 Variable elimination successively removes variables from a Bayesian Network while maintaining its ability to answer queries of interest: MPE, MAP, probability of evidence, and prior and posterior marginals. The main insight is that we can use variable elimination to sum out variables without constructing joint probab...
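The core "sum out" step can be sketched on a single factor stored as a table. Here `axis` is a hypothetical positional encoding of which variable to eliminate, and the factor values are illustrative:

```python
# A factor over (A, B) as a table; values are illustrative.
factor_AB = {
    (True, True): 0.54, (True, False): 0.06,
    (False, True): 0.08, (False, False): 0.32,
}

def sum_out(factor, axis):
    """Marginalise out the variable at position `axis`, keeping the rest."""
    out = {}
    for assignment, p in factor.items():
        reduced = assignment[:axis] + assignment[axis + 1:]
        out[reduced] = out.get(reduced, 0.0) + p
    return out

marginal_A = sum_out(factor_AB, 1)  # eliminate B
print(marginal_A)  # P(A=True) ~ 0.6, P(A=False) ~ 0.4
```

Full variable elimination interleaves this with factor multiplication, so the complete joint table is never built.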


Intro to Bayesian Inference

12.10.2022

Intro_Bayesian_Inference Jake 01/10/2022 Bayes Theorem \[ P(A|B) = \frac{P(A,B)}{P(B)} = \frac{P(B|A)P(A)}{P(B)}\] Bayes Theorem and Distributions \[ P(\theta|x) = \frac{P(x|\theta)P(\theta)}{P(x)}\] \[ \pi(\theta|x) = \frac{L(x|\theta)\pi(\theta)}{\pi(x)}\propto L(x|\theta)\pi(\theta)\] Definitions Note our main idea is to define parameters ...
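Bayes' theorem as stated can be checked numerically. A sketch with hypothetical probabilities, using the law of total probability to obtain the denominator \(P(B)\):

```python
# Numeric sanity check of Bayes' theorem; all probabilities are hypothetical.
p_a = 0.01             # prior P(A)
p_b_given_a = 0.9      # likelihood P(B | A)
p_b_given_not_a = 0.05 # likelihood P(B | ~A)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B).
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # 0.1538
```

Note how a strong likelihood still yields a modest posterior when the prior is small, which is the point of conditioning on the prior at all.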


Further Priors

12.10.2022

Further Priors Jake 01/10/2022 Conjugate Priors Conjugate priors permit the posterior distribution to stay in the same family as the prior: the same distribution but with altered parameters. The only easy case for conjugate pairs is the exponential family: \[ f(x|\theta) = h(x)g(\theta)\exp(t(x)c(\theta)) \] \[ \pi(\theta | x)\propto \pi(\theta)g(\theta)^n\ex...
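The Beta-Bernoulli pair is a standard conjugate example within the exponential family; a sketch of the update (same family, shifted parameters), with hypothetical data:

```python
def beta_update(alpha, beta, data):
    """Beta(alpha, beta) prior + Bernoulli 0/1 observations -> Beta posterior.

    Conjugacy means the posterior is again a Beta:
    Beta(alpha + #successes, beta + #failures).
    """
    successes = sum(data)
    return alpha + successes, beta + len(data) - successes

# Uniform Beta(1, 1) prior updated with three successes and one failure.
print(beta_update(1, 1, [1, 1, 0, 1]))  # (4, 2)
```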


Multivariate Bayesian Models

12.10.2022

Multivariate Bayesian Models Jake 01/10/2022 Mixture Priors We may require more flexibility in our priors. A mixture of distributions can add flexibility; it is particularly useful to mix conjugate distributions. Consider conjugate priors \(\pi_1(\theta),...,\pi_k(\theta)\), leading to posteriors \(\pi_1(\theta|x),...,\pi_k(\theta|x)\). We consider the m...
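Under a mixture of conjugate priors, each component updates conjugately and the mixture weights are re-weighted by each component's marginal likelihood. A sketch for a hypothetical two-component Beta mixture with binomial data (the binomial coefficient is common to both components and cancels in the weights, so it is omitted):

```python
from math import gamma

def beta_fn(a, b):
    # Beta function via gamma: B(a, b) = G(a)G(b)/G(a+b).
    return gamma(a) * gamma(b) / gamma(a + b)

def marginal_likelihood(a, b, k, n):
    # Marginal of k successes in n trials under a Beta(a, b) prior,
    # up to the binomial coefficient (cancels across components).
    return beta_fn(a + k, b + n - k) / beta_fn(a, b)

# Hypothetical mixture prior: 50/50 between "low rate" and "high rate" beliefs.
components = [(0.5, (2, 8)), (0.5, (8, 2))]
k, n = 7, 10  # observed successes out of n trials

# Posterior weight of component i is proportional to w_i * m_i(x).
raw = [w * marginal_likelihood(a, b, k, n) for w, (a, b) in components]
posterior_weights = [r / sum(raw) for r in raw]
print([round(w, 3) for w in posterior_weights])  # the high-rate component dominates
```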


Loss Functions and Asymptotics

25.10.2022

Loss Functions and Asymptotics Jake 13/10/2022 Loss Functions For a decision \(d\in\mathcal{D}\), a loss function defines the penalty of decision \(d\): \[ L(\theta,d)\] We want to minimise the loss function, \(d^*=\arg\min_d L(\theta,d)\); however, since \(\theta\sim\pi(\theta|x)\), we instead want: \[ d^*=\arg\min_d\mathbb{E}_\pi[L(\the...
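The Bayes decision \(d^*=\arg\min_d\mathbb{E}_\pi[L(\theta,d)]\) can be sketched by grid search over a hypothetical discrete posterior; under squared-error loss the minimiser is the posterior mean:

```python
# Hypothetical discrete posterior pi(theta | x).
posterior = {0.2: 0.25, 0.5: 0.5, 0.8: 0.25}

def expected_loss(d, loss):
    # E_pi[L(theta, d)] over the discrete posterior.
    return sum(p * loss(theta, d) for theta, p in posterior.items())

squared = lambda theta, d: (theta - d) ** 2
candidates = [i / 100 for i in range(101)]  # grid of decisions in [0, 1]
d_star = min(candidates, key=lambda d: expected_loss(d, squared))
print(d_star)  # 0.5 -- the posterior mean
```

Swapping in absolute loss would instead select the posterior median, which is one way to compare the loss functions the post discusses.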
