Rmd file won't output file (LaTeX formatting error?)
I am taking an intro to Bayesian statistics class this semester, and I decided to take notes in Rmd during the last class (not the best idea). After class, when I tried to convert the file to a PDF, I received the following error message:
```
! Missing { inserted.
<to be read again>
\mathop
l.204 ...(data|\lambda^q)\pi(\lambda^q)d\lambda^q}
\)
```
I have no idea what the problem is in my code, or what this error is trying to tell me to do. Here is the part of my Rmd that is relevant to the problem (the rest of the file should be fine):
```
## Finding the Model
- Remember, $P(A\cap B)=P(A)P(B)$ and $P(A\cap B|C)=P(A|C)P(B|C)$
- A = $(X_1=x_1)$, $B=(X_2=x_2)$, $C=(\Lambda=\lambda)$
- $P[(X_1=x_1)\cap(X_2=x_2)|\lambda]=P[(X_1=x_1|\lambda]P[(X_2=x_2)|\lambda]$, which are the product of 2 poisson pmf's.
- This can then be extended to all n observations, or all n RV's.
- iid = "independent and identically distributed as"
### For likelihood:
- shorthand: $X_i|\lambda ~-^{iid}~Pois(\lambda)$
- distribution: $f(x_1,...,x_n|\lambda)=f(data|\lambda)=\Pi_{i=1}^nf(x_i|\lambda)$
- always true for our likelihood of the data as long as it's a random sample
- poisson: $\Pi_{i=1}^n\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}~=~e^{-n\lambda}\lambda^{\sum_{i=1}^nx_i}(\Pi_{i=1}^n\frac{1}{x_i!})$
### For prior:
- $\lambda~-~Gam(\gamma,\phi)$
- $\pi(\lambda)~=~\frac{\phi^\gamma}{\Gamma(\phi)}\lambda^{\phi-1}e^{-\gamma\lambda}$
### For Posterior:
- $\pi(\lambda|data)~=~\frac{f(data|\lambda)\pi(\lambda)}{\int_0^\inf(data|\lambda^q)\pi(\lambda^q)d\lambda^q}$
- The denominator is the normalizing constant so it's just a number. For this example we'll make the normalizing constant $\frac{1}{c}$, $\sum_{i=1}^n\frac{1}{x_i!}$ = a (constant), and $\frac{\phi^\gamma}{\Gamma(\phi)}$ = b (constant)
- So, $cf(data|\lambda)\pi(\lambda)~=~c(e^{-n\lambda}\lambda^{\sum_{i=1}^n\frac{1}{x_i!}})(\frac{\phi^\gamma}{\Gamma(\phi)}\lambda^{\phi-1}e^{-\gamma\lambda})~=~cab*e^{-\lambda(n+\phi)}\lambda^{phi+(\sum_{i=1}^nx_i)-1}$
- Our posterior kernel $e^{-\lambda(n+\phi)}\lambda^{phi+(\sum_{i=1}^nx_i)-1}$ has new parameters $\gamma^q\gamma+\sum_{i=1}^nx_i~\mbox{and}~\phi^q=\phi+n$
- Our constants cab = $\frac{(\phi^q)^{\gamma^q}}{\Gamma(\phi^q)}$
- Shorthand: $\lambda|data~-~Gam(\gamma^q,\phi^q)\mbox{, where}~\gamma^q=\gamma+\sum_{i=1}^nx_i~\mbox{and}~\phi^q=\phi+n$
```
Yeah, I realize it would be much easier to just write this stuff down, but my laptop pen is dead and I don't own any notebooks, so this is my best option right now. I know it's a lot, but any help would be much appreciated, especially with understanding this error message and what it is telling me to do.
If needed, here are the other LaTeX notations I've used in the document:
- So, because all objects are randomly selected, we can consider them independent of each other.
- Thus, we can distinguish all n RV's as $X_i$ = count of the $i^{th}$ randomly selected object from the population.
- $X_1...X_n$ represents the n RV's and $x_1...x_n$ represents the observed n values from "data".
Solution 1:
It is a really bad habit to write exponents or subscripts without `{...}` around them; you risk omitting the braces exactly when they are necessary. That is what happened here: `\inf` is the infimum operator, defined internally with `\mathop`, and TeX cannot use an operator directly as a superscript, hence `! Missing { inserted` followed by `\mathop`. Writing `\int_{0}^{\inf}` compiles, although you almost certainly want the infinity symbol `\infty` rather than the operator `\inf`. There are a few other slips in the math as well, e.g. a missing `=` in `\gamma^q\gamma+\sum_{i=1}^nx_i`, `phi` written without its backslash, `\Pi` where `\prod` (the product symbol) is meant, a missing `f` before `(data|\lambda^q)` in the posterior's denominator, and the Gamma's shape and rate parameters swapped in places.
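A minimal reproduction of the error and its fix (the integrand here is a placeholder, not from the original document):

```latex
% \inf is an operator built with \mathop, so it cannot sit directly
% in a superscript; uncommenting the next line reproduces the error:
% $\int_0^\inf f(x)\,dx$      % ! Missing { inserted. ... \mathop
% Braces make it compile (but this typesets the word "inf"):
$\int_0^{\inf} f(x)\,dx$
% The infinity symbol is almost certainly what was meant:
$\int_0^{\infty} f(x)\,dx$
```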
```
---
output:
  pdf_document:
    keep_tex: true
---

## Finding the Model

- Remember, $P(A\cap B)=P(A)P(B)$ and $P(A\cap B|C)=P(A|C)P(B|C)$
- $A=(X_1=x_1)$, $B=(X_2=x_2)$, $C=(\Lambda=\lambda)$
- $P[(X_1=x_1)\cap(X_2=x_2)|\lambda]=P[(X_1=x_1)|\lambda]P[(X_2=x_2)|\lambda]$, which is the product of 2 Poisson pmf's.
- This can then be extended to all n observations, or all n RV's.
- iid = "independent and identically distributed as"

### For likelihood:

- shorthand: $X_i|\lambda \overset{iid}{\sim} Pois(\lambda)$
- distribution: $f(x_1,...,x_n|\lambda)=f(data|\lambda)=\prod_{i=1}^{n}f(x_i|\lambda)$
- always true for our likelihood of the data as long as it's a random sample
- poisson: $\prod_{i=1}^{n}\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}=e^{-n\lambda}\lambda^{\sum_{i=1}^{n}x_i}\left(\prod_{i=1}^{n}\frac{1}{x_i!}\right)$

### For prior:

- $\lambda \sim Gam(\gamma,\phi)$
- $\pi(\lambda)=\frac{\phi^{\gamma}}{\Gamma(\gamma)}\lambda^{\gamma-1}e^{-\phi\lambda}$

### For Posterior:

- $\pi(\lambda|data)=\frac{f(data|\lambda)\pi(\lambda)}{\int_{0}^{\infty}f(data|\lambda^q)\pi(\lambda^q)\,d\lambda^q}$
- The denominator is the normalizing constant, so it's just a number. For this example we'll call the normalizing constant $\frac{1}{c}$, and set $\prod_{i=1}^{n}\frac{1}{x_i!}=a$ (constant) and $\frac{\phi^{\gamma}}{\Gamma(\gamma)}=b$ (constant)
- So, $cf(data|\lambda)\pi(\lambda)=c\left(a\,e^{-n\lambda}\lambda^{\sum_{i=1}^{n}x_i}\right)\left(b\,\lambda^{\gamma-1}e^{-\phi\lambda}\right)=cab\,e^{-\lambda(n+\phi)}\lambda^{\gamma+(\sum_{i=1}^{n}x_i)-1}$
- Our posterior kernel $e^{-\lambda(n+\phi)}\lambda^{\gamma+(\sum_{i=1}^{n}x_i)-1}$ has new parameters $\gamma^q=\gamma+\sum_{i=1}^{n}x_i~\mbox{and}~\phi^q=\phi+n$
- Our constant $cab=\frac{(\phi^q)^{\gamma^q}}{\Gamma(\gamma^q)}$
- Shorthand: $\lambda|data \sim Gam(\gamma^q,\phi^q)$, where $\gamma^q=\gamma+\sum_{i=1}^{n}x_i$ and $\phi^q=\phi+n$
```
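One more cosmetic point: in math mode `~` is a non-breaking space and `-` is a minus sign, so `$\lambda~-~Gam(\gamma,\phi)$` typesets as "λ − Gam(γ, φ)". The conventional "distributed as" symbol is `\sim`, with `\overset` (from amsmath, which R Markdown's PDF template loads by default) for the iid variant:

```latex
$\lambda \sim Gam(\gamma,\phi)$
$X_i \mid \lambda \overset{iid}{\sim} Pois(\lambda)$
```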
You also shouldn't set multi-letter words like "data" or "Gam" in math mode; use a text font (`\text{...}` from amsmath, or `\operatorname{...}` for operator names) to get upright type and correct kerning.
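For example, contrast `$f(data|\lambda)$` (italic letters d·a·t·a with math-mode kerning) with:

```latex
% \text and \operatorname are provided by amsmath:
$\pi(\lambda \mid \text{data}) \propto f(\text{data} \mid \lambda)\,\pi(\lambda)$
$\lambda \mid \text{data} \sim \operatorname{Gam}(\gamma^q,\phi^q)$
```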
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow