## Discrete version

Suppose $X$ is a discrete r.v. and $Y=g(X)$. By definition, the Expectation $E(g(X))=E(Y)=∑_{y}y⋅P(Y=y)$. However, this requires us to find $P(Y=y)$, the PMF of $Y$. If we already have the PMF of $X$, we can use it instead:
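As a quick numerical sanity check that the two routes agree (a hypothetical example, not from the text: a fair six-sided die with $g(x)=x^{2}$), we can compute the expectation both ways:

```python
from fractions import Fraction

# A fair six-sided die as the PMF of X (hypothetical example, not from the text).
pmf_X = {x: Fraction(1, 6) for x in range(1, 7)}

g = lambda x: x ** 2  # the transformation Y = g(X)

# LOTUS route: sum g(x) * P(X = x) over the support of X.
E_Y_lotus = sum(g(x) * p for x, p in pmf_X.items())

# Direct route: first build the PMF of Y = g(X), then sum y * P(Y = y).
pmf_Y = {}
for x, p in pmf_X.items():
    pmf_Y[g(x)] = pmf_Y.get(g(x), Fraction(0)) + p
E_Y_direct = sum(y * p for y, p in pmf_Y.items())

print(E_Y_lotus, E_Y_direct)  # both equal 91/6
```

Both routes give $(1+4+9+16+25+36)/6 = 91/6$; LOTUS just skips the step of constructing the PMF of $Y$.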

$E(Y)=∑_{y}y⋅P(Y=y)=∑_{x}g(x)⋅P(X=x)$

## Continuous version

This formula is even more useful in the continuous version, because finding the PDF of $Y$ given $X$’s PDF is often very nontrivial.

Often, when we’re told not to use the PDF of $Y$ when finding an expectation, this just means to use LOTUS instead.
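As an illustration of that advice (a hypothetical example, not from the text: $X∼\mathrm{Unif}(0,1)$ and $Y=e^{X}$), we can approximate $E(Y)=∫g(x)⋅f_{X}(x)dx$ with a midpoint Riemann sum, never touching $f_{Y}$:

```python
import math

# Hypothetical example (not from the text): X ~ Uniform(0, 1), so f_X(x) = 1
# on (0, 1), and Y = e^X. LOTUS gives E(Y) = integral of e^x * 1 over (0, 1),
# which is e - 1, with no need to derive the PDF of Y.
g = math.exp

def f_X(x):
    return 1.0  # density of Uniform(0, 1) on its support

n = 100_000
dx = 1.0 / n
# Midpoint Riemann sum approximating the integral of g(x) * f_X(x) over (0, 1).
E_Y = sum(g((i + 0.5) * dx) * f_X((i + 0.5) * dx) * dx for i in range(n))

print(E_Y, math.e - 1)  # the two values agree to many decimal places
```

Here the exact answer $e-1$ is available analytically, so the sketch doubles as a check on the formula below.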

Suppose $X$ is a continuous r.v. and $Y=g(X)$, where $g:R→R$. Again, $E(g(X))=E(Y)=∫_{−∞}^{∞}y⋅f_{Y}(y)dy$. We want to avoid finding $f_{Y}$, so instead we can calculate $E(Y)$ as follows, given we know $f_{X}$.

$E(Y)=∫_{−∞}^{∞}y⋅f_{Y}(y)dy=∫_{−∞}^{∞}g(x)⋅f_{X}(x)dx$

### Calculating variance

This formula is useful when we want to calculate the Variance of $X$ and need $E(X^{2})$. We simply let $Y=X^{2}$ and so
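For a concrete check (a hypothetical example, not from the text: $X∼\mathrm{Unif}(0,1)$, for which $E(X^{2})=1/3$ and $\mathrm{Var}(X)=1/12$), we can evaluate the LOTUS integral numerically:

```python
# Hypothetical example (not from the text): X ~ Uniform(0, 1), where f_X = 1
# on (0, 1). LOTUS with Y = X^2 gives E(X^2) = integral of x^2 over (0, 1),
# which is 1/3, and then Var(X) = E(X^2) - E(X)^2 = 1/3 - 1/4 = 1/12.
n = 100_000
dx = 1.0 / n
mids = [(i + 0.5) * dx for i in range(n)]  # midpoints for a Riemann sum

E_X = sum(x * dx for x in mids)         # integral of x * f_X(x)
E_X2 = sum(x * x * dx for x in mids)    # LOTUS with Y = X^2
var_X = E_X2 - E_X ** 2

print(E_X2, var_X)  # approximately 1/3 and 1/12
```

The point is that we only ever integrate against $f_{X}$; the PDF of $X^{2}$ never appears.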

$E(X^{2})=∫_{−∞}^{∞}x^{2}⋅f_{X}(x)dx$

## 2D version

Consider two r.v.s $X$ and $Y$ with some Joint distribution, and let $W=XY$. Another way of interpreting this is as a transformation $g(X,Y)=XY$. If we wanted to find $E(W)$, calculating its PDF may be difficult. Instead, we claim that

$E(W)=E(g(X,Y))=∑_{x}∑_{y}g(x,y)⋅P(X=x,Y=y)$

for two discrete distributions. If $X$ and $Y$ are both continuous and their joint PDF is $f_{X,Y}$, then we have

$E(W)=∫_{−∞}^{∞}∫_{−∞}^{∞}g(x,y)⋅f_{X,Y}(x,y)dxdy$
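The discrete double sum is easy to verify directly (a hypothetical example, not from the text: $X$ and $Y$ independent fair dice, so the joint PMF factors into the marginals):

```python
from fractions import Fraction

# Hypothetical example (not from the text): X and Y are independent fair dice
# and W = XY. Independence means P(X = x, Y = y) = P(X = x) * P(Y = y).
pmf_X = {x: Fraction(1, 6) for x in range(1, 7)}
pmf_Y = dict(pmf_X)

# 2D LOTUS: double sum of g(x, y) * P(X = x, Y = y) with g(x, y) = x * y.
E_W = sum(x * y * pmf_X[x] * pmf_Y[y] for x in pmf_X for y in pmf_Y)

print(E_W)  # 49/4, matching E(X) * E(Y) = 3.5 * 3.5 = 12.25
```

For this independent case the answer agrees with $E(X)E(Y)=12.25$, and at no point did we need the PMF of $W$ itself.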