Math 323 - Class 18

Continued from last class: the Normal pdf

Notes

  1. \(f_X\) is symmetric about \(\mu\)
  2. The pdf has the familiar bell shape

  3. As you shift \(\mu\), you shift the “location” of \(f_X\) (i.e. the point of symmetry). As you increase (decrease) \(\sigma\), you increase (decrease) the spread of \(f_X\).

  4. The cdf of \(X\) is not known in closed form; from last class, it is

    \[F_X(x) = \int\limits_{-\infty}^x \frac 1 {\sigma\sqrt{2\pi}}\, e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,dt\]

  5. Normal probabilities can only be computed numerically (see the sketch after these notes).

    e.g. \(P(-1<X\leq 4)\), if \(\mu = 6\) and \(\sigma = 2\):

    \[P(-1<X\leq 4) = \int\limits_{-1}^{4} \frac 1 {2\sqrt{2\pi}}\, e^{-\frac{(x-6)^2}{8}}\,dx,\]

    i.e. just plug the values of \(\mu\) and \(\sigma\) into the pdf; the integral must then be evaluated numerically.


  6. The normal distribution is important because

    1. it seems to arise as the distribution of many different r.v.s in real life.
    2. In statistics, the sample average of \(n\) observations, \(X_1,X_2,\dots,X_n\) say, denoted by \(\bar X\), will under mild conditions have an approximately Normal distribution if \(n\) is large. Sample averages play a central role in statistics.

    Both 1 and 2 have their roots in the famous Central Limit Theorem. Essentially, the C.L.T. says that if \(n\) is large, sums of r.v.s will be approximately Normal.
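
As an illustration of note 5, here is a minimal sketch (assuming Python with scipy is available; this is not part of the course notes) of how \(P(-1<X\leq 4)\) would be computed numerically for \(\mu = 6\), \(\sigma = 2\):

```python
# Numerically evaluate P(-1 < X <= 4) for X ~ Normal(mu = 6, sigma = 2).
# No closed form exists for the Normal cdf, so scipy approximates it numerically.
from scipy.stats import norm

mu, sigma = 6, 2
prob = norm.cdf(4, loc=mu, scale=sigma) - norm.cdf(-1, loc=mu, scale=sigma)
print(prob)  # roughly 0.158
```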

The last named distribution is

The Beta Distribution

Def. The r.v. \(X\) has a Beta distribution with parameters \(\alpha\) and \(\beta\) if

\[f_X(x) = \frac {1} {B(\alpha,\beta)} x^{\alpha-1}(1-x)^{\beta-1} I_{(0,1)}(x)\quad \forall \alpha, \beta >0\]

where \(B(\alpha,\beta)\), the so-called Beta function, is defined as

\[B(\alpha,\beta) = \int\limits_0^1x^{\alpha-1}(1-x)^{\beta-1}dx\]

We write this as \(X \sim B_e(\alpha,\beta)\).

A \(B_e(1,1)\) distribution is the \(U(0,1)\) distribution. The Beta distribution is useful for modelling r.v.s on \((0,1)\), because of the wide variety of different shapes the pdf can take for different \(\alpha\) and \(\beta\).

There’s a graph here
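
Since the graph is not reproduced here, a minimal plotting sketch (parameter pairs chosen purely for illustration, assuming matplotlib and scipy are available) showing the variety of Beta pdf shapes:

```python
# Plot the Beta(alpha, beta) pdf on (0, 1) for several parameter choices
# to see how different the shapes can be.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

x = np.linspace(0.001, 0.999, 500)
for a, b in [(1, 1), (0.5, 0.5), (2, 2), (2, 5), (5, 2)]:
    plt.plot(x, beta.pdf(x, a, b), label=f"alpha={a}, beta={b}")

plt.xlabel("x")
plt.ylabel("f_X(x)")
plt.legend()
plt.show()
```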

Expectation

The idea here is to seek ways to summarize a probability distribution. The distribution provides the complete picture.

To this end: given a r.v. \(X\) (and its distribution), a natural summary might be its “average value”. Or, we may wish to summarize a r.v. by somehow giving its “average spread”.

Formally, we have

Definition

Let \(X\) be a r.v. with pdf \(f_X(x)\) (or pmf \(p_X(x)\)). Then we define the expected value (expectation) of \(X\) to be (in the continuous case)

\[E(X) = \mu_X = \int\limits_{-\infty}^\infty x f_X(x) dx\]

and in the discrete case

\[E(X) = \sum\limits_{\forall x} x p_X(x) = \sum\limits_{\forall x} x P(X =x)\]

\(\mu_X\) is called “the mean of X”

Claim

\(E(X)\) is the “average value of \(X\)”. Consider the special case where \(X\) has a discrete uniform distribution on \(a_1,a_2,\dots,a_N\). Then by definition:

\[\begin{aligned} E(X) &= \sum\limits_{i=1}^N a_i P(X=a_i) \\ &=\sum\limits_{i=1}^N a_i\cdot\frac 1 N\\ &= \frac 1 N \sum\limits_{i=1}^N a_i \end{aligned}\]

Which is just the average of the possible values of \(X\). More generally if \(X\) is discrete,

\[E(X) = \sum\limits_{\forall x} x P(X=x)\]

And we see that the r.h.s is a weighted average of the values of \(X\), where the weight assigned to \(x\) is the probability that \(X\) will take on the value \(x\)

A similar interpretation holds in the continuous case, with the interpretation that \(\int\) is roughly a sum and \(x[f_X(x)\,dx]\) is roughly \(x\,P(x<X\leq x+dx)\).
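
To make the “integral is roughly a sum” reading concrete, a small sketch (the Exponential pdf with mean 2 is an arbitrary illustrative choice) approximating \(E(X)\) by the weighted sum \(\sum x\, f_X(x)\, dx\) over a fine grid:

```python
# Approximate E(X) = integral of x * f_X(x) dx by a Riemann sum:
# each term x * f_X(x) * dx is roughly x * P(x < X <= x + dx).
import numpy as np
from scipy.stats import expon

dx = 0.001
x = np.arange(0.0, 50.0, dx)     # grid covering essentially all of the mass
fx = expon.pdf(x, scale=2)       # Exponential pdf with mean 2 (illustrative choice)
print(np.sum(x * fx * dx))       # close to the true mean, 2
```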

Some properties of \(E(X)\)

  1. \(E(X)\) is a constant, determined by the distribution of \(X\)
  2. \(E(cX) = c E(X)\)
  3. \(E\left[\sum\limits_{i=1}^n X_i\right] = \sum\limits_{i=1}^n E(X_i)\)

    Note: This result does not require that the r.v.s be independent (see the sketch after this list).

  4. We say that “the expected value exists” if \(E(|X|) < \infty\)
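
A quick simulation sketch of property 3 (the variable choices are illustrative only): even when the r.v.s are strongly dependent, the average of the sum matches the sum of the averages.

```python
# Check E(X1 + X2) = E(X1) + E(X2) by simulation, with X2 deliberately made
# a function of X1, to illustrate that independence is not required.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(loc=1.0, scale=1.0, size=1_000_000)
x2 = x1 ** 2                         # strongly dependent on x1; E(X2) = 2 here

print(np.mean(x1 + x2))              # about 3
print(np.mean(x1) + np.mean(x2))     # about 3 as well
```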

Example

Let \(X\) have pdf

\[\begin{aligned} f_X(x) &= -x & \quad \text{for $-1<x<0$}\\ &=\frac 3 2 x^2 & \quad \text{for $0\leq x<1$}\\ &=0 &\quad \text{elsewhere} \end{aligned}\]

Find \(E(X)\).

Solution

\[\begin{aligned} E(X) &= \int\limits_{-\infty}^{\infty} x f_X(x)\, dx \\ &= \int\limits_{-1}^0 x(-x)\,dx+\int\limits_0^1 x\cdot\frac 3 2 x^2\, dx \\ &= -\frac 1 3 + \frac 3 8 \\ &= \frac 1 {24} \end{aligned}\]
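
As a sanity check on the arithmetic, a minimal numerical sketch (assuming scipy is available):

```python
# Numerically verify that E(X) = 1/24 for the piecewise pdf above.
from scipy.integrate import quad

left, _ = quad(lambda x: x * (-x), -1, 0)         # integral of x * f_X(x) over (-1, 0)
right, _ = quad(lambda x: x * 1.5 * x**2, 0, 1)   # integral of x * f_X(x) over (0, 1)
print(left + right, 1 / 24)                        # both about 0.0417
```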

Example on insurance premiums

Suppose that you wish to insure your laptop for $1000. If the insurance company knows that it will have to pay out on only 5% of occasions, what insurance premium, \(C\) say, should it charge so that its expected gain is 0?

Solution

Let the gain of the company be \(X\). We want the value of \(C\) such that \(E(X)=0\). We need the distribution of \(X\).
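
A sketch of how the calculation would continue (assuming the company collects \(C\) up front and pays out the full $1000 on the 5% of occasions a claim is made):

\[\begin{aligned} X &= \begin{cases} C & \text{with probability } 0.95\\ C-1000 & \text{with probability } 0.05\end{cases}\\ E(X) &= 0.95\,C + 0.05\,(C-1000) = C - 50, \end{aligned}\]

so \(E(X)=0\) gives \(C = \$50\).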