San José State University
Department of Economics
applet-magic.com
Thayer Watkins
Silicon Valley & Tornado Alley USA
The Sample Maximum as a Function of Sample Size
Let x be a stochastic variable with a probability density function p(x) and a cumulative probability distribution function P(x). Thus the probability of an observation being less than or equal to x is P(x). Provided P is strictly increasing, it has an inverse function P⁻¹(z). Note that P⁻¹(½) = x_median. Let x_max be defined to be the lowest value of x such that P(x)=1. Likewise x_min is the largest x such that P(x)=0. Note that x_max and x_min may or may not be finite.
The probability density function for the sample maximum of a sample of size n, q_n(x), is given by the probability of getting (n−1) observations which are less than or equal to x and one that is exactly x. The one observation that is exactly x can occur at any one of n places in the sample. Thus the probability density is
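(a formula consistent with the argument above; the factor n counts the possible positions of the maximal observation)

\[ q_n(x) = n\,[P(x)]^{n-1}\,p(x) \]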
Note that
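(presumably the observation that q_n(x) is an exact derivative, which is what makes the change of variable below work)

\[ q_n(x) = n\,[P(x)]^{n-1}\,p(x) = \frac{d}{dx}\,[P(x)]^{n} \]

so the cumulative distribution function of the sample maximum is [P(x)]^n.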
The expected value of the sample maximum, denoted M_n, is
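(a reconstruction of the displayed integral, using the density q_n(x) found above)

\[ M_n = \int_{x_{min}}^{x_{max}} x\,q_n(x)\,dx = \int_{x_{min}}^{x_{max}} x\,n\,[P(x)]^{n-1}\,p(x)\,dx \]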
Let z = [P(x)]^n so x = P⁻¹(z^(1/n)). Then changing the variable of integration in the above expression to z results in
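(since dz = n[P(x)]^(n−1) p(x) dx and z runs from 0 to 1 as x runs from x_min to x_max)

\[ M_n = \int_{0}^{1} P^{-1}\!\left(z^{1/n}\right)dz \]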
Now consider the limit of M_n as n increases without bound and note that the limit of a continuous function of a variable is equal to the function of the limit of the variable; i.e.,
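(a reconstruction of the displayed step, assuming the limit may also be taken inside the integral)

\[ \lim_{n\to\infty} M_n = \lim_{n\to\infty}\int_{0}^{1} P^{-1}\!\left(z^{1/n}\right)dz = \int_{0}^{1} P^{-1}\!\left(\lim_{n\to\infty} z^{1/n}\right)dz \]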
But for all z in the interval 0 < z ≤ 1, lim_(n→∞) z^(1/n) = 1. Therefore
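(the concluding formula should then read)

\[ \lim_{n\to\infty} M_n = \int_{0}^{1} P^{-1}(1)\,dz = P^{-1}(1) = x_{max} \]

i.e., the expected value of the sample maximum tends to x_max as the sample size increases without bound.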
Note that M_1 = x_mean, since for a sample of size one the maximum is the single observation.
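For the case n = 1 the general formula above indeed gives the mean of the distribution:

\[ M_1 = \int_{x_{min}}^{x_{max}} x\,p(x)\,dx = x_{mean} \]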
The characteristic function for the distribution of the sample maximum can be defined as
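(writing φ_n(ω) for this characteristic function; the symbol φ_n is an assumed notation)

\[ \phi_n(\omega) = \int_{-\infty}^{\infty} e^{i\omega x}\,q_n(x)\,dx \]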
where i is the square root of −1.
From the previous work this reduces to
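(substituting q_n(x) = n[P(x)]^(n−1) p(x) = d([P(x)]^n)/dx)

\[ \phi_n(\omega) = \int_{-\infty}^{\infty} e^{i\omega x}\,n\,[P(x)]^{n-1}\,p(x)\,dx = \int_{-\infty}^{\infty} e^{i\omega x}\,d\!\left([P(x)]^{n}\right) \]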
Integration by parts can be applied to this expression to obtain
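(with u = e^{iωx} and dv = d([P(x)]^n); a reconstruction of the displayed result)

\[ \phi_n(\omega) = \Big[\,e^{i\omega x}\,[P(x)]^{n}\,\Big]_{-\infty}^{\infty} \;-\; i\omega\int_{-\infty}^{\infty} e^{i\omega x}\,[P(x)]^{n}\,dx \]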
Since lim_(x→∞) P(x) = 1 and lim_(x→−∞) P(x) = 0, the above reduces to
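(one reconstruction, assuming x_max is finite so that the boundary term equals e^{iωx_max} and the contribution of d([P(x)]^n) vanishes beyond x_max)

\[ \phi_n(\omega) = e^{i\omega x_{max}} \;-\; i\omega\int_{-\infty}^{x_{max}} e^{i\omega x}\,[P(x)]^{n}\,dx \]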
The integral on the right is the characteristic function of the n-th power of P(x), which can be expressed as the n-th convolution product of the characteristic function of P(x). The characteristic function of P(x) is the characteristic function of p(x) divided by iω.
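For reference, with the transform convention Φ_f(ω) = ∫ e^{iωx} f(x) dx (understood here in a formal or distributional sense, since P(x) is not integrable), the convolution theorem gives

\[ \Phi_{P^{n}}(\omega) = \left(\tfrac{1}{2\pi}\right)^{n-1}\big(\Phi_P * \Phi_P * \cdots * \Phi_P\big)(\omega) \qquad (n \text{ factors}) \]

where (f * g)(ω) = ∫ f(ν) g(ω−ν) dν; the factors of 2π depend on the transform convention used.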
(To be continued.)