A random variable X has a continuous uniform distribution over the interval from 0 to θ, where θ is an unknown parameter. If the sum of the largest and smallest values of X in a sample of n observations of X is used to estimate θ, explain whether or not the estimator is (1) unbiased, (2) consistent.
Subhayu, why have you fixed Xmin and Xmax as 0 and θ? The minimum and maximum values will differ from sample to sample, so they should be treated as random variables. Take note of the fact that the sum of the smallest and largest values is a sample statistic.
But the answer given is that the estimator is unbiased and consistent.
Here is the explanation that is given for unbiasedness:
"The estimator will be unbiased. Call the maximum value of X in the sample Xmax and the minimum value Xmin. Given the symmetry of the distribution of X, the distributions of Xmax and Xmin will be identical, except that that of Xmax will be to the right of 0 and that of Xmax will be to the left of θ. Hence, for any n, E(Xmin)-0=θ-E(Xmax) and the expected value of their sum is equal to θ."
Now what I don't understand here is the equation E(Xmin) - 0 = θ - E(Xmax) above. Can someone please explain this?
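For what it's worth, here is one way to see that equation (a sketch, assuming the observations X1, ..., Xn are i.i.d. Uniform(0, θ)). If X ~ Uniform(0, θ), then θ - X ~ Uniform(0, θ) as well, and

θ - Xmax = min{θ - X1, ..., θ - Xn},

so θ - Xmax has the same distribution as Xmin. Taking expectations gives E(Xmin) - 0 = θ - E(Xmax), and rearranging, E(Xmin + Xmax) = θ for every n. Concretely, the standard order-statistic means are E(Xmin) = θ/(n+1) and E(Xmax) = nθ/(n+1), which indeed sum to θ.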
Just a doubt, sir. Is it important to check for consistency once we know that an estimator is unbiased? Can there be cases in which an estimator is unbiased but not consistent?
Unbiasedness and consistency are two very different properties. Unbiasedness says that, on average, the estimator delivers the true value. Consistency says that as the sample size increases, the probability that the estimator delivers a value very different from the true one goes to zero. In other words, when the sample size is large, the estimator delivers a value very close to the true value with near certainty.
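Formally (a standard statement of the two properties, writing θ̂n for an estimator computed from a sample of size n): unbiasedness means E(θ̂n) = θ for every n, while consistency means that for every ε > 0, P(|θ̂n - θ| > ε) → 0 as n → ∞. Neither property implies the other.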
For example, in the problem considered above, where we were trying to estimate θ:
Y = max{X1, ..., Xn} is a biased but consistent estimator of θ.
And T = X1 + Xn, the sum of just the first and the last observations (not of the sample minimum and maximum), is an unbiased but not a consistent estimator of θ.
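A quick check of both claims (a sketch, again assuming the Xi are i.i.d. Uniform(0, θ)):

For Y = max{X1, ..., Xn}: E(Y) = nθ/(n+1) < θ, so Y is biased for every finite n. But for any ε > 0, P(|Y - θ| > ε) = P(Y < θ - ε) = ((θ - ε)/θ)^n → 0 as n → ∞, so Y is consistent (its bias, θ/(n+1) in magnitude, also vanishes as n grows).

For T = X1 + Xn: E(T) = θ/2 + θ/2 = θ, so T is unbiased. But Var(T) = 2 · (θ²/12) = θ²/6, which does not shrink as n increases, so the distribution of T never concentrates around θ and T is not consistent.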