What is the unbiased estimator of the covariance matrix of an N-dimensional random variable?
Suppose $x$ is a random vector in $\mathbb{R}^n$ distributed according to $D$.
What is the unbiased estimator of the covariance matrix of an $n$-dimensional random variable?
In the scalar case, when $y_1, y_2, \cdots, y_N$ are i.i.d. samples of a random variable $y$, the sample mean $\hat{\mu} = \frac{1}{N}\sum_{i=1}^N y_i$ is an unbiased estimator of the mean, and $\hat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^N (y_i - \hat{\mu})^2$ is an unbiased estimator of the variance.
In higher dimensions, in addition to the variances we also have covariances between the components of the random vector. Given i.i.d. samples $x_1, x_2, \cdots, x_N$ of $x$, my question is
$$
\hat{C} = \, ?
$$
where $\hat{C}$ is an unbiased estimator of $C = \mathbb{E}[(x-\mu)(x-\mu)^T]$.
statistics estimation covariance estimation-theory

asked Dec 26 '18 at 18:05 – Saeed
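Not part of the original thread: a minimal NumPy sketch of the usual candidate estimator (the $\frac{1}{N-1}$-normalized sample covariance suggested in the comments below), checked against `np.cov`. The variable names, sample size, dimension, and Gaussian data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: N i.i.d. samples of an n-dimensional random vector.
N, n = 1000, 3
X = rng.normal(size=(N, n))      # row i is the sample x_i

mu_hat = X.mean(axis=0)          # sample mean vector
D = X - mu_hat                   # centered samples
C_hat = D.T @ D / (N - 1)        # candidate unbiased estimator, shape (n, n)

# np.cov normalizes by N-1 by default; rowvar=False because samples are rows here.
assert np.allclose(C_hat, np.cov(X, rowvar=False))
```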
Essentially what you might expect, with $\hat{\vec{\mu}} = \frac{1}{n}\sum \vec{x}_i$ as the estimator of the mean vector and $\frac{1}{n-1}\sum (\vec{x}_i - \hat{\vec{\mu}})(\vec{x}_i - \hat{\vec{\mu}})^T$ as the estimator of the covariance matrix. See math.stackexchange.com/questions/2019122/…
– Henry, Dec 26 '18 at 18:12
@Henry: That answer is for the $(X,Y)$ random vector; I need a proof in terms of vectors.
– Saeed, Dec 26 '18 at 19:33
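Not part of the original exchange: a short sketch of the standard unbiasedness argument in vector form, for the estimator Henry suggests. Write $\bar{x} = \frac{1}{N}\sum_{i=1}^N x_i$ for the sample mean of i.i.d. samples $x_1,\dots,x_N$ with mean $\mu$ and covariance $C$. Expanding $x_i - \bar{x} = (x_i - \mu) - (\bar{x} - \mu)$ gives
$$
\sum_{i=1}^N (x_i - \bar{x})(x_i - \bar{x})^T
= \sum_{i=1}^N (x_i - \mu)(x_i - \mu)^T - N(\bar{x} - \mu)(\bar{x} - \mu)^T .
$$
Taking expectations, the first term contributes $NC$, and by independence $\mathbb{E}[(\bar{x}-\mu)(\bar{x}-\mu)^T] = \frac{1}{N}C$, so the second term contributes $-C$. Hence
$$
\mathbb{E}\!\left[\frac{1}{N-1}\sum_{i=1}^N (x_i - \bar{x})(x_i - \bar{x})^T\right] = \frac{(N-1)C}{N-1} = C ,
$$
i.e. $\hat{C} = \frac{1}{N-1}\sum_{i=1}^N (x_i - \bar{x})(x_i - \bar{x})^T$ is an unbiased estimator of the covariance matrix.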