Why does MLE make sense, given the probability of an individual sample is 0?












This is kind of an odd thought I had while reviewing some old statistics, and for some reason I can't seem to think of the answer.

A continuous PDF tells us the density of observing values in any given range. Namely, if $X \sim N(\mu,\sigma^2)$, for example, then the probability that a realization falls between $a$ and $b$ is simply $\int_a^{b}\phi(x)\,dx$, where $\phi$ is the density of $X$.

When we think about doing an MLE estimate of a parameter, say of $\mu$, we write the joint density of, say, $N$ random variables $X_1, \ldots, X_N$, differentiate the log-likelihood with respect to $\mu$, set it equal to 0, and solve for $\mu$. The interpretation often given is "given the data, which parameter value makes this density function most plausible?"

The part that is bugging me is this: we have a density of $N$ r.v.s, and the probability that we get any particular realization, say our sample, is exactly 0. Why does it even make sense to maximize the joint density given our data (since, again, the probability of observing our actual sample is exactly 0)?

The only rationalization I could come up with is that we want to make the PDF as peaked as possible around our observed sample, so that the integral over a small region around it (and therefore the probability of observing stuff in this region) is highest.
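To make the setup concrete, here is a minimal numerical sketch of the procedure I mean (just a toy example with the variance treated as known; the optimizer should simply recover the sample mean):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
sigma = 2.0                                        # known standard deviation
x = rng.normal(loc=5.0, scale=sigma, size=1000)    # observed sample

def neg_log_likelihood(mu):
    # Joint log-density of the sample under N(mu, sigma^2), negated for minimization
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

result = minimize_scalar(neg_log_likelihood)
print(result.x, x.mean())   # the two values agree up to solver tolerance
```

So mechanically the recipe works, but my conceptual question above remains.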










normal-distribution maximum-likelihood pdf

asked Jan 6 at 16:45 – Alex
  • For the same reason we use probability densities: stats.stackexchange.com/q/4220/35989
    – Tim, Jan 6 at 16:58

  • I understand (I think) why it makes sense to use densities. What I don't understand is why it makes sense to maximize a density conditional on observing a sample that has 0 probability of occurring.
    – Alex, Jan 6 at 17:12

  • Because probability densities tell us what values are relatively more likely than others.
    – Tim, Jan 6 at 17:24

  • If you have the time to answer the question fully, I think that would be more helpful for me and the next person.
    – Alex, Jan 6 at 17:35

  • Because, fortunately, the likelihood is not a probability!
    – AdamO, Jan 7 at 18:02
















1 Answer
The probability of any sample, $\mathbb{P}_\theta(X=x)$, is equal to zero, and yet one sample is realised by drawing from a probability distribution. Probability is therefore the wrong tool for evaluating a sample and the likelihood that it occurs. The statistical likelihood, as defined by Fisher (1912), is based on the limiting argument of the probability of observing the sample $x$ within an interval of length $\delta$ as $\delta$ goes to zero, renormalising this probability by $\delta$ (quoting from Aldrich, 1997):

$\qquad\qquad\qquad$ Aldrich, J. (1997). Statistical Science, 12, 162–176

The term likelihood function was only introduced in Fisher (1921), and maximum likelihood in Fisher (1922).
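In symbols (a paraphrase of this limiting argument, not Fisher's original notation): for a model density $f_\theta$ and a small $\delta>0$,
$$\mathbb{P}_\theta(x \le X \le x+\delta) = \int_x^{x+\delta} f_\theta(u)\,\text{d}u \approx \delta\, f_\theta(x), \qquad\text{hence}\qquad \lim_{\delta\to0}\frac{\mathbb{P}_\theta(x \le X \le x+\delta)}{\delta} = f_\theta(x),$$
so the $\theta$ maximising this renormalised probability in the limit is exactly the $\theta$ maximising the density $f_\theta(x)$: the vanishing factor $\delta$ does not depend on $\theta$ and drops out of the comparison.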



Although it went under the denomination of "most probable value", and relied on a principle of inverse probability (Bayesian inference) with a flat prior, Carl Friedrich Gauß had already derived a maximum likelihood estimator for the variance parameter of a Normal distribution in 1809. Hald (1999) mentions several other occurrences of maximum likelihood estimators before Fisher's 1912 paper, which set out the general principle.



A later justification of the maximum likelihood approach is that, since the renormalised log-likelihood of a sample $(x_1,\ldots,x_n)$,
$$\frac{1}{n} \sum_{i=1}^n \log f_\theta(x_i),$$ converges [by the Law of Large Numbers] to $$\mathbb{E}[\log f_\theta(X)]=\int \log f_\theta(x)\,f_0(x)\,\text{d}x$$ (where $f_0$ denotes the true density of the iid sample), maximising the likelihood [as a function of $\theta$] is asymptotically equivalent to minimising [in $\theta$] the Kullback-Leibler divergence
$$\int \log \frac{f_0(x)}{f_\theta(x)}\, f_0(x)\,\text{d}x=\underbrace{\int \log f_0(x)\,f_0(x)\,\text{d}x}_{\substack{\text{constant}\\ \text{in }\theta}}-\int \log f_\theta(x)\,f_0(x)\,\text{d}x$$
between the true distribution of the iid sample and the family of distributions represented by the $f_\theta$'s.
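To make the Kullback-Leibler connection concrete, here is a small simulation (my own illustration, not part of the references above): over a grid of candidate means $\theta$, the average log-likelihood of a large iid normal sample and the KL divergence between the true density and the model density are optimised at essentially the same $\theta$.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# True data-generating distribution: N(3, 1). Model family: N(theta, 1).
true_mu, sigma = 3.0, 1.0
rng = np.random.default_rng(1)
x = rng.normal(true_mu, sigma, size=50_000)

thetas = np.linspace(1.0, 5.0, 81)

# Average log-likelihood of the observed sample under each candidate theta
avg_loglik = np.array([norm.logpdf(x, mu, sigma).mean() for mu in thetas])

# KL divergence KL(f_0 || f_theta), computed by numerical integration
def kl(mu):
    integrand = lambda t: norm.pdf(t, true_mu, sigma) * (
        norm.logpdf(t, true_mu, sigma) - norm.logpdf(t, mu, sigma))
    return quad(integrand, -10, 16)[0]

kl_vals = np.array([kl(mu) for mu in thetas])

# Maximiser of the average log-likelihood vs. minimiser of the KL divergence:
# both land (approximately) on the true mean 3.0.
print(thetas[avg_loglik.argmax()], thetas[kl_vals.argmin()])
```

In this normal-mean case the divergence is available in closed form, $(\mu_0-\theta)^2/2\sigma^2$, so the numerical integration only confirms the obvious; the point is that the same asymptotic equivalence holds for any model family $\{f_\theta\}$.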






answered Jan 6 at 18:51 (edited Jan 7 at 17:45) – Xi'an
  • Thanks for the answer. Could you expand a bit on the KL argument? I'm not seeing how this is the case immediately.
    – Alex, Jan 7 at 16:24










