Expected Value of the Probability

A paper I was reading gave the following result without explanation:

Let $s\in\left\{0,1\right\}^n$, where each entry equals $1$ with probability $p$. Let $k$ be the number of 1s occurring in $s$.
Then $\mathbf{E}[p\mid k]=\frac{k+1}{n+2}$.

Intuitively, I would assume $f(p)=p^k(1-p)^{n-k}$ to be the probability density function. But then I get $\mathbf{E}[p\mid k]=\int_{0}^{1}x^{k+1}(1-x)^{n-k}\,dx$, which yields a different result.

I'm thankful for any tips! On a side note, I've only done first-year probability, so I'm not very experienced.
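
As a quick check on the discrepancy described above (an editorial sketch, not part of the original question; it assumes SymPy is installed and uses arbitrary small values of $n$ and $k$), the integral in the last paragraph can be evaluated symbolically and compared with the paper's $\frac{k+1}{n+2}$:

    import sympy as sp

    x = sp.symbols('x')
    n, k = 4, 2  # arbitrary small illustrative values

    naive = sp.integrate(x**(k + 1) * (1 - x)**(n - k), (x, 0, 1))  # the integral from the question
    claimed = sp.Rational(k + 1, n + 2)                             # the value stated in the paper

    print(naive)    # 1/60
    print(claimed)  # 1/2

The two values indeed differ, which is exactly the mismatch the question describes.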

Tags: probability, statistics, expected-value

asked Jan 2 at 11:07 by Alison

  • That's the expected value of the Bayesian estimator for $p$ given a uniform prior - have a look at math.stackexchange.com/questions/2643324/…
    – Matthew Towers
    Jan 2 at 12:06

  • The problem is in the order of the 0s and 1s. For example, for $k = 1$ you are counting $p(1-p)^{n-1}$, but in reality there are $n$ possible $s$'s with this probability (hint: binomial).
    – Martín Vacas Vignolo
    Jan 2 at 12:30
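
To illustrate the hint in the last comment (an editorial sketch; the values of $n$ and $k$ are arbitrary), for a fixed count $k$ there are $\binom{n}{k}$ strings $s$, each occurring with probability $p^k(1-p)^{n-k}$:

    from itertools import product
    from math import comb

    n, k = 5, 2  # arbitrary small values
    strings = [s for s in product((0, 1), repeat=n) if sum(s) == k]

    # Each of these strings has probability p**k * (1 - p)**(n - k).
    print(len(strings), comb(n, k))  # 10 10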

1 Answer

Your $f(p)$ is not a probability density function but is proportional to the likelihood. So if you have a uniform prior on $[0,1]$ then $f(p)$ is also proportional to the posterior; the scaling factor is essentially a Beta function, so the probability density function would be $\dfrac{p^k(1-p)^{n-k}}{B(k+1,n-k+1)}$.

The mean of this posterior distribution would then be $$\dfrac{\int_0^1 p \cdot p^k(1-p)^{n-k} \, dp}{\int_0^1 p^k(1-p)^{n-k} \, dp} = \dfrac{B(k+2,n-k+1)}{B(k+1,n-k+1)}=\dfrac{k+1}{n+2}$$

answered Jan 2 at 14:34 by Henry
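
A quick numerical check of this identity (an editorial sketch, not from the original post; it assumes NumPy and SciPy are available, and the values of $n$ and $k$ are arbitrary): the ratio of Beta functions matches $\frac{k+1}{n+2}$, and a Monte Carlo simulation with a uniform prior on $p$ gives the same conditional mean.

    import numpy as np
    from scipy.special import beta  # Euler Beta function B(a, b)

    n, k = 10, 3  # arbitrary illustrative values

    # Ratio of Beta functions from the answer vs. the closed form (k + 1) / (n + 2)
    print(beta(k + 2, n - k + 1) / beta(k + 1, n - k + 1))  # 0.3333...
    print((k + 1) / (n + 2))                                # 0.3333...

    # Monte Carlo: draw p ~ Uniform(0, 1), then K ~ Binomial(n, p),
    # and average p over the samples where K equals the observed k.
    rng = np.random.default_rng(0)
    p = rng.uniform(size=1_000_000)
    K = rng.binomial(n, p)
    print(p[K == k].mean())                                 # close to 0.3333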

  • Thank you! The intuition is that $p$ lies in $[0,1]$ with probability $1$, and hence we scale $p^k(1-p)^{n-k}$ so that its integral over $[0,1]$ equals $1$, which gives the probability density function, correct?
    – Alison
    Jan 3 at 12:30

  • @Alison: Yes - that is an essential feature of a probability distribution.
    – Henry
    Jan 3 at 14:09
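
To make the normalisation discussed in these comments concrete (an editorial sketch, assuming SciPy; the values of $n$ and $k$ are arbitrary), the scaled function integrates to $1$ over $[0,1]$, as a density must:

    from scipy.integrate import quad
    from scipy.special import beta

    n, k = 10, 3  # arbitrary illustrative values
    posterior_pdf = lambda p: p**k * (1 - p)**(n - k) / beta(k + 1, n - k + 1)

    total_mass, _ = quad(posterior_pdf, 0, 1)
    print(total_mass)  # ~1.0 up to numerical error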