Showing this sum converges (sum of normal tail probabilities)?
Let $S_n \sim N(0,n)$ so that $S_n$ is the partial sum process of standard normal random variables.



My objective is to prove:



$$\sum_{n=1}^\infty P(S_n \geq (1+\epsilon)\sqrt{2 n \log \log n}) < \infty$$



for all $\epsilon > 0$.



I cannot use the law of the iterated logarithm, since the reason I want to prove the above inequality is to prove a weaker form of the LIL ($\leq 1$ instead of $= 1$).



My attempt has been to bound the tail probabilities using known inequalities for the standard normal. First, we rewrite the claim as



$$\sum_{n=1}^\infty P(S_n/\sqrt{n} \geq (1+\epsilon)\sqrt{2 \log \log n}) < \infty$$



$$\sum_{n=1}^\infty \left(1-\Phi\left((1+\epsilon)\sqrt{2 \log \log n}\right)\right) < \infty$$



Inequalities I know of:



$$1-\Phi(x) \leq \frac{1}{x\sqrt{2\pi}}\, e^{-x^2/2}$$



$$1-\Phi(x) \leq \frac{1}{2} e^{-x^2/2}$$



But neither is enough to bound the sum by a convergent series. Alternatively, I also know that



$1-\Phi(x) \sim \phi(x)/x$ as $x \to \infty$, but that doesn't seem to help.
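For completeness, a short computation makes the difficulty explicit. Writing $x_n = (1+\epsilon)\sqrt{2\log\log n}$, the sharper of the two upper bounds gives

$$1-\Phi(x_n) \leq \frac{e^{-x_n^2/2}}{x_n\sqrt{2\pi}} = \frac{(\log n)^{-(1+\epsilon)^2}}{2(1+\epsilon)\sqrt{\pi\log\log n}},$$

which decays only poly-logarithmically in $n$. Since $(\log n)^c = o(n)$ for every fixed $c > 0$, the series $\sum_n (\log n)^{-c}$ diverges, so no bound of this form can be summable.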
  • For the first inequality for $1-\Phi(x)$ that you have stated, there is also an inequality in the opposite direction with a different constant. It therefore appears that the result you are trying to prove is actually false.
    – Kavi Rama Murthy
    Dec 3 at 5:44
probability convergence self-learning
edited Dec 3 at 6:42
asked Dec 3 at 5:11
Xiaomi
1 Answer
(Too long for a comment)



This cannot be shown in the way you're trying. Indeed, the best possible this way is $\Omega(\sqrt{\log n})$. This is because a commensurate lower bound on the Gaussian tail exists:



$$1 - \Phi(x) \ge \frac{1}{\sqrt{2\pi}}\, \frac{x}{x^2 + 1}\, e^{-x^2/2}$$



(to show this, integrate by parts one more time in the proof of the first upper bound you state).
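This lower bound is easy to sanity-check numerically; here is a minimal sketch using only the standard library (the exact Gaussian tail is available as $1-\Phi(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$; the function names are my own):

```python
import math

def normal_tail(x):
    # Exact upper tail of the standard normal: 1 - Phi(x) = erfc(x / sqrt(2)) / 2
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_lower_bound(x):
    # The claimed bound: x / (x^2 + 1) * phi(x), with phi the standard normal density
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return x / (x * x + 1.0) * phi

# The bound holds for all x > 0 and becomes tight as x grows
for x in [0.5, 1.0, 2.0, 4.0, 6.0]:
    assert tail_lower_bound(x) <= normal_tail(x)
assert 1.0 <= normal_tail(6.0) / tail_lower_bound(6.0) < 1.01  # tight for large x
```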



One reason you are failing is the following: with this proof strategy, the following statement would also follow: consider independent Gaussians $Z_n \sim \mathcal{N}(0,1)$. Then $\{Z_n \ge (1+\epsilon)\sqrt{2\log \log n}\}$ occurs only finitely often a.s. It is intuitive that this is false (for a proof, use the lower bound above together with the second Borel–Cantelli lemma). The LIL truly uses the fact that one is dealing with a sum of Gaussians, by exploiting the correlations between $S_n$ and $S_{(1+c)n}$ for small $c$.
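To make the divergence concrete: for, say, $\epsilon = 0.1$, the summand in the question eventually dominates $1/n$, so the sum diverges by comparison with the harmonic series. A quick numerical check (a sketch, with the exact tail computed via Python's `math.erfc`; the value of `eps` is an illustrative choice):

```python
import math

def tail(x):
    # 1 - Phi(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

eps = 0.1  # illustrative; any eps > 0 behaves the same way
for n in [10**2, 10**3, 10**4, 10**5, 10**6]:
    summand = tail((1 + eps) * math.sqrt(2.0 * math.log(math.log(n))))
    # The terms decay only poly-logarithmically, so they dominate 1/n
    assert summand > 1.0 / n
```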
  • That makes sense. But that is strange, as this is an old exam question, and I don't think it would be that difficult.
    – Xiaomi
    Dec 3 at 6:41
  • Well, maybe, since it's an exam question and since the LIL is pretty standard material for a grad prob. class, it's possible that they expect one to reproduce the upper bound proof from this?
    – stochasticboy321
    Dec 3 at 21:24
answered Dec 3 at 5:44
stochasticboy321