Showing this sum converges (sum of normal tail probabilities)?
Let $S_n \sim N(0,n)$, so that $S_n$ is the partial sum process of standard normal random variables.
My objective is to prove
$$\sum_{n=1}^\infty P\left(S_n \geq (1+\epsilon)\sqrt{2 n \log \log n}\right) < \infty$$
for all $\epsilon > 0$.
I cannot use the law of the iterated logarithm, since the reason I want to prove the inequality above is to prove a weaker form of the iterated logarithm ($\leq 1$ instead of $=1$).
My attempt has been to bound the tail probabilities using known inequalities for the standard normal. First, rewrite the sum as
$$\sum_{n=1}^\infty P\left(S_n/\sqrt{n} \geq (1+\epsilon)\sqrt{2 \log \log n}\right) = \sum_{n=1}^\infty \left(1-\Phi\left((1+\epsilon)\sqrt{2 \log \log n}\right)\right).$$
Inequalities I know of:
$$1-\Phi(x) \leq \frac{1}{x\sqrt{2\pi}} e^{-x^2/2}$$
$$1-\Phi(x) \leq \frac{1}{2} e^{-x^2/2}$$
But neither is enough to bound the sum by a convergent series. Alternatively, I also know that
$1-\Phi(x) \sim \phi(x)/x$ as $x \to \infty$, but that doesn't seem to help.
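For reference, both upper bounds and the size of the summand can be checked numerically with only the standard library; this is a minimal sketch (not part of the original argument), using $1-\Phi(x) = \operatorname{erfc}(x/\sqrt{2})/2$ and the illustrative choice `eps = 0.1`:

```python
import math

def gauss_tail(x):
    """1 - Phi(x) for the standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def upper1(x):
    """First stated bound: x^{-1} e^{-x^2/2} / sqrt(2 pi), valid for x > 0."""
    return math.exp(-x * x / 2) / (x * math.sqrt(2 * math.pi))

def upper2(x):
    """Second stated bound: (1/2) e^{-x^2/2}."""
    return 0.5 * math.exp(-x * x / 2)

# Both inequalities hold pointwise for x > 0 ...
for x in [0.5, 1.0, 2.0, 5.0]:
    assert gauss_tail(x) <= upper1(x)
    assert gauss_tail(x) <= upper2(x)

# ... but the summand decays only like a power of log n (eps = 0.1 here),
# which is why neither bound yields a convergent comparison series.
eps = 0.1
for n in [10, 10**4, 10**8]:
    x = (1 + eps) * math.sqrt(2 * math.log(math.log(n)))
    print(n, gauss_tail(x))
```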
probability convergence self-learning
In the first inequality for $1-\Phi(x)$ you have stated, there is also an inequality in the opposite direction with a different constant. It appears therefore that the result you are trying to prove is actually false.
– Kavi Rama Murthy
Dec 3 at 5:44
edited Dec 3 at 6:42
asked Dec 3 at 5:11
Xiaomi
1 Answer
(Too long for a comment.)
This cannot be shown in the way you're trying. Indeed, the best possible this way is $\Omega(\sqrt{\log n})$. This is because a commensurate lower bound on the Gaussian tail exists:
$$1 - \Phi(x) \geq \frac{1}{\sqrt{2\pi}} \frac{x}{x^2 + 1} e^{-x^2/2}$$
(to show this, integrate by parts one more time in the proof of the first upper bound you state).
One reason you are failing is the following: with this proof strategy, the following statement would also follow: consider independent Gaussians $Z_n \sim \mathcal{N}(0,1)$; then $\{Z_n \geq (1+\epsilon)\sqrt{2\log \log n}\}$ occurs only finitely often a.s. It is intuitive that this is false (for a proof, use the lower bound above and the Borel–Cantelli lemma). The LIL truly uses the fact that one is dealing with a sum of Gaussians, by exploiting the correlations between $S_n$ and $S_{(1+c)n}$ for small $c$.
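A quick numerical sketch of this lower bound and of the summand it produces (standard library only; the choices `eps = 0.1` and `n = 10**6` are illustrative):

```python
import math

def gauss_tail(x):
    """1 - Phi(x) for the standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def lower(x):
    """Lower bound: (1/sqrt(2 pi)) * x/(x^2 + 1) * e^{-x^2/2}, for x > 0."""
    return x / (x * x + 1) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# The lower bound holds pointwise:
for x in [0.5, 1.0, 3.0, 6.0]:
    assert lower(x) <= gauss_tail(x)

# At x_n = (1 + eps) * sqrt(2 log log n) it gives a summand of order
# (log n)^{-(1+eps)^2} / sqrt(log log n).  Since sum 1/(log n)^k diverges
# for every fixed k, the termwise strategy cannot prove convergence.
eps = 0.1
n = 10**6
x = (1 + eps) * math.sqrt(2 * math.log(math.log(n)))
print(lower(x), gauss_tail(x), math.log(n) ** (-(1 + eps) ** 2))
```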
That makes sense. But that is strange, as this is an old exam question, and I don't think it would be that difficult.
– Xiaomi
Dec 3 at 6:41
Well, maybe, since it's an exam question and since the LIL is pretty standard material for a grad prob. class, it's possible that they expect one to reproduce the upper bound proof from this?
– stochasticboy321
Dec 3 at 21:24
answered Dec 3 at 5:44
stochasticboy321