Distance from root of random walk on regular tree
Let $G$ be the $k$-regular tree (a tree in which every vertex has degree $k$), and let $X_n$ be a random walk on $G$ with the transition probabilities induced by the edge weights of $G$, which are uniformly $1$ (so each step moves to a uniformly chosen neighbor). Let $d_n$ be the distance of $X_n$ from its starting position after $n$ steps.
I am trying to show that $d_n/n$ almost surely converges to some limit, and I am also trying to find this limit (as a function of $k$).
I tried computing the case where $X_n$ starts from the root by counting the number of ways that $d_n = m$ for each $m < n$, and using that to get an expression for $E(d_n)$. I ended up with an incredibly complicated expression that shed no light on the problem above.
I figure I must be approaching this problem the wrong way and that there must be some other way to look at it, so I am posting here in the hope that somebody can point me in the right direction.
probability probability-theory graph-theory markov-chains random-walk
edited Dec 2 at 18:01
asked Dec 2 at 4:13
jackson5
What are the edge weights of $G$? You mention that you follow the transition probabilities they induce, but you never say what they are.
– Misha Lavrov
Dec 2 at 6:53
Sorry. We can take all edge weights to be 1.
– jackson5
Dec 2 at 18:00
1 Answer
Finding an exact expression for $d_n$ is invariably going to lead to a messy formula that's probably the one you discovered, but it's not necessary to know the exact value of $d_n$ to find the limit.
First, there is a way to simplify the random walk. For our purposes, all vertices of $G$ at distance $d$ from the root are functionally identical: each has a $\frac1k$ probability of moving up to a vertex at distance $d-1$ from the root, and a $1-\frac1k$ probability of moving down to a vertex at distance $d+1$ from the root. So instead of considering a random walk on $G$, we can consider a biased random walk on the states $\{0,1,2,\dots\}$ where each step is $+1$ with probability $1-\frac1k$ and $-1$ with probability $\frac1k$. (Except at $0$, which goes to $1$ with probability $1$.)
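As a quick sanity check, here is a minimal Python sketch of this reduced chain (the function name and parameters are illustrative, not from any standard library); it simulates the walk on $\{0,1,2,\dots\}$ and compares $d_n/n$ with the conjectured limit $1-\frac2k$.

```python
import random

def simulate_distance_chain(k, n_steps, seed=0):
    """Simulate the distance-from-root chain of the simple random walk on the
    k-regular tree: from 0 always step to 1; from d > 0 step to d-1 with
    probability 1/k and to d+1 with probability 1 - 1/k."""
    rng = random.Random(seed)
    d = 0
    for _ in range(n_steps):
        if d == 0 or rng.random() < 1 - 1 / k:
            d += 1
        else:
            d -= 1
    return d

if __name__ == "__main__":
    k, n = 4, 10**6
    d_n = simulate_distance_chain(k, n)
    print(f"d_n/n = {d_n / n:.4f}, predicted 1 - 2/k = {1 - 2 / k:.4f}")
```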
It's easier to solve for the behavior of a related Markov chain: the same random walk, but on the integers $\{\dots,-2,-1,0,1,2,\dots\}$, with no awkward boundary condition. Here, the expected state after $n$ steps is $\frac nk \cdot (-1) + n\left(1-\frac1k\right)\cdot (+1) = n\left(1 - \frac2k\right)$. To go from this to the statement $\lim_{n \to \infty} \frac{d_n}{n} = 1 -\frac2k$, we need some concentration inequality. Chebyshev's inequality should be sufficient, and we can use a Chernoff-type bound if we want tighter concentration as well.
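One standard way to make this step precise for the integer walk, assuming the step distribution described above: write it as a sum of i.i.d. increments, $S_n = \sum_{i=1}^n \xi_i$ with $\mathbb P(\xi_i=+1)=1-\frac1k$ and $\mathbb P(\xi_i=-1)=\frac1k$, so that $\mathbb E[\xi_i]=1-\frac2k$. Then the strong law of large numbers already gives
$$\frac{S_n}{n}=\frac1n\sum_{i=1}^n \xi_i \;\longrightarrow\; 1-\frac2k \quad\text{almost surely},$$
and concentration bounds such as Chernoff's additionally quantify the size of the fluctuations.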
The random walk we got for analyzing the tree is slightly different: when we're at $0$, we don't have a $\frac1k$ chance of moving to $-1$. However, we can argue that the same analysis applies: in both random walks, you only visit $0$ finitely many times with probability $1$ (and in the random walk on the integers, with probability $1$ we eventually stay in the positive states forever). So the limit should be $1 - \frac2k$ in your case as well.
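To see this comparison concretely, here is a rough Python sketch of the coupling spelled out in the comments below (identifier names are illustrative): both chains are driven by the same $\pm 1$ increments, and the tree chain deviates only when it sits at the root, so the gap between them stops growing once the root is never revisited.

```python
import random

def coupled_chains(k, n_steps, seed=1):
    """Drive the reflected chain (tree distance) and the unreflected chain
    (walk on all integers) with the same +/-1 increments, and record the
    largest gap between them."""
    rng = random.Random(seed)
    reflected, unreflected, max_gap = 0, 0, 0
    for _ in range(n_steps):
        step = 1 if rng.random() < 1 - 1 / k else -1
        unreflected += step
        # At the root, the tree walk must move away from it.
        reflected += 1 if reflected == 0 else step
        max_gap = max(max_gap, abs(reflected - unreflected))
    return reflected, unreflected, max_gap

if __name__ == "__main__":
    k, n = 3, 10**6
    d_n, s_n, gap = coupled_chains(k, n)
    print(f"d_n/n = {d_n / n:.4f}, S_n/n = {s_n / n:.4f}, max gap = {gap}")
```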
answered Dec 2 at 19:34
Misha Lavrov
I am having a hard time formally arguing that I get the same limit as the chain on the integers, can you elaborate on this a bit?
– jackson5
Dec 2 at 21:39
Couple the two chains so that whatever the integer chain does, so does the tree chain unless it's at the root. If, in the first $n$ steps, the tree chain visits the root $r_n$ times, then its behavior differs from the integer chain's by at most $2r_n$. Since $r_n$ is bounded by a constant with probability $1$, $\lim_{n\to \infty} \frac{2r_n}{n} = 0$ (again, with probability $1$) and so you can ignore this effect.
– Misha Lavrov
Dec 2 at 21:48
(To be more precise, "$r_n$ is bounded by a constant with probability $1$" is bad phrasing; I mean to say that $\lim_{n \to \infty} r_n$ is finite with probability $1$. How this works out in practice is that every time you leave the root there's a probability, depending on $k$, that you never return.)
– Misha Lavrov
Dec 2 at 22:37
Yes, that makes sense!
– jackson5
Dec 2 at 22:39