Why are inter-arrival times in the continuous version of discrete-time Markov chains always exponentially distributed?














I am curious whether there exist continuous-time Markov processes for which the times between jumps (which I call inter-arrival times) are not exponentially distributed but instead follow some other distribution (e.g. the uniform distribution).



Since people reserve the name "continuous-time Markov chain" for the version in which the inter-arrival times are exponentially distributed, this seems to suggest that continuous-time Markov processes can only have exponentially distributed inter-arrival times. Is this true?



Background information for visual thinkers (reading is optional)

In discrete-time Markov chains, transitions (taking arrows) occur at integer times; roughly speaking, they happen at t = 1, t = 2, t = 3, t = 4, and so on. The times at which we take arrows are not depicted in the picture. Just imagine a dynamic picture in which you occupy one of the states of the system and change state every second (after each second you are back in the same state, or in another state, depending on which transition you chose). The time until we transition is a known quantity; let us say it is 1 for simplicity.



[Figure: a discrete-time Markov chain, with states connected by transition arrows]



In the picture above you can see such a discrete-time Markov chain. In the continuous case, transitions (= taking an arrow) can also occur at non-integer times, e.g. at times 1.22344354, 5.546346, 23.2345234, 28.45, and so on. The time until the next arrow is no longer deterministic as in the previous case (where it always took us time 1 to jump) but random. Thus we might jump after t = 1.334, t = 3543.1, t = 0.345, t = 2.3535, etc. In continuous-time Markov chains the time until the next jump is chosen to be exponentially distributed (see http://en.wikipedia.org/wiki/Continuous-time_Markov_chain). On what grounds did we choose the exponential distribution? Is the exponential distribution the only possible choice?



In my initial question I also spoke about inter-arrival times and jumping. By the inter-arrival time I mean the time between two transitions: you take an arrow and start your chronometer, then stop it just before you take an arrow again. The inter-arrival time tells you how many seconds you needed before jumping to a new state. The jumping times are the times at which the actual transitions happened. In the discrete case the jumping times were 1, 2, 3, 4, 5, ...; in the text above the jumping times were 1.22344354, 5.546346, 23.2345234, 28.45, and so on.
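The dynamics described above can be sketched in a few lines of code. The following is my own minimal illustration, not part of the original question: the two states, the rates, and the function name are made up for the example. It simulates a continuous-time chain by drawing each holding time from an exponential distribution, producing exactly the kind of non-integer jump times mentioned above.

```python
import random

def simulate_ctmc(rates, transitions, start, horizon, rng):
    """Return the list of (jump_time, new_state) pairs up to `horizon`.

    Holding times are exponential with the current state's exit rate,
    which is what makes this a continuous-time Markov chain.
    """
    t, state, jumps = 0.0, start, []
    while True:
        # Inter-arrival time: exponential with the current state's rate.
        t += rng.expovariate(rates[state])
        if t > horizon:
            return jumps
        state = rng.choice(transitions[state])
        jumps.append((t, state))

rng = random.Random(0)
jumps = simulate_ctmc(
    rates={"A": 1.0, "B": 2.0},            # exit rates, chosen arbitrarily
    transitions={"A": ["B"], "B": ["A"]},  # a simple two-state chain
    start="A", horizon=10.0, rng=rng)
# The jump times come out as non-integer values, just as described above.
```

Replacing `expovariate` with any other sampler still yields a jump process, but, as the answer below discusses, it is then no longer Markov at all times.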










  • The exponential distribution is the unique continuous probability distribution which enjoys the memoryless property (en.wikipedia.org/wiki/Memorylessness). This guarantees that the chronometers are "independent of each other". Any other distribution for these chronometers creates dependencies, correlations between their outcomes.
    – alezok, Mar 4 '15 at 23:16










  • But is there anything wrong with having correlations between the outcomes? If you condition on a jumping time, can you then perfectly predict what the future outcomes will be? Or am I wrong?
    – Pedro, Mar 4 '15 at 23:20










  • There is nothing wrong with correlations. However, knowing only the current state of the Markov chain is then not enough to fully describe the system and its evolution. But maybe I misunderstood and you have in mind a single random chronometer which samples the inter-arrival times independently of the current state of the Markov chain? In that case the system can be fully described by such a clock together with a discrete-time Markov chain, usually called the "jump chain". But then again, in this case there will be no dependencies between the jumps and when they occur.
    – alezok, Mar 5 '15 at 8:38
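The "jump chain plus independent clock" construction from the comment above can be sketched as follows. This is my own illustration (the transition matrix, states, and function name are invented for the example): with a uniform clock instead of an exponential one, the result is a semi-Markov process rather than a CTMC.

```python
import random

def simulate_semi_markov(P, start, n_jumps, rng):
    """Follow the discrete jump chain P; hold each state a Uniform(0, 2) time.

    P maps each state to a dict of {next_state: probability}.  Because the
    holding time is not exponential, the process is semi-Markov: the Markov
    property holds only at the jump times.
    """
    t, state, path = 0.0, start, []
    for _ in range(n_jumps):
        t += rng.uniform(0.0, 2.0)  # non-exponential holding time
        state = rng.choices(list(P[state]), weights=list(P[state].values()))[0]
        path.append((t, state))
    return path

rng = random.Random(1)
P = {"A": {"B": 1.0}, "B": {"A": 0.5, "B": 0.5}}
path = simulate_semi_markov(P, "A", 5, rng)
```

Mid-way through a holding period, the elapsed time in the current state carries information about when the next jump will occur, which is exactly the dependency the comment describes.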
















stochastic-processes markov-chains markov-process visualization






edited Mar 4 '15 at 23:16







Pedro

















asked Mar 4 '15 at 22:13









Pedro

1 Answer

Old question but I'll take a stab at it...



The following is adapted from the 1975 paper by Cinlar on Markov renewal theory. We assume herein that the state space is countable.



Let $Y(t)$ denote the state of a continuous time Markov chain (CTMC) at time $t$ (hence, by definition, the inter-arrival times are exponentially distributed). Then one can show that (equation (1.15) in the linked paper):
$$
\Pr\big[Y(t+s)=j \mid Y(u)\text{ for all }u\in[0,t]\big]=\Pr\big[Y(t+s)=j \mid Y(t)\big]
$$

for all states $j$ and scalars $s,t\geqslant 0$. In other words, the Markov property holds: the state at a future time $t+s$ depends only on the state at time $t$, not on anything before. If the inter-arrival times are not exponentially distributed, then the above property doesn't hold for all times $t$; it holds only at the jump times (denoted $T_n$ in the paper).



TL;DR: By assuming the inter-arrival times are exponential, the Markov property holds at all times, not just some special times.
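To make the memorylessness point from the comment thread concrete, here is a small Monte Carlo check of my own (not from the linked paper): for an exponential clock the conditional tail probability P(X > s+t | X > s) matches the unconditional P(X > t), while for a uniform clock it does not. The sample sizes and thresholds below are arbitrary choices for the demonstration.

```python
import random

def tail_ratio(samples, s, t):
    """Estimate P(X > s+t | X > s) from a list of samples."""
    survived_s = [x for x in samples if x > s]
    return sum(x > s + t for x in survived_s) / len(survived_s)

rng = random.Random(42)
n, s, t = 200_000, 0.5, 0.7
exp_samples = [rng.expovariate(1.0) for _ in range(n)]
uni_samples = [rng.uniform(0.0, 2.0) for _ in range(n)]

# Memoryless: for Exp(1), P(X > s+t | X > s) = P(X > t) = e^{-t}.
exp_cond = tail_ratio(exp_samples, s, t)
exp_plain = sum(x > t for x in exp_samples) / n

# Not memoryless: for Uniform(0, 2) the two probabilities differ,
# since P(X > 1.2 | X > 0.5) = 0.8/1.5 while P(X > 0.7) = 1.3/2.
uni_cond = tail_ratio(uni_samples, s, t)
uni_plain = sum(x > t for x in uni_samples) / n
```

Having already waited $s$ tells you nothing about the remaining wait only in the exponential case, which is why only that choice lets the Markov property hold at every time $t$ and not just at the jump times.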






  • David M, thank you. This makes a lot of sense to me.
    – Pedro, Jan 27 at 22:48












answered Jan 14 at 1:08









David M.
