Local error per unit step
The solution of the ODE $$ y' = f(t,y) $$ is sought.
Let $u_{m}$ be the numerical solution produced by a one-step method and $y(t_m)$ the true solution. The local error $e_{loc}$ is then defined as $$e_{loc}(t_m) := y(t_m) - u_m.$$
Related to this definition is the local error per unit step, defined as
$$e_{loc}(t+h)/h, \quad h \text{ being the step size.}$$
What is the deeper meaning of the definition of the local error per unit step?
Any help would be greatly appreciated.
ordinary-differential-equations truncation-error
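To get a concrete feel for the two quantities, here is a minimal Python sketch (an illustration only; the explicit Euler method and the test problem $y' = -y$, $y(0) = 1$ are assumptions, not part of the question). Starting each step from the exact value isolates the local error; for Euler one observes $e_{loc} = O(h^2)$ while $e_{loc}/h = O(h)$.

```python
import math

f = lambda t, y: -y                # illustrative test ODE y' = -y
y_exact = lambda t: math.exp(-t)   # its exact solution with y(0) = 1

t = 0.0
for h in (0.1, 0.05, 0.025):
    # one explicit Euler step, started from the exact value y(t)
    u = y_exact(t) + h * f(t, y_exact(t))
    e_loc = y_exact(t + h) - u     # local error: O(h^2) for Euler
    print(f"h={h:5.3f}  e_loc={e_loc: .3e}  e_loc/h={e_loc / h: .3e}")
```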
edited Dec 31 '18 at 12:05 – LutzL
asked Dec 30 '18 at 22:01 – philo1707
Welcome to MSE. I believe that when comparing various errors, they should be "normalized" in some way so that the comparison is reasonable (e.g., comparing an error over a length of $1$ to one over a length of a billion doesn't usually provide much useful information). Dividing by $h$ means that the result is roughly what the local error is, on average, per unit change of $t$.
– John Omielan
Dec 30 '18 at 22:06
Could you add more context? One area where this is important is adaptive step sizes. One criterion for controlling the step size is to hold the "local error per unit step" constant at the level of the prescribed tolerance: if $e_{loc}(t,h)=C(t)h^{p+1}+\dots$, one strives for $e_{loc}(t,h)/h=C(t)h^{p}=\epsilon$, so that the global error is approximately $\sum C(t_k)h_k^{p+1}=\epsilon\sum h_k=\epsilon(t_{final}-t_{init})$.
– LutzL
Dec 31 '18 at 12:11
Thanks so much for the very quick answers so far. The context is actually very general: the definition is used in a book in the analysis of the consistency of a general one-step method for solving ODEs numerically.
– philo1707
Dec 31 '18 at 15:51
Unfortunately, the book is written in German. The author is Karl Strehmel, and the book is "Numerik gewöhnlicher Differentialgleichungen", page 24. springer.com/de/book/9783834818478
– philo1707
Dec 31 '18 at 15:53
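To make the step-size-control idea from LutzL's comment concrete, here is a minimal sketch of such a controller (the explicit Euler method, the step-doubling error estimate, and the test problem $y' = -y$ are illustrative assumptions, not taken from the book): a step is accepted when the estimated local error per unit step falls below the tolerance $\epsilon$, and the next step size is chosen so that $C(t)h^{p} \approx \epsilon$.

```python
import math

f = lambda t, y: -y     # illustrative test problem
eps = 1e-4              # target local error per unit step
p = 1                   # order of explicit Euler, so e_loc ~ C(t) h^(p+1)

def euler_step(t, y, h):
    return y + h * f(t, y)

t, y, h = 0.0, 1.0, 0.1
t_end = 2.0
while t < t_end:
    h = min(h, t_end - t)
    big = euler_step(t, y, h)                                     # one step of size h
    half = euler_step(t + h / 2, euler_step(t, y, h / 2), h / 2)  # two steps of size h/2
    est = abs(half - big)             # rough estimate of the local error of the big step
    if est / h <= eps:                # error per unit step small enough: accept the step
        t, y = t + h, half
    # aim for C(t) * h_new^p = eps on the next attempt (0.9 is a safety factor)
    h *= 0.9 * (eps / max(est / h, 1e-16)) ** (1.0 / p)

print(f"y({t:.2f}) ≈ {y:.6f}, exact {math.exp(-t):.6f}")
```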
1 Answer
The local error measures how much error you make taking one step of length $h$. If you cut the step size in half, you need to take twice as many steps to cover a given distance, so unless the local error is cut in half as well, you are worse off taking the shorter steps. Dividing by the step size normalizes this, so the local error per unit step approximates the error you make while advancing the independent variable by $1$, regardless of the step size you take.
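A quick numerical check of this normalization argument (a sketch; the explicit Euler method and the test problem $y' = -y$, $y(0) = 1$ on $[0,2]$ are assumptions made for illustration): halving the step size roughly halves both the largest local error per unit step and the global error at the endpoint, even though the raw local error shrinks by a factor of four.

```python
import math

f = lambda t, y: -y
y_exact = lambda t: math.exp(-t)

for h in (0.1, 0.05, 0.025):
    t, y = 0.0, 1.0
    worst_per_unit = 0.0
    while t < 2.0 - 1e-12:
        # local error of the step starting from the exact value at t
        e_loc = y_exact(t + h) - (y_exact(t) + h * f(t, y_exact(t)))
        worst_per_unit = max(worst_per_unit, abs(e_loc) / h)
        y = y + h * f(t, y)          # the actual Euler update
        t += h
    print(f"h={h:5.3f}  max e_loc/h={worst_per_unit:.2e}  global error={abs(y_exact(t) - y):.2e}")
```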
answered Dec 31 '18 at 0:08 – Ross Millikan