Euler's Method Global Error: How to calculate $C_1$ if $error = C_1 h$
My textbook claims that, for small step size $h$, Euler's method has a global error which is at most proportional to $h$ such that error $= C_1h$. It is then claimed that $C_1$ depends on the initial value problem, but no explanation is given as to how one finds $C_1$.
So if I know $h$, then how can I deduce $C_1$ from the IVP?
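A quick way to see this behavior, and to estimate $C_1$ empirically for a given IVP, is to run Euler's method with several step sizes and watch the ratio error$/h$ settle toward a constant. A minimal Python sketch, using $y'=y$, $y(0)=1$ on $[0,1]$ as an assumed test problem (exact solution $y(t)=e^t$):

```python
import math

def euler_final_error(f, y0, a, b, h, exact):
    """Integrate y' = f(t, y) from a to b with Euler's method and
    step size h; return the global error |y_N - y(b)| at the endpoint."""
    t, y = a, y0
    n = round((b - a) / h)
    for i in range(n):
        y += h * f(t, y)
        t = a + (i + 1) * h
    return abs(y - exact(b))

# Test problem: y' = y, y(0) = 1 on [0, 1], exact solution y(t) = e^t.
# As h shrinks, error/h settles near a constant C_1 (here e/2 ≈ 1.36).
for h in (0.1, 0.05, 0.025, 0.0125):
    err = euler_final_error(lambda t, y: y, 1.0, 0.0, 1.0, h, math.exp)
    print(f"h = {h:<7} error = {err:.6f}   error/h = {err/h:.4f}")
```

Halving $h$ roughly halves the error, which is exactly the error $\approx C_1h$ statement; the answers below explain where the constant comes from.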
ordinary-differential-equations numerical-methods eulers-method
asked Jul 21 '17 at 8:59 by The Pointer, edited Jul 21 '17 at 9:05
2 Answers
Given an IVP:
$$\frac{dy}{dt}=f(t,y),\quad y(a)=y_0,\quad t\in [a,b].$$
Here is a theorem from Numerical Analysis by Sauer:
Assume that $f(t,y)$ has a Lipschitz constant $L$ for the variable $y$ and that the solution $y_i$ of the initial value problem at $t_i$ is approximated by $w_i$, using Euler's method. Let $M$ be an upper bound for $|y''(t)|$ on $[a,b]$. Then $$|w_i-y_i|\le \frac{Mh}{2L}\bigl(e^{L(t_i-a)}-1\bigr).$$
The proof is based on the following lemma:
Assume that $f(t,y)$ is Lipschitz with constant $L$ in the variable $y$ on the set $S=[a,b]\times [\alpha,\beta]$. If $Y(t)$ and $Z(t)$ are solutions in $S$ of the differential equation $y'=f(t,y)$ with initial conditions $Y(a)$ and $Z(a)$ respectively, then $$|Y(t)-Z(t)|\le e^{L(t-a)}|Y(a)-Z(a)|.$$
Sketch of proof of the first theorem:
Let $g_i$ be the global error and $e_i$ the local truncation error at step $i$, and let $z_{i-1}$ satisfy the local IVP
$$z_{i-1}'=f(t,z_{i-1}),\quad z_{i-1}(t_{i-1})=w_{i-1},\quad t\in [t_{i-1},t_i].$$
Then, bounding the second term with the lemma,
$$g_i=|w_i-y_i|\le |w_i-z_{i-1}(t_i)|+|z_{i-1}(t_i)-y_i|\\
\le e_i+e^{Lh}g_{i-1}\\
\le e_i+e^{Lh}(e_{i-1}+e^{Lh}g_{i-2})\le \cdots\\
\le e_i+e^{Lh}e_{i-1}+e^{2Lh}e_{i-2}+\cdots +e^{(i-1)Lh}e_1.$$
Since each $e_i\le \frac{h^2M}{2}$, we have
$$g_i\le \frac{h^2M}{2}\bigl(1+e^{Lh}+\cdots+e^{(i-1)Lh}\bigr)=\frac{h^2M(e^{iLh}-1)}{2(e^{Lh}-1)}\le \frac{Mh}{2L}\bigl(e^{L(t_i-a)}-1\bigr),$$
where the last step uses $e^{Lh}-1\ge Lh$ and $ih=t_i-a$.
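The theorem's bound can be checked numerically. A small Python sketch, assuming the test problem $y'=y$, $y(0)=1$ on $[0,1]$ (for which $L=1$ and $M=\max|y''|=e$), verifies $|w_i-y_i|\le \frac{Mh}{2L}(e^{L(t_i-a)}-1)$ at every step:

```python
import math

# Test problem: y' = y, y(0) = 1 on [0, 1].
# Lipschitz constant L = 1; M = max |y''(t)| = e on [0, 1].
a, b, h = 0.0, 1.0, 0.01
L, M = 1.0, math.e

n = round((b - a) / h)
w = 1.0
for i in range(1, n + 1):
    w += h * w                       # Euler step for f(t, y) = y
    t_i = a + i * h
    err = abs(w - math.exp(t_i))     # true global error |w_i - y_i|
    bound = M * h / (2 * L) * (math.exp(L * (t_i - a)) - 1)
    assert err <= bound, (t_i, err, bound)

print("Sauer's bound holds at every step for h =", h)
```

So for this IVP the theorem gives the computable constant $C_1=\frac{M}{2L}(e^{L(b-a)}-1)$; the observed errors stay well inside it.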
Hope this helps.
answered Jul 21 '17 at 10:17 by KittyL
Consider the IVP $y'=f(t,y)$, $y(t_0)=y_0$. Let $t_k=t_0+kh$, with $y_k$ computed by the Euler method. We know that the error order of the Euler method is one, so the iterates $y_k$ have an error relative to the exact solution $y(t)$ of the form $$y_k=y(t_k)+c_kh$$ with coefficients $c_k$ that will be determined more precisely in the course of this answer.
Now insert this representation of $y_k$ into the Euler step and apply Taylor expansion where appropriate:
\begin{align}
[y(t_{k+1})+c_{k+1}h]&=[y(t_k)+c_kh]+hf(t_k,[y(t_k)+c_kh])\\
&=y(t_k)+c_kh+h\Bigl(f(t_k,y(t_k))+h\,\partial_yf(t_k,y(t_k))\,c_k+O(h^2)\Bigr),\\
y(t_k)+hy'(t_k)+\tfrac12h^2y''(t_k)+c_{k+1}h+O(h^3)&=y(t_k)+hy'(t_k)+h\Bigl[c_k+h\,\partial_yf(t_k,y(t_k))\,c_k\Bigr]+O(h^3),
\end{align}
where $\partial_y=\frac{\partial}{\partial y}$ and later $\partial_t=\frac{\partial}{\partial t}$.
In the Taylor series for $y(t_{k+1})=y(t_k+h)$ on the left side, the first two terms cancel against the same terms on the right side. The second derivative can be written as
$$
y''(t)=\frac{d}{dt}f(t,y(t))
=\partial_tf(t,y(t))+\partial_yf(t,y(t))\,f(t,y(t))
\overset{\rm Def}{=}Df(t,y(t)).
$$
Divide the remaining equation by $h$ and rearrange to get a difference equation for $c_k$:
$$
c_{k+1}=c_k+h\Bigl[\partial_yf(t_k,y(t_k))\,c_k-\tfrac12Df(t_k,y(t_k))\Bigr]+O(h^2).
$$
This looks like the Euler method applied to the linear ODE for a continuously differentiable function $c$,
$$
c'(t)=\partial_yf(t,y(t))\,c(t)-\tfrac12Df(t,y(t)),\quad\text{with}\quad c(t_0)=0.
$$
Again, by the first-order convergence of the Euler method, $c_k$ and $c(t_k)$ differ by $O(h)$, so that the error we aim to estimate is
$$y_k-y(t_k)=c(t_k)h+O(h^2).$$
Now if $L$ is a bound for $\partial_yf$ (the $y$-Lipschitz constant) and $M$ is a bound for $Df=\partial_tf+\partial_yf\,f$, that is, for the second derivative $y''$, then by Grönwall's lemma
$$
|c'|\le L|c|+\frac12M\implies |c(t)|\le \frac{M\bigl(e^{L|t-t_0|}-1\bigr)}{2L},
$$
which reproduces the usual estimate of the coefficient of the error term.
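This leading-order description is easy to check numerically. A Python sketch, assuming the test problem $y'=y$, $y(0)=1$ (for which $\partial_yf=1$ and $Df=y$, so the coefficient ODE becomes $c'=c-\tfrac12e^t$, $c(0)=0$, with exact solution $c(t)=-\tfrac12te^t$), compares the measured $(y_k-y(t_k))/h$ against $c(t_k)$:

```python
import math

def c_exact(t):
    # Exact solution of c' = c - e^t/2, c(0) = 0 (test problem y' = y).
    return -0.5 * t * math.exp(t)

h = 0.001
n = round(1.0 / h)
y = 1.0
for k in range(1, n + 1):
    y += h * y                        # Euler step for f(t, y) = y

t = 1.0
measured = (y - math.exp(t)) / h      # (y_k - y(t_k)) / h
print(measured, c_exact(t))           # both near -e/2 ≈ -1.359
```

For this problem $|c(1)|=e/2$, matching the empirical constant $C_1\approx 1.36$ one observes when halving $h$ on $[0,1]$.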
answered Jan 15 at 18:49 by LutzL, edited Jan 16 at 18:23