“Standard methods of the calculus of variations” or “Do you read German?”
I'm trying to understand an article by Reinsch (1967) on smoothing spline functions. The author uses some rules that, unfortunately, I couldn't find anywhere.



The minimized functional is:
$$
\int_{x_0}^{x_n} g''(x)^2\,dx + p\left\lbrace \sum_{i=0}^n \left(\frac{g(x_i)-y_i}{\delta y_i}\right)^2 + z^2 - S \right\rbrace
$$



With this, the author deduces that:
$$
\forall i,\quad f^{(3)}(x_i)_{-} - f^{(3)}(x_i)_{+} = 2p\,\frac{f(x_i)-y_i}{\delta y_i}
$$

where $f^{(3)}$ is undefined at each $x_i$, $f^{(3)}(x_i)_{-}$ is the limit from the left at $x_i$, and $f^{(3)}(x_i)_{+}$ the limit from the right.



This is the point I don't get. I understand how to compute the derivative of a functional when it is formulated as an integral, but this one isn't. I guess there are some rules I missed.



The author cites a book (Variationsrechnung und ihre Anwendung in Physik und Technik, Funk, 1962), but unfortunately I can't read German, and I couldn't find any other source corroborating Reinsch's reasoning.



What is the differentiation rule I missed? Is there a source (in English or in French) that corroborates the author's computation?



Thanks!
functional-analysis calculus-of-variations spline

asked Dec 19 '18 at 16:19 by Ibujah (edited Dec 19 '18 at 16:38)
  • You should probably note that Reinsch specifically deduces that condition by considering the Euler–Lagrange equations; you might want to look at en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation as well. – postmortes, Dec 19 '18 at 17:06
1 Answer
Consider the functional
$$
F(g)=\int_{x_{0}}^{x_{n}}(g''(x))^{2}\,dx+p\sum_{i=0}^{n}\left(\frac{g(x_{i})-y_{i}}{\delta y_{i}}\right)^{2}
$$

and let $f$ be a minimizer over all functions $g\in C^{2}([x_{0},x_{n}])$. Then,
taking $f+th$ for $t\in\mathbb{R}$ and $h\in C^{2}([x_{0},x_{n}])$, you have
$$
F(f+th)\geq F(f),
$$

and so the one-variable function $k(t)=F(f+th)$ has a minimum at $t=0$. Hence,
$k'(0)=0$. So if we now differentiate under the integral sign, we get
\begin{align*}
k'(t) & =\frac{d}{dt}F(f+th)=\frac{d}{dt}\int_{x_{0}}^{x_{n}}(f''(x)+th''(x))^{2}\,dx+p\,\frac{d}{dt}\sum_{i=0}^{n}\left(\frac{f(x_{i})+th(x_{i})-y_{i}}{\delta y_{i}}\right)^{2}\\
& =\int_{x_{0}}^{x_{n}}2(f''(x)+th''(x))h''(x)\,dx+2p\sum_{i=0}^{n}\frac{(f(x_{i})+th(x_{i})-y_{i})h(x_{i})}{(\delta y_{i})^{2}}.
\end{align*}

Taking $t=0$ gives
$$
0=k'(0)=\int_{x_{0}}^{x_{n}}2f''(x)h''(x)\,dx+2p\sum_{i=0}^{n}\frac{(f(x_{i})-y_{i})h(x_{i})}{(\delta y_{i})^{2}}.
$$
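
This last step is the rule in question: the non-integral terms are differentiated by the ordinary chain rule in $t$; the first variation (Gâteaux derivative) treats point evaluations like any other function of $t$. For a term $\Phi(g(x_i))$ with $\Phi$ smooth,
$$
\frac{d}{dt}\,\Phi\bigl(f(x_i)+th(x_i)\bigr)\Big|_{t=0}=\Phi'\bigl(f(x_i)\bigr)\,h(x_i).
$$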

This is true for all $h\in C^{2}([x_{0},x_{n}])$. We now play with $h$. Fix
$i$ and consider functions $h$ which are zero except on $(x_{i-1},x_{i})$.
Then
$$
0=\int_{x_{i-1}}^{x_{i}}f''(x)h''(x)\,dx.
$$

By Weyl's lemma, this implies that $f''$ has two derivatives in
$(x_{i-1},x_{i})$ and that $f''''(x)=0$ in each interval $(x_{i-1},x_{i})$.
Thus, $f'''$ is constant in each interval $(x_{i-1},x_{i})$ but can jump at
each $x_{i}$. To find the constants, fix $0<i<n$ and take $h\in
C^{4}([x_{0},x_{n}])$ which is zero outside of $(x_{i}-\delta,x_{i}+\delta)$,
where $\delta<\min\{x_{i}-x_{i-1},x_{i+1}-x_{i}\}$. Then
\begin{align*}
0 & =\int_{x_{i}-\delta}^{x_{i}+\delta}f''(x)h''(x)\,dx+p\,\frac{(f(x_{i})-y_{i})h(x_{i})}{(\delta y_{i})^{2}}\\
& =\int_{x_{i}-\delta}^{x_{i}}f''(x)h''(x)\,dx+\int_{x_{i}}^{x_{i}+\delta}f''(x)h''(x)\,dx+p\,\frac{(f(x_{i})-y_{i})h(x_{i})}{(\delta y_{i})^{2}}.
\end{align*}
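
(The computation below uses nothing deeper than integration by parts applied twice: for sufficiently smooth $u$ and $h$ on an interval $[a,b]$,
$$
\int_{a}^{b}u''(x)\,h''(x)\,dx=\Bigl[u''(x)h'(x)-u'''(x)h(x)\Bigr]_{a}^{b}+\int_{a}^{b}u''''(x)\,h(x)\,dx,
$$
applied on each half-interval, with $f'''$ and $f''''$ taken one-sided.)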

Integrating by parts twice in both integrals, and using the facts that
$f''''=0$ in each open interval and that $h$ and its derivatives up to order
$4$ are zero at $x_{i}\pm\delta$, we get
\begin{align*}
\int_{x_{i}-\delta}^{x_{i}}f''(x)h''(x)\,dx & =-\int_{x_{i}-\delta}^{x_{i}}f'''(x)h'(x)\,dx+f''(x_{i})h'(x_{i})-0\\
& =\int_{x_{i}-\delta}^{x_{i}}f''''(x)h(x)\,dx+0-f'''_{-}(x_{i})h(x_{i})+f''(x_{i})h'(x_{i})\\
& =0-f'''_{-}(x_{i})h(x_{i})+f''(x_{i})h'(x_{i}).
\end{align*}

Similarly,
\begin{align*}
\int_{x_{i}}^{x_{i}+\delta}f''(x)h''(x)\,dx & =-\int_{x_{i}}^{x_{i}+\delta}f'''(x)h'(x)\,dx+0-f''(x_{i})h'(x_{i})\\
& =\int_{x_{i}}^{x_{i}+\delta}f''''(x)h(x)\,dx-0+f'''_{+}(x_{i})h(x_{i})-f''(x_{i})h'(x_{i})\\
& =0+f'''_{+}(x_{i})h(x_{i})-f''(x_{i})h'(x_{i}).
\end{align*}

Hence, if we combine the last three equations we get
\begin{align*}
0 & =-f'''_{-}(x_{i})h(x_{i})+f''(x_{i})h'(x_{i})+f'''_{+}(x_{i})h(x_{i})-f''(x_{i})h'(x_{i})+p\,\frac{(f(x_{i})-y_{i})h(x_{i})}{(\delta y_{i})^{2}}\\
& =-f'''_{-}(x_{i})h(x_{i})+f'''_{+}(x_{i})h(x_{i})+p\,\frac{(f(x_{i})-y_{i})h(x_{i})}{(\delta y_{i})^{2}}.
\end{align*}

Now take $h$ such that $h(x_{i})=1$ and you get
$$
0=-f'''_{-}(x_{i})+f'''_{+}(x_{i})+p\,\frac{f(x_{i})-y_{i}}{(\delta y_{i})^{2}},
$$
which, up to the normalization of the multiplier $p$, is the jump condition stated by Reinsch.

For $i=0$ and $i=n$ you do something similar.
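
If you want to see the condition numerically, here is a small sketch of mine (not Reinsch's code; it assumes SciPy >= 1.10, which provides make_smoothing_spline). That routine solves the closely related penalized problem of minimizing $\sum_i (y_i-g(x_i))^2 + \lambda\int g''(x)^2\,dx$, so the same variational argument says the jump of $g'''$ at each interior knot should be proportional to the residual $y_i-g(x_i)$, with one constant of proportionality shared by all knots:

    # Numerical check of the jump condition for a penalized smoothing spline.
    import numpy as np
    from scipy.interpolate import make_smoothing_spline

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 15)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

    spl = make_smoothing_spline(x, y, lam=1e-3)
    d3 = spl.derivative(3)  # g''' is piecewise constant between the knots

    for i in range(1, x.size - 1):
        # one-sided values of g''' at the knot x_i, read off at the
        # midpoints of the two adjacent intervals
        left = float(d3(0.5 * (x[i - 1] + x[i])))
        right = float(d3(0.5 * (x[i] + x[i + 1])))
        resid = float(y[i] - spl(x[i]))
        print(f"knot {i:2d}: jump/residual = {(right - left) / resid:.3f}")
    # the printed ratio should be (nearly) the same at every interior knot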
answered Dec 19 '18 at 19:52 by Gio67 (edited Dec 19 '18 at 19:58)
  • That was very helpful, thank you! – Ibujah, Dec 20 '18 at 9:06