Conditions for local and global optimality
Consider the everywhere twice differentiable function $f:\mathbb R^n\to\mathbb R$, the closed and convex set $\mathcal S$, and the optimization problem



$$
\min_{x\in\mathcal S} \; f(x).
$$



Is there an easy / intuitive way of proving the following two statements?




  1. $x = x^*$ is a local minimizer if $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succ 0$. In particular, the weaker condition $\nabla^2 f(x^*) \succeq 0$ is not sufficient: as a counterexample, consider $f(x) = x^3$, where at $x = 0$ we have $\nabla f(0) = 0$ and $\nabla^2 f(0) = 0$, but $0$ is not a minimum, since $f(-\epsilon) = -\epsilon^3 < 0 = f(0)$ for every $\epsilon > 0$.


  2. $x = x^*$ is a global minimizer if $\nabla f(x^*) = 0$ and $\nabla^2 f(x) \succeq 0$ for all $x\in \mathcal S$.



The second statement in particular is well known in the convex optimization literature. However, I wonder if there is a clean proof, to reassure ourselves that there are no corner cases (like the one found in case 1).
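
As a quick numerical sanity check of the counterexample in statement 1 (an editorial illustration, not part of the original post), finite differences confirm that $f(x) = x^3$ passes both the first- and second-order tests at $0$, yet $f$ still takes smaller values arbitrarily close to $0$:

    # f(x) = x^3: stationary with vanishing second derivative at 0,
    # so the PSD test passes, yet 0 is not a local minimum.
    def f(x):
        return x ** 3

    h = 1e-5
    grad = (f(h) - f(-h)) / (2 * h)            # central difference ~ f'(0) = 0
    hess = (f(h) - 2 * f(0) + f(-h)) / h ** 2  # central difference ~ f''(0) = 0

    print(grad, hess)       # both ~ 0
    print(f(-1e-3) < f(0))  # True: f is negative just left of 0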










optimization convex-optimization

asked Dec 23 '18 at 3:10 (edited Dec 23 '18 at 3:55) – Y. S.

  • I'm a bit confused. Is $f$ convex? – BigbearZzz, Dec 23 '18 at 3:14










  • In the second case, yes (but not strictly convex); in the first, no. But the only assumption I'm stating is that $f$ is twice differentiable over $x\in \mathcal S$; the rest should follow from the stated conditions. – Y. S., Dec 23 '18 at 3:17










  • What is $S$? Is $S$ convex, closed, bounded, etc.? – copper.hat, Dec 23 '18 at 3:54










  • Let's assume $S$ is closed and convex, not necessarily bounded. (Question amended.) – Y. S., Dec 23 '18 at 3:55










  • I think the answer might be as simple as: every stationary point is a local min, a local max, or a saddle point. If the function has a PSD Hessian everywhere, then stationary points in the interior of $S$ must be local minima. Saddles can still occur at the boundary, but the "descending" directions must lie outside of $S$. Anyway, the question is probably unnecessarily pedantic; I mostly wanted to check my understanding and make sure there are no strange corner cases in case 2 (see the sketch after these comments). – Y. S., Dec 23 '18 at 4:01
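
To illustrate the PSD-but-not-PD situation raised in the last comment (an editorial sketch with a made-up example, not from the thread): $f(x, y) = x^2$ has a PSD Hessian everywhere, and its stationary points form the whole line $x = 0$, every one of them a global minimizer; no saddle is possible.

    # f(x, y) = x^2: constant Hessian with eigenvalues {0, 2}, i.e. PSD but
    # singular; the stationary set {x = 0} is a flat valley of global minima.
    import numpy as np

    H = np.array([[2.0, 0.0],
                  [0.0, 0.0]])            # Hessian of x^2 in (x, y)
    print(np.linalg.eigvalsh(H))          # [0. 2.] -> PSD, not PD

    f = lambda p: p[0] ** 2
    valley = [np.array([0.0, y]) for y in np.linspace(-1.0, 1.0, 5)]
    print([f(p) for p in valley])         # all 0.0: each point is a global min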
1 Answer
Yes to both of the questions (assuming that $\nabla^2 f$ is continuous for the first question).



The result follows from the multivariable Taylor theorem:
$$
f(x+v) = f(x) + \nabla f(x)\cdot v + \tfrac12\,\bigl(\nabla^2 f(x+\theta v) : v\otimes v\bigr)
$$

for some $\theta\in(0,1)$, where $A : v\otimes v = v^\top A v$. Letting $x=x^*$, the gradient term vanishes and this reduces to
$$
f(x^*+v) - f(x^*) = \tfrac12\,\bigl(\nabla^2 f(x^*+\theta v) : v\otimes v\bigr),
$$

which implies (2.): since $\mathcal S$ is convex, the intermediate point $x^*+\theta v$ lies in $\mathcal S$ whenever $x^*+v$ does, so the right-hand side is nonnegative by the assumption $\nabla^2 f(x) \succeq 0$ on $\mathcal S$.



For (1.), the assumption $\nabla^2 f(x^*) \succ 0$ together with the continuity of $\nabla^2 f$ means that $\nabla^2 f(x^*+\theta v) \succ 0$ for all sufficiently small $v$, because the set of positive definite matrices is open (see this question); the formula above then shows that $f(x^*+v) - f(x^*) > 0$ for all sufficiently small $v \neq 0$.
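
As a small numerical companion to the argument for (1.) (an editorial sketch using the made-up toy function $f(x, y) = x^2 + xy + y^2$): the Hessian here is constant and positive definite, so the quadratic remainder in Taylor's theorem is strictly positive for every nonzero $v$, and the origin is a strict minimizer.

    # Check positive definiteness of the Hessian via eigenvalues, then verify
    # that f increases in random small directions away from the origin.
    import numpy as np

    f = lambda p: p[0] ** 2 + p[0] * p[1] + p[1] ** 2
    H = np.array([[2.0, 1.0],
                  [1.0, 2.0]])            # Hessian of f (constant)
    print(np.linalg.eigvalsh(H))          # [1. 3.] -> positive definite

    rng = np.random.default_rng(0)
    for _ in range(5):
        v = rng.normal(size=2) * 1e-3     # small nonzero perturbation
        assert f(v) > f(np.zeros(2))      # f(0 + v) > f(0)
    print("f increases along every sampled direction")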






answered Dec 23 '18 at 3:31 – BigbearZzz

  • Taylor's theorem only gives the identity for some $\theta\in (0,1)$, not for all $\theta\in (0,1)$, which is not sufficient to say that $x$ is a minimizer (local or global). To be more specific, the possible corner case is $\nabla^2 f(x^*) \succeq 0$ but not $\nabla^2 f(x^*) \succ 0$, which isn't really resolved by this (since one can have $f(x+v) \leq f(x)$ without global convexity ever entering the argument). – Y. S., Dec 23 '18 at 3:36












  • What do you mean? For each $y\in\Bbb R^n$ we can let $v=y-x^*$, and for each such $v$ there's a corresponding $\theta = \theta(v)$. – BigbearZzz, Dec 23 '18 at 3:39










  • So, restricting to case 1: for local optimality we need that there exists some $\epsilon$ such that for ALL $y$ with $\|y-x\|\leq \epsilon$ we have $f(y) \geq f(x)$. This only says something about SOME $y$ in the $\epsilon$-ball around $x$. – Y. S., Dec 23 '18 at 3:41








  • That's why I said we require continuity of $\nabla^2 f$ at $x^*$. The link I gave is about the openness of the set of positive definite matrices. Together they imply that $\nabla^2 f(y) \succ 0$ in a small neighborhood of $x^*$. – BigbearZzz, Dec 23 '18 at 3:43












  • Sure, OK. I am probably being unnecessarily pedantic about case 1; given the openness of the set of PD matrices, I agree it works. But what about case 2? I think you can't use the same argument there. – Y. S., Dec 23 '18 at 3:45