Formulating the dual of an exponential cone optimization problem












I have an optimization problem that is written
\begin{align}
\min_{\{s_\kappa\}}\ & \frac{1}{2}\sum_{\kappa} \eta_\kappa \left( \exp(r_\kappa s_\kappa) + \exp\left( \frac{s^2_\kappa}{2} \right) \right) \\
\text{s.t.}\ & \sum_{\kappa} \exp\left( \frac{s_\kappa}{2} \right) \leq C
\end{align}

where $r_\kappa < 0$, $C > 0$, and $\{ s_\kappa \}$ is a finite set of decision variables.



I believe then that the Lagrangian is given by
\begin{align}
\mathcal{L}(s_\kappa,\lambda) = \frac{1}{2}\sum_{\kappa} \eta_\kappa \left( \exp(r_\kappa s_\kappa) + \exp\left( \frac{s^2_\kappa}{2} \right) \right) + \lambda\left( \sum_{\kappa} \exp\left( \frac{s_\kappa}{2} \right) - C\right)
\end{align}

and the KKT conditions then give $\lambda \geq 0$ (dual feasibility),
$0 = \lambda \left( \sum_{\kappa} \exp\left( \frac{s_\kappa}{2} \right) - C \right)$ (complementary slackness), and
\begin{align}
0 &= \frac{1}{2} \eta_\kappa \left( r_\kappa \exp(r_\kappa s_\kappa) + s_\kappa \exp\left(\frac{s^2_\kappa}{2}\right) \right) + \frac{\lambda}{2} \exp\left(\frac{s_\kappa}{2}\right), \quad \forall \kappa
\end{align}



I would like to write the dual problem for this, which should be $g(\lambda) = \inf_{\{s_\kappa\}} \mathcal{L}(s_\kappa,\lambda)$. I would like to do this because I have, from other sources, some properties of $\lambda$ which I would like to translate into properties of the optimal objective.



My problem is that, although I can write $\lambda$ in terms of $s_\kappa$ at a stationary point,
\begin{align}
\lambda = - \eta_\kappa \left( r_\kappa \exp(r_\kappa s_\kappa) + s_\kappa \exp\left( \frac{s^2_\kappa}{2} \right) \right) \exp\left( -\frac{s_\kappa}{2} \right)
\end{align}

I am struggling to write $s_\kappa$ in terms of $\lambda$ in order to substitute back into the Lagrangian to get the dual objective function. Any suggestions?
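(In case a closed form for $s_\kappa(\lambda)$ turns out not to exist: the Lagrangian is separable in the $s_\kappa$, so $g(\lambda)$ splits into independent one-dimensional convex minimizations that are easy to evaluate numerically. A rough sketch of this, with all numbers below made up for illustration and a crude grid search standing in for a proper 1-D minimizer:)

```python
# Sketch (assumed example data): evaluate the dual g(lambda) numerically.
# The Lagrangian separates over kappa, so
#   g(lam) = sum_k inf_s [ eta_k/2*(exp(r_k*s) + exp(s**2/2)) + lam*exp(s/2) ] - lam*C,
# and each inner infimum is a 1-D convex problem. The grid search used here
# is purely illustrative; it returns an upper bound on each infimum.
import numpy as np

def dual_value(lam, eta, r, C, s_lo=-10.0, s_hi=10.0, n=20001):
    s = np.linspace(s_lo, s_hi, n)
    total = -lam * C
    for eta_k, r_k in zip(eta, r):
        phi = 0.5 * eta_k * (np.exp(r_k * s) + np.exp(s**2 / 2)) + lam * np.exp(s / 2)
        total += phi.min()
    return total

# Two decision variables with lambda = 0.3 (made-up numbers):
g = dual_value(0.3, eta=[1.0, 2.0], r=[-0.5, -1.0], C=4.0)
```

By weak duality, $g(\lambda)$ lower-bounds the primal optimum for every $\lambda \geq 0$, so this also gives a quick check against any feasible primal point.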










  • Given the presence of the term $s_{\kappa}\exp\left(\frac{s_{\kappa}^2}{2}\right)$, you'd likely need something like the Lambert $W$ function, or, more likely, you won't have a closed-form expression for $s_{\kappa}$. As an alternative to obtaining an explicit dual objective, you could just form the Wolfe dual.
    – nathan.j.mcdougall
    Jan 15 at 0:49










  • An alternative to that term is to introduce a constraint of the form $$ s^2_\kappa \leq \alpha, \quad \forall \kappa$$ and then the $\eta_\kappa$ term becomes $\eta_\kappa r_\kappa \exp(r_\kappa s_\kappa)$. The problem here is then that there would be a new Lagrange multiplier $\mu_\kappa$ associated with each of these constraints. Would something like this be more amenable?
    – NeedsToKnowMoreMaths
    Jan 15 at 1:53












  • I'm afraid I don't really follow you. If you introduce those constraints into the primal, you now need to solve $$\frac{1}{2}\eta_{\kappa}\left[r_{\kappa}\exp(r_{\kappa}s_{\kappa})+s_{\kappa}\exp\left(\frac{s_{\kappa}^2}{2}\right)\right]+\frac{1}{2}\lambda\exp\left(\frac{s_{\kappa}}{2}\right)+2s_{\kappa}\mu_{\kappa}=0$$ for $s_{\kappa}$ in terms of both $\lambda$ and $\mu_{\kappa}$, which seems more difficult rather than easier. I don't see how the $\eta_{\kappa}$ term changes.
    – nathan.j.mcdougall
    Jan 15 at 2:11












  • Sorry, I meant that I could change the objective to $$\min \sum_\kappa \eta_\kappa \exp(r_\kappa s_\kappa)$$ by adding in the additional constraints. The $s^2_\kappa$ term came from trying to regularize to avoid the need for the constraints.
    – NeedsToKnowMoreMaths
    Jan 15 at 15:05
















optimization convex-optimization duality-theorems






asked Jan 15 at 0:10









NeedsToKnowMoreMaths

1 Answer
After making the modifications you described in the comments, the optimization problem becomes



$$\begin{align*}
\min_{\{s_{\kappa}\}}\quad&\sum_{\kappa}\eta_{\kappa}\exp(r_{\kappa}s_{\kappa})\\
\text{s.t.}\quad&\sum_{\kappa}\exp\left(\tfrac{1}{2}s_{\kappa}\right)\leq C\\
&s_{\kappa}^2\leq \alpha.
\end{align*}$$

We can make the transformation $x_{\kappa}=\exp(\tfrac{1}{2}s_{\kappa})$ to render this more linear. In particular, we can change the $s_{\kappa}^2\leq \alpha$ constraints into pairs of linear constraints $x_{\kappa}\leq \exp(\sqrt{\alpha}/2)$ and $x_{\kappa}\geq \exp(-\sqrt{\alpha}/2)$. This gives the new problem
$$\begin{align*}
\min_{\{x_{\kappa}\}}\quad&\sum_{\kappa}\eta_{\kappa}x_{\kappa}^{2r_{\kappa}}\\
\text{s.t.}\quad&\sum_{\kappa}x_{\kappa}\leq C\\
&x_{\kappa}\leq \exp(\sqrt{\alpha}/2)\\
&x_{\kappa}\geq \exp(-\sqrt{\alpha}/2).
\end{align*}$$



The KKT stationarity condition for the transformed problem is given by
$$2\eta_{\kappa}r_{\kappa}x_{\kappa}^{2r_{\kappa}-1}+\lambda+\mu_{\kappa}-\nu_{\kappa}=0.$$



Then a simple rearrangement for $x_{\kappa}$ gives
$$x_{\kappa}^*=\left[\frac{\lambda+\mu_{\kappa}-\nu_{\kappa}}{2\eta_{\kappa}(-r_{\kappa})}\right]^{\frac{1}{2r_{\kappa}-1}}.$$



The Lagrangian for the transformed problem is then
$$\mathcal{L}(x_{\kappa},\lambda,\mu,\nu)=\sum_{\kappa}\eta_{\kappa}x_{\kappa}^{2r_{\kappa}}+\lambda\left[\sum_{\kappa}x_{\kappa}-C\right]+\sum_{\kappa}\mu_{\kappa}\left[x_{\kappa}-\exp(\sqrt{\alpha}/2)\right]-\sum_{\kappa}\nu_{\kappa}\left[x_{\kappa}-\exp(-\sqrt{\alpha}/2)\right].$$

I will leave to you the cumbersome exercise of substituting $x_{\kappa}^*$ into the Lagrangian, but hopefully this approach will help you progress.
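As a quick sanity check, substituting $x_{\kappa}^*$ back into the stationarity condition should return zero. A minimal numerical check of the rearrangement, with all parameter values invented for illustration:

```python
# Verify that x* = [(lam + mu - nu) / (2*eta*(-r))]^(1/(2r-1)) solves the
# stationarity condition 2*eta*r*x^(2r-1) + lam + mu - nu = 0.
# All values below are made up; they only need eta > 0, r < 0,
# and lam + mu - nu > 0 so the power is well defined.
eta_k, r_k = 1.5, -0.7
lam, mu_k, nu_k = 0.4, 0.1, 0.05

base = (lam + mu_k - nu_k) / (2 * eta_k * (-r_k))
x_star = base ** (1 / (2 * r_k - 1))

# Residual of the stationarity condition at x*; should vanish up to rounding.
residual = 2 * eta_k * r_k * x_star ** (2 * r_k - 1) + lam + mu_k - nu_k
```

This works because $(x^*)^{2r-1}$ recovers the bracketed base exactly, so the first term collapses to $-(\lambda+\mu_\kappa-\nu_\kappa)$.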






        edited Jan 15 at 22:48

























        answered Jan 15 at 20:35









nathan.j.mcdougall
