Maximum Entropy with bounded constraints











Assume we have the problem of estimating the probabilities $\{p_1,p_2,p_3\}$ subject to:



$$0 \le p_1 \le 0.5$$
$$0.2 \le p_2 \le 0.6$$
$$0.3 \le p_3 \le 0.4$$



with only the natural constraint of $p_1+p_2+p_3=1$.



I found two compelling arguments using entropy, which I paraphrase for this problem:



Jaynes




We would like to maximize the Shannon Entropy



$$H_S(P)= -\sum p_i \log(p_i)$$



subject to the natural constraint and with the $p_i$ bounded by the
inequalities. Since $p^*=1/3$ for all probabilities is the global
optimum of the entropy function under the natural constraint alone, and it
satisfies all the inequalities, we would declare the answer



$p=(1/3, 1/3, 1/3)$.



Kapur




Jaynes's Principle of Maximum Entropy is only valid with linear
constraints; the use of inequalities is not directly applicable to
Shannon entropy. We would get the same answer as above for any set of
inequalities, as long as $p^*=1/3$ is contained within each of them.
Whether $0.3 \le p_3 \le 0.4$ or $0.33331 \le p_3 \le 0.33334$ would be
immaterial to the answer above, although the latter is far more informative.
Subject to only the natural constraint, the principle of indifference
applies not to the probabilities themselves, but to where they lie within
their intervals. The inequality $0.33331 \le p_3 \le 0.33334$ gives far
more information than $0.31 \le p_3 \le 0.34$, which in turn gives more
than $0.3 \le p_3 \le 0.4$. We must build up a measure of uncertainty
from first principles that implicitly takes those inequalities, and the
information they convey, into account. This is a special case of the
generalized maximum entropy principle with inequalities on each
probability only:

$$a_i \le p_i \le b_i$$



We should maximize



$$H_K(P)= \left( -\sum (p_i-a_i) \log(p_i-a_i) \right) + \left( -\sum (b_i-p_i) \log(b_i-p_i) \right)$$



subject to the constraints. If the normalization constraint is the
only constraint, the optimization reduces to the condition that
$(p_i-a_i)/(b_i-a_i)$ should be the same for all probabilities within
their respective intervals. We are maximally uncertain about where
within its interval each probability should lie, and by an extension of
Laplace's Principle of Insufficient Reason we should place them all at
the same proportion within their intervals.
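
(Filling in the stationarity step that Kapur's book presumably carries out in full: with only the normalization constraint, the Lagrangian $\mathcal{L}=H_K(P)-\lambda\left(\sum_i p_i-1\right)$ satisfies

$$\frac{\partial \mathcal{L}}{\partial p_i}=\log\frac{b_i-p_i}{p_i-a_i}-\lambda=0 \quad\Longrightarrow\quad \frac{p_i-a_i}{b_i-a_i}=\frac{1}{1+e^{\lambda}},$$

the same constant for every $i$, with its value then fixed by $\sum_i p_i=1$.)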



For the problem above we would have $(p_i-a_i)/(b_i-a_i)=0.5$ yielding



$p=(0.25, 0.4, 0.35)$



Each probability sits at the same proportion within its interval, in
this case halfway.
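
As a sanity check (a short numerical sketch of my own, not code from Kapur's book; the variable and function names are mine), maximizing $H_K$ with only the normalization constraint does reproduce the equal-proportion answer:

```python
# Numerical check of the Kapur solution (illustrative sketch only).
import numpy as np
from scipy.optimize import minimize

a = np.array([0.0, 0.2, 0.3])   # lower bounds a_i
b = np.array([0.5, 0.6, 0.4])   # upper bounds b_i

def neg_H_K(p):
    """Negative of H_K(P); clipped so the logs stay finite at the interval edges."""
    x = np.maximum(p - a, 1e-12)
    y = np.maximum(b - p, 1e-12)
    return np.sum(x * np.log(x)) + np.sum(y * np.log(y))

# Only the natural (normalization) constraint is imposed as a constraint;
# the bounds simply keep the iterates where H_K is defined.
constraints = {"type": "eq", "fun": lambda p: np.sum(p) - 1.0}
res = minimize(neg_H_K, x0=[0.35, 0.30, 0.35],
               bounds=list(zip(a, b)), constraints=constraints)

print(np.round(res.x, 4))       # -> approx. [0.25 0.4  0.35]
print((res.x - a) / (b - a))    # -> approx. [0.5 0.5 0.5]
```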




In most optimization books and papers I've seen, maximizing the entropy is treated like any other convex optimization problem:



\begin{align}
&\underset{x}{\operatorname{maximize}}& & f(x) \\
&\operatorname{subject\;to}
& &lb_i \le x_i \le ub_i, \quad i = 1,\dots,m \\
&&&h_i(x) = 0, \quad i = 1, \dots,p.
\end{align}



with $f(x)$ as the Shannon entropy and the inequalities boxing in the search space.
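
As a concrete illustration of this reading (again a sketch of my own using scipy, not anything from the sources), the Shannon entropy is maximized with the inequalities passed to the solver as bounds:

```python
# The "standard convex optimization" reading: Shannon entropy with box constraints.
# Illustrative sketch only, not taken from Jaynes or Kapur.
import numpy as np
from scipy.optimize import minimize

lb = np.array([0.0, 0.2, 0.3])
ub = np.array([0.5, 0.6, 0.4])

def neg_H_S(p):
    """Negative Shannon entropy; clipped so log() stays finite at p_i = 0."""
    p = np.maximum(p, 1e-12)
    return np.sum(p * np.log(p))

constraints = {"type": "eq", "fun": lambda p: np.sum(p) - 1.0}
res = minimize(neg_H_S, x0=[0.35, 0.30, 0.35],
               bounds=list(zip(lb, ub)), constraints=constraints)

print(np.round(res.x, 4))   # -> approx. [0.3333 0.3333 0.3333]; the bounds are inactive
```

Tightening any interval so that it still contains $1/3$ leaves this answer unchanged, which is exactly the behaviour Kapur objects to.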



Kapur seems to argue that the bounds of the inequalities themselves provide information and should be taken into account, via a new objective function subject only to the linear equality constraints:



\begin{align}
&\underset{x}{\operatorname{maximize}}& & g(x) \\
&\operatorname{subject\;to}
&&h_i(x) = 0, \quad i = 1, \dots,p.
\end{align}



Although we only used the natural constraint here, both optimizations can be expressed in terms of Lagrange multipliers to accommodate additional constraints and more probabilities.
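
For instance (my own notation, following the standard maximum-entropy recipe rather than quoting either book), with additional linear constraints $\sum_i g_{ri}\,p_i = \mu_r$ the Kapur objective would be handled through the Lagrangian

$$\mathcal{L}=H_K(P)-\lambda_0\left(\sum_i p_i-1\right)-\sum_r \lambda_r\left(\sum_i g_{ri}\,p_i-\mu_r\right),$$

whose stationarity conditions

$$\log\frac{b_i-p_i}{p_i-a_i}=\lambda_0+\sum_r \lambda_r g_{ri} \quad\Longrightarrow\quad p_i=\frac{b_i+a_i\,e^{\lambda_0+\sum_r\lambda_r g_{ri}}}{1+e^{\lambda_0+\sum_r\lambda_r g_{ri}}}$$

play the role that the exponential-family solution plays in the Shannon case.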



The question I have is: when is either argument applicable? I can understand Jaynes's argument, but it does seem to ignore the bounds of the inequalities as long as the global optimum is contained within them. (If it were not contained, the optimum would place some probabilities on the boundary of their intervals.) Kapur also makes sense: the probabilities should be maximally uncertain about where within their intervals they lie, subject to the equality constraints.



Additionally, wouldn't all probabilities have the bounds $0 \le p_i \le 1$? Or is the upper limit implicit in the normalization constraint together with the $p_i \ge 0$ inequality usually seen in maximum entropy problems? If $a_i=0$ and $b_i$ is left unspecified, it seems $H_K$ reduces to $H_S$.



Sources:



Jaynes, Edwin T.; Probability Theory: The Logic of Science; Cambridge University Press, 2003.



Kapur, J. N.; Kesavan, H. K.; Entropy Optimization Principles with Applications; Academic Press, 1992.










entropy

asked Nov 26 at 13:44 by sheppa28, edited Nov 26 at 16:50
          1 Answer










          Suppose you're looking for a job, and your constraint is that you must live in Kansas, where you're from.



          Jaynes would say, take the job that is universally considered to be best (working as an actuary!), assuming such a position exists in Kansas.



          Kapur would say: given that we're destined to live in Kansas, what's the best job? Perhaps something uniquely Kansan, like working in the soybean industry.



          Who is right? Well, if the constraints could change or are somehow not so important, then having started as an actuary seems right (Jaynes).



          If there is no way to change the constraints, to the point where it would be almost absurd to imagine living outside of Kansas, go with soybeans (Kapur).






answered Dec 8 at 6:32 by Bjørn Kjos-Hanssen