Minimizing the sum of KL divergences























Given a list of probability distributions $q_i$, what distribution $p$ minimizes the sum of KL divergences (if they exist) to and from each of them? That is, how do I determine



$$\operatorname*{argmin}_p \sum_i D_\text{KL}(p \mathbin{\Vert} q_i)$$



and



$$\operatorname*{argmin}_p \sum_i D_\text{KL}(q_i \mathbin{\Vert} p)$$



I recall reading somewhere that, for the Jensen-Shannon divergence
\begin{align*}
D_\text{JS}(p, q) &= \frac{D_\text{KL}(p \mathbin{\Vert} r) + D_\text{KL}(q \mathbin{\Vert} r)}{2}, \\
r &= \frac{p + q}{2},
\end{align*}
the midpoint distribution $r$ is precisely
$$r = \operatorname*{argmin}_s \frac{D_\text{KL}(p \mathbin{\Vert} s) + D_\text{KL}(q \mathbin{\Vert} s)}{2}$$



I can't find a reference for this, however.
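For what it's worth, here is a quick numerical sanity check of the midpoint claim (a brute-force Python sketch of my own, not a proof): a grid search over the simplex recovers $(p+q)/2$.

    import numpy as np

    rng = np.random.default_rng(0)

    def kl(a, b):
        # KL divergence D(a || b) for discrete distributions with full support
        return np.sum(a * np.log(a / b))

    p = rng.dirichlet(np.ones(3))
    q = rng.dirichlet(np.ones(3))

    # Brute-force search for argmin_s [D(p||s) + D(q||s)] / 2 over a grid of the 3-simplex
    ts = np.linspace(1e-3, 1 - 1e-3, 300)
    best_s, best_val = None, np.inf
    for s0 in ts:
        for s1 in ts:
            s2 = 1.0 - s0 - s1
            if s2 <= 1e-3:
                continue
            s = np.array([s0, s1, s2])
            val = 0.5 * (kl(p, s) + kl(q, s))
            if val < best_val:
                best_s, best_val = s, val

    print("grid minimizer  :", best_s)
    print("midpoint (p+q)/2:", (p + q) / 2)   # agrees up to the grid resolution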










probability probability-distributions reference-request














edited Dec 5 at 0:40

























asked Dec 4 at 0:43









user76284

























Have you read "A new metric for probability distributions" by D.M. Endres and J.E. Schindelin? I think this may help you.
– Flying Dogfish
2 days ago


















1 Answer






























A rather brute-force approach, using Lagrange multipliers. Defining the objective functional



$$g(p_x)=\sum_{i=1}^k D(p\,\|\,q^{(i)})=\sum_{i=1}^k \sum_{x} p_x \log\left( \frac{p_x}{q_x^{(i)}} \right) \tag{1}$$



and the restriction $\sum_x p_x = 1$, we get the critical point at



$$ k+\sum_{i=1}^k \log\left( \frac{p_x}{q_x^{(i)}} \right) +\lambda =0 \tag{2}$$



Rearranging, we get



$$ p_x = \gamma \left(\prod_{i=1}^k q_x^{(i)}\right)^{1/k} \tag{3}$$



where $\gamma$ is the normalizing constant. Hence the critical distribution is the normalized geometric mean of the given $q_i$ distributions.



Because the KL divergence is convex in both arguments, this critical point must be a global minimum.
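As a quick numerical check of $(3)$ (a Python sketch of my own, not part of the argument; the variable names are mine), a generic constrained optimizer lands on the normalized geometric mean:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    k, n = 4, 5                        # number of distributions, number of outcomes
    qs = rng.dirichlet(np.ones(n), size=k)

    def objective(p):
        p = np.clip(p, 1e-12, None)    # keep the logs finite
        return sum(np.sum(p * np.log(p / q)) for q in qs)   # sum_i D(p || q_i)

    res = minimize(
        objective,
        x0=np.full(n, 1.0 / n),
        bounds=[(1e-9, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
        method="SLSQP",
    )

    geo_mean = np.prod(qs, axis=0) ** (1.0 / k)
    geo_mean /= geo_mean.sum()         # gamma-normalized geometric mean, as in (3)

    print("numerical minimizer :", res.x)
    print("normalized geo. mean:", geo_mean)   # the two should agree closely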





Or, simpler and better: changing the order of summation in $(1)$:



$$g(p_x)= \sum_{x} p_x \log \prod_{i=1}^k \left( \frac{p_x}{q_x^{(i)}}\right)
=k \sum_{x} p_x \log \left( \frac{p_x}{\overline{q_x}}\right) \tag{4}$$



where $\overline{q_x}$ is the geometric mean, as in $(3)$. Notice, however, that
$\overline{q_x}$ is not in general a probability function. Defining the normalization constant $\gamma=1/\sum_x \overline{q_x}$, we get



$$g(p_x)=k \sum_{x} p_x \log \left( \frac{\gamma p_x}{\gamma \overline{q_x}}\right)=k \sum_{x} p_x \log \left( \frac{p_x}{\gamma \overline{q_x}}\right) + k \log(\gamma) = k\, D(p\,\|\,\gamma \overline{q}) + k \log(\gamma) \tag{5}$$



Only the first term depends on $p$; it is a true KL divergence (since $\gamma \overline{q}$ is a probability distribution) and is minimized, at zero, by $p=\gamma \overline{q}$, in agreement with $(3)$. The residual term $k\log(\gamma)$ gives the value of this minimum.



BTW: that $\gamma\ge 1$ (with equality only when all the $q_i$ are identical) is easily proved by the AM-GM inequality.
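A small numerical check of the decomposition $(5)$ and of $\gamma \ge 1$ (again a sketch of my own, not part of the answer):

    import numpy as np

    rng = np.random.default_rng(2)
    k, n = 3, 6
    qs = rng.dirichlet(np.ones(n), size=k)
    p = rng.dirichlet(np.ones(n))

    kl = lambda a, b: np.sum(a * np.log(a / b))

    q_bar = np.prod(qs, axis=0) ** (1.0 / k)   # (unnormalized) geometric mean
    gamma = 1.0 / q_bar.sum()

    lhs = sum(kl(p, q) for q in qs)                       # g(p) as in (1)
    rhs = k * kl(p, gamma * q_bar) + k * np.log(gamma)    # decomposition (5)

    print(np.isclose(lhs, rhs))   # True
    print(gamma >= 1.0)           # True, consistent with the AM-GM remark above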






        edited 6 hours ago

























        answered 7 hours ago









        leonbloy
