What is the name of this distribution?

This is a somewhat long question but I want to make sure that you understand the context properly. Please bear with me.



I'm reading chapter 10 of Bishop's Pattern Recognition and Machine Learning and I'm stuck on "10.1.3 Example: The univariate Gaussian". In it, he writes down the following likelihood function for the data given the parameters of a Gaussian with mean $\mu$ and precision $\tau$:



\begin{equation} \tag{1}
p(\mathcal{D}|\mu, \tau) = \left(\frac{\tau}{2\pi}\right)^{N/2}\exp\left\{-\frac{\tau}{2}\sum_{n=1}^N (x_n - \mu)^2\right\}
\end{equation}



He also introduces conjugate prior distributions for $\mu$ and $\tau$:



\begin{equation} \tag{2}
p(\mu|\tau) = \mathcal{N}(\mu\,|\,\mu_0, (\lambda_0\tau)^{-1})
\end{equation}

\begin{equation} \tag{3}
p(\tau) = \mathrm{Gam}(\tau\,|\,a_0, b_0).
\end{equation}
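For later reference, multiplying (2) and (3) gives the joint prior, which is the standard Normal-Gamma distribution (a well-known identity for this conjugate pair, written out here because it matters for the comparison at the end of the question):

\begin{equation*}
p(\mu, \tau) = p(\mu|\tau)\,p(\tau) = \frac{b_0^{a_0}\sqrt{\lambda_0}}{\Gamma(a_0)\sqrt{2\pi}}\;\tau^{a_0 - \frac{1}{2}}\exp\left\{-b_0\tau - \frac{\lambda_0\tau}{2}(\mu - \mu_0)^2\right\}.
\end{equation*}

Note the $\tau^{a_0 - 1/2}$ factor and the $\lambda_0\tau$ coupling inside the exponential; both come from the fact that the variance of $\mu$ in (2) depends on $\tau$.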



He then seeks to approximate the posterior $p(\mu, \tau \,|\, \mathcal{D})$ by a factorized variational approximation, i.e. he assumes that the approximating distribution can be written as:



\begin{equation} \tag{4}
q(\mu, \tau) = q_{\mu}(\mu)\,q_{\tau}(\tau)
\end{equation}



Note that the true posterior cannot be factorized this way.



He then goes on to find that, for the optimal choices of $q_{\mu}(\mu)$ and $q_{\tau}(\tau)$,



\begin{equation} \tag{5}
q_{\mu}(\mu) = \mathcal{N}(\mu \,|\, \mu_N, \lambda_N^{-1})
\end{equation}

with
\begin{equation} \tag{6}
\mu_N = \frac{\lambda_0 \mu_0 + N\overline{x}}{\lambda_0 + N}
\end{equation}

\begin{equation} \tag{7}
\lambda_N = (\lambda_0 + N)\,\mathbb{E}[\tau]
\end{equation}

and
\begin{equation} \tag{8}
q_{\tau}(\tau) = \mathrm{Gam}(\tau\,|\,a_N, b_N)
\end{equation}

with
\begin{equation} \tag{9}
a_N = a_0 + \frac{N}{2}
\end{equation}

\begin{equation} \tag{10}
b_N = b_0 + \frac{1}{2}\,\mathbb{E}_{\mu}\!\left[\sum_{n=1}^N (x_n - \mu)^2 + \lambda_0(\mu - \mu_0)^2\right].
\end{equation}



He then suggests initializing $\mathbb{E}[\tau]$ to some arbitrary value, using it to compute $q_{\mu}(\mu)$, and then using that result to recompute $q_{\tau}(\tau)$ (and hence $\mathbb{E}[\tau]$). This is repeated until convergence.
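A minimal sketch of this coordinate-ascent loop in Python (the synthetic data and hyperparameter values are arbitrary choices of mine for illustration, not taken from Bishop):

```python
import numpy as np

# Synthetic data (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=100)
N, xbar = x.size, x.mean()

# Prior hyperparameters (arbitrary)
mu0, lambda0, a0, b0 = 0.0, 1.0, 1.0, 1.0

E_tau = 1.0  # arbitrary initialisation of E[tau]
for _ in range(100):
    # Update q_mu(mu) = N(mu | mu_N, 1/lambda_N): equations (6) and (7)
    mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
    lambda_N = (lambda0 + N) * E_tau

    # Update q_tau(tau) = Gam(tau | a_N, b_N): equations (9) and (10).
    # Under q_mu, E[(x_n - mu)^2] = (x_n - mu_N)^2 + 1/lambda_N and
    # E[(mu - mu0)^2]  = (mu_N - mu0)^2 + 1/lambda_N.
    a_N = a0 + N / 2
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N / lambda_N
                      + lambda0 * ((mu_N - mu0) ** 2 + 1 / lambda_N))

    E_tau_new = a_N / b_N  # mean of Gam(a_N, b_N) in the shape/rate parametrisation
    if abs(E_tau_new - E_tau) < 1e-12:
        break
    E_tau = E_tau_new

print(mu_N, lambda_N, a_N, b_N)
```

Note that $\mu_N$ and $a_N$ do not actually depend on $\mathbb{E}[\tau]$, so only $\lambda_N$, $b_N$ and $\mathbb{E}[\tau]$ change from one iteration to the next.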



Now let's say I carried out the optimization and converged to some values of $\mu_N$, $\lambda_N$, $a_N$ and $b_N$, which I will refer to as $\mu_*$, $\lambda_*$, $a_*$ and $b_*$. Carrying out the multiplication of the two distributions in (4) gives me



\begin{align} \tag{11}
q(\mu, \tau) &= \frac{b_*^{a_*}}{\Gamma(a_*)}\,\tau^{a_* - 1}\exp\{-b_*\tau\}\;\frac{1}{(2\pi\lambda_*^{-1})^{1/2}}\exp\left\{-\frac{\lambda_*}{2}(\mu - \mu_*)^2\right\}\\
&= \frac{b_*^{a_*}\,\tau^{a_* - 1}\,\lambda_*^{1/2}}{\Gamma(a_*)\,(2\pi)^{1/2}}\exp\left\{-\frac{\lambda_*}{2}(\mu - \mu_*)^2 - b_*\tau\right\}
\end{align}

where all symbols except $\mu$ and $\tau$ are constants. We can then write

\begin{equation} \tag{12}
q(\mu, \tau) = C\,\tau^{a_* - 1}\exp\left\{-\frac{\lambda_*}{2}(\mu - \mu_*)^2 - b_*\tau\right\}
\end{equation}

with $C = \frac{b_*^{a_*}\lambda_*^{1/2}}{\Gamma(a_*)(2\pi)^{1/2}}$.



Equation (11) should be an approximation of the posterior distribution $p(\mu, \tau\,|\,\mathcal{D})$, and I'm fairly sure it is computed correctly. My question, if that is the case, is: what is this type of distribution called? It looks most similar to a Normal-Gamma distribution, but it is still not exactly the same; for example, the exponent on the $\tau$ factor outside the exponential is different.
probability-theory density-function

asked Dec 20 '18 at 13:16 by Sandi, edited Dec 20 '18 at 15:05
  • Of all of those symbols, which are constants and which is the variable? It appears to me that the thing you have arrived at in the end is a Gaussian, with a bunch of constants running around (which are only going to give you another Gaussian, only with a different mean and variance).
    – Xander Henderson, Dec 20 '18 at 14:24

  • Take care that there are two possible definitions for the Gamma distribution, one with shape & scale params, the other with shape & rate params. The one used in Bishop is the second one, with PDF $\frac{\beta^{\alpha} x^{\alpha - 1} e^{-\beta x}}{\Gamma(\alpha)}$. This is certainly the cause of the confusion. (Both parametrizations are written out just after this comment thread.)
    – Picaud Vincent, Dec 20 '18 at 14:30

  • Picaud, that is the one I've looked at, but it is still not the same.
    – Sandi, Dec 20 '18 at 14:31

  • I'm still confused about what the constants are and what the variables are, but perhaps you are looking at a beta distribution?
    – Xander Henderson, Dec 20 '18 at 14:34

  • Xander, I've pointed out what the variables are in the question. It's only $\mu$ and $\tau$ that are variables; the rest are constants.
    – Sandi, Dec 20 '18 at 14:38
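For reference, the two Gamma parametrizations mentioned in the comment above are (standard definitions, written out to make the comparison concrete):

\begin{align*}
\text{shape/rate:}\quad & \mathrm{Gam}(x\,|\,\alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha - 1}e^{-\beta x}, && \text{mean } \alpha/\beta \text{ (the form used by Bishop)},\\
\text{shape/scale:}\quad & \mathrm{Gam}(x\,|\,k, \theta) = \frac{1}{\Gamma(k)\,\theta^{k}}\,x^{k - 1}e^{-x/\theta}, && \text{mean } k\theta.
\end{align*}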
1 Answer

answered Dec 20 '18 at 14:44 by J.G. (edited Dec 20 '18 at 14:50 by Sandi)

(11) is a Normal distribution for $\mu$ and a Gamma distribution for $\tau$, and as the variables are independent (because (11) is separable) no-one's bothered giving it its own name as a multivariate distribution.
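To make the contrast explicit (added for reference, using the same shape/rate Gamma convention as Bishop):

\begin{align*}
q(\mu, \tau) &\propto \tau^{a_* - 1}\,e^{-b_*\tau}\;e^{-\frac{\lambda_*}{2}(\mu - \mu_*)^2},\\
\mathrm{NG}(\mu, \tau\,|\,\mu_0, \lambda, a, b) &\propto \tau^{a - \frac{1}{2}}\,e^{-b\tau}\;e^{-\frac{\lambda\tau}{2}(\mu - \mu_0)^2}.
\end{align*}

In the Normal-Gamma distribution the conditional variance of $\mu$ is $(\lambda\tau)^{-1}$, which contributes an extra factor of $\tau^{1/2}$ and couples the two variables; in the factorized $q$ the variance $\lambda_*^{-1}$ is a fixed constant, so the $\tau$ exponent is $a_* - 1$ and the two factors are independent. This is exactly the discrepancy in the $\tau$ exponent noted in the question.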





