What I suggested back then was to sample the parameters of a Dirichlet distribution by sampling *from* a Dirichlet distribution and then multiplying by a magnitude sampled from an exponential distribution. As it turns out, this is a special case of a much nicer general method.

The new method is to sample each parameter independently from a gamma distribution

\[\gamma_i \sim \mathrm{Gamma}(\alpha_i, \beta_i) \]

This can be related to my previous method, in which the parameters were expressed as a magnitude \(\alpha\) multiplied by a vector \(\vec m\) whose components sum to 1. Expressed in terms of the new method,

\[\alpha = \sum_i \gamma_i \]

and

\[ m_i = \gamma_i / \alpha \]

Moreover, if all of the \(\beta_i\) are equal to a common rate \(\beta\), then \(\alpha\) is gamma distributed with shape parameter \(\sum_i \alpha_i\) and rate \(\beta\). If this sum is 1, then \(\alpha\) has an exponential distribution, as in the original method.
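A minimal sketch of the new method in NumPy, with hypothetical shape and rate parameters chosen just for illustration. It draws each \(\gamma_i\) independently and then recovers the magnitude \(\alpha\) and the normalized vector \(\vec m\) of the old parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shape parameters; they sum to 1, so with equal rates
# the magnitude alpha below is exponentially distributed, matching
# the original method.
shapes = np.array([0.3, 0.5, 0.2])
rate = 1.0  # common rate beta for all components

# Sample each Dirichlet parameter independently from a gamma.
# NumPy parameterizes the gamma by shape and *scale*, so scale = 1/rate.
gamma = rng.gamma(shape=shapes, scale=1.0 / rate)

# Recover the old parameterization: magnitude and direction.
alpha = gamma.sum()
m = gamma / alpha  # components of m sum to 1
```

Note that because the \(\gamma_i\) are independent gammas with a common rate, `m` is itself Dirichlet-distributed with parameters `shapes`, and `alpha` is independent of `m`; this is the standard gamma construction of the Dirichlet.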

I am pretty sure that this formulation also makes MCMC sampling from the posterior distribution easier, because the products inside the expression for the joint probability get glued together in a propitious fashion.

## 2 comments:

Thank you for the posting. Do you have any reference on this posting? Thank you very much

No. I was just making this up based on informal experience in sampling from these distributions.
