Probability distributions are classes; their parameters do not normally change once set. A prototype for a probability distribution looks like the following.
(For continuous distributions.)
Returns:  the value of the probability density function at the point x. 

(For discrete distributions.)
Returns:  the value of the probability mass function at the point x. 
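A minimal sketch of this prototype, with `density` and `mass` as assumed method names:

```python
class ProbabilityDistribution:
    """Sketch of the distribution interface; method names are assumptions."""

    def density(self, x):
        """(For continuous distributions.) Probability density at the point x."""
        raise NotImplementedError

    def mass(self, x):
        """(For discrete distributions.) Probability mass at the point x."""
        raise NotImplementedError
```

A concrete distribution subclasses this and implements whichever of the two methods applies.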

Conditional distributions return a distribution when the values for the variables behind the bar are given.
Real number that is internally represented by its natural logarithm. This makes it possible to work with larger dynamic ranges than a normal float would allow.
The arithmetic operations +, *, and / are implemented.
To construct a LogFloat, either scalar or exponent must be given.
Parameters: 


Returns:  the exponent. 

Returns:  the logarithm of a number, whether a LogFloat object or something else. 
Convert LogFloat objects to floats, but leave everything else as is.
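A minimal sketch of such a class, assuming a constructor that takes either `scalar` or `exponent` (keyword names assumed):

```python
import math

class LogFloat:
    """A nonnegative real stored as its natural logarithm (sketch)."""

    def __init__(self, scalar=None, exponent=None):
        # Exactly one of scalar / exponent must be given.
        assert (scalar is None) != (exponent is None)
        if scalar is not None:
            assert scalar >= 0
            self.exponent = math.log(scalar) if scalar > 0 else float('-inf')
        else:
            self.exponent = exponent

    def __mul__(self, other):
        # Multiplication adds exponents.
        return LogFloat(exponent=self.exponent + other.exponent)

    def __truediv__(self, other):
        # Division subtracts exponents.
        return LogFloat(exponent=self.exponent - other.exponent)

    def __add__(self, other):
        # Addition uses log-sum-exp, which stays stable for large exponents.
        a = max(self.exponent, other.exponent)
        b = min(self.exponent, other.exponent)
        if a == float('-inf'):
            return LogFloat(exponent=float('-inf'))
        return LogFloat(exponent=a + math.log1p(math.exp(b - a)))

    def __float__(self):
        return math.exp(self.exponent)
```

For example, `float(LogFloat(scalar=0.5) * LogFloat(scalar=0.5))` gives 0.25 without ever leaving the log domain.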
The following are implementations of probability distributions.
Uniform distribution over a range of reals.
Parameters: 


Returns:  the density if x is within [lower, upper); 0. otherwise. 
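A sketch of this density, assuming constructor parameters `lower` and `upper`:

```python
class Uniform:
    """Uniform distribution over [lower, upper) (sketch)."""

    def __init__(self, lower, upper):
        assert lower < upper
        self.lower, self.upper = lower, upper

    def density(self, x):
        # Constant density inside the half-open interval, 0 outside.
        if self.lower <= x < self.upper:
            return 1.0 / (self.upper - self.lower)
        return 0.0
```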

A categorical distribution.
This is a discrete distribution with explicitly specified values with associated probabilities.
Parameters:  values – a nonempty sequence of (value, count) pairs. The counts can be fractions and do not have to be normalised. The order is maintained. 

Returns:  (as an iterator) all (value, count) pairs, in order. 

Returns:  (as an iterator) all values, with each value repeated count times, where count is the count corresponding to the value. This is therefore only possible if all counts are ints. The values are yielded in order.
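A sketch of this class, with `mass`, `pairs`, and `repeated_values` as assumed method names:

```python
class Categorical:
    """Discrete distribution over explicit (value, count) pairs (sketch)."""

    def __init__(self, values):
        self._pairs = list(values)
        assert self._pairs
        self._total = sum(count for _, count in self._pairs)

    def mass(self, x):
        # Counts need not be normalised, so divide by their sum.
        return sum(count for value, count in self._pairs if value == x) / self._total

    def pairs(self):
        # All (value, count) pairs, in order.
        return iter(self._pairs)

    def repeated_values(self):
        # Each value repeated `count` times; requires integer counts.
        for value, count in self._pairs:
            assert isinstance(count, int)
            for _ in range(count):
                yield value
```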
Multivariate Gaussian distribution.
Initialise the Gaussian either with mean and covariance, or with mean and precision.
Return the density at the point given by x.
Returns:  the unnormalised density at the point given by x. 

Returns:  the covariance matrix. 

If it was not given, it is computed from the precision matrix.
Returns:  the precision matrix. 

If it was not given, it is computed from the covariance matrix.
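A simplified sketch of the density computation, restricted to a diagonal covariance so the matrix algebra stays out of the way (class and parameter names are assumptions):

```python
import math

class DiagonalGaussian:
    """Multivariate Gaussian with diagonal covariance (simplified sketch)."""

    def __init__(self, mean, variances):
        assert len(mean) == len(variances)
        self.mean, self.variances = mean, variances

    def density(self, x):
        # With a diagonal covariance the density is a product of
        # independent one-dimensional Gaussian densities.
        result = 1.0
        for xi, mu, var in zip(x, self.mean, self.variances):
            result *= math.exp(-0.5 * (xi - mu) ** 2 / var) \
                      / math.sqrt(2 * math.pi * var)
        return result
```

The full-covariance case replaces the per-dimension product with a quadratic form in the precision matrix; computing with densities in the log domain (see LogFloat above) avoids underflow in high dimensions.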
Return a joint Gaussian from two Gaussians. The cross-correlation between the two Gaussians is set to zero.
Split the Gaussian into two marginals: one over the first half of the vector, one over the second half. Should the cross-covariance between the two blocks be all zero, then the product of the densities of the returned Gaussians is equal to the density of d.
Factorise a joint Gaussian p(a, b) into factors p(a) and p(b|a). p(a) is a Gaussian; p(b|a) is a ConditionalDistribution that returns a Gaussian.
Parameters: 


Returns:  (gA, gB) so that for any point x, g.density(x) == gA.density(x[:splitPoint]) * gB.given(x[:splitPoint]).density(x[splitPoint:]) 
Factorise a joint Gaussian p(a, b, c, ...) into factors p(a), p(b|a), p(c|a,b), et cetera.
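For the two-dimensional case with scalar blocks, the factorisation can be sketched as plain functions (names assumed): the conditional mean of b shifts linearly with a, and the conditional variance is the Schur complement of the covariance.

```python
import math

def gaussian_density(x, mean, var):
    """One-dimensional Gaussian density."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def factorise_2d(mean, cov):
    """Split a 2-D Gaussian N(mean, cov) into p(a) and p(b|a) (sketch)."""
    (ma, mb), ((saa, sab), (_, sbb)) = mean, cov

    def p_a(a):
        # Marginal over the first component.
        return gaussian_density(a, ma, saa)

    def p_b_given_a(b, a):
        # Conditional: mean shifts with a; variance is the Schur complement.
        cond_mean = mb + sab / saa * (a - ma)
        cond_var = sbb - sab ** 2 / saa
        return gaussian_density(b, cond_mean, cond_var)

    return p_a, p_b_given_a
```

The product `p_a(a) * p_b_given_a(b, a)` reproduces the joint density exactly, which is the identity the `Returns:` clause above states.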
Assembles statistics from data vectors to find Gaussian parameters with maximum-likelihood estimation.
Add a data vector to the statistics.
Parameters: 


Draw samples from a sampler and add them to the statistics.
Parameters: 


Returns:  the Gaussian estimated to maximise the likelihood on the data points the accumulator has seen so far. 
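A one-dimensional sketch of such an accumulator (class and method names are assumptions): only the count, the sum, and the sum of squares need to be stored.

```python
class GaussianAccumulator:
    """Sufficient statistics for 1-D maximum-likelihood Gaussian fitting (sketch)."""

    def __init__(self):
        self.count = 0
        self.sum = 0.0
        self.sum_sq = 0.0

    def add(self, x):
        # Add one data point to the statistics.
        self.count += 1
        self.sum += x
        self.sum_sq += x * x

    def add_samples(self, sampler, n):
        # Draw n samples from a sampler and add them.
        for _ in range(n):
            self.add(sampler())

    def estimate(self):
        # Maximum-likelihood mean and (biased) variance.
        mean = self.sum / self.count
        variance = self.sum_sq / self.count - mean * mean
        return mean, variance
```

The vector case is analogous, accumulating a sum vector and a sum of outer products instead.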
Mixture distribution.
The components can all be continuous, or all be discrete.
Parameters:  components – list of (distribution, weight) pairs. The distributions must be ProbabilityDistributions. The weights do not have to be normalised. 

(For mixtures of continuous distributions.)
Returns:  the density of the distribution at point x. This marginalises out the component identity. 

(For mixtures of discrete distributions.)
Returns:  the mass of the distribution at point x. This marginalises out the component identity. 

Returns:  (weights, components), where weights is a Categorical distribution that generates component indices and components is a ConditionalDistribution whose given method takes a component index and returns a component. 
Returns:  (as an iterator) pairs (distribution, weight) with weight unnormalised. 
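A sketch of the continuous case, with `density` and `pairs` as assumed method names; the discrete `mass` method would be analogous:

```python
class Mixture:
    """Mixture of distributions with unnormalised weights (sketch)."""

    def __init__(self, components):
        self._components = list(components)
        self._total = sum(weight for _, weight in self._components)

    def density(self, x):
        # Marginalise out the component identity: a weighted sum
        # of component densities, normalising the weights on the fly.
        return sum(w * d.density(x) for d, w in self._components) / self._total

    def pairs(self):
        # (distribution, weight) pairs, weights unnormalised.
        return iter(self._components)
```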

Training mixtures normally uses expectation–maximisation.
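One EM iteration can be sketched for a 1-D Gaussian mixture (all names assumed; weights kept normalised): the E step computes each component's responsibility for each sample, and the M step re-estimates weights, means, and variances from those responsibilities.

```python
import math

def gaussian_density(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def em_step(samples, params):
    """One EM iteration; params is a list of (weight, mean, variance) triples."""
    # E step: responsibility of each component for each sample.
    resp = []
    for x in samples:
        joint = [w * gaussian_density(x, m, v) for w, m, v in params]
        total = sum(joint)
        resp.append([j / total for j in joint])
    # M step: re-estimate weights, means, and variances per component.
    new_params = []
    for k in range(len(params)):
        rk = [r[k] for r in resp]
        nk = sum(rk)
        mean = sum(r * x for r, x in zip(rk, samples)) / nk
        var = sum(r * (x - mean) ** 2 for r, x in zip(rk, samples)) / nk
        new_params.append((nk / len(samples), mean, var))
    return new_params
```

Iterating `em_step` until the likelihood stops improving is the usual training loop.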
Retrain a mixture on samples using expectation–maximisation.
Parameters: 


Train a Gaussian mixture model from data. This uses expectation–maximisation (which retrains components and weights iteratively) and mixing up (a mathematically less principled process which repeatedly splits the Gaussian with the largest weight into two).
Parameters: 


Samplers are unary functions (or classes with a __call__ method) that return a new sample from a distribution each time. probability.sampler.UniformUnitSampler is the base sampler that other samplers wrap.
Making the sampler a separate object allows different base samplers to be specified, and allows precomputation per distribution (for example, a Cholesky decomposition for a Gaussian, or the cdf for a discrete distribution).
Sampler that always produces the same sample.
Parameters:  result – the sampler’s permanent result. 

Sampler that for each 2*n samples produces equidistant samples. The last 2*(n-1) of those samples are in order. This is useful if one wants to apply systematic sampling at various levels.
Parameters: 


Sampler that returns unit Gaussian distributed samples. That is, samples with mean 0 and variance 1.
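A sketch of such a sampler using the Box–Muller transform, with `random.random` standing in for the base uniform sampler (the class name and constructor are assumptions):

```python
import math
import random

class UnitGaussianSampler:
    """Yields N(0, 1) samples from a base uniform sampler (sketch)."""

    def __init__(self, base_sampler=random.random):
        self._base = base_sampler
        self._spare = None

    def __call__(self):
        # Box-Muller produces samples in pairs; cache the spare one.
        if self._spare is not None:
            sample, self._spare = self._spare, None
            return sample
        u1, u2 = self._base(), self._base()
        radius = math.sqrt(-2.0 * math.log(1.0 - u1))
        self._spare = radius * math.sin(2.0 * math.pi * u2)
        return radius * math.cos(2.0 * math.pi * u2)
```

A Gaussian with arbitrary mean and covariance can then be sampled by scaling and shifting these unit samples (via a Cholesky factor of the covariance, as noted above).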