Probability distributions

Probability distributions are classes. Their parameters do not normally change. A prototype for a probability distribution looks like

class ProbabilityDistribution
density(x)

(For continuous distributions.)

Returns: the value of the probability density function at the point x.
mass(x)

(For discrete distributions.)

Returns: the value of the probability mass function at the point x.
sampler()
Returns: a sampler for this distribution. See Samplers.

Conditional distributions return a distribution when the values for the variables behind the bar are given.

class ConditionalDistribution
given(y)
Returns: the distribution conditioned on the given value y.
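The following is an illustrative sketch only, not part of the library: a toy continuous distribution and a toy conditional distribution that follow the two protocols above. All class names here are hypothetical.

import math, random

class UnitExponential:
    # Follows the ProbabilityDistribution protocol for a continuous distribution.
    def density(self, x):
        return math.exp(-x) if x >= 0 else 0.0
    def sampler(self):
        # The returned sampler yields a fresh sample on every call.
        return lambda: random.expovariate(1.0)

class ShiftedExponential:
    # Follows the ConditionalDistribution protocol: given(y) fixes the
    # variable behind the bar and returns an ordinary distribution over x.
    def given(self, y):
        class Shifted:
            def density(self, x):
                return math.exp(-(x - y)) if x >= y else 0.0
            def sampler(self):
                return lambda: y + random.expovariate(1.0)
        return Shifted()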

Log-float

class probability.log_float.LogFloat(scalar=None, exponent=None)

Real number that is internally represented by its natural logarithm. This makes it possible to work with a larger dynamic range than a normal float would allow.

The arithmetic operations +, *, and / are implemented.

To construct a LogFloat, either scalar or exponent must be given.

Parameters:
  • scalar – a positive scalar value, which will be stored as its natural logarithm.
  • exponent – the natural logarithm to be stored in the LogFloat.
getExponent()
Returns: the exponent.
probability.log_float.log(f)
Returns: the logarithm of a number, whether a LogFloat object or something else.

probability.log_float.makeStandardNumeric(n)

Convert log-floats to floats, but leave everything else as is.
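As an illustration of the representation described above, the following sketch (a hypothetical class, not the library's implementation) stores the natural logarithm and implements *, /, and + in the log domain; + uses the log-sum-exp trick.

import math

class SketchLogFloat:
    def __init__(self, scalar=None, exponent=None):
        # Exactly one of scalar / exponent is expected, as for LogFloat.
        self.exponent = math.log(scalar) if scalar is not None else exponent
    def getExponent(self):
        return self.exponent
    def __mul__(self, other):
        # Multiplication becomes addition of exponents.
        return SketchLogFloat(exponent=self.exponent + other.exponent)
    def __truediv__(self, other):
        # Division becomes subtraction of exponents.
        return SketchLogFloat(exponent=self.exponent - other.exponent)
    def __add__(self, other):
        # Addition stays in the log domain via log-sum-exp.
        m = max(self.exponent, other.exponent)
        summed = math.exp(self.exponent - m) + math.exp(other.exponent - m)
        return SketchLogFloat(exponent=m + math.log(summed))
    def __float__(self):
        # Converting back to an ordinary float can overflow or underflow.
        return math.exp(self.exponent)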

Distributions

The following are implementations of probability distributions.

class probability.uniform.Uniform(lower, upper)

Uniform distribution over a range of reals.

Parameters:
  • lower – the lower bound.
  • upper – the upper bound.
density(x)
Returns: the density if x is within [lower, upper); 0 otherwise.
sampler()
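The density of a uniform distribution on [lower, upper) is 1 / (upper - lower) inside the interval and 0 outside it. A minimal sketch of that behaviour (hypothetical code, not the library's):

import random

class SketchUniform:
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
    def density(self, x):
        # Constant density inside [lower, upper), zero outside.
        if self.lower <= x < self.upper:
            return 1.0 / (self.upper - self.lower)
        return 0.0
    def sampler(self, uniformUnitSampler=random.random):
        # Rescale a unit-interval sample to [lower, upper).
        return lambda: self.lower + (self.upper - self.lower) * uniformUnitSampler()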
class probability.categorical.Categorical(values)

A categorical distribution.

This is a discrete distribution with explicitly specified values with associated probabilities.

Parameters: values – a non-empty list of (value, count) pairs. The counts can be fractions, but do not have to be normalised. The order is maintained.
sampler(uniformUnitSampler = sampler.UniformUnitSampler())
deltas()
Returns: (as an iterator) all (value, count) pairs, in order.
unitWeightSamples()
Returns: (as an iterator) all values, with each value repeated count times, where count is the count corresponding to the value. This is therefore only possible if all counts are ints. The values are yielded in order.
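Sampling from a categorical with unnormalised counts can be pictured as drawing u uniformly from [0, total) and walking the cumulative counts until u is passed. The following sketch (a hypothetical helper, not the library's code) shows the idea:

import random

def makeCategoricalSampler(values, uniformUnitSampler=random.random):
    # values: non-empty list of (value, count) pairs; counts need not be normalised.
    total = sum(count for _, count in values)
    def sample():
        u = uniformUnitSampler() * total
        cumulative = 0.0
        for value, count in values:
            cumulative += count
            if u < cumulative:
                return value
        return values[-1][0]  # guard against rounding at the upper edge
    return sample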

class probability.gaussian.Gaussian(mean, covariance=None, precision=None)

Multivariate Gaussian distribution.

Initialise the Gaussian either with mean and covariance, or with mean and precision.

sampler(uniformUnitSampler = sampler.UniformUnitSampler())
density(x)

Return the density at the point given by x.

unnormalisedDensity(x)
Returns: the unnormalised density at the point given by x.
getCovariance()
Returns: the covariance matrix.

If it was not given, it is computed from the precision matrix.

getPrecision()
Returns: the precision matrix.

If it was not given, it is computed from the covariance matrix.
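For reference, the multivariate Gaussian density with mean m and covariance S is exp(-0.5 * (x - m)^T S^-1 (x - m)) / sqrt((2*pi)^d * det(S)), where the precision is S^-1. A sketch of that formula (illustrative only, using numpy; not the library's implementation):

import numpy

def sketchGaussianDensity(x, mean, covariance):
    x, mean = numpy.asarray(x, float), numpy.asarray(mean, float)
    precision = numpy.linalg.inv(covariance)   # the precision is the inverse covariance
    diff = x - mean
    exponent = -0.5 * diff @ precision @ diff
    normaliser = numpy.sqrt((2 * numpy.pi) ** len(mean) * numpy.linalg.det(covariance))
    # Dropping the division by the normaliser gives the unnormalised density.
    return numpy.exp(exponent) / normaliser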

probability.gaussian.joinGaussians(d1, d2)

Return a joint Gaussian from two Gaussians. The cross-correlation between the two Gaussians is set to zero.
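With zero cross-covariance, joining amounts to concatenating the means and placing the two covariance matrices on the block diagonal. A sketch of that construction (illustrative only, using numpy; not the library's code):

import numpy

def sketchJoinGaussianParameters(mean1, cov1, mean2, cov2):
    # mean1, mean2: 1-D arrays; cov1, cov2: square covariance matrices.
    mean = numpy.concatenate([mean1, mean2])
    covariance = numpy.block([
        [cov1, numpy.zeros((len(mean1), len(mean2)))],
        [numpy.zeros((len(mean2), len(mean1))), cov2],
    ])
    return mean, covariance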

probability.gaussian.splitGaussian(d)

Split the Gaussian into two marginals: one over the first half of the vector, one over the second half. If the cross-covariance between the two blocks is all zero, then the product of the densities of the returned Gaussians is equal to the density of d.

probability.gaussian.factoriseGaussian(g, splitPoint)

Factorise a joint Gaussian p(a, b) into factors p(a) and p(b|a). p(a) is a Gaussian; p(b|a) is a ConditionalDistribution that returns a Gaussian.

Parameters:
  • g – the original Gaussian.
  • splitPoint – the dimensionality of a.
Returns: (gA, gB) so that for any point x, g.density(x) = gA.density(x[:splitPoint]) * gB.given(x[:splitPoint]).density(x[splitPoint:])
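The standard conditional-Gaussian formulas that such a factorisation can use: with the mean and covariance partitioned into blocks a and b, p(a) has parameters (mean_a, cov_aa), and p(b|a) has mean mean_b + cov_ba cov_aa^-1 (a - mean_a) and covariance cov_bb - cov_ba cov_aa^-1 cov_ab. A sketch of those formulas (illustrative only, using numpy):

import numpy

def sketchFactoriseGaussianParameters(mean, covariance, splitPoint):
    a, b = slice(0, splitPoint), slice(splitPoint, None)
    mean_a, mean_b = mean[a], mean[b]
    cov_aa, cov_ab = covariance[a, a], covariance[a, b]
    cov_ba, cov_bb = covariance[b, a], covariance[b, b]
    gain = cov_ba @ numpy.linalg.inv(cov_aa)
    # p(a) is Gaussian with (mean_a, cov_aa); p(b | a = aValue) has:
    conditionalMean = lambda aValue: mean_b + gain @ (aValue - mean_a)
    conditionalCovariance = cov_bb - gain @ cov_ab
    return (mean_a, cov_aa), (conditionalMean, conditionalCovariance)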

probability.gaussian.factoriseGaussianCompletely(g)

Factorise a joint Gaussian p(a, b, c, ...) into factors p(a), p(b|a), p(c|a, b), et cetera.

class probability.gaussian.GaussianAccumulator

Assembles statistics from data vectors to find Gaussian parameters with maximum-likelihood estimation.

add(sample, weight=1)

Add a data vector to the statistics.

Parameters:
  • sample – the data vector.
  • weight – the weight associated with the data vector.
addFromSampler(sampler, number)

Draw samples from a sampler and add them to the statistics.

Parameters:
  • sampler – the sampler (unary function that generates data vectors) to draw from.
  • number – the number of samples to draw.
distribution()
Returns: the Gaussian estimated to maximise the likelihood on the data points the accumulator has seen so far.
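Weighted maximum-likelihood estimation of a Gaussian only needs three running statistics: the total weight, the weighted sum of the samples, and the weighted sum of their outer products. A sketch of such an accumulator (illustrative only; the constructor argument is an assumption, not part of the documented interface):

import numpy

class SketchGaussianAccumulator:
    def __init__(self, dimension):
        self.weight = 0.0
        self.sum = numpy.zeros(dimension)
        self.outerSum = numpy.zeros((dimension, dimension))
    def add(self, sample, weight=1):
        sample = numpy.asarray(sample, float)
        self.weight += weight
        self.sum += weight * sample
        self.outerSum += weight * numpy.outer(sample, sample)
    def maximumLikelihoodParameters(self):
        # Mean and (biased) maximum-likelihood covariance.
        mean = self.sum / self.weight
        covariance = self.outerSum / self.weight - numpy.outer(mean, mean)
        return mean, covariance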

Mixtures

class probability.mixture.Mixture(components)

Mixture distribution.

The components can all be continuous, or all be discrete.

Parameters: components – list of (distribution, weight) pairs. The distributions must be ProbabilityDistributions. The weights do not have to be normalised.
sampler(uniformUnitSampler = sampler.UniformUnitSampler())
density(x)

(For mixtures of continuous distributions.)

Returns: the density of the distribution at point x. This marginalises out the component identity.
mass(x)

(For mixtures of discrete distributions.)

Returns: the mass of the distribution at point x. This marginalises out the component identity.
factorise()
Returns: (weights, components), where weights is a Categorical distribution that generates component indices and components is a ConditionalDistribution whose given method takes a component index and returns a component.

unnormalisedComponents()
Returns: (as an iterator) pairs (distribution, weight) with the weight unnormalised.
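Marginalising out the component identity means that the density (or mass) of a mixture is the weighted sum of the component densities, with the weights normalised. A sketch (illustrative only, not the library's code):

def sketchMixtureDensity(components, x):
    # components: list of (distribution, weight) pairs; weights need not sum to 1.
    totalWeight = sum(weight for _, weight in components)
    return sum(weight * distribution.density(x)
               for distribution, weight in components) / totalWeight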

Training mixtures normally uses expectation–maximisation.

probability.mix_up.trainMixture(samples, initial, iterationNum, AccumulatorType)

Retrain a mixture on samples using expectation–maximisation.

Parameters:
  • samples – the list of samples. Expectation–maximisation needs to go over these a number of times, so this must be an actual list, not a general (one-pass) iterable.
  • initial – the initial mixture.
  • iterationNum – the number of iterations of expectation–maximisation.
  • AccumulatorType – the type of accumulator (e.g. GaussianAccumulator to produce Gaussians).
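One expectation–maximisation iteration computes, for every sample, the responsibility of each component (proportional to its weight times its density at the sample), then re-estimates each component from the responsibility-weighted samples and sets its new weight to its responsibility total. The following is an illustrative sketch of such an iteration, not the library's implementation; it only assumes an accumulator type with add() and distribution() methods, as described for GaussianAccumulator above.

def sketchEmIteration(samples, components, AccumulatorType):
    # components: list of (distribution, weight) pairs.
    accumulators = [AccumulatorType() for _ in components]
    newWeights = [0.0 for _ in components]
    for x in samples:
        # E step: responsibilities of the components for this sample.
        scores = [weight * distribution.density(x) for distribution, weight in components]
        total = sum(scores)
        for i, score in enumerate(scores):
            responsibility = score / total
            accumulators[i].add(x, weight=responsibility)
            newWeights[i] += responsibility
    # M step: re-estimate each component; the new weights stay unnormalised.
    return [(accumulator.distribution(), weight)
            for accumulator, weight in zip(accumulators, newWeights)]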
probability.mix_up.trainGaussianMixtureFromScratch(samples, componentNum, componentIncrease=1, emIterations=4)

Train a Gaussian mixture model from data. This uses expectation–maximisation (which iteratively retrains the components and weights) and mixing up (a not-so-mathematically-neat process that repeatedly splits the Gaussian(s) with the largest weight into two).

Parameters:
  • samples – a list of samples.
  • componentNum – the number of components of the final mixture.
  • componentIncrease – the number of extra components to create at once, by splitting the component with the heaviest weight, each time the number of components is increased in between rounds of training.
  • emIterations – the number of iterations of expectation–maximisation to run each time the number of components has changed.
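An illustrative sketch of this control flow, not the library's implementation: start from a single component trained on all the data, repeatedly split the heaviest-weighted component, and run a few iterations of expectation–maximisation after every change. Here splitComponent is a hypothetical helper that turns one component into two perturbed copies, and sketchEmIteration is the sketch given above.

def sketchTrainMixtureFromScratch(samples, componentNum, AccumulatorType,
                                  splitComponent, componentIncrease=1,
                                  emIterations=4):
    # Start from a single component trained on all the data.
    accumulator = AccumulatorType()
    for x in samples:
        accumulator.add(x)
    components = [(accumulator.distribution(), 1.0)]
    while len(components) < componentNum:
        # Mix up: split the heaviest component(s).
        for _ in range(min(componentIncrease, componentNum - len(components))):
            heaviest = max(range(len(components)), key=lambda i: components[i][1])
            distribution, weight = components.pop(heaviest)
            components += [(piece, weight / 2) for piece in splitComponent(distribution)]
        # Retrain the enlarged mixture with a few EM iterations.
        for _ in range(emIterations):
            components = sketchEmIteration(samples, components, AccumulatorType)
    return components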

Samplers

Samplers are unary functions (or classes with a __call__ method) that return a new sample from a distribution each time. probability.sampler.UniformUnitSampler is the base sampler that other samplers wrap.

Making the sampler a separate object allows

  1. different base samplers to be specified;

  2. computations that speed up sampling to be done when the sampler is constructed

    (for example, Cholesky decomposition for a Gaussian; computing the cdf for a discrete distribution).
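As an illustration of the second point, the following sketch (a hypothetical class, not the library's code) precomputes the cumulative weights of a discrete distribution once, at construction, so that each call only needs one draw from the base sampler and a binary search:

import bisect, random

class SketchDiscreteSampler:
    def __init__(self, values, uniformUnitSampler=random.random):
        # Expensive work happens once, here.
        self.values = [value for value, _ in values]
        self.cumulative = []
        total = 0.0
        for _, weight in values:
            total += weight
            self.cumulative.append(total)
        self.total = total
        self.uniformUnitSampler = uniformUnitSampler
    def __call__(self):
        # Each call: one uniform draw plus a binary search.
        u = self.uniformUnitSampler() * self.total
        index = bisect.bisect_right(self.cumulative, u)
        return self.values[min(index, len(self.values) - 1)]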

class probability.sampler.UniformUnitSampler
printState()

Print state for debugging.

seed(n)
class probability.sampler.AlwaysSampler(result)

Sampler that always produces the same sample.

Parameters: result – the sampler’s permanent result.
class probability.sampler.IncrementalSystematicSampler(unbiased=False, uniformUnitSample=UniformUnitSampler())

Sampler that, for each 2*n samples, produces equidistant samples. The last 2*(n-1) of those samples come out in order. This is useful if one wants to apply systematic sampling at various levels.

Parameters:
  • unbiased – make the first n samples unbiased for any n. This means that n/2 samples are cached.
  • uniformUnitSampler – the base unit sampler. If unbiased is False, it is called only once, to determine the position of the first sample.
class probability.sampler.UnitGaussianSampler(uniformUnitSample=UniformUnitSampler())

Sampler that returns unit Gaussian distributed samples. That is, samples with mean 0 and variance 1.
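One standard way to turn unit-interval samples into unit Gaussian samples is the Box–Muller transform; the library may well use a different method internally, so the following is only an illustrative sketch:

import math, random

class SketchUnitGaussianSampler:
    def __init__(self, uniformUnitSampler=random.random):
        self.uniform = uniformUnitSampler
        self.spare = None
    def __call__(self):
        if self.spare is not None:
            value, self.spare = self.spare, None
            return value
        # Two uniform draws yield two independent unit Gaussian samples.
        u1 = self.uniform() or 1e-12   # avoid log(0)
        u2 = self.uniform()
        radius = math.sqrt(-2.0 * math.log(u1))
        self.spare = radius * math.sin(2.0 * math.pi * u2)
        return radius * math.cos(2.0 * math.pi * u2)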
