Probabilities #
This defines a type Prob, which is any real number in the interval 0 to 1. It comes with
additional statements, such as its application to convex sets, and it makes a useful type alias for
functions that only make sense on probabilities.
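The definition can be sketched as a subtype of ℝ; the exact form in the source may differ, so take this as illustrative only:

```lean
-- Sketch (assumed form): probabilities as a subtype of the reals.
abbrev Prob : Type := { p : ℝ // 0 ≤ p ∧ p ≤ 1 }

-- Any `p : Prob` coerces to its underlying real number.
instance : Coe Prob ℝ := ⟨Subtype.val⟩
```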
A significant application is in the Mixable typeclass, also in this file, which is a general notion
of convex combination that applies to types as opposed to sets; elements are Mixable.mixed using Probs.
Equations
- Prob.instZero = { zero := ⟨0, Prob.instZero._proof_1⟩ }
Equations
- Prob.instCoeNNReal = { coe := Prob.toNNReal }
Equations
- Prob.NNReal.asProb p hp = ⟨↑p, ⋯⟩
Instances For
Equations
- Prob.NNReal.asProb' p hp = ⟨↑p, ⋯⟩
Instances For
Subtract one probability from another. The subtraction truncates at zero, so it is often awkward
to work with, for the same reason that Nat subtraction is a pain. But it lets you write
1 - p, which is useful enough on its own that this seems worth having.
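A minimal sketch of such a truncated subtraction, assuming Prob is (reducibly) the subtype {p : ℝ // 0 ≤ p ∧ p ≤ 1}; the actual instance in the source may be stated differently:

```lean
noncomputable instance : Sub Prob where
  sub := fun ⟨p, _, hp1⟩ ⟨q, hq0, _⟩ =>
    ⟨max (p - q) 0,
      le_max_right _ _,                  -- 0 ≤ max (p - q) 0
      max_le (by linarith) zero_le_one⟩  -- p ≤ 1 and 0 ≤ q give p - q ≤ 1
```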
A Mixable T typeclass instance gives a compact way of talking about the action of probabilities
for forming linear combinations in convex spaces. The notation p [ x₁ ↔ x₂ ] means to take a convex
combination, equal to x₁ if p=1 and to x₂ if p=0.
Mixable is defined by an "underlying" data type U with addition and scalar multiplication, and a
bijection between T and a convex subset of U. For instance, in Mixable (Distribution (Fin n)),
U is the type of n-element real vectors, and the image is the probability simplex (degenerate in
one dimension). For QuantumInfo.Finite.MState density matrices in quantum mechanics, which are
PSD matrices of trace 1, U is the underlying matrix type.
Why not just stick with the existing notion of Convex? Convex requires that the type already form an
AddCommMonoid and a Module ℝ, but many types, such as Distribution, do not: there is no good notion of
"multiplying a probability distribution by 0.3" to get another distribution. We can coerce the distribution
into, say, a vector or a function, but then we are no longer doing arithmetic with distributions.
Accordingly, the expression 0.3 * distA + 0.7 * distB cannot represent a distribution on its own.
- to_U : T → U — getter for the underlying data
- to_U_inj — proof that this getter is injective
- convex — proof that the image of to_U is convex
- mkT — function to get a T from a proof that u : U is in the image.
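Putting the fields together, the class might look roughly like this; the field names are taken from the equations on this page, but the exact signature in the source may differ:

```lean
class Mixable (U : outParam (Type _)) (T : Type _)
    [AddCommMonoid U] [SMul ℝ U] where
  /-- Getter for the underlying data. -/
  to_U : T → U
  /-- The getter is injective. -/
  to_U_inj : ∀ {t₁ t₂ : T}, to_U t₁ = to_U t₂ → t₁ = t₂
  /-- The image of `to_U` is a convex set. -/
  convex : Convex ℝ (Set.range to_U)
  /-- Recover a `T` from a proof that `u` lies in the image. -/
  mkT : ∀ {u : U}, (∃ t : T, to_U t = u) → { t : T // to_U t = u }
```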
Instances
Equations
- Mixable.mix_ab ha hb hab x₁ x₂ = ↑(Mixable.mkT ⋯)
Instances For
Mixable.mix represents the notion of "convex combination" on the type T afforded by the Mixable
instance. It takes a Prob, that is, a real number between 0 and 1. To work directly with a Real, use mix_ab.
Equations
- p[x₁↔x₂] = Mixable.mix_ab ⋯ ⋯ ⋯ x₁ x₂
Instances For
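A hypothetical usage of the notation, here mixing two probabilities (which are Mixable per the instance below); the example names are illustrative:

```lean
-- With p : Prob and x₁ x₂ : T for a Mixable type T, `p [x₁ ↔ x₂]` is the
-- convex combination that equals x₁ at p = 1 and x₂ at p = 0,
-- computed in the underlying type U.
example (p x₁ x₂ : Prob) : Prob := p [x₁ ↔ x₂]
```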
When T is itself a suitable vector space over ℝ, so that the convex set is the whole space, we get a Mixable instance.
Mixable instance on Pi types.
Mixable instances on subtypes (of other mixable types), assuming that they have the correct closure properties.
Probabilities Prob themselves form a convex set, and so get a Mixable instance.
Equations
- Prob.instMixable = { to_U := Subtype.val, to_U_inj := ⋯, convex := Prob.instMixable._proof_1, mkT := fun {u : ℝ} (h : ∃ (t : Prob), ↑t = u) => ⟨⟨u, ⋯⟩, ⋯⟩ }
Equations
- Prob.«term—log» = Lean.ParserDescr.node `Prob.«term—log» 1024 (Lean.ParserDescr.symbol "—log ")