
ClassicalInfo.Entropy

Shannon entropy

Definitions and facts about the Shannon entropy function -x*ln(x), both on a single variable and on a distribution.

There is significant overlap with Real.negMulLog and Real.binEntropy in Mathlib, and probably these files could be combined in some form.
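
For reference, Real.negMulLog is Mathlib's name for the map x ↦ -x * Real.log x, and Real.binEntropy is the two-outcome (binary) entropy. A small illustrative check (not from this file):

import Mathlib

-- Mathlib's single-variable entropy function on bare reals, in nits:
example (x : ℝ) : Real.negMulLog x = -x * Real.log x := by
  unfold Real.negMulLog
  ring

-- Mathlib's binary entropy of p and 1 - p:
#check @Real.binEntropy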

def H₁ : Prob → ℝ

The one-event entropy function, H₁(p) = -p*ln(p). Uses nits.

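A minimal sketch of what such a definition can look like, using a hypothetical stand-in Prob' for this library's Prob type (the actual definition here may differ):

import Mathlib

-- Hypothetical stand-in for this library's `Prob`: a probability in [0, 1].
structure Prob' where
  val : ℝ
  nonneg : 0 ≤ val
  le_one : val ≤ 1

-- One-event entropy in nits, mirroring H₁(p) = -p*ln(p).
noncomputable def H₁' (p : Prob') : ℝ := -p.val * Real.log p.val
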
@[simp]
theorem H₁_zero_eq_zero : H₁ 0 = 0

H₁ of 0 is zero.

@[simp]
theorem H₁_one_eq_zero : H₁ 1 = 0

H₁ of 1 is zero.

theorem H₁_nonneg (p : Prob) :
0 ≤ H₁ p

Entropy is nonnegative.

theorem H₁_le_1 (p : Prob) :
H₁ p < 1

Entropy is less than 1.

theorem H₁_le_exp_m1 (p : Prob) :
H₁ p ≤ Real.exp (-1)

TODO: Entropy is at most 1/e.
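
For intuition: on (0, 1] the derivative of -p*ln(p) is -ln(p) - 1, which vanishes at p = 1/e, and the value there is -(1/e)*ln(1/e) = 1/e = exp(-1); hence the name H₁_le_exp_m1.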

theorem H₁_concave (x y p : Prob) :
p.mix (H₁ x) (H₁ y) ≤ H₁ (p.mix x y)
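
Here p.mix a b presumably denotes the convex combination p*a + (1-p)*b, so this states that H₁ is concave: p*H₁(x) + (1-p)*H₁(y) ≤ H₁(p*x + (1-p)*y).
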
def Hₛ {α : Type u} [Fintype α] (d : Distribution α) : ℝ

        The Shannon entropy of a discrete distribution, H(X) = ∑ H₁(p_x).

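A minimal sketch of such a definition, using a hypothetical Distribution' stand-in for this library's Distribution type (the actual definition here may differ):

import Mathlib

-- Hypothetical stand-in for this library's `Distribution`:
-- a probability mass function on a finite type.
structure Distribution' (α : Type*) [Fintype α] where
  prob : α → ℝ
  nonneg : ∀ x, 0 ≤ prob x
  sum_one : ∑ x, prob x = 1

-- Shannon entropy as the sum of one-event entropies, mirroring Hₛ.
noncomputable def Hₛ' {α : Type*} [Fintype α] (d : Distribution' α) : ℝ :=
  ∑ x, Real.negMulLog (d.prob x)
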
theorem Hₛ_nonneg {α : Type u} [Fintype α] (d : Distribution α) :
0 ≤ Hₛ d

Shannon entropy of a distribution is nonnegative.

theorem Hₛ_le_log_d {α : Type u} [Fintype α] (d : Distribution α) :
Hₛ d ≤ Real.log (Fintype.card α)

Shannon entropy of a distribution on a type of cardinality d is at most ln d.

@[simp]
theorem Hₛ_constant_eq_zero {α : Type u} [Fintype α] {i : α} :
Hₛ (Distribution.constant i) = 0

The Shannon entropy of a constant variable is zero.

theorem Hₛ_uniform {α : Type u} [Fintype α] :
Hₛ Distribution.uniform = Real.log (Fintype.card α)

Shannon entropy of a uniform distribution is ln d.
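
For a uniform distribution over d outcomes, each of the d terms in the sum is -(1/d)*ln(1/d) = ln(d)/d, so Hₛ equals ln d exactly, attaining the upper bound in Hₛ_le_log_d.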