Formalized math · machine-checked Lean 4
What the team has proved
A walkthrough of every theorem the math team has formalized in Lean 4 + mathlib so far, with prose translations and proof status. Files: MatMul.lean, Eml.lean, MinimalBasis.lean, JunkValueTest.lean.
1. Matrix multiplication = composition of linear maps ¶
The textbook claim from §6.3 of shared/matrix-foundations/notes.md: "Row-times-column is not a convention; it is the unique formula consistent with composition once the column-as-image convention is fixed." The team formalized this in two complementary statements.
For finite-dimensional vector spaces V, W, U over a field F with bases b_V, b_W, b_U, and linear maps T : V → W, S : W → U:
theorem toMatrix_comp_eq
(b_V : Basis ιV F V) (b_W : Basis ιW F W) (b_U : Basis ιU F U)
(T : V →ₗ[F] W) (S : W →ₗ[F] U) :
LinearMap.toMatrix b_V b_U (S.comp T) =
LinearMap.toMatrix b_W b_U S * LinearMap.toMatrix b_V b_W T :=
LinearMap.toMatrix_comp b_V b_W b_U S T
The matrix of "do T, then do S" is the product of the two individual matrices, in that order. The proof is one line — it appeals to mathlib's LinearMap.toMatrix_comp, which already encodes the derivation.

The same theorem, but stating the row-times-column formula explicitly:
theorem toMatrix_comp_apply_eq
(b_V : Basis ιV F V) (b_W : Basis ιW F W) (b_U : Basis ιU F U)
(T : V →ₗ[F] W) (S : W →ₗ[F] U) (i : ιU) (j : ιV) :
LinearMap.toMatrix b_V b_U (S.comp T) i j =
∑ k : ιW, LinearMap.toMatrix b_W b_U S i k *
LinearMap.toMatrix b_V b_W T k j := by
rw [LinearMap.toMatrix_comp b_V b_W b_U S T]
rfl
Entry (i, j) of the matrix of S ∘ T equals Σₖ Aᵢₖ · Bₖⱼ — the row-times-column rule. The rfl at the end is mathematically content-free: in mathlib, this entry-wise formula is the definition of matrix multiplication, so once the matrix-level statement is rewritten, equality holds by definition.

2. Why A·B ≠ B·A — concrete witness ¶
From §6.4. Function composition isn't commutative, so matrix multiplication isn't either. Three small examples make it a verified fact, not a hand-wave.
def A : Matrix (Fin 2) (Fin 2) ℤ := !![0, 1; 0, 0]
def B : Matrix (Fin 2) (Fin 2) ℤ := !![1, 0; 0, 0]
example : A * B = !![0, 0; 0, 0] := by unfold A B; decide
example : B * A = !![0, 1; 0, 0] := by unfold A B; decide
example : A * B ≠ B * A := by unfold A B; decide
With A = !![0, 1; 0, 0] and B = !![1, 0; 0, 0]: A·B is the zero matrix, but B·A = A itself. The proofs are just decide — Lean's kernel reduces both products to canonical integer matrices and compares them. Note the entries are ℤ, not ℝ: decide needs DecidableEq, which ℤ has and ℝ does not.

3. Algebra laws as one-line corollaries ¶
From §7. Each of identity, associativity, and distributivity falls out of §6.3 plus the corresponding fact about linear maps. The Lean proofs are 5–6 line rewrite chains, deliberately taking the prose-faithful route — mathlib has direct matrix-level lemmas (Matrix.mul_assoc, etc.) that would collapse these to one-liners, but the goal was to show the chain.
theorem toMatrix_id_eq (b : Basis ιV F V) :
LinearMap.toMatrix b b LinearMap.id = (1 : Matrix ιV ιV F) :=
LinearMap.toMatrix_id b
theorem toMatrix_comp_assoc
(b_V : Basis ιV F V) (b_W : Basis ιW F W) (b_X : Basis ιX F X) (b_U : Basis ιU F U)
(R : V →ₗ[F] W) (T : W →ₗ[F] X) (S : X →ₗ[F] U) :
(LinearMap.toMatrix b_X b_U S * LinearMap.toMatrix b_W b_X T) *
LinearMap.toMatrix b_V b_W R =
LinearMap.toMatrix b_X b_U S *
(LinearMap.toMatrix b_W b_X T * LinearMap.toMatrix b_V b_W R) := by
rw [← toMatrix_comp_eq b_W b_X b_U T S,
← toMatrix_comp_eq b_V b_W b_U R (S ∘ₗ T),
← toMatrix_comp_eq b_V b_W b_X R T,
← toMatrix_comp_eq b_V b_X b_U (T ∘ₗ R) S,
LinearMap.comp_assoc]
Associativity: (A·B)·C = A·(B·C). The proof rewrites four matrix products back to compositions, applies the fact that function composition is associative (LinearMap.comp_assoc), and the matrix equation falls out. Five rewrite steps versus a one-line Matrix.mul_assoc — the team picked the longer route to keep the prose argument visible.

The two distributivity laws (toMatrix_comp_add_right and toMatrix_comp_add_left) follow the same shape — five rewrite steps each, deliberately taking the prose path.
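Since only the names of the distributivity laws appear above, here is a plausible shape for the right-hand one, mirroring the associativity proof. This is an unchecked sketch: it assumes the file's toMatrix_comp_eq plus the fact that LinearMap.toMatrix is a linear equivalence (so map_add applies); the team's actual statement and rewrite order may differ.

```lean
-- Sketch only: right distributivity, same five-step prose-faithful style.
theorem toMatrix_comp_add_right'
    (b_V : Basis ιV F V) (b_W : Basis ιW F W) (b_U : Basis ιU F U)
    (S : W →ₗ[F] U) (T₁ T₂ : V →ₗ[F] W) :
    LinearMap.toMatrix b_W b_U S *
        (LinearMap.toMatrix b_V b_W T₁ + LinearMap.toMatrix b_V b_W T₂) =
      LinearMap.toMatrix b_W b_U S * LinearMap.toMatrix b_V b_W T₁ +
        LinearMap.toMatrix b_W b_U S * LinearMap.toMatrix b_V b_W T₂ := by
  rw [← map_add,                                  -- [T₁] + [T₂] = [T₁ + T₂]
      ← toMatrix_comp_eq b_V b_W b_U T₁ S,        -- [S]·[T₁] = [S ∘ T₁]
      ← toMatrix_comp_eq b_V b_W b_U T₂ S,        -- [S]·[T₂] = [S ∘ T₂]
      ← map_add,                                  -- sum of matrices = matrix of sum
      ← LinearMap.comp_add,                       -- S∘T₁ + S∘T₂ = S∘(T₁+T₂)
      toMatrix_comp_eq]                           -- back to the product form
```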
4. Change of basis — same map, different coordinates ¶
From §10. The same linear map can be represented by different matrices in different bases. The change-of-basis formula [T]_{B'} = P⁻¹·[T]_B·P is "literally §6.3 used three times."
theorem toMatrix_change_of_basis (B B' : Basis ι F V) (T : V →ₗ[F] V) :
LinearMap.toMatrix B' B' T =
LinearMap.toMatrix B B' LinearMap.id *
LinearMap.toMatrix B B T *
LinearMap.toMatrix B' B LinearMap.id := by
rw [← toMatrix_comp_eq B B B' T LinearMap.id,
LinearMap.id_comp,
← toMatrix_comp_eq B' B B' LinearMap.id T,
LinearMap.comp_id]
[T]_{B'} = [id]_{B'←B} · [T]_B · [id]_{B←B'}. The matrix of T in basis B' equals the change-of-basis matrix from B to B', times the matrix of T in B, times the change-of-basis matrix back. Two applications of §6.3 plus id ∘ T = T = T ∘ id. Pure §6.3 mechanics.

theorem det_similar (P : (Matrix ι ι F)ˣ) (A : Matrix ι ι F) :
Matrix.det (P⁻¹.val * A * P.val) = Matrix.det A :=
Matrix.det_units_conj' P A
theorem trace_similar (P : (Matrix ι ι F)ˣ) (A : Matrix ι ι F) :
Matrix.trace (P⁻¹.val * A * P.val) = Matrix.trace A :=
Matrix.trace_units_conj' P A
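For orientation, the one-line reasons these similarity invariances hold (standard facts, restated here, not quoted from the file):

```latex
\det(P^{-1}AP) = \det(P^{-1})\,\det(A)\,\det(P)
             = \det(P)^{-1}\det(P)\,\det(A) = \det(A),
\qquad
\operatorname{tr}(P^{-1}AP) = \operatorname{tr}(A\,P\,P^{-1}) = \operatorname{tr}(A).
```

The determinant argument uses multiplicativity of det; the trace argument uses cyclicity, tr(XY) = tr(YX).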
A 2×2 worked example also lives in MatMul.lean: the map T(x,y) = (2x+y, x+2y) with matrix !![2,1;1,2] in the standard basis becomes the diagonal !![3,0;0,1] in the eigenbasis {(1,1), (1,−1)}. Verified entry-wise over ℚ via native_decide.
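A plausible reconstruction of that worked check (names hypothetical; the file's actual definitions may differ):

```lean
-- Hypothetical reconstruction of the 2×2 worked example over ℚ.
def Tmat : Matrix (Fin 2) (Fin 2) ℚ := !![2, 1; 1, 2]
def P    : Matrix (Fin 2) (Fin 2) ℚ := !![1, 1; 1, -1]         -- eigenvectors as columns
def Pinv : Matrix (Fin 2) (Fin 2) ℚ := !![1/2, 1/2; 1/2, -1/2] -- explicit inverse of P

-- P⁻¹ · [T] · P is diagonal with the eigenvalues 3 and 1.
example : Pinv * Tmat * P = !![3, 0; 0, 1] := by native_decide
```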
5. EML — Odrzywołek's exp-minus-log operator ¶
From shared/eml/notes.md. The operator is eml(x, y) := exp(x) − log(y). Odrzywołek's claim (April 2026) is that {eml, 1} generates every elementary function. The team formalized the operator and the foundational identities.
noncomputable def eml (x y : ℝ) : ℝ := exp x - log y
example : eml 0 1 = 1 := by simp [eml] -- exp(0) - log(1) = 1 - 0 = 1
theorem exp_eq_eml (x : ℝ) : exp x = eml x 1 := by simp [eml]
The definition encodes eml as exp-minus-log over the reals. The sanity check (eml 0 1 = 1) is one tactic. exp x = eml x 1 is the first foundational identity — exponentiation is one level of nested eml.

theorem log_eq_eml (x : ℝ) :
log x = eml 1 (eml (eml 1 x) 1) := by
simp only [eml, Real.log_one, sub_zero, Real.log_exp]
ring
Logarithm as three nested levels of eml. The proof unfolds the definition, simplifies log(1) = 0 and log(exp(...)) = ..., then closes with ring. Holds for all real x because mathlib's Real.log is extended to be 0 on (−∞, 0] by convention; both sides compute 0 = 0 there.

6. EML constructive direction — sub, neg, add, mul ¶
The team formalized sub, neg, add, mul via the obvious construction, with explicit domain conditions. Each downstream operation inherits a narrower domain than its components — exactly the "constraint-narrowing" finding that motivates the obstruction work.
noncomputable def mySub (x y : ℝ) : ℝ := eml (myLog x) (myExp y)
theorem mySub_eq (x y : ℝ) (hx : 0 < x) : mySub x y = x - y := by
unfold mySub eml
rw [myLog_eq, myExp_eq, Real.exp_log hx, Real.log_exp]
sub(x, y) = eml(log x, exp y) = exp(log x) − log(exp y) = x − y. The proof requires 0 < x so exp(log x) = x holds.

noncomputable def myNeg (x : ℝ) : ℝ := mySub (mySub (Real.exp 1) x) (Real.exp 1)
theorem myNeg_eq (x : ℝ) (hx : x < Real.exp 1) : myNeg x = -x := by
unfold myNeg
have h_inner : mySub (Real.exp 1) x = Real.exp 1 - x :=
mySub_eq _ _ (Real.exp_pos 1)
rw [h_inner]
have h_outer : mySub (Real.exp 1 - x) (Real.exp 1) = Real.exp 1 - x - Real.exp 1 :=
mySub_eq _ _ (by linarith)
rw [h_outer]
ring
neg(x) = (e − x) − e. Requires x < e so the inner e − x > 0; otherwise mySub's domain hypothesis fails. This narrower-than-its-parts pattern is the structural worry the obstruction theory captures.

Add (myAdd) and Mul (myMul) compose these the same way, with even tighter domains: add needs 0 < x and y < e; mul needs 1 < x (so 0 < log x) and y < e^e ≈ 15.15.
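The myAdd construction itself isn't shown in this walkthrough; a plausible shape consistent with the stated domain conditions (hypothetical — the file may compose differently):

```lean
-- Hypothetical: x + y = x - (-y), so compose mySub with myNeg.
noncomputable def myAdd (x y : ℝ) : ℝ := mySub x (myNeg y)

-- Domain matches the text: 0 < x (for the outer mySub), y < e (for myNeg).
theorem myAdd_eq (x y : ℝ) (hx : 0 < x) (hy : y < Real.exp 1) :
    myAdd x y = x + y := by
  unfold myAdd
  rw [myNeg_eq y hy, mySub_eq x (-y) hx]
  ring
```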
7. EML's first concrete obstruction ¶
The constraint-narrowing pattern from §6 said: even negation can't be globally represented by the obvious eml-tree. The team formalized this as a witness theorem: there exists an x at which the obvious construction differs from −x.
theorem myNeg_not_globally_neg : ∃ x : ℝ, myNeg x ≠ -x := by
refine ⟨Real.exp 1 + 1, ?_⟩
have h_e_pos : (0 : ℝ) < Real.exp 1 := Real.exp_pos 1
have h_value : myNeg (Real.exp 1 + 1) = 1 - Real.exp 1 := by
unfold myNeg
rw [mySub_eq _ _ h_e_pos]
have h_inner : Real.exp 1 - (Real.exp 1 + 1) = -1 := by ring
rw [h_inner]
unfold mySub
rw [myLog_eq, myExp_eq]
have h_log_neg_one : Real.log (-1 : ℝ) = 0 := by
rw [show (-1 : ℝ) = -(1 : ℝ) by ring, Real.log_neg_eq_log, Real.log_one]
rw [h_log_neg_one]
unfold eml
rw [Real.exp_zero, Real.log_exp]
rw [h_value]
intro heq
have : (2 : ℝ) = 0 := by linarith
norm_num at this
At x = e + 1, the inner mySub e x correctly computes −1; but the outer mySub _ e then takes log(−1) = 0 (mathlib's junk-value convention extends log to be 0 on the negatives), giving 1 − e instead of the expected −(e + 1). The two are unequal because 1 − e = −e − 1 would force 1 = −1. The witness x = e + 1 is a falsifying input.

This is a weaker claim than "no eml-tree of any shape represents negation globally." That stronger claim — and the analogous claim for the quintic root — is what the next section's framework targets.
8. The MinimalBasis framework — Tier 2 Path B ¶
This is the file that updates my mental model of where the team is. MinimalBasis.lean defines a generic obstruction-theory framework that any minimal-basis universality claim (EML, NAND, future AI primitives) can instantiate. The framework abstractions are proved; only the per-instantiation analytic content is left as Path-B axioms.
structure OperatorBasis (α : Type*) where
Op : Type
arity : Op → ℕ
eval : (op : Op) → (Fin (arity op) → α) → α
An OperatorBasis over α is just: a type of operator symbols, an arity for each, and an evaluator. EML uses one binary operator (eml); NAND would use one binary operator (nand); a future AI-primitives basis could plug in here too.

inductive Term {α : Type*} (B : OperatorBasis α) (n : ℕ) : Type _ where
| const : α → Term B n
| var : Fin n → Term B n
| app : (op : B.Op) → (Fin (B.arity op) → Term B n) → Term B n
def Term.representsGlobally {α : Type*} {B : OperatorBasis α} {n : ℕ}
(T : Term B n) (f : (Fin n → α) → α) : Prop :=
∀ env, T.eval env = f env
A term globally represents f if its evaluation matches f at every input. This is the syntactic object the entire universality debate is about.

structure ObstructionTheory {α : Type*} (B : OperatorBasis α) where
inClass : (n : ℕ) → ((Fin n → α) → α) → Prop
const_in_class : ∀ {n : ℕ} (c : α), inClass n (fun _ => c)
var_in_class : ∀ {n : ℕ} (i : Fin n), inClass n (fun env => env i)
app_in_class : ∀ {n : ℕ} (op : B.Op)
(subs : Fin (B.arity op) → ((Fin n → α) → α)),
(∀ k, inClass n (subs k)) →
inClass n (fun env => B.eval op (fun k => subs k env))
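To see how the three closure fields get used, a toy obstruction class (illustration only, not from the team's files): over an {and, or} basis, monotone Boolean functions are closed under constants, variables, and both operators, so Boolean NOT would be unrepresentable.

```lean
-- Illustration: "monotone" as an inClass predicate for a Boolean basis.
-- Constants and variables are monotone; and/or preserve monotonicity;
-- NOT fails monotonicity at e₁ = (fun _ => false) ≤ e₂ = (fun _ => true).
def monotoneClass (n : ℕ) (f : (Fin n → Bool) → Bool) : Prop :=
  ∀ e₁ e₂ : Fin n → Bool, (∀ i, e₁ i ≤ e₂ i) → f e₁ ≤ f e₂
```

Discharging the three ObstructionTheory fields for this predicate would instantiate the framework exactly the way the EML and NAND sections below do.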
theorem Term.eval_in_class {α : Type*} {B : OperatorBasis α}
(Th : ObstructionTheory B) {n : ℕ} :
∀ (T : Term B n), Th.inClass n T.eval
| .const c => Th.const_in_class c
| .var i => Th.var_in_class i
| .app op kids => by
have ih : ∀ k, Th.inClass n (kids k).eval :=
fun k => Term.eval_in_class Th (kids k)
exact Th.app_in_class op (fun k => (kids k).eval) ih
theorem Term.no_representation_of_not_in_class
{α : Type*} {B : OperatorBasis α} (Th : ObstructionTheory B) {n : ℕ}
(f : (Fin n → α) → α) (hf : ¬ Th.inClass n f) :
¬ ∃ T : Term B n, T.representsGlobally f := by
rintro ⟨T, hT⟩
exact T.not_representsGlobally_of_not_in_class Th f hf hT
If f isn't in the obstruction class, no term globally represents it. This is the abstract non-expressibility theorem; per-basis instantiations specialize it. The proof is a one-line contraposition off the closure theorem.

9. Headline theorem — no EML-term represents the quintic root ¶
The team instantiated the EML basis and an obstruction theory whose inClass predicate is "has solvable monodromy." Two of the original four Path-B axioms — the trivial-monodromy cases for constants and variables — have been discharged via the concrete hasSolvableMonodromy definition in Sheffer/Foundations/SolvableMonodromy.lean, which is built on the monodromy and multivalued-analytic infrastructure in MonodromyGroup.lean and MultivaluedAnalytic.lean. Two axioms remain.
axiom eml_preserves_solvableMonodromy {n : ℕ} (f g : (Fin n → ℝ) → ℝ) :
hasSolvableMonodromy f → hasSolvableMonodromy g →
hasSolvableMonodromy (fun env : Fin n → ℝ => EML.eml (f env) (g env))
axiom quinticRoot_has_S5_monodromy_universal : ...
-- the underlying positive fact: the quintic-root function has monodromy S₅
eml_preserves_solvableMonodromy is Khovanskii's lemma proper — composition of solvable-monodromy functions through eml remains solvable — and quinticRoot_has_S5_monodromy_universal is the classical Galois fact that the quintic root over a generic coefficient space has $S_5$ monodromy. Both are theorems with established proofs in the analytic-Galois literature; the Lean discharge of the eml-preservation axiom is the long pole of Path A.

theorem no_eml_term_represents_quintic :
¬ ∃ T : MinimalBasis.Term EmlBasis 5, T.representsGlobally quinticRoot :=
Term.no_representation_of_not_in_class EmlObstruction quinticRoot
quinticRoot_not_solvableMonodromy
This is the artifact that lands the Tier 2 contribution. Strategy reweighting follows in a separate update; for now the math is on disk and machine-checked.
10. NAND — Sheffer's positive completeness ¶
The Boolean instance of the framework. Sheffer 1913 showed every Boolean function on n inputs is expressible as a NAND-tree. Three direct constructions for NOT, AND, OR are proved fully; full functional completeness is captured as a single axiom (the DNF-to-NAND construction is mechanical, ~50 Lean lines, deferred).
inductive NandOp : Type where
| nand : NandOp
deriving Inhabited, DecidableEq
def nand (a b : Bool) : Bool := !(a && b)
def NandBasis : MinimalBasis.OperatorBasis Bool where
Op := NandOp
arity _ := 2
eval _ args := nand (args 0) (args 1)
One operator symbol (nand), arity 2, evaluator !(a && b). The framework's OperatorBasis structure absorbs the rest.

-- NOT via nand x x = !x
def notTree : Term NandBasis 1 :=
Term.app NandOp.nand (fun _ => Term.var 0)
theorem notTree_repr : notTree.representsGlobally (fun env => !(env 0)) := by
intro env
cases h : env 0 <;> simp [notTree, Term.eval, NandBasis, nand, h]
-- AND via nand (nand x y) (nand x y) = x && y
def andTree : Term NandBasis 2 := ... -- nested nand of nand
theorem andTree_repr : andTree.representsGlobally (fun env => env 0 && env 1) := ...
-- OR via De Morgan: !(!x ∧ !y) = x ∨ y
def orTree : Term NandBasis 2 := ...
theorem orTree_repr : orTree.representsGlobally (fun env => env 0 || env 1) := ...
Each representation proof is a cases <;> simp exhaustive check over the truth table. No axioms.

/-- Every Boolean function on n inputs is representable as a NAND-term.
Proved via the DNF construction: bigAnd · bigOr · minTerm · dnf chain. -/
theorem sheffer_complete (n : ℕ) (f : (Fin n → Bool) → Bool) :
∃ T : MinimalBasis.Term NandBasis n, T.representsGlobally f := by
refine ⟨dnf f, ?_⟩
intro env
rw [dnf_eval]
-- closure follows from minTerm_self_eval, minTerm_diff_eval,
-- and exhaustive case-split over the Boolean cube
...
The DNF construction works by scanning the rows of the truth table where f is true, building a minterm (conjunction of variables and negations) for each such row, OR-ing the minterms, and then expressing the whole expression as a NAND-tree via the three direct constructions above. Helpers bigAnd, bigOr, minTerm, and dnf live alongside the theorem; together with minTerm_self_eval and minTerm_diff_eval they close the full case-split. ~120 lines added on top of the original NAND file.

11. PolyAct — fully-constructive AI-architectural obstruction ¶
The cleanest single artifact the team has shipped. A polynomial-activation network of arity-1 operators, each applying a polynomial of degree at most k, composed to depth d, can represent exactly the polynomials of degree at most k^d. The argument is mathlib's Polynomial.natDegree_comp_le applied at each tree level. Zero axioms.
/-- The operator-symbol type for polynomial activations of degree ≤ k. -/
abbrev PolyActOp (k : ℕ) : Type := { p : Polynomial ℝ // p.natDegree ≤ k }
/-- The polynomial-activation basis at degree k: each operator is a univariate
polynomial of degree ≤ k. Arity 1. -/
noncomputable def PolyActBasis (k : ℕ) : MinimalBasis.OperatorBasis ℝ where
Op := PolyActOp k
arity _ := 1
eval p args := p.val.eval (args 0)
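For orientation, a degree-2 activation like squaring packages into this basis as follows (hypothetical example, not from the file):

```lean
-- The squaring activation as an operator of the k = 2 basis:
-- the subtype element carries X² together with its degree bound.
noncomputable def sqOp : PolyActOp 2 :=
  ⟨Polynomial.X ^ 2, (Polynomial.natDegree_X_pow 2).le⟩
```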
/-- The polynomial's natDegree is bounded by k^T.polyDepth. -/
theorem Term.toPoly_natDegree_le {k : ℕ}
(T : MinimalBasis.Term (PolyActBasis k) 1) :
T.toPoly.natDegree ≤ k ^ T.polyDepth := by
induction T with
| const c =>
simp [Term.toPoly, Term.polyDepth, Polynomial.natDegree_C]
| var i =>
simp [Term.toPoly, Term.polyDepth, Polynomial.natDegree_X]
| app op kids ih =>
let i := MinimalBasis.PolyAct.singleChild op
calc (op.val.comp (kids i).toPoly).natDegree
≤ op.val.natDegree * (kids i).toPoly.natDegree :=
Polynomial.natDegree_comp_le
_ ≤ k * k ^ (kids i).polyDepth :=
Nat.mul_le_mul op.property (ih i)
_ = k ^ (1 + (kids i).polyDepth) := by
rw [Nat.add_comm, pow_succ, Nat.mul_comm]
Induction on the Term. Constants and variables have degree at most k^0 = 1. Application: natDegree(p ∘ q) ≤ deg(p) · deg(q) ≤ k · k^d = k^(1+d), where the first inequality is mathlib's Polynomial.natDegree_comp_le. One mathlib lemma, three lines of calc.

theorem x_pow_9_not_representable_at_depth_3 :
¬ ∃ T : MinimalBasis.Term (PolyActBasis 2) 1,
T.polyDepth ≤ 3 ∧ T.representsGlobally (fun env => (env 0)^9) := by
rintro ⟨T, hd, hT⟩
obtain ⟨p, hp_deg, hp_eq⟩ := Term.eval_hasPolyDegLe 2 3 (by norm_num) T hd
have h_eq : ∀ x : ℝ, p.eval x = x^9 := by
intro x
have h1 := hp_eq (fun _ => x)
have h2 := hT (fun _ => x)
simp_all
have h_zero : p - Polynomial.X ^ 9 = 0 := by
apply Polynomial.eq_zero_of_infinite_isRoot
have h_every_root : ∀ x : ℝ, (p - Polynomial.X ^ 9).IsRoot x := by
intro x
change (p - Polynomial.X ^ 9).eval x = 0
rw [Polynomial.eval_sub, Polynomial.eval_pow, Polynomial.eval_X, h_eq x]
ring
have h_set_eq : { x : ℝ | (p - Polynomial.X ^ 9).IsRoot x } = Set.univ :=
Set.eq_univ_of_forall h_every_root
rw [h_set_eq]
exact Set.infinite_univ
have h_p_eq : p = Polynomial.X ^ 9 := sub_eq_zero.mp h_zero
rw [h_p_eq, Polynomial.natDegree_X_pow] at hp_deg
norm_num at hp_deg
Three steps: (1) deg(toPoly(T)) ≤ 2^3 = 8; (2) any real polynomial agreeing with x^9 at every real number has infinitely many roots when subtracted from X^9, so toPoly(T) = X^9 exactly; (3) deg(X^9) = 9 contradicts the bound. The reasoning is calculus-student-tractable; the proof is ~25 Lean lines. The interactive widget at interactive.html#polyact visualizes the expressibility frontier in real time.

12. ArithCircuit — VP / VNP foundations (Phase 3 Round 1) ¶
Phase 3 Round 1 of the Target B foundations. Defines arithmetic circuits, polynomial-size families, VP and VNP — the algebraic-complexity-theory toolkit, formalized in Lean 4 + mathlib for what's plausibly the first time. 442 lines, 0 sorry, 0 axioms. Style: Bürgisser-Clausen-Shokrollahi 1997 §5.2, generalized to arbitrary input-variable types.
inductive ArithmeticCircuit (R : Type*) [CommSemiring R] (σ : Type*) : Type _ where
| input : σ → ArithmeticCircuit R σ
| const : R → ArithmeticCircuit R σ
| add : ArithmeticCircuit R σ → ArithmeticCircuit R σ → ArithmeticCircuit R σ
| mul : ArithmeticCircuit R σ → ArithmeticCircuit R σ → ArithmeticCircuit R σ
def size : ArithmeticCircuit R σ → ℕ
| .input _ => 0 | .const _ => 0
| .add a b => 1 + a.size + b.size | .mul a b => 1 + a.size + b.size
def eval (env : σ → R) : ArithmeticCircuit R σ → R
| .input i => env i | .const c => c
| .add a b => a.eval env + b.eval env | .mul a b => a.eval env * b.eval env
noncomputable def toMvPolynomial : ArithmeticCircuit R σ → MvPolynomial σ R
| .input i => MvPolynomial.X i | .const c => MvPolynomial.C c
| .add a b => a.toMvPolynomial + b.toMvPolynomial
| .mul a b => a.toMvPolynomial * b.toMvPolynomial
A circuit is an input variable, a constant, or an add or mul of two subcircuits. size counts internal gates, eval computes the value at a specific input, toMvPolynomial extracts the underlying multivariate polynomial. The generalization over arbitrary input-variable type (any Type*) is what lets later definitions state permanent_in_VNP directly against MvPolynomial (Fin n × Fin n) R without an awkward Fin (n*n) encoding.

/-- A function p : ℕ → ℕ is polynomial-bounded if it's
    ≤ some polynomial in n (the + c slack absorbs small n). -/
def IsPolyBound (p : ℕ → ℕ) : Prop :=
∃ c d : ℕ, ∀ n : ℕ, p n ≤ c * n^d + c
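A quick sanity instance (illustration, not in the file): 3n² + 7 is polynomially bounded, witnessed by c = 7, d = 2.

```lean
-- 3n² + 7 ≤ 7·n² + 7 for all n, so ⟨7, 2, _⟩ witnesses the bound.
example : IsPolyBound (fun n => 3 * n ^ 2 + 7) :=
  ⟨7, 2, fun n => by nlinarith [sq_nonneg n]⟩
```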
/-- VP-over: a family of polynomials f_n is in VP over R if there exist
circuits C_n with size(C_n) polynomially bounded and toMvPolynomial(C_n) = f_n. -/
def VPOver (R : Type*) [CommSemiring R] ... : Prop := ...
/-- VNP-over: a family of polynomials f_n is in VNP over R if it's a
projection (Boolean-cube sum) of a polynomial in VP. -/
def VNPOver (R : Type*) [CommSemiring R] ... : Prop := ...
/-- The permanent polynomial as an MvPolynomial over Fin n × Fin n. -/
noncomputable def permanentVar (R : Type*) [CommSemiring R] (n : ℕ) :
MvPolynomial (Fin n × Fin n) R := ...
theorem VPSubsetVNP {R : Type*} [CommSemiring R]
(f : (n : ℕ) → MvPolynomial (Fin n) R) :
VPOver R f → VNPOver R f := ...
VP ⊆ VNP follows by taking m = 0 extra Boolean variables and summing over the singleton Boolean cube. Trivial-looking but requires plumbing through the Boolean-projection definitions. About 30 lines of Lean. Originally deferred from Round 1; landed cleanly in this morning's polish pass.

13. PermanentVNP via Ryser ¶
The closure of $\mathrm{perm} \in \mathrm{VNP}$. Construction follows Ryser's formula (Ryser 1963, cited in Valiant 1979 as the source of small-permanent polynomial-size circuits); the naive permutation-sum is exponential, but Ryser's reorganization admits a polynomial-size arithmetic circuit. The whole chain — Ryser identity, explicit polynomial-size circuit, headline VNP-membership — is proved with zero new axioms. 14 step-lemmas plus helper lemmas in ArithCircuit.lean.
/-- Ryser's auxiliary polynomial. For an n × n matrix of formal variables,
the permanent is recovered as a Boolean-cube projection. The polynomial
itself is polynomial-size (the construction below is poly in n). -/
noncomputable def permanentRyserAux (R : Type*) [CommRing R] (n : ℕ) :
MvPolynomial (Fin n × Fin n ⊕ Fin n) R := ...
Ryser's formula: permanent(A) = (-1)^n · Σ_{S ⊆ [n]} (-1)^|S| · Π_i (Σ_{j ∈ S} A[i,j]). Subsets $S \subseteq [n]$ are encoded as elements of the Boolean cube $\{0,1\}^n$, and the right-hand side is a Boolean projection of a polynomial with $n$ Boolean variables — strictly tighter than the naive permutation encoding — whose underlying circuit is polynomial-size in $n$.

theorem evalBoolSubstitute_permanentRyserAux
(R : Type*) [CommRing R] (n : ℕ) (X : Fin n × Fin n → R) :
-- permanentRyserAux summed over the Boolean cube of its m extra variables
-- equals the permanent of the n × n matrix X.
... = Matrix.permanent (Matrix.of (fun i j => X (i, j))) := by
... -- closed via Finset.sum_ninvolution + pair-cancel
The proof unfolds MvPolynomial evaluation against the Boolean cube, with the non-bijective case discharged by Finset.sum_ninvolution and pair-cancellation rather than the originally anticipated funSplitAt. Methodology note recorded by researcher: sum_ninvolution > funSplitAt for pair-cancel sums.

theorem permanentRyserAux_in_VPOver (R : Type*) [CommRing R] :
VPOverArity R (fun n => Fin n × Fin n ⊕ Fin n)
(fun n => permanentRyserAux R n) := by
-- explicit circuit of size 2n² + 5n + 2 ; IsPolyBound with k = 9
...
An explicit circuit of size 2n² + 5n + 2 witnesses IsPolyBound with $k = 9$.

theorem permanent_in_VNP_via_arity (R : Type*) [CommRing R] :
VNPOverArity R (fun n => Fin n × Fin n) (permanentVar R) := ...
-- chain to headline closed in 6 lines
14. Bonus — junk-value falsifiability test ¶
While formalizing EML's obstruction, researcher noticed a meta-pattern: when a function has a junk-value extension to its full domain (mathlib's 1/0 = 0, log(-1) = 0, Nat.pred 0 = 0), partial-function obstructions become single-point witness obstructions, dramatically cheaper to formalize. Researcher predicted this would generalize and proposed a falsifiability test. Prover ran it.
theorem one_div_x_mul_x_not_globally_one : ∃ x : ℝ, (1 / x) * x ≠ 1 := by
refine ⟨0, ?_⟩
simp
example (x : ℝ) (hx : x ≠ 0) : (1 / x) * x = 1 := by field_simp
The naive identity "(1/x)·x = 1" fails at x = 0 because mathlib defines 1/0 = 0. The non-existence claim ("(1/x)·x isn't globally 1") proves in two lines via the witness x = 0.

theorem succ_pred_not_globally_id : ∃ n : ℕ, Nat.succ (Nat.pred n) ≠ n := by
refine ⟨0, ?_⟩
decide
example (n : ℕ) (hn : 0 < n) : Nat.succ (Nat.pred n) = n :=
Nat.succ_pred_eq_of_pos hn
The identity "succ(pred(n)) = n" fails at n = 0 because Nat.pred 0 = 0. Two-line witness proof. Same shape as case 1. The meta-claim survives the falsifiability test for cases 1 and 2 of the 3-case prediction; case 3 (0^0 = 1) was excluded as philosophically ambiguous.