Maybe I'm too familiar with the set-theoretic construction of the natural numbers (0 is the empty set, 1 = {0}, ..., 5 = {0,1,2,3,4}, etc.), but their example of "3 ∩ 4 = 3" or "4 intersect 3 is 3" doesn't seem weird, problematic, or even useless to me; it just looks like a handy set-theoretic implementation of the min() function.
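For instance, a quick Lean sketch (the `vonNeumann` encoding is made up for illustration, using Mathlib's `Finset`; `decide` should manage sets this small):

    import Mathlib

    -- encode n as the set {0, 1, ..., n-1}
    def vonNeumann : Nat → Finset Nat
      | 0 => ∅
      | n + 1 => insert n (vonNeumann n)

    -- intersecting the encodings computes the encoding of min:
    example : vonNeumann 3 ∩ vonNeumann 4 = vonNeumann 3 := by decide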
By itself it's not a problem, but it's certainly useless. Perhaps you can tell me what use "3 ∩ 4 = 3" has.
The problem is that these properties get in the way of proving arithmetic theorems: if you are being absolutely strict, you have to distinguish things that are true of the natural numbers as an algebraic structure from things that just happen to be the case because you picked some specific representation for them. This introduces a lot of noise and makes formal proofs very frustrating, somewhat like when you're programming and have to bend the type system of your compiler to accept code that is conceptually correct, spending effort on type coercions, casts, "unsafe" blocks, and so on. Mathematically, this makes your proof significantly longer, more brittle, and harder to reuse, because it accidentally depends on details of the chosen encoding rather than on the intrinsic properties of arithmetic.
It's a leaky abstraction, in software terms. Ideally, an abstraction models the semantics of the problem domain "opaquely"; ideally our natural numbers have only the properties of the natural numbers and no others. An additional property leaking through is not a handy "bonus" but a point of confusion. You can't rely on it in proofs involving natural numbers without being careful to delineate which conclusions follow from the construction and which are inherent.
This is very interesting. What happens if you keep pulling the thread and construct large theories on such abstraction-layer-breaking theorems? Would we arrive at interesting things like pulling the thread on sqrt(-1) for imaginary numbers? Or is it somehow “undefined behavior”, quirks of the various implementation substrates of abstract mathematics that should be (informally) ignored? My gut says the former.
Are the various alternative axiomatic foundations also equivalent at this level or not? I suppose they are since they can implement/emulate each other, not sure.
The last paragraphs cite why junk theorems are objectionable but then fully misinterpret that to draw the opposite conclusion. The intersection is the S-feature, and it is the problematic part. "1 + 2 = 4" is a "theorem beyond T" expressed in T theory. Don't be misled about what a junk theorem is!
It seems to me that junk theorems are fundamentally a manifestation of leaky abstractions. It happens when you can see the implementation details from inside the abstraction.
I don't know much about Lean, but I attended an introductory talk at some point and I was particularly bothered by these partial function definitions. The example was sqrt, which would give 0 on the negatives.
Now, of course, if you're careful with the definitions you use, there is no problem. But in the (increasingly relevant) context of automatic theorem proving with LLMs, this seems to defeat the 'groundtruthness' of Lean!
How do you make sure that the LLM doesn't reward hack a proof using these workarounds?
>How do you make sure that the LLM doesn't reward hack a proof using these workarounds?
I'm not sure what you mean exactly? There is no soundness issue here, the fact that `sqrt -1` is defined to be 0 does not have any impact on what statements about `sqrt x` can be proved when `x` is positive.
It just means that if you are working on an intermediate step of a proof and you need the result that `sqrt y >= 0` you don't need to provide a proof that `y >= 0`. If you wanted an intermediate result that `(sqrt y)^2 = y` then you would still need to provide a proof that `y >= 0`, though.
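A sketch of the contrast in Lean, with the lemma names as I remember them from Mathlib (`sqrt_nonneg` and `sq_sqrt`):

    import Mathlib

    open Real

    -- no hypothesis needed: sqrt is nonnegative everywhere, junk values included
    example (y : ℝ) : 0 ≤ sqrt y := sqrt_nonneg y

    -- hypothesis required: squaring only recovers y when y is nonnegative
    example (y : ℝ) (hy : 0 ≤ y) : sqrt y ^ 2 = y := sq_sqrt hy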
it's terrible advice for actual programmers though because often 0 is a sentinel value with special meaning for systems that you don't have control over (sometimes because of pre-digital conventions that shouldn't be lightly fucked with).
(discussion: https://news.ycombinator.com/item?id=17736046)

This is usually done by PLs that want to avoid crashes at all costs, but "turning crashes into subtle logic errors" seems like a really bad idea.
"As mentioned before, this is not a post about what’s practically a good idea. All I’m arguing is that mathematically, we can extend division in this way without leading to a contradiction. Programming languages are different from mathematical formalisms, and should be different. I prefer that 1/0 is an error, because I’m not using my program to prove theories."
Please do yourself a favor and actually read it.
Besides, 0 as a sentinel value on disk or on the wire is fine, but once you have values in a programming language, use option types. This is not the 1980s anymore; you don't need to use 0 or -1 or 0xffff to express something special, which sooner or later comes back to bite you.
I don't understand why they would make such footgun functions either, especially because (IIUC, and I probably don't) in a way the whole point of Lean's dependent type system is to be able to express arbitrary constraints on the inputs of these functions so that they can be total -- e.g., to be able to define a subtraction function on the nonnegative integers that takes one integer and one {integer that is less than or equal to the first integer}. And to even call this function, you (or perhaps Lean itself) would need to first prove that its second argument is less than or equal to its first.
You can express those constraints; it just turns out to be less ergonomic in practice if you do. (You can even do so in terms of the junk-valued total functions! Just define `actual_subtraction` to call straight through to `junky_subtraction`, but `actual_subtraction` has these constraints on its domain.)
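A minimal sketch of that wrapper idea, with the made-up names from above (the final step should reduce to core's `Nat.sub_add_cancel`):

    def junky_subtraction (m n : Nat) : Nat := m - n  -- Nat's `-` truncates at 0

    def actual_subtraction (m n : Nat) (_h : n ≤ m) : Nat := junky_subtraction m n

    -- with the hypothesis in hand, the wrapper provably behaves like real subtraction
    example (m n : Nat) (h : n ≤ m) : actual_subtraction m n h + n = m := by
      unfold actual_subtraction junky_subtraction
      exact Nat.sub_add_cancel h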
The mathlib way to do things is to push those requirements out to the one who wishes to use the theorem. If you find that you're depending on a junk value in a way that's incompatible with what you wanted to prove, then you've simply discovered that you forgot to restrict your own domain to exclude the junk. (And if your desired usage lines up with the junk, then great, you get to omit an annoying busywork hypothesis.) A sqrt function that gives 0 on the negatives isn't breaking any of sqrt's properties on the positives!
The mathlib way means that instead of every function having to express these constraints and pass proofs down the line, only some functions have to.
> If you find that you're depending on a junk value in a way that's incompatible with what you wanted to prove
This is the part I'm struggling with. How would you actually know/realise that you were doing this? It seems like "the mathlib way" you describe is choosing to rely on programmer discipline for something that could be enforced automatically.
My fear is that relying on the junk values of functions (values where their "proper" partial counterparts are not defined) is somehow unsound (could lead to proving something untrue). But perhaps my intuition is off here? If so, I think the specific junk values chosen must not matter at all -- e.g., having sqrt return 42 for negative x values should work just as well, am I right?
You can't prove something untrue (in the sense that it implies false) without proving that the theorem prover is unsound, which I think at the moment is not known to be possible in Lean.
But you're exactly right. There's nothing linking theorem prover definitions to pen and paper definitions in any formal system.
It is enforced automatically for most purposes: If you're writing a proof involving e.g. the sqrt function, you want to use theorems about it, e.g. that (sqrt(x))^2 = x. Almost all of those theorems have x>=0 as a precondition, so you do need to prove it when it matters.
This is a topic of contention in formalized math with no universal right answer. Some libraries go heavy on the dependent types, and some like mathlib try to avoid them. I do math in both Rocq and Lean and I find I like the latter style a lot more for my work for a couple reasons:
- Fewer side conditions: Setting a / 0 = 0 means that some laws hold even when a denominator is 0, and so you don't need to prove the denominator is nonzero. This is super nice when the denominator is horrible (see the sketch after this list). I heard once that if you set the junk value for a non-converging Riemann integral to the average of the lim sup and lim inf you can obliterate a huge number of integrability side conditions (though I didn't track down this paper to find out for sure).
- Some of the wacky junk arithmetic values, especially as it relates to extended reals, do show up in measure theory. Point being: "junk arithmetic" is a different mathematical theory than normal math, but it's no less legitimate, and is closely related.
- Definition with Hilbert's epsilon operator. If I want to define a function that takes e.g. a measurable set S as an argument, I could do the dependent-types way

def MyDef (S) (H : measurable S) := /- real definition -/

but then I need to write all of my theorems in terms of (MyDef S H), and this can cause annoying unification problems (more so in Rocq than in Lean, assuming H is a Prop). Alternatively, I could use junk math

open Classical in
def MyDef' (S) := if H : measurable S then /- real definition, may use H -/ else /- junk -/

I can prove (MyDef' S = MyDef S H) whenever I have access to (H : measurable S). And the property H here can be really complex: convergence properties, existence properties, etc. It's nice to avoid trucking them around everywhere.
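To illustrate the first bullet: with a / 0 = 0, a distribution law like `add_div` needs no side condition at all, while cancellation still carries one (lemma names from memory):

    import Mathlib

    -- no `c ≠ 0` needed: when c = 0, both sides are just 0
    example (a b c : ℝ) : (a + b) / c = a / c + b / c := add_div a b c

    -- cancellation genuinely needs the hypothesis, junk value or not
    example (c : ℝ) (hc : c ≠ 0) : c / c = 1 := div_self hc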
This is a really good explanation, but it reinforces my understanding that these “junk maths” are literally undefined behavior as in C and such. They are not defined (in maths), you are not supposed to trigger them, so they can be anything. Great…
This is horrible for a language whose whole purpose, I thought, was to be foolproof: if it compiles, it's true. Having very subtly different definitions of common operations is such a footgun.
Of course, I understand that this doesn’t bother mathematicians because they are used to not having any guardrails anyways. Just like C programmers have the attitude that if you fall on such a trap, you deserve it and you are not a “real programmer”. But Lean is supposed to be the other extreme isn’t it? Take nothing for granted and verify it from the ground up.
I suppose I am falling for that “Twitter confusion” the post is referring to. I never had any issues with this when actually using Lean. I just don’t like the burden of having to be paranoid about it, I thought Lean had my back and I could use it fairly mechanically by transforming abstract structures without thinking about the underlying semantics too much.
Anyway, despite the annoyance, I do assume that the designers know better and that it is a pragmatic and necessary compromise if it’s such a common pattern. But there must be a better solution: if having the exception makes a thing uncomfortable to prove, then design the language so that it is comfortable to prove such a thing. Don’t just remove the exception because 99% of the time it doesn’t matter. If we were happy with 99%, we wouldn’t be reaching for formal verification; there are much more practical means of checking correctness.
There is still a guardrail. The blog post explains that it is just using different functions and notation which might allow things like 0/0. But at the end of the day, different notation still cannot be used to prove false things.
In other words, you can use all these junk theorems to build strange results on the side, but you can never build something that disagrees with normal math or that contradicts itself. There is no footgun, because the weird results you obtain are just notation. They look weird to a human, but they don't allow you to actually break any rules or to prove 1=0.
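For example (assuming Mathlib's `div_zero` and `div_self`): you can state the junk, but it's inert; there's no way to leverage it into 1 = 0:

    import Mathlib

    -- 0/0 is defined, and it's 0: a junk value, not a contradiction
    example : (0 : ℝ) / 0 = 0 := div_zero 0

    -- the lemmas that could "cancel" the zero all demand a nonzero denominator
    example (a : ℝ) (ha : a ≠ 0) : a / a = 1 := div_self ha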
Thank you! This hit the nail on the head for me, though I probably need to try out a few more examples to fully convince myself.
TL;DR: It's actually harmless (and often convenient) to "inflate" the domains of partial functions to make them total (by making them return arbitrary junk values where the original function is undefined), provided that every theorem you want to apply still comes with the original, full restrictions.
Kevin's example is good. My stupider example would be: We can define a set that contains the integers ..., -2, -1, 0, 1, 2, ..., plus the extra element "banana". If we define the result of any addition, subtraction or multiplication involving a banana to be 42, and to have their usual results otherwise, then, provided that we add the condition "None of the variables involved is banana" to the theorem "x+y = y+x", and to every other theorem about arithmetic, anything that we can prove about arithmetic on elements of this set is also true of arithmetic on integers.
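Here's that banana arithmetic spelled out in Lean (everything invented for illustration). Amusingly, with 42 as the junk value, commutativity even holds for bananas, but the guarded statement is the one that transfers to the integers:

    inductive BananaInt
      | int (n : Int)
      | banana

    open BananaInt

    def badd : BananaInt → BananaInt → BananaInt
      | int a, int b => int (a + b)
      | _, _ => int 42  -- any addition involving banana is 42

    -- commutativity, guarded by "none of the variables is banana"
    example (x y : BananaInt) (hx : x ≠ banana) (hy : y ≠ banana) :
        badd x y = badd y x := by
      cases x with
      | banana => exact absurd rfl hx
      | int a =>
        cases y with
        | banana => exact absurd rfl hy
        | int b => simp only [badd]; rw [Int.add_comm]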
Wow, okay. I would imagine this makes mathematicians quite angry? I guess you're responsible for all the operations you use in your proof being well-behaved.
It sounds like subtraction over Nats needs to be split into `sub?` and `sub!`, the former returning an option and the latter crashing on underflow, as is the Lean convention?
To use the default `sub`, you should need to provide a witness that the minuend is >= the subtrahend...
The version with silent underflow is still useful, it should just be called `saturatingSub`, or something, so that mathematicians using it know what they're getting themselves into...
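Something like this, I mean (a sketch; the `?`/`!` pairing mirrors Lean's naming convention, and today's `Nat` subtraction is effectively the saturating one already):

    def sub? (m n : Nat) : Option Nat :=
      if n ≤ m then some (m - n) else none

    def sub! (m n : Nat) : Nat :=
      if n ≤ m then m - n else panic! "underflow"

    #eval sub? 5 3  -- some 2
    #eval sub? 3 5  -- none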
I don't think anyone minds this. The purpose of a formal foundation is to prove useful theorems. Junk theorems are just a side effect. But I'm always tickled by logicians who think type theories are philosophically superior to set theories because in some of the latter you might be able to prove something like 1 ∈ 2, whereas type theories, even without their own junk theorems, have a pragmatic difficulty with division (hence they tend to define 1/0 = 0). Junk theorems just come with the territory, and foundations need to be considered based on their utility, not philosophical purity, which is never achieved anyway (at least not without a cost to utility).
> But I'm always tickled by logicians who think type theories are philosophically superior to set theories because in some of the latter you might be able to prove something like 1 ∈ 2
Note that this is actually how the basic ZF construction works, where 0 = {} and successor(n) = n ∪ {n}, so you immediately get 2 = {0, 1} and thus 1 ∈ 2, without any need for a proof.
Except it suffices to know that some construction that supports the integer/natural axioms exists without having any specific theorems, such as 1 ∈ 2, about its specifics. In fact, in TLA+, which contains a formalised set theory, the construction is not part of the definition of the integers, and 1 ∈ 2 (or any other theorem about the construction of the integers) is not provable (of course, 1 ∉ 2 is not provable, either). The details of the construction can remain unknowable.
Anyway, my point is that type theories contain at least as many junk theorems as set theories, if not more, and junk theorems are fine either way. Neither approach is more philosophically pure. Any claims to that effect are really an expression of personal aesthetic preferences.
I mean, in a normal math curriculum you would define only the multiplicative inverse, and then there is a separate way to define fractions if you start out with certain rings. It is kind of surprising to me that they went with a lazy definition of division.
Note that the word "coordinate" used here feels a bit disingenuous to me, because that's how one might refer to the nth property defining a mathematical object or another.
For example: The third coordinate of the rational number 1/2 is a bijection.
Coordinate here actually means: third property in the definition of a rational number in Lean. Here, this property is the statement that the denominator 2 is not zero. This is not so absurd, if we define a rational number as a tuple consisting of an integer for the numerator (property 1) and a natural number for the denominator (property 2), with the added restrictions that the denominator is not zero (property 3), and that the numerator and denominator are in lowest terms (property 4).
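That is roughly the shape of Lean's actual `Rat` (field names from memory, so treat this as approximate):

    import Mathlib

    structure Rat' where
      num : Int                          -- property 1: the numerator, an integer
      den : Nat := 1                     -- property 2: the denominator, a natural number
      den_nz : den ≠ 0 := by decide      -- property 3: a proof the denominator isn't zero
      reduced : num.natAbs.Coprime den   -- property 4: in lowest terms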
But the part where the proof that the denominator is nonzero can be viewed as a bijective function is, to me, indeed type-theoretic weirdness. If I'm not wrong, it's just the proof viewed as a function: a proof of `2 ≠ 0` is a function from proofs of `2 = 0` to proofs of `False` (proofs of theorems that begin with e.g. forall, or of negations, are functions that take arguments). And since both of those types are empty, any such function is vacuously a bijection.
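Concretely (the first equality is definitional, so `rfl` should close it):

    -- `≠` literally unfolds to a function type, so a proof of it is a function
    example : ((2 : Int) ≠ 0) = ((2 : Int) = 0 → False) := rfl

    -- both `2 = 0` and `False` are empty, so any such function is vacuously a bijection
    example : (2 : Int) ≠ 0 := by decide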
This is a junk theorem: it's proving something that sounds strange or meaningless but is technically allowed by the details of the foundations.
Here it's building a list with one element and saying all elements of this list are equivalent.
It doesn't mean anything. The point is that the language of lean, and its proof derivation system, are able to express (and prove) statements that do not correspond to any meaningful mathematics.
List.TFAE is a helper definition, and it's invoked on a funny-looking term when translated directly into English. I don't know what I think; yeah, it's kinda junky, but not in the way that 57 ∈ 100 is in a set encoding of the naturals.
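For reference, Mathlib's `List.TFAE l` just says every pair of propositions in `l` is equivalent, so a singleton list satisfies it trivially (a sketch; if memory serves, Mathlib even ships this as `tfae_singleton`):

    import Mathlib

    example (P : Prop) : [P].TFAE := by
      intro x hx y hy
      simp only [List.mem_singleton] at hx hy
      subst hx; subst hy
      exact Iff.rfl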
I've been writing [libsodium](https://doc.libsodium.org/) bindings in Lean4 and have ended up using `native_decide` quite liberally, mostly as a convenience. Can any Lean devs provide a more thorough interrogation of this? Should I go back and try to scrub its usage out of my library? Logically it seems consistent with what I'm trying to do with Lean4's FFI (i.e. you really do need to explicitly trust the Lean kernel since I'm adding nontrivial computation using a foreign cryptography library) but I'm curious if this isn't necessary and whether Lean devs would push back on its use.
x/0 = x·(1/0) = x·0 = 0, so x/0 = 0 for all x.

So x/0 = y/0.

Multiply both sides by 0: x = y.

(That last step is the invalid one, of course: "multiplying both sides by 0" to undo the division uses (x/0)·0 = x, i.e. cancellation, which requires a nonzero denominator. With x/0 = 0, both sides just become 0, and no contradiction survives.)