> To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is.
When the author says we cannot truly observe infinity, what does that mean? Infinity is a mathematical symbol we can observe. We can't observe infinitely many objects, but even if we could, it wouldn't be the same as observing infinity. You can't observe the number one by observing one stone.
I think there is some confusion in this article between symbols and what they can stand for, and I can't help but wonder if that same confusion is at the root of ideas like ultrafinitism.
Some might say that 2 is as made up as infinity.
Let me elaborate a little: your brain, together with society, made an abstraction, "apple", and only by not distinguishing between these "sets" of atoms can you have numbers.
Mathematical concepts don't have to have an obviously physical analogue. I mean, you'd find it difficult to observe minus two apples and certainly tricky to observe i apples.
To my mind, maths is like a "what if?" puzzle and whether or not infinity makes sense in the physical world, there's still fun to be had by considering the consequences of it.
That also means that it can be interesting to consider limited number systems which don't have any concept of infinity.
I don’t understand, and I hope it’s just bad writing.
Certainly you can build a branch of mathematics without an axiom of infinity, and that’s fine, it’s math over finite sets.
However, an axiom of infinity is independent, it doesn’t contradict anything in standard formalizations, and so it doesn’t make sense to say “infinity is wrong”.
He may think the axiom of infinity isn’t satisfied by our real physical world, but that’s not a math question! There’s nothing logically inconsistent about infinite sets nor their axiomatizations.
It's an interesting read. I don't think it's bad, but it's not rigorous or really aimed at anything in particular. Basically asking a discrete mathematician whether he needs continuity: no. It seems reasonable that we might need separate paradigms to think about different kinds of problem (e.g., is there a physical size of the universe vs. is there a biggest prime number) because we don't know yet if there is a theory of everything or if there are innate boundary layers.
It's a fun thinking prompt, and you can go down the rabbit hole of information theory and quantized spacetime. Like you suggest, it's perfectly fine to say "infinity does not exist" and also contemplate and operate on a slice at a time.
> However, an axiom of infinity is independent, it doesn’t contradict anything in standard formalizations, and so it doesn’t make sense to say “infinity is wrong”.
Suppose we start with ZFC - Infinity as our base system. Then the negation of Infinity is consistent with this system. But adding Infinity itself makes the system strictly stronger, since ZFC proves the consistency of ZFC - Inf: in particular, in ZFC, we cannot prove that Infinity is consistent with ZFC - Inf.
In other words, in principle, it might be the case that ZFC - Inf is consistent, yet ZFC itself has a contradiction. In practice, most people believe that ZFC is also consistent, but we have no way to prove it a priori without accepting even more new axioms.
What people might not be understanding is that mathematics is inherently built by consensus: ZFC was pored over for years, and eventually the community concluded it was a good system to (a) preserve most, if not all, of the mathematics that had already been done and (b) build more mathematics.
You can have gripes over whether or not pure math is compatible with the physical world but we're not exactly close to solving that problem... if we were, then physicists would have a much easier time lol
> But in the late 1800s, Georg Cantor and other mathematicians showed that the infinite really can exist.
I think, as I understand it, the objection is to this: the proposition that infinity is "real", and that there are actually infinitely many (not just very many) things.
I don't think it's bad writing. These people actually get angry at the idea that other people do math that might not connect to the real world. And they especially have it out for infinity.
I say do whatever math you like. It is helpful to know what math you are doing. For instance, while I don't have a "problem" with the Axiom of Choice per se I do like clean specifications of when we are using it and when we are not, because it is another example of when we detach from reality as we know it. I don't have a problem with detaching from reality as we know it, I just like there to be awareness that we have.
But plenty of math is detached from reality. Honestly we don't observe very many "mathematical entities" at all; I've never seen a graph. I've never seen hyperbolic space. I'm aware of the many places aspects of them seem to map to reality, but I've never actually seen a literal graph in the real world.
Personally I am reminded of the way that we model our computers with Turing Complete formalisms, despite the fact they are observably not Turing Complete and are technically just finite state machines. However, the observation that they are "just" finite state machines doesn't move us closer to an understanding of how our computers work, it moves us farther away. Even though computers are completely real-world phenomena, if you want to understand the issues raised by things like Turing Incompleteness and other such things in the real world, you're going to be exponentially better off using Turing Machine formalisms and simply noting that you may run out of memory or practically-available computational resources before a calculation can complete than trying to build a new set of formalisms around finite state machines. We can be in an engineering context where we are well aware of the finite nature of everything we are doing because it all comes back to real, physical machines, but it's still easier to model with infinity than without it.
In that context, the real utility of "infinity" is less "an infinite number of things" than "you will never reach for another X [byte of RAM, byte of disk, CPU cycle, incrementing counter, etc.] and be told you're out of resources". Basically we write our proofs, formal or informal, as ignoring "what if I reach for this resource and it's not there?" for every such resource and every time we reach for a resource, which is quite often. You could go through a system and add a "what if" check for every such instance, but it's way cheaper to just buy another stick of RAM or tweak the program to take fewer resources than it is to try to deal with the exponential-with-a-large-exponent explosion of states this causes mathematically.
The problem with infinity is that it's a hack. It is basically the NULL pointer of mathematicians. An instance of a number that has a special meaning that breaks the abstraction of numbers.
If you want to do things with infinity, fine, but then do it properly and write things like lim x->inf (your expression with x here)
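In that spirit, here is a minimal Python sketch of treating "infinity" as a limit process rather than a value: instead of manipulating an ∞ symbol, evaluate the expression along ever-larger inputs. The function name and the sample sequence of inputs are my own choices for illustration.

```python
import math

def estimate_limit_at_infinity(f, xs=tuple(10.0**k for k in range(1, 8))):
    """Evaluate f along a sequence of growing inputs to estimate lim x->inf f(x)."""
    return [f(x) for x in xs]

# lim x->inf (1 + 1/x)^x = e, approached without ever treating infinity as a number
estimates = estimate_limit_at_infinity(lambda x: (1 + 1/x) ** x)
print(estimates[-1])   # close to math.e = 2.71828...
```

Nothing here ever "reaches" infinity; the limit shows up only as the trend of finite evaluations, which is exactly the lim x->inf discipline being asked for.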
> An instance of a number that has a special meaning.
Not really. There are infinitely many infinities. Infinite numbers are not particularly more special than real numbers, complex numbers, matrices, functions/operators, etc.
> An instance of a number that has a special meaning
Lots of numbers have special meanings. The ancients didn't think 1 was a number, and later lots of people didn't (and some still don't) think 0 was a number.
> One morning in 1976, the Princeton mathematician Edward Nelson woke up and experienced a crisis of faith. “I felt the momentary overwhelming presence of one who convicted me of arrogance for my belief in the real existence of an infinite world of numbers,” he reflected decades later, “leaving me like an infant in my crib reduced to counting on my fingers.”
Friends don't let friends do Platonism.
For real, if you're a formalist you can ask these foundational questions without fear of this kind of dread; they become methodological rather than some kind of metaphysical mess.
My favorite math paper is "Is 10^10^10 a Finite Number?" by David van Dantzig. It lies more on the side of philosophy, so many can understand it easily. I first learned about it many years ago from Van Bendegem's list of strict finitism papers, and I would recommend that list for anyone interested in learning more about strict finitism.
In my personal opinion, strict finitism provides a richer field of study than potential or actual infinitism. Compare this to Errett Bishop's constructive analysis, which requires the calculation of bounds for real numbers, instead of classical analysis only requiring that a real number exists. Much more difficult, though more precise.
I found "On Feasible Numbers" by Vladimir Sazonov to have application for computers. In a feasible mathematics, a large number fails to exist (say, 2^512), but a proof of contradiction must exceed such a large size (perhaps larger than the universe). Likewise, we have unix time that tries to count forever, so we should pick a storage size so large that counting exceeds the heat death of the universe. 10^100 years worth of Planck seconds fits in 501 bits, so round that to 512 bits. 512 bits of time ought to be enough for anybody :)
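The 501-bit figure checks out with a quick back-of-the-envelope script. This is just a sketch: the Planck time constant and year length are rounded, and the function name is my own.

```python
import math

PLANCK_TIME = 5.39e-44                  # seconds (rounded)
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year

def bits_for_planck_ticks(years):
    """Bits needed to count every Planck-time tick over the given span of years."""
    ticks = years * SECONDS_PER_YEAR / PLANCK_TIME
    return math.ceil(math.log2(ticks))

print(bits_for_planck_ticks(1e100))   # 501
```

So a 512-bit counter really does have headroom past 10^100 years of Planck ticks.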
> To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is.
I'm hoping this is just bad writing from Quanta rather than something "ultrafinitists" truly believe.
I really don't think it's that complicated. Even pre-schoolers, competing to see who can say the highest number, quickly learn the concept of infinity. Or elementary school students trying to write 1/3 as a decimal.
Of course you need to be careful mapping infinity onto the physical world. But as a mathematical concept, there is absolutely nothing wrong with it.
> Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely.
This seems like a useful concept that also doesn't require denying the very obvious concept of infinity.
They pretty quickly realize that there is no winning, because you can always just say a bigger number than the last kid - there is no biggest number. Usually something like "a hundred million million million million million and two", "a hundred million million million million million and three", etc.
And then someone, whose friend or older brother taught them the concept, blurts out "infinity". And after a quick explanation, everyone more or less gets it.
When I was about ten, a math teacher once asked me whether the number 0.9999... (infinitely repeating) was different than 1. I said, with my child's intuition, that of course it was. He then challenged me to write down a number that was in between them, because if they were not the same number then there would be many (in fact, infinitely many) numbers between them. I couldn't, of course: the best I could do was to write 0.9999...5, which falls into the same category error as "infinity plus one / infinity plus two".
Now, decades later, I get it better. The number 0.9999... is 9/10 + 9/100 + 9/1000 + 9/10000 + ..., whose partial sums approach 1 the same way that 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... approaches 1, so the sum itself is exactly 1, which neatly answers Zeno's Paradox. (Though beware of the limitations of that kind of analysis: 1/n approaches infinity as n approaches 0, but 1/0 is not equal to infinity, because 1/n approaches infinity only as n approaches 0 from the positive direction. If you look at the sequence 1/-0.1, 1/-0.01, 1/-0.001, etc., where n approaches 0 from the negative direction, it approaches negative infinity. A function that has two different limits as you approach the same point from two different directions cannot have its limit substituted like that.)
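The "gap to 1" intuition can be made exact with rational arithmetic: the n-th partial sum of 0.999... falls short of 1 by exactly 1/10^n, so no partial sum is 1, but the shortfall drops below any positive bound. A small sketch (function name is mine):

```python
from fractions import Fraction

def partial_sum_nines(n):
    """Exact value of 9/10 + 9/100 + ... + 9/10^n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 3, 6):
    s = partial_sum_nines(n)
    print(n, s, 1 - s)   # the gap to 1 is exactly 1/10^n

# every partial sum is strictly below 1, yet the gap shrinks without bound
assert all(partial_sum_nines(n) < 1 for n in range(1, 20))
assert 1 - partial_sum_nines(50) == Fraction(1, 10**50)
```

That is the whole content of "0.999... = 1": equality of a series with the limit of its partial sums.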
One of my life goals is to prepare my kids to troll their math teachers with the dual numbers and the claim that .999... is obviously 1-ε. The goal is to convince the teacher that .999...≠1. Bonus points if they instead convince the teacher to doubt that complex numbers exist.
It really comes down to what semantics we attach to "=" when one of the sides is an infinite series.
The "equals to" sign that we have used prior to that mental exercise was for finite terms only, we had not had to deal with infinitely many terms before that leap in thought. So now we have to extend the notion in a way that is backward compatible.
A convenient choice: the series equals its limit, if that limit exists.
> semantics we attach to "=" when one of the sides is an infinite series
I would say that the semantics are about what an infinite series itself is, not about the equal sign. Once we have the common analytic notion of convergence of an infinite series, the equality makes sense. The issue is that an infinite series is not an actual sum; formally, it is a sequence (of partial sums). As you say, we represent the limit of that sequence with the same notation, in the case that the series converges, but that's basically because we use the same notation for two different things (the sequence of partial sums, and its limit). If we know we refer to the limit, I don't think there is any semantic complication with the equal sign.
Only if they live forever, which they won't. They can only count so fast, and there are only so many of them. Even if every atom in the observable universe was counting at, idk, 1GHz, that's still a finite number. The universe is not (as far as we know for certain) infinitely old. Time may extend infinitely into the future, or it may not. We don't know. So far as we know for sure everything is in fact finite.
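The arithmetic behind that claim is easy to make concrete. A rough sketch, with the atom count and age of the universe as rounded assumptions of mine:

```python
import math

ATOMS = 1e80           # rough estimate of atoms in the observable universe
RATE_HZ = 1e9          # every atom counting at 1 GHz, as in the comment
AGE_SECONDS = 4.35e17  # ~13.8 billion years

total_counts = ATOMS * RATE_HZ * AGE_SECONDS
print(f"~10^{math.log10(total_counts):.0f} counts")   # ~10^107 - huge, but finite
```

Even with absurdly generous assumptions, the total stays a perfectly ordinary finite number.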
Take the approximate number of subatomic particles in the universe, call it Ω. Define the largest number as Ω² and the smallest number as -Ω², and define the number of decimal numbers between each integer number as Ω², evenly spaced. That should be more than enough numbers. Redefine Ω with each new discovery in physics.
If this seems too conservative to you, like if for some reason you want to talk about the volume of the universe in terms of the width of an up-quark or whatever, feel free to tack on some modifier to my proposed number system.
I want to count the number of possible permutations of the particles. We’ve now got a “larger” number than Ω will ever be able to represent by definition (even Ω² is minuscule by comparison).
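Permutation counts do blow past any fixed atom-scale bound almost immediately; in fact the permutations of just a few dozen objects already exceed a common estimate of the number of atoms in the observable universe. A quick check (the atom estimate is the usual rough figure, not a measured value):

```python
import math

ATOM_ESTIMATE = 10**80   # common rough estimate of atoms in the observable universe

# how few objects does it take before their permutation count exceeds that?
n = 1
while math.factorial(n) <= ATOM_ESTIMATE:
    n += 1
print(n)   # 59: permutations of 59 objects already outnumber the atoms
```

So any Ω-style cap gets outrun not by exotic physics but by counting the arrangements of a small handful of things.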
yeah that seems fine. there's like no good reason to do that. are you trying to simulate reality or something?
But my point still stands: choose whichever calculation you think is important to be able to do with Ω, call the result f(Ω), square it for good measure, and set that as the max, the min, and the number of numbers between each integer.
The total number of possible numbers will be ~2*f(Ω)⁴ which should be more than enough numbers :)
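A hypothetical sketch of what arithmetic in such a capped system would look like: saturating addition that clamps at ±Ω², much like fixed-width machine arithmetic. All names here (OMEGA, MAX_N, sat_add) are my own illustration, not anything from the thread.

```python
OMEGA = 10**80      # stand-in for the particle count
MAX_N = OMEGA**2    # the proposed largest number

def sat_add(a, b):
    """Addition that saturates at +/- OMEGA**2 instead of growing without bound."""
    return max(-MAX_N, min(MAX_N, a + b))

assert sat_add(MAX_N, 1) == MAX_N     # "largest number plus one" stays put
assert sat_add(-MAX_N, -1) == -MAX_N
```

One immediate cost: addition stops being associative near the cap, e.g. (MAX_N + 1) + (-1) gives MAX_N - 1 while MAX_N + (1 + (-1)) gives MAX_N, the same kind of edge case that haunts real saturating hardware arithmetic.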
AES-256 already has about as many possible keys as there are atoms in the visible universe, and that's a pretty mundane thing. If you wanted to store all those keys, that's even larger. The number of atoms in the universe turns into a very small number very quickly when talking about permutations, and permutations come up all the time (mathematical simulations, probability computations, etc.).
I really don’t understand what point you’re trying to make saying “pick the largest possible number relevant” as that number varies. Also, that’s just the rational numbers. There’s plenty of digits of precision needed for trajectories over galactic distances and the more precision you try to give irrational numbers, the larger your magical “largest number” needs to grow again.
Also, we don’t know how big the “non observable universe” is and it’s beyond the scope of science. It very well could be an infinite number of atoms and then what?
> It very well could be an infinite number of atoms and then what?
Where I get stuck with this is how might we measure that? Continuous measurements and infinite measurements are not something we can make. We fit continuous theories to discrete measurements--and the good ones fit really well!--but until we can measure it how can we actually know? I concluded we just can't, and we have to be OK with that.
> We fit continuous theories to discrete measurements--and the good ones fit really well!--but until we can measure it how can we actually know?
Well, physicists came up with quantum mechanics because they found a way to distinguish a genuinely discrete phenomenon.
Understanding the physical universe overlaps with a subset of math. It shouldn't constrain the abstract tools which may or may not one day be useful for that understanding.
I agree that continuity (and therefore infinity) are really useful tools. But it may also be useful to develop mathematical formalism that hews more closely to that which we can actually observe. Or not! But if nobody investigates we'll never know.
At the bottom end we have the Planck length. How many cubic Planck lengths in the visible universe? Anyone? To paraphrase Bill Gates (allegedly), "(widthOfUniverse/planckLength)³ ought to be enough for anybody."
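For anyone who actually wants the count: a rough sketch, with the radius of the observable universe and the Planck length as rounded assumptions.

```python
import math

PLANCK_LENGTH = 1.616e-35   # meters
UNIVERSE_RADIUS = 4.4e26    # meters, observable universe (rough)

universe_volume = (4 / 3) * math.pi * UNIVERSE_RADIUS**3
planck_volume = PLANCK_LENGTH**3
ratio = universe_volume / planck_volume
print(f"~10^{math.log10(ratio):.0f} Planck volumes")   # ~10^185
```

So about 10^185 - enormous, but comfortably finite, and it even fits in a double-precision float.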
Sad that the article doesn't mention Wildberger (coincidentally similar last name), an (in)famous math YouTuber who's been mentioned on HN several times before. He has a "rational trigonometry" series, an approachable way to see how math would work in an ultrafinitist setting.
Last year I made the mistake of asking ChatGPT what the world would look like if `∞ === -∞` and it took me seriously (I think) and led me on an hours-long dance where in the end it had me trying to prove, mathematically, that `2 > 1` ... and it was at that point I realised that I'm not cut out to think in numbers and maybe it was for the best that I failed my end-of-school Maths exam
The first thing that came to mind reading the article is that you need only 60ish digits of pi to calculate the circumference of the universe with a resolution of a Planck length, or something like that. You can have all the digits you want, but at some point you are beyond what is measurable in reality, and the extra precision is meaningless for what you are trying to achieve.
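That "60ish" figure is easy to reproduce: the number of digits of pi you need is roughly log10 of (circumference / Planck length). A sketch, with the diameter of the observable universe as a rounded assumption:

```python
import math

PLANCK_LENGTH = 1.616e-35   # meters
UNIVERSE_DIAMETER = 8.8e26  # meters, observable universe (rough)

# digits of pi needed so the circumference error stays below one Planck length
circumference = math.pi * UNIVERSE_DIAMETER
digits = math.ceil(math.log10(circumference / PLANCK_LENGTH))
print(digits)   # 63
```

Around 63 digits, matching the "60ish digits" folklore.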
I have always maintained that real mathematics starts when you address the infinite. I don't see how you can get anything interesting (like analysis, differential geometry, topology) without the assumption that the infinite exists.
Surprised Wildberger’s youtube channel wasnt in here.
People ask what's the point? For me, the study of the infinitesimal vs. the finite has really helped me better understand issues of precision and approximation in computers. I feel like I know exactly why 1/3 plus 1/5 is not exactly 8/15 in my Calculator app. Or why the points of a face in my 3D object are not coplanar after rotation. Or why games have weird glitches when your character is too far from the origin. Or why a spreadsheet shows rounding issues.
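The calculator behavior above comes down to binary floating point holding only nearby representable values of 1/3, 1/5, and 8/15, while exact rational arithmetic has no such drift. A small demonstration:

```python
from fractions import Fraction

# floats store the nearest representable binary value, not the true rational
print(1/3 + 1/5)        # close to, but not necessarily exactly, the real 8/15
print(Fraction(1/3))    # the exact binary rational the float "1/3" actually holds

# exact rational arithmetic is error-free:
assert Fraction(1, 3) + Fraction(1, 5) == Fraction(8, 15)

# the classic decimal symptom of the same effect:
assert 0.1 + 0.2 != 0.3
```

The same representation gap is behind the non-coplanar rotated faces and far-from-origin glitches: every stored coordinate is a nearest-representable stand-in for the real number you meant.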
In school I developed a strong hunch that continuity and infinity are "convenient delusions" we have that allow us to process the otherwise horrific complexity of the world. Experiencing time, sound, or visual motion as continuous, rather than discrete signal inputs is so much simpler. Similarly, the mathematical tricks and shortcuts we can use on well behaved continuous functions are both "unreasonably effective" and... probably not grounded in actual reality[1]? But damn are they convenient.
[1] EDIT: the reasoning is simple, if naive: the largest quantities we can measure are not, in fact, infinitely large, and the smallest ones we can measure are not, in fact, infinitesimally small. So until you show me an infinitesimal or an infinity, you're just making them up!
> Experiencing time, sound, or visual motion as continuous, rather than discrete signal inputs is so much simpler.
Some practice with Mahasi Sayadaw style "noting" can train you into seeing your phenomenological experience as a stream of point-events between which we weave the illusion of continuity.
I've always felt that to treat infinity as number is to commit a category error (aka type conflict), to confuse the process with the outcome of the process. Infinity has proven to be very useful, but usefulness doesn't make it always valid.
>To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena.
>“Infinity may or may not exist; God may or may not exist,” he said. “But in mathematics, there should not be any place, neither for infinity nor God.”
>But one day, he added, mathematicians will look back and see that this crackpot, like those of yore who questioned gods and superstitions, was right. “Luckily, heretics are no longer burned at the stake.”
Contrarian thinking can be great because it taps into the intuition that the masses are mostly followers who can be led anywhere, not critical thinkers who've deeply examined what they believe. Being contrarian, then, is akin to staking out a new leadership position.
The space of contrarian ideas is vast, and most of them are probably bad, but, nevertheless, the willingness to hold unconventional, internally consistent views should be celebrated, because it increases diversity of thought. Our collective hive mind grows stronger through heresy.
However, I like my heresy with a splash of axiomatic precision, which is sadly lacking in this article.
It's not a new idea, and it's a challenging one to investigate. Without the real numbers (which are infinitely long), most of calculus stops working. And so does everything that depends on it.
Perhaps we can recover some of it by treating the infinitely variable values as approximations of the more discrete values and then somehow proving that the errors from them stay bounded, for at least some interesting problems.
> The article doesn’t really tell us what is gained by rejecting infinity.
Decidability. The issues around undecidability all involve the lack of an upper bound. In a finite deterministic space, everything is decidable, although some things may be too costly computationally to decide.
There are several ways to go for decidability. The brute force way is computer arithmetic - there is no number larger than 2^64-1. That's how we get things done on computers, but proofs about numbers with finite upper bounds need lots of special cases. Mathematicians hate that.
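The decidability point can be shown directly: over a finite domain, any universally quantified claim is settled by enumeration, and the special cases mathematicians hate show up at the boundary. A toy model of fixed-width machine arithmetic (names are mine):

```python
BITS = 8
MOD = 1 << BITS   # 256 values, a toy model of unsigned fixed-width arithmetic

def wadd(a, b):
    """Wrapping (modular) addition, like unsigned machine arithmetic."""
    return (a + b) % MOD

# decidable by exhaustive check: commutativity holds for all 256*256 pairs
assert all(wadd(a, b) == wadd(b, a) for a in range(MOD) for b in range(MOD))

# but "a + 1 > a" fails at the top of the range - exactly the kind of
# special case that proofs over bounded numbers must carry around
assert not all(wadd(a, 1) > a for a in range(MOD))
assert wadd(MOD - 1, 1) == 0
```

Every question about this system is answerable by brute force; the price is that clean unbounded laws acquire boundary exceptions.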
I used to work on this sort of thing, using Boyer-Moore theory. That's a lot like the Peano axioms. There is (ZERO), and (ADD1 (ZERO)), and (ADD1 (ADD1 (ZERO))), etc. Everything is constructive and has an unambiguous representation in a LISP-like form.
You can have recursive functions. But they must be proven to terminate, by having a nonnegative value which decreases on each recursive call. There is a distinction between "infinite" and "arbitrarily large". You can talk about arbitrarily large numbers, but you cannot get to 1/2 + 1/4 + 1/8 ... = 1. You can have integers and rational numbers of arbitrary size, but not reals.
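A sketch of that style of constructive numeral in Python, with nested tuples standing in for the LISP forms; the recursion terminates because the first argument is structurally smaller on every call, which is the decreasing-measure discipline described above. This is my own illustration of the idea, not Boyer-Moore syntax.

```python
# Boyer-Moore-style constructive naturals: (ZERO), (ADD1 (ZERO)), ...
# modeled as nested tuples; every value has one unambiguous representation
ZERO = ()

def add1(n):
    return (n,)

def plus(a, b):
    """Recursive addition; terminates because a shrinks on each call."""
    if a == ZERO:
        return b
    return add1(plus(a[0], b))   # a = (ADD1 a'); recurse on the smaller a'

def to_int(n):
    return 0 if n == ZERO else 1 + to_int(n[0])

two = add1(add1(ZERO))
three = add1(two)
assert to_int(plus(two, three)) == 5
```

Arbitrarily large numerals can be built this way, but there is no numeral for "infinity": every value is a finite tower of ADD1s.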
Set theory was interesting. Rather than axiomatic set theory, I was using lists as sets, with the constraints that no value could be duplicated and the list must be ordered. Equality is strict - two things are equal only if the elements are all equal, compared element by element. It's possible to prove the usual axioms of set theory via this route. The ordered criterion requires proving things about ordered list insertion to get there. It's ugly and needs machine proofs.
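The ordered, duplicate-free list representation can be sketched quickly; the payoff is that strict element-by-element equality doubles as set equality, at the cost of proving that insertion preserves the invariant. A minimal Python illustration (names are mine, and a real development would be machine-checked, as noted):

```python
# sets as strictly ordered, duplicate-free lists: canonical form makes
# structural equality coincide with set equality
def insert(x, s):
    """Ordered insertion that skips duplicates, preserving the set invariant."""
    out, placed = [], False
    for y in s:
        if x == y:
            placed = True            # already present: do not duplicate
        elif x < y and not placed:
            out.append(x)            # slot x in before the first larger element
            placed = True
        out.append(y)
    if not placed:
        out.append(x)
    return out

def union(a, b):
    for x in b:
        a = insert(x, a)
    return a

s = []
for x in (3, 1, 3, 2):
    s = insert(x, s)
assert s == [1, 2, 3]                      # one canonical form per set
assert union([1, 3], [2, 3]) == [1, 2, 3]  # structural equality is set equality
```

The proofs about ordered insertion mentioned above are exactly the lemmas needed to show `insert` keeps lists sorted and duplicate-free.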
I was doing this back in the early 1980s, when machine proofs were frowned upon. Mathematicians were still upset about the four-color theorem proof. It's all case analysis, with thousands of cases. That's more acceptable today.
Looked at in this light, infinity is a labor-saving device to eliminate special cases, at a potential cost in soundness.
> Looked at in this light, infinity is a labor-saving device to eliminate special cases, at a potential cost in soundness.
Or it is something that clearly conceptually exists, and makes simplistic reductionist viewpoints impossible to prove, which frustrates those who attempt to extend them into twisted metaphysical conjectures.
All indications seem to be that things are only lost, not gained. But that doesn't mean it doesn't hew closer to how things actually are. But if that's how reality actually is, then developing a rigorous understanding of it can only be a good thing, right?
Rejecting infinity is a purely philosophical stance that doesn’t teach us anything about reality.
There is a big difference between “infinity doesn’t exist” and “infinity doesn’t exist physically”.
I should also add that the resolution of Zeno's paradox in the form of calculus, where an infinite set of steps can occur in a finite time (or an infinite set of distances can sum to a finite total distance), is conceptually very simple and useful. Rejecting it as unphysical, or saying it must imply time or space come in discrete chunks, is not contributing to an understanding of reality unless the rejection also comes with a set of testable (in principle) predictions.
EDIT: you could even probably claim "nothing exists which isn't physically measurable", which may or may not be a stronger claim depending on your point of view.
EDIT AGAIN: rate limited by this dogshit website :D but I'll respond to this comment here:
> Which is exactly why I mentioned rejection of zero, negative numbers, etc. You can reject them, but doing so just throws away useful tools without gaining anything in return.
Yeah! I fully agree. I can see no obvious benefit to rejecting these powerful tools. However, important discoveries often happen in non-obvious directions, and exploring unexplored territory is generally worthwhile. So the fact that it doesn't seem immediately useful doesn't mean it's not worth trying!
The idea that nothing is demonstrative of infinity is clearly incorrect.
Take the screen you're reading this on. One pixel is composed of a bunch of different atoms, and once you get down to one of them, that atom subdivides into a bunch of subatomic particles, some of which even have mass. Let's take one of those for argument's sake. Split that, and you get some quarks.
Now let's imagine that's the smallest you can go. We can still talk about half of a down quark, or half of that, etc. Say, uh, infinitely so. There you go, everything is infinite. That wasn't so hard was it?
You can't split a quark; partial quarks don't exist. In fact, singular quarks can't exist: if you try to pull a quark out of a nucleus, it produces another quark to pair with it. Quarks can be destroyed in particle-accelerator collisions, but those aren't components.
Also, all of the components of an atom, electrons and nucleus, have mass.
The paradoxes of Zeno are caused by his lack of understanding of the symmetry between zero and infinity. It is also possible that he actually understood more than is apparent from his paradoxes, but those were intended only to troll the other philosophers.
Zeno understood things like zero multiplied by a number being zero and a number multiplied by infinity being infinity, but he did not understand that neither zero nor infinity is stronger than the other, so that the product of zero and infinity may be any finite number, i.e. the limit of a sequence of products where one factor decreases towards zero and the other increases indefinitely can be any number.
While Zeno either ignored or faked ignorance about the existence of limits of infinite sequences, other later Ancient Greek mathematicians, like Eudoxus and Archimedes, computed several limits, so they had an intuitive understanding of their behavior, even if they did not have a comprehensive theory.
So, firstly, you have split the particle 5 times. That's not infinite times. You can split it more, so that would be 6 times. And more. Even if you could split it 1000 times, that's not infinity.
The standard argument for infinity is that "you can always add 1 to any number, so there must be an infinity of them", and the refutation is that no matter how many times you add 1 to a number, all you've done is create a larger number. You never reach the point of actual infinity, no matter how long you keep doing this. You need to have infinite time in order to create an infinity by adding 1 to each number, so you're starting with the axiom that infinity exists (because you need an infinite number of operations to actually create an infinity). If you don't start with that axiom, then you can never reach infinity by addition (or any operation).
Time has nothing to do with it. There are an infinite number of ways to divide anything. You don’t need time to prove that. Whatever number you think of you can divide by a larger number.
Create an infinity? What does that mean? Why would you need to do that?
Is there a limit to how many times something can be logically divided? If not, then there’s your infinity. It doesn’t require you to continue brute forcing it, just reason about it.
Maybe? Can you prove there's no limit? The standard proof by induction requires a postulate of infinity. (This statement is potentially incorrect, but it gets the point across.)
Does half of something have a limit? Not by its definition. Same thing with addition or multiplication. All of these only work with some concept of infinity.
We could redefine "half" to mean "half of whatever you're talking about until you get to some arbitrary limit", but doing that to all of arithmetic is going to wind up in a very odd place.
> Infinity is a mathematical symbol we can observe. We can't observe infinitely many objects, but even if we could, it wouldn't be the same as observing infinity. You can't observe the number one by observing one stone.
i can observe two apples. i cannot observe infinity apples.
The mathematical symbol is just a representation of a concept, it's not infinity itself, you've got it backwards.
Certainly you can build a branch of mathematics without an axiom of infinity, and that’s fine, it’s math over finite sets.
However, an axiom of infinity is independent, it doesn’t contradict anything in standard formalizations, and so it doesn’t make sense to say “infinity is wrong”.
He may think the axiom of infinity isn’t satisfied by our real physical world, but that’s not a math question! There’s nothing logically inconsistent about infinite sets nor their axiomatizations.
It's a fun thinking prompt, and you can go down the rabbit hole of information theory and quantized spacetime. Like you suggest, it's perfectly fine to say "infinity does not exist" and also contemplate and operate on slice at a time.
Suppose we start with ZFC - Infinity as our base system. Then the negation of Infinity is consistent with this system. But adding Infinity itself makes the system strictly stronger, since ZFC proves the consistency of ZFC - Inf: in particular, in ZFC, we cannot prove that Infinity is consistent with ZFC - Inf.
In other words, in principle, it might be the case that ZFC - Inf is consistent, yet ZFC itself has a contradiction. In practice, most people believe that ZFC is also consistent, but we have no way to prove it a priori without accepting even more new axioms.
You can have gripes over whether or not pure math is compatible with the physical world but we're not exactly close to solving that problem... if we were, then physicists would have a much easier time lol
I think, as I understand it, the objection is to the proposition that infinity is "real", that there are actually infinitely many (not just very many) things.
I say do whatever math you like. It is helpful to know what math you are doing. For instance, while I don't have a "problem" with the Axiom of Choice per se, I do like clean specifications of when we are using it and when we are not, because it is another example of when we detach from reality as we know it. I don't have a problem with detaching from reality as we know it; I just like there to be awareness when we do.
But plenty of math is detached from reality. Honestly we don't observe very many "mathematical entities" at all; I've never seen a graph. I've never seen hyperbolic space. I'm aware of the many places aspects of them seem to map to reality, but I've never actually seen a literal graph in the real world.
Personally I am reminded of the way that we model our computers with Turing Complete formalisms, despite the fact they are observably not Turing Complete and are technically just finite state machines. However, the observation that they are "just" finite state machines doesn't move us closer to an understanding of how our computers work, it moves us farther away. Even though computers are completely real-world phenomena, if you want to understand the issues raised by things like Turing Incompleteness and other such things in the real world, you're going to be exponentially better off using Turing Machine formalisms and simply noting that you may run out of memory or practically-available computational resources before a calculation can complete than trying to build a new set of formalisms around finite state machines. We can be in an engineering context where we are well aware of the finite nature of everything we are doing because it all comes back to real, physical machines, but it's still easier to model with infinity than without it.
In that context, the real utility of "infinity" is less "an infinite number of things" than "you will never reach for another X [byte of RAM, byte of disk, CPU cycle, incrementing counter, etc.] and be told you're out of resources". Basically we write our proofs, formal or informal, as ignoring "what if I reach for this resource and it's not there?" for every such resource and every time we reach for a resource, which is quite often. You could go through a system and add a "what if" check for every such instance, but it's way cheaper to just buy another stick of RAM or tweak the program to take fewer resources than it is to try to deal with the exponential-with-a-large-exponent explosion of states this causes mathematically.
If you want to do things with infinity, fine, but then do it properly and write things like lim x->inf (your expression with x here)
Not really. There are infinitely many infinities. Infinite numbers are not particularly more special than real numbers, complex numbers, matrices, functions/operators, etc.
Lots of numbers have special meanings. The ancients didn't think 1 was a number, and later lots of people didn't (and some still don't) think 0 was a number.
Friends don't let friends do Platonism.
For real, if you're a formalist you can ask these foundational questions without fear of this kind of dread; they become methodological rather than some kind of metaphysical mess.
https://web.math.princeton.edu/~nelson/papers/faith.pdf
You can find many of his papers here.
https://web.math.princeton.edu/~nelson/papers/
For my personal opinion, strict finitism provides a richer field of study than potential infinitism or actual infinitism. Compare this to Errett Bishop's constructive analysis that requires the calculation of bounds to real numbers, instead of classical analysis only requiring that a real number exists. Much more difficult, though more precise.
I found "On Feasible Numbers" by Vladimir Sazonov to have application for computers. In a feasible mathematics, a large number fails to exist (say, 2^512), but a proof of contradiction must exceed such a large size (perhaps larger than the universe). Likewise, we have unix time that tries to count forever, so we should pick a storage size so large that counting exceeds the heat death of the universe. 10^100 years worth of Planck seconds fits in 501 bits, so round that to 512 bits. 512 bits of time ought to be enough for anybody :)
https://jeanpaulvanbendegem.be/home/papers/strict-finitism/
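The 501-bit figure in the comment above is easy to sanity-check; a quick sketch in Python (year length and Planck time are approximate constants, so treat the result as an estimate):

```python
import math

YEARS = 10**100                 # the proposed "count forever" horizon, in years
SECONDS_PER_YEAR = 31_557_600   # Julian year, approximate
PLANCK_TIME = 5.39e-44          # seconds per Planck tick, approximate

# Total Planck ticks in 10**100 years, and the bits needed to store that count.
ticks = YEARS * SECONDS_PER_YEAR / PLANCK_TIME
bits = math.ceil(math.log2(ticks))
print(bits)  # 501, which rounds up to a 512-bit field
```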
I'm hoping this is just bad writing from Quanta rather than something "ultrafinitists" truly believe.
I really don't think it's that complicated. Even pre-schoolers, competing to see who can say the highest number, quickly learn the concept of infinity. Or elementary school students trying to write 1/3 as a decimal.
Of course you need to be careful mapping infinity onto the physical world. But as a mathematical concept, there is absolutely nothing wrong with it.
> Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely.
This seems like a useful concept that also doesn't require denying the very obvious concept of infinity.
Yes, they could go on indefinitely, but will they ever?
And then someone, whose friend or older brother taught them the concept, blurts out "infinity". And after a quick explanation, everyone more or less gets it.
Eventually, one of the kids will name a largest number, because no one else will name another and the game ends with a largest number.
It is possible that aliens exist, so is that proof aliens exist?
It is possible to create ever larger numbers, but is that proof that infinity exists other than as a fanciful idea in our minds?
Now, decades later, I get it better. The number 0.99999... is 9/10 + 9/100 + 9/1000 + 9/10000 + ..., whose partial sums approach 1 the same way that 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... approaches 1. Under many circumstances, you can treat that number as if it were 1, which neatly answers Zeno's Paradox. (Though beware of the limitations of that analysis: 1/n grows without bound as n approaches 0, yet 1/0 is not equal to infinity, because 1/n approaches infinity only as n approaches 0 from the positive direction. If you look at the sequence 1/-0.1, 1/-0.01, 1/-0.001, etc., where n approaches 0 from the negative direction, it approaches negative infinity. A function that has two different limits as you approach the same number from two different directions cannot have its limit substituted like that.)
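The "approaches 1" intuition above can be made exact with rational arithmetic; a small sketch of the partial sums:

```python
from fractions import Fraction

# Partial sums of 9/10 + 9/100 + 9/1000 + ...: every one falls short of 1,
# but the shortfall shrinks by a factor of 10 per term, so the limit is exactly 1.
partial = Fraction(0)
for k in range(1, 20):
    partial += Fraction(9, 10**k)

gap = 1 - partial
print(gap)  # exactly 1/10**19: tiny, nonzero, and headed to 0
```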
It really comes down to what semantics we attach to "=" when one of the sides is an infinite series. The "equals to" sign that we have used prior to that mental exercise was for finite terms only, we had not had to deal with infinitely many terms before that leap in thought. So now we have to extend the notion in a way that is backward compatible.
A convenient one is it is equal to its limit if it exists.
I would say that the semantics are about what an infinite series itself is, not about the equal sign. Once we have the common analytic notion of convergence of an infinite series, then the equality makes sense. The issue is that an infinite series is not an actual sum but, formally, a sequence (of the partial sums). As you say, we represent the limit of the sequence of partial sums with the same notation, in the case that the series converges, but that's basically because we use the same notation for two different things (the sequence of the partial sums, and the limit of that). If we know we refer to the limit, I don't think there is any semantic complication with the equal sign.
Only if they live forever, which they won't. They can only count so fast, and there are only so many of them. Even if every atom in the observable universe was counting at, idk, 1GHz, that's still a finite number. The universe is not (as far as we know for certain) infinitely old. Time may extend infinitely into the future, or it may not. We don't know. So far as we know for sure everything is in fact finite.
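Putting rough numbers on "every atom counting at 1 GHz" (all three constants here are loose order-of-magnitude assumptions):

```python
ATOMS = 10**80            # rough count of atoms in the observable universe
RATE_HZ = 10**9           # 1 GHz: counts per second, per atom
AGE_SECONDS = 4 * 10**17  # ~13.8 billion years, rounded

# Every atom counting at 1 GHz for the entire age of the universe:
total_counts = ATOMS * RATE_HZ * AGE_SECONDS
print(total_counts)  # 4 * 10**106: enormous, but finite
```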
If this seems too conservative to you, like if for some reason you want to talk about the volume of the universe in terms of the width of an up-quark or whatever, feel free to tack on some modifier to my proposed number system.
But my point still stands: choose whichever calculation you think is important to be able to do with Ω, define its result as f(Ω), square it for good measure, and set that as the max, the min, and the number of numbers in between each integer.
The total number of possible numbers will be ~2*f(Ω)⁴ which should be more than enough numbers :)
I really don’t understand what point you’re trying to make saying “pick the largest possible number relevant” as that number varies. Also, that’s just the rational numbers. There’s plenty of digits of precision needed for trajectories over galactic distances and the more precision you try to give irrational numbers, the larger your magical “largest number” needs to grow again.
Also, we don’t know how big the non-observable universe is, and it’s beyond the scope of science. It could very well contain an infinite number of atoms, and then what?
Where I get stuck with this is how might we measure that? Continuous measurements and infinite measurements are not something we can make. We fit continuous theories to discrete measurements--and the good ones fit really well!--but until we can measure it how can we actually know? I concluded we just can't, and we have to be OK with that.
Well, physicists came up with quantum mechanics because they found a way to distinguish a genuinely discrete phenomenon.
Understanding the physical universe overlaps with a subset of math. It shouldn't constrain the abstract tools which may or may not one day be useful for that understanding.
> computers handle math just fine
strong disagree tbh
That's the only way to ask it.
But in the spirit of generosity you may be interested in the "one-point compactification of the line".
BTW, the article is really badly written.
People ask what's the point? For me, the study of the infinitesimal vs. the finite has really helped me better understand issues of precision and approximation in computers. I feel like I know exactly why 1/3 plus 1/5 is not exactly 8/15 in my Calculator app. Or why points in my 3D object face are not coplanar after rotation. Or why games have weird glitches when your character is too far from the origin point. Or why a spreadsheet shows rounding issues.
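The Calculator example comes down to binary floating point: neither 1/3 nor 1/5 has a finite base-2 expansion, so the machine stores the nearest representable doubles, and their exact sum cannot equal 8/15. A demonstration using exact rationals to expose the gap:

```python
from fractions import Fraction

# Fraction(x) on a float recovers the exact dyadic rational the float stores.
third = Fraction(1/3)  # close to 1/3, but with a power-of-two denominator
fifth = Fraction(1/5)  # likewise close to 1/5

# A sum of dyadic rationals is dyadic, so it can never equal 8/15 exactly.
print(third + fifth == Fraction(8, 15))        # False
print(float(third + fifth - Fraction(8, 15)))  # a tiny nonzero rounding error
```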
Zeilberger is intellectually honest in a way that Wildberger is not.
[1] EDIT: the reasoning is simple, if naive: the largest quantities we can measure are not, in fact, infinitely large, and the smallest ones we can measure are not, in fact, infinitesimally small. So until you show me an infinitesimal or an infinity, you're just making them up!
Some practice with Mahasi Sayadaw style "noting" can train you into seeing your phenomenological experience as a stream of point-events between which we weave the illusion of continuity.
You can make up math using different rules[1][2], and get different possibilities.
[1]: https://en.wikipedia.org/wiki/Non-standard_model_of_arithmet...
[2]: https://en.wikipedia.org/wiki/Internal_set_theory
>“Infinity may or may not exist; God may or may not exist,” he said. “But in mathematics, there should not be any place, neither for infinity nor God.”
>much as, Zeilberger might say, science brought doubt to God’s doorstep.
>But one day, he added, mathematicians will look back and see that this crackpot, like those of yore who questioned gods and superstitions, was right. “Luckily, heretics are no longer burned at the stake.”
LOL. What is this guy's problem?
The space of contrarian ideas is vast, and most of them are probably bad, but, nevertheless, the willingness to hold unconventional, internally consistent views should be celebrated, because it increases diversity of thought. Our collective hive mind grows stronger through heresy.
However, I like my heresy with a splash of axiomatic precision, which is sadly lacking in this article.
Perhaps we can recover some of it by treating the infinitely variable values as approximations of the more discrete values and then somehow proving that the errors from them stay bounded, for at least some interesting problems.
And in general, why not also reject zero, negative numbers, irrational numbers, complex numbers, uncomputable numbers, etc.?
Seems like an article about quacks that can’t even agree on what the bounds and rules of their quackery are.
Decidability. The issues around undecidability all involve the lack of an upper bound. In a finite deterministic space, everything is decidable, although some things may be too costly computationally to decide.
There are several ways to go for decidability. The brute force way is computer arithmetic - there is no number larger than 2^64-1. That's how we get things done on computers, but proofs about numbers with finite upper bounds need lots of special cases. Mathematicians hate that.
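The brute-force flavor above is easy to model: in fixed-width machine arithmetic the largest number is baked in, and stepping past it wraps around (a Python sketch of unsigned 64-bit addition):

```python
U64_MAX = 2**64 - 1  # the largest value an unsigned 64-bit register can hold

def add_u64(a, b):
    # Model of machine addition: results are reduced modulo 2**64.
    return (a + b) & U64_MAX

print(add_u64(U64_MAX, 1))  # 0: "largest number plus one" wraps to zero
```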
I used to work on this sort of thing, using Boyer-Moore theory. That's a lot like the Peano axioms. There is (ZERO), and (ADD1 (ZERO)), and (ADD1 (ADD1 (ZERO))), etc. Everything is constructive and has an unambiguous representation in a LISP-like form. You can have recursive functions. But they must be proven to terminate, by having a nonnegative value which decreases on each recursive call. There is a distinction between "infinite" and "arbitrarily large". You can talk about arbitrarily large numbers, but you cannot get to 1/2 + 1/4 + 1/8 ... = 1. You can have integers and rational numbers of arbitrary size, but not reals.
Set theory was interesting. Rather than axiomatic set theory, I was using lists as sets, with the constraints that no value could be duplicated and the list must be ordered. Equality is strict - two things are equal only if the elements are all equal, compared element by element. It's possible to prove the usual axioms of set theory via this route. The ordered criterion requires proving things about ordered list insertion to get there. It's ugly and needs machine proofs.
I was doing this back in the early 1980s, when machine proofs were frowned upon. Mathematicians were still upset about the four-color theorem proof. It's all case analysis, with thousands of cases. That's more acceptable today.
Looked at in this light, infinity is a labor-saving device to eliminate special cases, at a potential cost in soundness.
Or it is something that clearly conceptually exists, and makes simplistic reductionist viewpoints impossible to prove, which frustrates those who attempt to extend them into twisted metaphysical conjectures.
There is a big difference between “infinity doesn’t exist” and “infinity doesn’t exist physically”.
I should also add that the resolution of Zeno's paradox in the form of calculus, where an infinite set of steps can occur in a finite time (or an infinite set of distances can span a finite total distance), is conceptually very simple and useful. Rejecting it as unphysical, or saying it must imply time or space come in discrete chunks, is not contributing to an understanding of reality unless the rejection also comes with a set of testable (in principle) predictions.
Is there? I think one could make a decent case for "nothing exists which doesn't exist physically[1]".
[1] https://plato.stanford.edu/entries/physicalism/
EDIT: you could even probably claim "nothing exists which isn't physically measurable", which may or may not be a stronger claim depending on your point of view.
EDIT AGAIN: rate limited by this dogshit website :D but I'll respond to this comment here:
> Which is exactly why I mentioned rejection of zero, negative numbers, etc. You can reject them, but doing so just throws away useful tools without gaining anything in return.
Yeah! I fully agree. I can see no obvious benefit to rejecting these powerful tools. However, important discoveries often happen in non-obvious directions, and exploring unexplored territory is generally worthwhile. So the fact that it doesn't seem immediately useful doesn't mean it's not worth trying!
You can reject them, but doing so just throws away useful tools without gaining anything in return.
The idea that nothing is demonstrative of infinity is clearly incorrect.
Take the screen you're reading this on. One pixel is composed of a bunch of different atoms, and once you get down to one of them, that atom subdivides into a bunch of subatomic particles, some of which even have mass. Let's take one of those for argument's sake. Split that, and you get some quarks.
Now let's imagine that's the smallest you can go. We can still talk about half of a down quark, or half of that, etc. Say, uh, infinitely so. There you go, everything is infinite. That wasn't so hard was it?
Also, all of the components of an atom, electrons and nucleus, have mass.
Zeno understood things like zero multiplied by a number being zero and a number multiplied by infinity being infinity, but he did not understand that neither zero nor infinity is stronger than the other, so the product of zero and infinity may be any finite number; i.e., the limit of a sequence of products, where one factor decreases towards zero and the other increases indefinitely, can be any number.
While Zeno either ignored or faked ignorance about the existence of limits of infinite sequences, other later Ancient Greek mathematicians, like Eudoxus and Archimedes, computed several limits, so they had an intuitive understanding of their behavior, even if they did not have a comprehensive theory.
So, firstly, you have split the particle 5 times. That's not infinite times. You can split it more, so that would be 6 times. And more. Even if you could split it 1000 times, that's not infinity.
The standard argument for infinity is that "you can always add 1 to any number, so there must be an infinity of them", and the refutation is that no matter how many times you add 1 to a number, all you've done is create a larger number. You never reach the point of actual infinity, no matter how long you keep doing this. You need to have infinite time in order to create an infinity by adding 1 to each number, so you're starting with the axiom that infinity exists (because you need an infinite number of operations to actually create an infinity). If you don't start with that axiom, then you can never reach infinity by addition (or any operation).
Is there a limit to how many times something can be logically divided? If not, then there’s your infinity. It doesn’t require you to continue brute forcing it, just reason about it.
We could redefine "half" to mean "half of whatever you're talking about until you get to some arbitrary limit", but doing that to all of arithmetic is going to wind up in a very odd place.
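For a sense of where such an arbitrary limit might land: starting from one metre and repeatedly halving, you pass the Planck length after only ~116 steps (the Planck length constant here is approximate):

```python
PLANCK_LENGTH = 1.6e-35  # metres, approximate

def halvings_until(limit, start=1.0):
    # Count halvings until we drop to or below the limit.
    n = 0
    x = start
    while x > limit:
        x /= 2
        n += 1
    return n

print(halvings_until(PLANCK_LENGTH))  # 116
```

So a "half, down to a cutoff" arithmetic bottoms out almost immediately; whether physics actually supplies such a cutoff is the open question the thread keeps circling.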