griffzhowl 4 days ago

An eventual output of a calculation has to be a finite result, but the concepts that we use to get there are often not.

The standard way of setting up calculus involves continuous magnitudes, hence irrational quantities; that's used all over physics, and there doesn't seem to be a problem with it.

I think to make a compelling case for a finitist foundation for maths you would at the least have to construct all of the physically useful maths on a finitist basis.

Even if you did that, you should show somewhere this finitist foundation disagrees with the results obtained by the standard foundation, otherwise there's no reason to think the standard foundation is in error.

alexey-salmin 3 days ago | parent | next [-]

> Even if you did that, you should show somewhere this finitist foundation disagrees with the results obtained by the standard foundation, otherwise there's no reason to think the standard foundation is in error.

Well, these are probably easy to find even now? E.g. the Banach-Tarski paradox is unlikely to be provable in finitist maths, which is somewhat of an improvement.

griffzhowl 3 days ago | parent [-]

I was thinking more about applications in physics where calculus and irrational quantities are used all the time.

At more advanced levels the theories are based on differential geometry and operators on Hilbert space. I'm not sure if fully worked out finitist versions of these even exist. Where finitist versions do exist, they're often technically more difficult to use than the standard versions, which is the opposite of an improvement in my view.

Whether it's undesirable for your mathematical foundation to prove the Banach-Tarski paradox is debatable. It's counter-intuitive, but doesn't lead to contradictions, as far as is known. It doesn't apply to physics because the construction uses non-measurable sets.

alexey-salmin 2 days ago | parent [-]

I'm not a finitist myself, but my understanding is that finitism has about as much to do with physics as ZFC does, which is very little. The maths used in physics works in practice, and did so long before the question of foundations even came up.

The problem that bothers some mathematicians is that, despite working well, maths still lacks a solid foundation. Furthermore, it's essentially proven that such a foundation can't exist, at least for the mainstream version of maths. This is where non-mainstream versions pop up. Denying uncountable sets does help you resolve some of the paradoxes. Not all, unfortunately: even countable sets already lead to things like the incompleteness theorems. Well, one can dream.

griffzhowl 12 hours ago | parent [-]

> Furthermore it's basically proven that these foundations can't even exist,

What are you referring to? The current working foundation is ZFC, but there are equivalent type-theoretic foundations, like the one Lean and other proof-checking software use. I guess you know that, which is why I don't know what you mean by saying this.

alexey-salmin 3 hours ago | parent [-]

I'm referring to the failure of Hilbert's program. All the incompleteness, undefinability, and undecidability results arise when and only when some sort of infinite objects are present, so I can definitely see the allure of finitism.

ZFC is a working foundation of maths, but it's unknown whether it's consistent or arithmetically sound, and important statements like CH are independent of it. It's a "working foundation" but not a "true foundation", which alas cannot exist.

As mentioned above, I'm personally not a finitist and think that maths without infinite and uncountable sets is intellectually poorer. I don't mind, however, developing a finitist subset of maths further and seeing what's provable (and describable) in it, much as there's value in proving theorems in ZF instead of ZFC whenever possible.

fuzzfactor 3 days ago | parent | prev [-]

>An eventual output of a calculation has to be a finite result, but the concepts that we use to get there are often not.

This is so true, but it can be good if you're flexible enough to try it either way.

With massive tables of physical properties officially produced by pages of 32-bit Fortran, it really did look like floating point was ideal at first, because it worked great.

The algorithm had been stored as a direct mathematical equation, plain as day, exactly as deduced, with constants and operations in double-precision floating point.

But when the only user-owned computers were still just 8-bit machines, there was no way to reproduce the exact results across the entire table to the same number of significant figures using floating point.

Since it's a table it is of course not infinite, and a matrix to boot: a matrix of real numbers across an entire working spectrum.

The algorithm takes a set of input values, calculates results as defined, and rounds them off repeatably in the subsequent logic before output, so everyone can get agreement. The software, OTOH, takes a range of input values and outputs a matrix, and/or retains a matrix in "imaginary" spreadsheet form for later use :)

Every single value in the matrix is a floating-point representation of a real number, but each is rounded off as precisely as possible to the "exact" degree of usefulness, making them all functionally finite values in the end. This took a lot of work from top mathematicians, computer scientists, and engineers. As designed, the matrix then carries the algorithm on its own, without reference to the fundamental equation.

The solution turned out to involve working backward from the matrix, iteratively, until an alternate algorithm was found using only integers for values and operations, up until the final rounding and fixed-point representation at the end. The algorithm was dramatically unrecognizable, but it worked, and took only 0.5 kilobytes of 8-bit BASIC code, a fraction of the original Fortran.
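The general idea can be sketched in a few lines. This is a toy illustration only, not the actual table algorithm (which isn't given here); the formula, scale factor, and rounding step are all made up for the example. The point is that every machine doing the same integer operations gets bit-identical results, with a single rounding at the very end:

```python
# Toy fixed-point sketch: evaluate y = 1.8*x + 32 for integer x
# using only integer arithmetic, rounding once at the end.
# (Hypothetical formula and scale, chosen just for illustration.)

SCALE = 10_000          # fixed-point scale: 4 fractional decimal digits
A = 18_000              # 1.8  * SCALE, pre-scaled to an integer constant
B = 320_000             # 32.0 * SCALE

def linear_fixed(x: int) -> float:
    """Integer-only evaluation; identical on any machine."""
    raw = A * x + B                      # exact, still scaled by SCALE
    step = SCALE // 100                  # one step = 0.01 in real units
    rounded = (raw + step // 2) // step  # integer round-half-up to 0.01
    return rounded / 100                 # final conversion for display only

def linear_float(x: float) -> float:
    """The direct floating-point version, for comparison."""
    return round(1.8 * x + 32.0, 2)
```

Because `raw` and `rounded` are exact integers, the only machine-dependent step is the final display division, which is why such an algorithm can reproduce a published table digit-for-digit on small hardware.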

This time, the feature that showed up without extra effort was precision that increases directly with the bitness of the computer, with no need for floating point at all. Of course the Fortran code accomplished this too, by wise use of floating point, but it took much bigger iron to do so, and wasn't going to be battery powered any time soon back then.
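That precision-scales-with-bitness property can be shown with a small sketch. Again this is a made-up example, not the original code: a constant is stored as a scaled integer, and widening the scale (i.e. using a wider integer word) tightens the approximation with no change to the algorithm:

```python
import math

def scaled_mul_pi(x: int, bits: int) -> float:
    """Multiply x by a fixed-point integer approximation of pi,
    scaled by 2**bits; only the final display step divides.
    (Historically the constant would come from a published decimal
    expansion rather than math.pi, which is used here for convenience.)"""
    scale = 1 << bits
    pi_fixed = round(math.pi * scale)  # one-time integer constant
    return (x * pi_fixed) / scale      # exact integer product, then rescale

# The wider the integer word, the closer the result tracks the ideal value.
err16 = abs(scaled_mul_pi(1000, 16) - 1000 * math.pi)
err32 = abs(scaled_mul_pi(1000, 32) - 1000 * math.pi)
```

Here `err32` is several orders of magnitude smaller than `err16`, purely from the wider integer scale, which matches the observation that moving the same integer algorithm to a bigger machine buys precision for free.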

>somewhere this finitist foundation disagrees with the results obtained by the standard foundation,

>there's no reason to think the standard foundation is in error.

This is "exactly" how it was. There were disagreements all over the place, but they were in further decimal places not representable by the table. The standard was an international standard with carefully agreed-upon accuracy & precision, as defined by the Fortran, which really worked and was then written in stone; any unmatched output was a notable failure.