amelius 9 hours ago

Think about this. It's silly to use floating point numbers to represent geometry, because it gives coordinates closer to the origin more precision and in most cases the origin is just an arbitrary point.
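A quick way to see this (a Python sketch): `math.ulp` returns the gap between a double and the next representable double, and that gap grows with distance from zero.

```python
import math

# The spacing between adjacent representable doubles (the "ulp") grows
# with magnitude, so coordinates far from the origin get coarser precision.
for x in (1.0, 1_000.0, 1_000_000.0, 1e12):
    print(f"ulp({x:g}) = {math.ulp(x)}")
```

At 1e12 the spacing is already larger than a tenth of a millimetre if your unit is metres.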

anonymars 9 hours ago | parent | next [-]

Random aside, but as I recall this is what made Kerbal Space Program so difficult: very large distances, and changing origins as you'd go to separate bodies, the latter basically because of this aspect of floating point. And because of the mismanagement of KSP2 they had to relearn these difficulties, since the experienced people never really worked with the new developers.

I only played it rather than modded it, so happy to be corrected or further enlightened, but seems like an interesting problem to have to solve.

Edit: sure enough, it was actually discussed here: https://news.ycombinator.com/item?id=26938812

adgjlsfhk1 5 hours ago | parent [-]

What KSP really should have done is keep its orbital math separate from its force propagation. If they had made a virtual node for each craft's center of mass, the COM position would never be affected by intra-body forces, and the orbital math could be done in super high (Float128?) precision.

Dylan16807 an hour ago | parent | prev | next [-]

Floating point has the benefit of not screaming and exploding when you have to take three lengths and calculate a volume.

Double precision floating point is like a 53-bit fixed point system that automatically scales to the exact size you need it to be. You get huge benefits for paying those 11 exponent bits. Even if you do need the extra significand bits, you're often better off switching to a higher precision float or a double-double system.
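One way to see the "53-bit fixed point that rescales itself" framing in Python: every integer up to 2**53 is exact in a double, and the first collision happens immediately after.

```python
# A double represents every integer up to 2**53 exactly; past that,
# the automatic scaling kicks in and adjacent integers start to collide.
assert float(2**53) == 2**53
assert float(2**53 + 1) == float(2**53)   # 2**53 + 1 rounds back down
print(float(2**53))
```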

m-schuetz 4 hours ago | parent | prev | next [-]

For geometry, fixed-point integers are better. But for computation and usability, floats are great. Scaling a 10 meter model to 13% of its size is a trivial multiplication by 0.13f in floats. With integers, this can get tricky: you can't divide by 100 first and then multiply by 13, because you'd lose precision, and you can't multiply by 13 first and then divide by 100, because you might overflow. Maybe vendors could add hardware that computes this accurately, like they currently do for float, but honestly, float is good enough and the potential benefits do not outweigh the disadvantages.
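The integer-scaling dilemma can be sketched in a few lines of Python (Python's ints never overflow, so the overflow half of the problem is only indicated in comments):

```python
x = 150                        # a coordinate in integer millimetres

# Divide first: the fractional part of x / 100 is thrown away too early.
div_first = (x // 100) * 13    # 1 * 13 = 13 -- badly wrong

# Multiply first: exact here, but x * 13 can overflow a narrow integer
# type (e.g. int32) for large coordinates; Python ints just grow instead.
mul_first = (x * 13) // 100    # 1950 // 100 = 19, the truncated 13% of 150

print(div_first, mul_first)    # 13 19
```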

Float is also fantastic for depth values, precisely because it has more precision towards the origin, basically quasi-logarithmic precision. Having double the precision at half the distance is A+. At least if you're writing software rasterizers and doing linear depth. The story with depth buffer precision in GPU pipelines, with normalized depth and a hyperbolic distribution, is... sad.

meheleventyone 8 hours ago | parent | prev | next [-]

Yeah in a lot of cases it's much better to use integers and a fixed precision as the absolute unit of position. For games it's just that the scale of most games works well with floats in the range they care about.

adrian_b 7 hours ago | parent | prev | next [-]

You are right, but only for a certain meaning of the word "geometry".

If "geometry" refers to the geometry of an affine space, i.e. a space of points, then indeed there is nothing special about any point that is chosen as the origin and no reason to desire lower tolerances for the coordinates of points close to the current origin.

Therefore for the coordinates of points in an affine space, using fixed-point numbers would be a better choice. There are also other quantities for which usually floating-point numbers are used, despite the fact that fixed-point numbers are preferable, e.g. angles and logarithms.

On the other hand, if you work with the vector space associated to an affine space, i.e. with the set of displacements from one point to another, then the origin is special, i.e. it corresponds with no displacement. For the components of a vector, floating-point numbers are normally the right representation.

So for the best results, one would need both fixed-point numbers and floating-point numbers in a computer.

These were provided in some early computers, but it is expensive to provide hardware for both, so eventually hardware execution units were provided only for floating-point numbers.

The reason is that fixed-point numbers can be implemented in software with a modest overhead, using operations with integer numbers. The overhead consists in implementing correct rounding, keeping track of the position of the fraction point and doing some extra shifting when multiplications or divisions are done.

In languages that allow the user to define custom types and that allow operator overloading and function overloading, like C++, it is possible to make the use of fixed-point numbers as simple as the use of the floating-point numbers.
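As a sketch of both points (the shifting and rounding described above, and the operator-overloading ergonomics), here is a minimal Q16.16 fixed-point type in Python; the class and its names are hypothetical, purely for illustration:

```python
class Fixed:
    """Minimal Q16.16 signed fixed-point number (illustrative sketch)."""
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS

    def __init__(self, value=0.0, raw=None):
        # `raw` carries an already-scaled integer; `value` converts a float.
        self.raw = raw if raw is not None else round(value * self.ONE)

    def __add__(self, other):
        # Addition needs no shifting: both operands share the same scale.
        return Fixed(raw=self.raw + other.raw)

    def __mul__(self, other):
        # The raw product has 32 fractional bits; shift back down,
        # adding half an ulp first so the result is rounded, not truncated.
        prod = self.raw * other.raw
        half = 1 << (self.FRAC_BITS - 1)
        return Fixed(raw=(prod + half) >> self.FRAC_BITS)

    def __float__(self):
        return self.raw / self.ONE

print(float(Fixed(1.5) * Fixed(2.25)))   # 3.375
```

With the operators overloaded, `Fixed` values compose as naturally as floats at the call site.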

Some programming languages, like Ada, have fixed-point numbers among the standard data types. Nevertheless, not all compilers for such programming languages include an implementation for fixed-point numbers that has a good performance.

AlotOfReading 5 hours ago | parent [-]

Fixed point and floating point are extremely similar, so most of the time you should just go with floats. If you start with a fixed type, reserve some bits for storing an explicit exponent, and define a normalization scheme, you've recreated the core of IEEE floats. That also means we can go the other way and emulate (lower precision) fixed point by masking an appropriate number of LSBs in the significand to regain the constant density of fixed point. You can treat floating point like fixed point in a log space for most purposes, ignoring some fiddly details about exponent boundaries.
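The masking trick can be sketched in Python by reinterpreting the double's bits with `struct` (the function name and `keep_bits` parameter are hypothetical):

```python
import struct

def mask_significand(x: float, keep_bits: int) -> float:
    """Zero all but the top `keep_bits` of a double's 52 stored
    significand bits, emulating a lower-precision format (sketch)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    drop = 52 - keep_bits
    bits &= ~((1 << drop) - 1)       # clear the low significand bits
    (y,) = struct.unpack("<d", struct.pack("<Q", bits))
    return y

print(mask_significand(3.141592653589793, 10))   # 3.140625
```

Note this gives truncation, not rounding, and per the "fiddly details" caveat the effective absolute step still doubles at each power-of-two boundary.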

And since they're essentially the same, there just aren't many situations where implementing your own fixed point is worth it. MCUs without FPUs are increasingly uncommon. Financial calculations seem to have converged on Decimal floating point. Floating point determinism is largely solved these days. Fixed point has better precision at a given width, but 53 vs 64 bits isn't much different for most applications. If you happen to regularly encounter situations where you need translation invariants across a huge range at a fixed (high) precision though, fixed point is probably more useful to you.

adrian_b 3 hours ago | parent [-]

There are applications where the difference between fixed-point and floating-point numbers matters, i.e. the difference between having a limit for the absolute error or for the relative error.

The applications where the difference does not matter are those whose accuracy requirements are much less than provided by the numeric format that is used.

When using double-precision FP64 numbers, the rounding errors are frequently small enough to satisfy the requirements of an application, regardless of whether those requirements are specified as a relative error or as an absolute error.

In such cases, floating-point numbers must be used, because they are supported by the existing hardware.

But when an application has stricter requirements for the maximum absolute error, there are cases when it is preferable to use smaller fixed-point formats instead of bigger floating-point formats. This is especially true when FP64 is not sufficient, so quadruple-precision floating-point numbers would be needed; hardware support for those is rare, so they must be implemented in software anyway, preferably as double-double-precision numbers.
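A double-double representation builds on error-free transformations such as Knuth's two-sum, which can be sketched in Python:

```python
def two_sum(a: float, b: float):
    """Knuth's error-free addition: returns (s, e) such that
    s + e equals a + b exactly, with s the rounded sum and
    e the rounding error that s could not hold."""
    s = a + b
    bv = s - a               # the part of b that made it into s
    av = s - bv              # the part of a that made it into s
    e = (a - av) + (b - bv)  # what was rounded away
    return s, e

# Keeping the (hi, lo) pair gives roughly 106 significand bits --
# the core idea of a double-double number.
hi, lo = two_sum(1.0, 1e-17)
print(hi, lo)   # 1.0 1e-17
```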

AlotOfReading 2 hours ago | parent [-]

    i.e. the difference between having a limit for the absolute error or for the relative error.
The masking procedure I mentioned gives uniform absolute error in floats, at the cost of lost precision in the significand. The trade-off between the two is really space and hence precision.

I'm not saying fixed point is never useful, just that it's a very situational technique these days to address specific issues rather than an alternative default. So if you aren't even doing numerical analysis (as most people don't), you should stick with floats.

rpdillon 8 hours ago | parent | prev [-]

For all the players of the original Morrowind out there, you'll notice that your character movement gets extremely janky when you're well outside of Vvardenfell because the game was never designed to go that far from the origin. OpenMW fixes this (as do patches to the original Morrowind, though I haven't used those), since mods typically expand outwards from the original island, often by quite a bit.