▲ adrian_b 7 hours ago
You are right, but only for a certain meaning of the word "geometry".

If "geometry" refers to the geometry of an affine space, i.e. a space of points, then indeed there is nothing special about any point chosen as the origin, and no reason to desire lower tolerances for the coordinates of points close to the current origin. Therefore, for the coordinates of points in an affine space, fixed-point numbers would be a better choice. There are also other quantities for which floating-point numbers are usually used despite fixed-point numbers being preferable, e.g. angles and logarithms.

On the other hand, if you work with the vector space associated to an affine space, i.e. with the set of displacements from one point to another, then the origin is special: it corresponds to the zero displacement. For the components of a vector, floating-point numbers are normally the right representation.

So for the best results, one would need both fixed-point and floating-point numbers in a computer. Both were provided in some early computers, but it is expensive to provide hardware for both, so eventually hardware execution units were provided only for floating-point numbers. The reason is that fixed-point numbers can be implemented in software with modest overhead, using integer operations. The overhead consists of implementing correct rounding, keeping track of the position of the fraction point, and doing some extra shifting when multiplications or divisions are done.

In languages that allow user-defined types, operator overloading, and function overloading, like C++, it is possible to make fixed-point numbers as simple to use as floating-point numbers. Some programming languages, like Ada, have fixed-point numbers among their standard data types. Nevertheless, not all compilers for such languages include a well-performing implementation of fixed-point numbers.
▲ AlotOfReading 5 hours ago | parent
Fixed point and floating point are extremely similar, so most of the time you should just go with floats. If you start with a fixed type, reserve some bits for an explicit exponent, and define a normalization scheme, you've recreated the core of IEEE floats. That also means you can go the other way and emulate (lower-precision) fixed point by masking an appropriate number of LSBs in the significand to regain the constant density of fixed point. You can treat floating point like fixed point in a log space for most purposes, ignoring some fiddly details about exponent boundaries.

And since they're essentially the same, there just aren't many situations where implementing your own fixed point is worth it:

- MCUs without FPUs are increasingly uncommon.
- Financial calculations seem to have converged on decimal floating point.
- Floating-point determinism is largely solved these days.
- Fixed point has better precision at a given width, but 53 vs. 64 bits isn't much of a difference for most applications.

If you regularly encounter situations where you need translation invariance across a huge range at a fixed (high) precision, though, fixed point is probably more useful to you.