mwkaufma | 5 days ago
Define boundary conditions -- how much precision do you need? Then you can compute the min/max distances. If the "world" needs to be larger, then prepare to divide it into sectors and keep separate global/local coordinates (e.g. No Man's Sky works this way).

Really though, games are theater tech, not science. Double precision will be more than enough for anything but the most exotic use case. The most important thing is just to remember not to add very-big and very-small numbers together.
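A minimal sketch of both points in Rust, with made-up sizes and a hypothetical SectorPos type (not from the thread): adding a very-big and a very-small f32 silently drops the small part, while a sector index plus a small local offset keeps all the float math in a range where f32 is fine.

```rust
// (1) Big + small in f32: the small term is swallowed by rounding.
// (2) Sector + local offset: all f32 math stays small, so nothing is lost.
// Sizes, names, and the 3-axis layout here are illustrative assumptions.

#[derive(Debug, Clone, Copy)]
struct SectorPos {
    sector: (i64, i64, i64), // which sector (exact integers, effectively "global")
    local: (f32, f32, f32),  // offset within the sector, always small
}

fn main() {
    // At ~10 km from the origin (in meters), f32 spacing is about 1 mm,
    // so a 0.1 mm step disappears entirely:
    let big: f32 = 10_000.0;
    let small: f32 = 0.000_1;
    println!("{}", big + small == big); // prints "true" -- the step was lost

    // The same 0.1 mm step applied to a small, sector-local coordinate survives:
    let p = SectorPos { sector: (9, 0, 0), local: (400.0, 3.0, 7.5) };
    let moved = SectorPos {
        local: (p.local.0 + small, p.local.1, p.local.2),
        ..p
    };
    println!("{}", moved.local.0); // ~400.0001 -- small + small keeps its precision
}
```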
|
kbolino | 4 days ago | parent | next
The problem with double-precision in video games is that the GPU hardware does not support it. So you are plagued with tedious conversions and tricks like "global vs. local coordinates", etc.

mwkaufma | 4 days ago | parent

100! OTOH:

- Constant-factor scaling between game and render world space fixes a lot (gfx often need less precision than physics).
- Most view coords are in view or clip space, which are less impacted, so large world coords tend not to code-sprawl even when introduced.
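A rough sketch of that split (the names, the f64 world representation, and the scale factor are my assumptions, not something stated in the thread): simulate in doubles, and only hand the renderer small camera-relative f32 values, rescaled by a constant factor if game and render units differ.

```rust
// Gameplay/physics keeps full-precision f64 world coordinates; the GPU only ever
// sees small, camera-relative f32 values, where single precision is plenty.
// Names and the constant scale factor are illustrative assumptions.

#[derive(Clone, Copy)]
struct WorldPos { x: f64, y: f64, z: f64 }

/// World-space -> render-space: subtract the camera in f64 (no precision lost),
/// apply the constant game-to-render scale, then drop to f32 for the GPU.
fn to_render_space(p: WorldPos, camera: WorldPos, scale: f64) -> [f32; 3] {
    [
        ((p.x - camera.x) * scale) as f32,
        ((p.y - camera.y) * scale) as f32,
        ((p.z - camera.z) * scale) as f32,
    ]
}

fn main() {
    let camera = WorldPos { x: 5_000_000.0, y: 20.0, z: -3_250_000.0 };
    let object = WorldPos { x: 5_000_001.25, y: 21.5, z: -3_250_000.5 };

    // Scale of 1.0 here; physics might run in meters while rendering uses some
    // other unit, in which case this is where the constant factor goes.
    println!("{:?}", to_render_space(object, camera, 1.0)); // [1.25, 1.5, -0.5]
}
```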
skeezyboy | 4 days ago | parent | prev

> Define boundary conditions -- how much precision do you need?

Imagine if integer arithmetic gave wrong answers under certain conditions, lol. Why did we choose the current compromise?
ForOldHack | 2 days ago | parent | next

Compromises. We had BCD for finance, binary for games, and floating point for math. I wrote a sample 'make change' program using floating point, BCD, and integers (normalizing by multiplying by 100). The integer version ripped through it, but surprisingly BCD kept up with FP, and with compiler optimizations it was significantly faster in certain edge cases and unit tests. You get surprising things with commonplace problems.
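Not the original benchmark, but a small sketch of two of the three variants (BCD needs a library, so it's omitted); the amounts and coin denominations are made up:

```rust
// Make change two ways: f64 dollars vs. u64 cents ("normalize by multiplying by 100").
// The integer path is exact division/remainder; the float path has to round first.

const DENOMS_CENTS: [u64; 5] = [100, 25, 10, 5, 1]; // dollar, quarter, dime, nickel, penny

/// Integer version: exact, just division and remainder.
fn change_cents(mut amount: u64) -> [u64; 5] {
    let mut counts = [0u64; 5];
    for (i, &d) in DENOMS_CENTS.iter().enumerate() {
        counts[i] = amount / d;
        amount %= d;
    }
    counts
}

/// Float version: must round back to whole cents, because amounts like 4.35
/// are not exactly representable in binary floating point.
fn change_dollars(amount: f64) -> [u64; 5] {
    // Without .round(), 4.35 * 100.0 evaluates to 434.99999999999994
    // and truncating would come out one cent short.
    let cents = (amount * 100.0).round() as u64;
    change_cents(cents)
}

fn main() {
    println!("{:?}", change_dollars(4.35)); // [4, 1, 1, 0, 0]
    println!("{:?}", change_cents(435));    // same answer, no rounding games
}
```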
kbolino | 4 days ago | parent | prev | next

In my experience, most code that operates on integers does not anticipate overflow or wraparound. So it is almost always guaranteed to produce wrong results when these conditions occur, and is only saved by the fact that they usually don't occur in practice. It is odd to me that every major CPU instruction set has ALU flags to indicate when these conditions have occurred, and yet many programming languages ignore them entirely or make it hard to access them. Rust at least has the quartet of saturating, wrapping, checked, and unchecked arithmetic operations.
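For reference, a toy example of what those look like on Rust's integer types (overflowing_add is thrown in because it is the closest match to the CPU's carry/overflow flags; the unchecked member of the quartet is unsafe):

```rust
fn main() {
    let a: u8 = 250;

    // checked_*: yields None instead of silently wrapping.
    assert_eq!(a.checked_add(10), None);
    assert_eq!(a.checked_add(5), Some(255));

    // saturating_*: clamps at the type's bounds.
    assert_eq!(a.saturating_add(10), 255);

    // wrapping_*: explicit modular wraparound.
    assert_eq!(a.wrapping_add(10), 4);

    // overflowing_*: wrapped value plus a flag -- closest to what the ALU reports.
    assert_eq!(a.overflowing_add(10), (4, true));

    // unchecked_add exists too, but it is `unsafe` because overflow there
    // really is undefined behavior:
    // let b = unsafe { a.unchecked_add(10) };

    println!("all checks passed");
}
```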
ForOldHack | 2 days ago | parent

The trick is to get your ALUs to do some of the math for you. Oh, I miss the days of the 68020's fast barrel shifter and the 68030's byte smears. Tricky stuff lost to the silicon/sands of time.
mwkaufma | 4 days ago | parent | prev

They're not "wrong" -- the error bars are well-defined. Signed integer overflow, OTOH, is Undefined Behavior (in C and C++), so it's worse.
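A sketch of what "well-defined error bars" means in practice (the numbers are mine, not from the thread): with IEEE 754 round-to-nearest, a single f32 operation is accurate to within half a unit in the last place, i.e. a relative error of at most f32::EPSILON / 2.

```rust
fn main() {
    let a: f32 = 10_000.0; // e.g. ~10 km in meters
    let b: f32 = 0.123;

    let sum = a + b;                  // rounded f32 result
    let exact = a as f64 + b as f64;  // exact here: the true sum fits easily in f64
    let rel_err = ((sum as f64 - exact) / exact).abs();

    println!("sum = {sum}, relative error = {rel_err:e}");

    // Round-to-nearest bounds the relative error of one operation by half an ULP,
    // i.e. EPSILON / 2 -- the "error bar" is known before you run anything.
    assert!(rel_err <= (f32::EPSILON as f64) / 2.0);
}
```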