| |
| ▲ | kragen 4 hours ago | parent | next [-] | | Also, though, how big was Apple Integer BASIC? As I understand it, you had an entire PDP-10 at your disposal when you wrote the Fortran version of Empire. | | |
| ▲ | WalterBright 4 hours ago | parent [-] | | I did learn how to program on the -10. A marvelous experience. Looking backwards, writing an integer basic is a trivial exercise. But back in the 70s, I had no idea how to write such a thing. Around 1978, Hal Finney (yes, that guy) wrote an integer basic for the Mattel Intellivision (with its wacky 10 bit microprocessor) that fit in a 2K EPROM. Of course, Hal was (a lot) smarter than the average bear. | | |
| ▲ | kragen 38 minutes ago | parent [-] | | Interesting, I didn't know that! I didn't know him until the 90s, and didn't meet him in person until his CodeCon presentation. What I was trying to express—perhaps poorly—is that maybe floating-point support would have been more effort than the entire Integer BASIC. (Incidentally, as I understand it, nobody has found a bug in Apple Integer BASIC yet, which makes it a nontrivial achievement from my point of view.) |
|
| |
| ▲ | zabzonk 6 hours ago | parent | prev [-] | | I've never understood floating point :-) | | |
| ▲ | djmips 3 hours ago | parent | next [-] | | Fixed point is where the number has a predetermined number of bits for the integer and fraction, like 8.8, where you have 0-255 for the integer and the fraction goes from 1/256 to 255/256 in steps of 1/256. Floating point at its simplest just makes that a variable. So the (.) position is stored as a separate number. Now instead of being fixed - it floats around. This way you can put more in the integer or more in the fraction. The Microsoft Basic here used 23 bits for the number, 1 sign bit and 8 bits to say where the floating point should be placed. Of course in practice you have to deal with a lot of details depending on how robust you want your system. This Basic was not as robust as modern IEEE 754 but it did the job. Reading more about IEEE 754 is a fascinating way to learn about modern floating point. I also recommend Bruce Dawson's observations on his Random ASCII blog. | |
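A minimal sketch of the 8.8 fixed-point scheme djmips describes, in Python (the function names are illustrative, not from any real library). Every value is stored as an integer count of 1/256 steps, so addition is plain integer addition, while multiplication needs a shift to drop the extra fraction bits:

```python
def to_fixed_8_8(x: float) -> int:
    """Encode x as an unsigned 8.8 fixed-point integer (0 <= x < 256)."""
    return round(x * 256) & 0xFFFF

def from_fixed_8_8(raw: int) -> float:
    """Decode an 8.8 fixed-point integer back to a float."""
    return raw / 256

a = to_fixed_8_8(3.5)    # 3.5  * 256 = 896
b = to_fixed_8_8(1.25)   # 1.25 * 256 = 320

# Addition is just integer addition; the point stays put.
print(from_fixed_8_8(a + b))          # 4.75

# Multiplying two 8.8 numbers yields 16 fraction bits, so shift back down by 8.
print(from_fixed_8_8((a * b) >> 8))   # 4.375
```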
| ▲ | codedokode 4 hours ago | parent | prev | next [-] | | Let's say we want to store numbers in computer memory but we are not allowed to use a decimal point or any characters except for digits. We need to make some system to encode and decode real numbers as a sequence containing only digits. With fixed point numbers, you write the digits into the memory and have a convention about where the decimal point always sits. For example, if we agree that the point always comes before the last two digits, then a string 000123 is interpreted as 0001.23 and 123000 means 1230.00. Using this system with 6 digits we can represent numbers from 0 to 9999.99 to a precision of 0.01. With floating point, you write both the decimal point position (which we call the "exponent") and the digits (called the "mantissa"). Let's agree that the first two digits are the exponent (point position) and the remaining four are the mantissa. Then this number: 020123
means 01.23 or 1.23 (the exponent is 2, meaning the decimal point goes after the 2nd digit of the mantissa). Now using the same 6 digits we can represent numbers from 0 to 9999·10⁹⁵ with a relative precision of 1/10000. That's all you need to know, and the rest should be easy to figure out. | | |
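The toy decimal format above can be decoded in a few lines of Python (a sketch of codedokode's convention: first two digits are the exponent, last four are the mantissa, value is 0.MMMM·10^exponent):

```python
def decode(s: str) -> float:
    """Decode a 6-digit string: 2 exponent digits, then 4 mantissa digits."""
    exp = int(s[:2])        # where the decimal point goes
    mantissa = int(s[2:])   # the four significant digits
    # The point sits after `exp` digits of the mantissa: 0.MMMM * 10**exp
    return mantissa * 10**exp / 10**4

print(decode("020123"))   # 1.23   (point after the 2nd digit of 0123)
print(decode("041230"))   # 1230.0 (point after the 4th digit of 1230)
```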
| ▲ | WalterBright 4 hours ago | parent [-] | | In other words, a floating point number consists of 2 numbers and a sign bit: 1. the digits 2. the exponent 3. a sign bit If you're familiar with scientific notation, yes, it's the same thing. https://en.wikipedia.org/wiki/Scientific_notation The rest is just the inevitable consequences of that. | | |
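The three parts WalterBright lists can be pulled out of a real IEEE 754 double with Python's `struct` module. This is a hypothetical helper for illustration; a 64-bit double has 1 sign bit, 11 exponent bits (biased by 1023), and 52 stored mantissa bits with an implicit leading 1:

```python
import struct

def parts(x: float):
    """Split a double into (sign, unbiased exponent, mantissa bits)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)     # the implicit leading 1 is not stored
    return sign, exponent - 1023, mantissa

# -6.0 is -1.5 * 2**2: sign 1, exponent 2, fraction 0.5 (top mantissa bit set)
print(parts(-6.0))
```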
| ▲ | codedokode 3 hours ago | parent [-] | | I like "decimal point position" more than "exponent". Also, if I remember correctly, "mantissa" is the significand (the digits of the number). And by the way, engineering notation (where the exponent must be divisible by 3) is so much better. I hate converting things like 2.234·10¹¹ into billions in my head. And by the way (unrelated to floating point) mathematicians could make better names for things; for example, instead of "numerator" and "denominator" they could use "upper" and "lower number". So much easier! | | |
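Engineering notation is a small transformation: round the exponent down to the nearest multiple of 3 and scale the significand to match. A sketch in Python (the function name and output format are made up for illustration):

```python
import math

def engineering(x: float) -> str:
    """Format a positive number with an exponent that is a multiple of 3."""
    exp = math.floor(math.log10(abs(x)))
    eng_exp = exp - (exp % 3)          # round the exponent down to a multiple of 3
    return f"{x / 10**eng_exp:g}e{eng_exp}"

print(engineering(2.234e11))   # 223.4e9 -- i.e. 223.4 billion, no head-math needed
```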
|
| |
| ▲ | hh2222 6 hours ago | parent | prev | next [-] | | Wrote floating point routines in assembler back in college. When you get it, it's one of those aha moments. | |
| ▲ | WalterBright 6 hours ago | parent | prev [-] | | The specs for it are indeed hard to read. But the implementation isn't that bad. Things like the sticky bit and the guard bit are actually pretty simple. However, crafting an algorithm that uses IEEE arithmetic and avoids the limitations of IEEE is hard. |
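A sketch of the guard/sticky idea WalterBright mentions, under my own assumptions about widths and naming (not taken from any particular implementation): when a result mantissa is shifted right to align or normalize, keep the first bit shifted out (the guard bit) and OR together everything below it (the sticky bit), then round to nearest, ties to even:

```python
def shift_right_and_round(mantissa: int, shift: int) -> int:
    """Shift a mantissa right by `shift` bits, rounding to nearest, ties to even."""
    if shift <= 0:
        return mantissa << -shift
    kept = mantissa >> shift
    guard = (mantissa >> (shift - 1)) & 1                  # first discarded bit
    sticky = (mantissa & ((1 << (shift - 1)) - 1)) != 0    # any lower discarded bits?
    # Round up when the discarded part is > 1/2 (guard and sticky),
    # or exactly 1/2 (guard, no sticky) and the kept value is odd.
    if guard and (sticky or (kept & 1)):
        kept += 1
    return kept

print(shift_right_and_round(0b10110, 2))  # discarded 0b10 is exactly half; kept 0b101 is odd -> 6
print(shift_right_and_round(0b10010, 2))  # discarded exactly half; kept 0b100 is even -> 4
```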
|
|