zabzonk a day ago

I've written an Intel 8080 emulator that was portable between Dec10/VAX/IBM VM CMS. That was easy - the 8080 can be done quite simply with a 256 value switch - I did mine in FORTRAN77.
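The core of such an emulator is just a fetch-and-dispatch loop over those 256 opcodes. A minimal C sketch (the state struct is made up for illustration and only a handful of opcodes are filled in; mine was FORTRAN, of course):

    #include <stdint.h>

    /* Hypothetical 8080 state - just enough for the sketch. */
    struct i8080 {
        uint8_t  a, b;         /* two of the registers */
        uint16_t pc;           /* program counter      */
        uint8_t  mem[65536];   /* 64K address space    */
    };

    /* Execute one instruction: fetch the opcode, dispatch on it. */
    static void step(struct i8080 *s) {
        uint8_t op = s->mem[s->pc++];
        switch (op) {          /* one case per opcode, 256 in all */
        case 0x00: /* NOP      */                  break;
        case 0x78: /* MOV A,B  */ s->a = s->b;     break;
        case 0x3C: /* INR A    */ s->a++;          break;
        case 0xC3: /* JMP nn   */
            s->pc = s->mem[s->pc] | (s->mem[s->pc + 1] << 8);
            break;
        /* ... the remaining opcodes, plus flags and I/O ... */
        default: break;        /* unimplemented opcode */
        }
    }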

Writing a BASIC interpreter, with floating point, is much harder. Gates, Allen and their collaborators' BASIC was pretty damned good.

teleforce 14 hours ago | parent | next [-]

Fun fact: according to Jobs, for some unknown reason Wozniak refused to add floating point support to Apple BASIC, so they had to license a BASIC with floating point from Microsoft [1].

[1] Bill & Steve (Jobs!) reminisce about floating point BASIC:

https://devblogs.microsoft.com/vbteam/bill-steve-jobs-remini...

WalterBright 7 hours ago | parent | next [-]

Writing a floating point emulator (I've done it) is not too hard. First, write it in a high level language, and debug the algorithm. Then hand-assembling it is not hard.

What is hard is skipping the high level language step, and trying to do it in assembler in one step.
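For example, the multiply routine is only a few lines in the high level version, and that's what you debug before touching assembler. A toy sketch in C, assuming an unpacked sign/exponent/24-bit-mantissa representation and ignoring rounding, zero, and overflow:

    #include <stdint.h>

    /* Toy unpacked float: value = (-1)^sign * man * 2^(exp - 23),
       with man normalized so that bit 23 is set.                  */
    struct sfloat {
        int      sign;   /* 0 or 1            */
        int      exp;    /* unbiased exponent */
        uint32_t man;    /* 24-bit mantissa   */
    };

    /* Multiply: xor the signs, add the exponents, multiply the
       mantissas, renormalize. Rounding and special cases omitted. */
    static struct sfloat fmul(struct sfloat x, struct sfloat y) {
        struct sfloat r;
        r.sign = x.sign ^ y.sign;
        r.exp  = x.exp + y.exp;
        uint64_t m = (uint64_t)x.man * y.man;  /* 48-bit product     */
        m >>= 23;                              /* back to 24-25 bits */
        if (m & (1u << 24)) {                  /* product >= 2.0     */
            m >>= 1;
            r.exp++;
        }
        r.man = (uint32_t)m;
        return r;
    }

Once that works, translating it line by line into assembler is mostly mechanical.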

kragen 4 hours ago | parent | next [-]

Also, though, how big was Apple Integer BASIC? As I understand it, you had an entire PDP-10 at your disposal when you wrote the Fortran version of Empire.

WalterBright 4 hours ago | parent [-]

I did learn how to program on the -10. A marvelous experience.

Looking backwards, writing an integer basic is a trivial exercise. But back in the 70s, I had no idea how to write such a thing.

Around 1978, Hal Finney (yes, that guy) wrote an integer basic for the Mattel Intellivision (with its wacky 10 bit microprocessor) that fit in a 2K EPROM. Of course, Hal was (a lot) smarter than the average bear.

kragen 40 minutes ago | parent [-]

Interesting, I didn't know that! I didn't know him until the 90s, and didn't meet him in person until his CodeCon presentation.

What I was trying to express—perhaps poorly—is that maybe floating-point support would have been more effort than the entire Integer BASIC. (Incidentally, as I understand it, nobody has found a bug in Apple Integer BASIC yet, which makes it a nontrivial achievement from my point of view.)

zabzonk 6 hours ago | parent | prev [-]

I've never understood floating point :-)

djmips 3 hours ago | parent | next [-]

Fixed point is where the number has a predetermined number of bits for the integer part and the fraction, like 8.8, where you have 0-255 for the integer part and the fraction goes from 1/256 to 255/256 in steps of 1/256.

Floating point, at its simplest, just makes that split a variable: the position of the point is stored as a separate number. Now, instead of being fixed, it floats around.

This way you can put more in the integer or more in the fraction.
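In code, 8.8 fixed point is just a 16-bit integer where you remember that the bottom 8 bits are the fraction. A quick C sketch (names are made up):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint16_t fx8_8;       /* 8 integer bits . 8 fraction bits */

    #define FX_ONE 256            /* 1.0 in 8.8 */

    static fx8_8  fx_from(double d) { return (fx8_8)(d * FX_ONE); }
    static double fx_to(fx8_8 x)    { return (double)x / FX_ONE; }

    /* Add and subtract are plain integer ops; multiply needs a shift
       to drop the extra fraction bits of the intermediate product.   */
    static fx8_8 fx_mul(fx8_8 a, fx8_8 b) {
        return (fx8_8)(((uint32_t)a * b) >> 8);
    }

    int main(void) {
        fx8_8 x = fx_from(1.5);               /* 0x0180 */
        fx8_8 y = fx_from(2.25);              /* 0x0240 */
        printf("%f\n", fx_to(fx_mul(x, y)));  /* 3.375000 */
        return 0;
    }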

The Microsoft BASIC here used 23 bits for the digits, 1 sign bit, and 8 bits to say where the point should be placed.
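Packed into a 32-bit word, getting the three fields back out is just shifts and masks. A sketch assuming a simple [sign | exponent | mantissa] layout - the historical Microsoft format stored the bytes in a different order and used an implicit leading 1, so this is only the general idea:

    #include <stdint.h>

    /* Assumed packing: [ 1 sign | 8 exponent | 23 mantissa ] */
    static void unpack(uint32_t w, int *sign, int *exp, uint32_t *man) {
        *sign = (w >> 31) & 1;
        *exp  = (w >> 23) & 0xFF;
        *man  =  w        & 0x7FFFFF;
    }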

Of course, in practice you have to deal with a lot of details, depending on how robust you want your system to be. This BASIC was not as robust as modern IEEE 754, but it did the job.

Reading more about IEEE 754 is a fascinating way to learn about modern floating point. I also recommend Bruce Dawson's observations on his Random ASCII blog.

codedokode 4 hours ago | parent | prev | next [-]

Let's say we want to store numbers in computer memory but we are not allowed to use a decimal point or any characters except digits. We need some system to encode and decode real numbers as a sequence containing only digits.

With fixed point numbers, you write the digits into memory and adopt a convention that the decimal point always sits after the N-th digit. For example, if we agree that the point always goes after the 2nd digit, then the string 000123 is interpreted as 00.0123 and 123000 means 12.3. Using this system with 6 digits we can represent numbers from 0 to 99.9999 with a precision of 0.0001.

With floating point, you write both the decimal point position (which we call the "exponent") and the digits (called the "mantissa"). Let's agree that the first two digits are the exponent (the point position) and the remaining four are the mantissa. Then this number:

    020123 
means 01.23, i.e. 1.23 (the exponent is 2, meaning the decimal point goes after the 2nd digit of the mantissa). Now, using the same 6 digits, we can represent numbers from 0 to 9999·10⁹⁵ with a relative precision of 1/10000.
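A few lines of C make the decoding rule concrete (a hypothetical helper; the first two digits are the exponent, the last four the mantissa):

    #include <stdio.h>
    #include <math.h>

    /* Decode a 6-digit string: value = 0.mmmm * 10^ee,
       i.e. the point goes after the ee-th digit of the mantissa. */
    static double decode(const char *s) {
        int e, m;
        sscanf(s, "%2d%4d", &e, &m);
        return m / 10000.0 * pow(10.0, e);
    }

    int main(void) {
        printf("%g\n", decode("020123"));   /* prints 1.23 */
        return 0;
    }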

That's all you need to know, and the rest should be easy to figure out.

WalterBright 4 hours ago | parent [-]

In other words, a floating point number consists of 2 numbers and a sign bit:

1. the digits

2. the exponent

3. a sign bit

If you're familiar with scientific notation, yes, it's the same thing.

https://en.wikipedia.org/wiki/Scientific_notation

The rest is just the inevitable consequences of that.

codedokode 3 hours ago | parent [-]

I like "decimal point position" more than "exponent". Also, if I remember correctly, "mantissa" is the significand (the digits of the number).

And by the way, engineering notation (where the exponent must be divisible by 3) is so much better. I hate converting things like 2.234·10¹¹ into billions in my head.

And by the way (unrelated to floating point), mathematicians could have chosen better names for things; for example, instead of "numerator" and "denominator" they could use "upper number" and "lower number". So much easier!

WalterBright 35 minutes ago | parent [-]

I do get significand and mantissa mixed up. I solved that by just removing them!

hh2222 6 hours ago | parent | prev | next [-]

Wrote floating point routines in assembler back in college. When you get it, it's one of those aha moments.

WalterBright 6 hours ago | parent | prev [-]

The specs for it are indeed hard to read. But the implementation isn't that bad. Things like the sticky bit and the guard bit are actually pretty simple.
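For instance, rounding to nearest-even when you shift k low bits out of a mantissa only needs the guard bit (the top discarded bit) and the sticky bit (the OR of everything below it). A small sketch, assuming k >= 1:

    #include <stdint.h>

    /* Drop the low k bits of a mantissa, rounding to nearest, ties to even. */
    static uint64_t round_shift(uint64_t man, int k) {
        uint64_t kept   = man >> k;
        uint64_t guard  = (man >> (k - 1)) & 1;                  /* top discarded bit */
        uint64_t sticky = (man & ((1ULL << (k - 1)) - 1)) != 0;  /* any lower bit set */
        if (guard && (sticky || (kept & 1)))  /* over half, or exactly half and odd */
            kept++;                           /* caller renormalizes if this carries out */
        return kept;
    }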

However, crafting an algorithm that uses IEEE arithmetic and avoids the limitations of IEEE is hard.

zozbot234 14 hours ago | parent | prev [-]

Floating point math was a key feature on these early machines, since it opened up the "glorified desk calculator" use case. This was one use for them (along with gaming and use as a remote terminal) that did not require convenient data storage, which would've been a real challenge before disk drives became a standard. And the float implementation included in BASIC was the most common back in the day. (There are even some subtle differences between it and the modern IEEE variety that we'd be familiar with today.)

musicale 21 hours ago | parent | prev | next [-]

I agree - it's a useful BASIC that can do math and fits in 4 or 8 kilobytes of memory.

And Bill Gates complaining about people pirating the $150 Altair BASIC inspired the creation of Tiny BASIC, as well as the coining of "copyleft".

phkahler a day ago | parent | prev | next [-]

I still have a cassette tape with Microsoft Basic for the Interact computer. It's got an 8080.

thijson 9 hours ago | parent | next [-]

I remember my old Tandy Color Computer booting up and referencing Microsoft BASIC:

https://tinyurl.com/2jttvjzk

The computer came with some pretty good books with example BASIC programs to type in.

vile_wretch 6 hours ago | parent | prev | next [-]

I have an MS Extended BASIC cassette for the Sol-20, also 8080-based.

thesuitonym 10 hours ago | parent | prev [-]

You should upload the audio to the Internet Archive!

TMWNN 16 hours ago | parent | prev [-]

>Writing a BASIC interpreter, with floating point, is much harder. Gates, Allen and their collaborators' BASIC was pretty damned good.

The floating point routines are Monte Davidoff's work. But yes, Gates and Allen writing Altair BASIC on the Harvard PDP-10 without ever actually seeing a real Altair, then having it work on the first try after laboriously entering it with toggle switches at MITS in Albuquerque, was a remarkable achievement.

WalterBright 7 hours ago | parent | next [-]

What Allen did was write an 8080 emulator that ran on the -10. The 8080 is a simple CPU, so writing an emulator for it isn't hard.

https://pastraiser.com/cpu/i8080/i8080_opcodes.html

Then, their BASIC was debugged by running it on the emulator.

The genius was not the difficulty of doing that, it wasn't hard. The genius was the idea of writing an 8080 emulator. Wozniak, in comparison, wrote Apple code all by hand in assembler and then hand-assembled it to binary, a very tedious and error-prone method.

In the same time period, I worked at Aph, and we were developing code that ran on the 6800 and other microprocessors. We used full-fledged macro assemblers running on the PDP-11 to assemble the code into binary, and then download binary into an EPROM which was then inserted into the computer and run. Having a professional macro assembler and text editors on the -11 was an enormous productivity boost, with far fewer errors. (Dan O'Dowd wrote those assemblers.)

(I'm doing something similar with my efforts to write an AArch64 code generator. First I wrote a disassembler for it, testing it by generating AArch64 code via gcc, disassembling that with objdump and then comparing the results with my disassembler. This helps enormously in verifying that the correct binary is being generated. Since there are thousands of instructions in the AArch64, this is a much scaled-up version of the 8080 exercise.)

dhosek 6 hours ago | parent [-]

The Wozniak method was how I used to write 6502 assembler programs in high school since I didn’t have the money to buy a proper assembler. I wrote everything out longhand on graph paper in three columns. Addresses on the left, a space for the code in the middle and the assembler opcodes on the right, then I’d go through and fill in all the hex codes for what I’d written. When you work like that, it really focuses the mind because there’s not much margin for error and making a big change in logic requires a lot of manual effort.

mfuzzey 4 hours ago | parent [-]

I started writing Z80 assembler (on a ZX80 computer) that way. But I soon got fed up looking up opcodes and especially calculating relative jumps (particularly backwards ones) by hand, as I often seemed to make off-by-one errors that caused my program to crash.
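(The off-by-one trap is that the displacement in a Z80 JR is measured from the address after the two-byte instruction. A quick C expression of the rule, as a hypothetical helper:)

    #include <stdio.h>

    /* Z80 JR: 2-byte instruction; the signed 8-bit displacement is
       relative to the address of the *next* instruction (JR addr + 2). */
    static int jr_disp(int jr_addr, int target) {
        int d = target - (jr_addr + 2);
        if (d < -128 || d > 127)
            printf("target out of range for JR\n");
        return d;
    }

    int main(void) {
        /* A backwards jump from a JR at 0x8000 to a loop top at 0x7FF0. */
        printf("%d\n", jr_disp(0x8000, 0x7FF0));   /* prints -18 */
        return 0;
    }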

So I wrote my own assembler in BASIC :)

zabzonk 15 hours ago | parent | prev [-]

Allen had to write the loader in machine code, which was toggled in on the Altair console. The BASIC interpreter itself was loaded from paper tape via the loader and a tape reader. The first BASIC program Allen ran on the Altair was apparently "2 + 2", which worked - i.e. it printed "4". I'd like to have such confidence in my own code, particularly the I/O, which must have been tricky to emulate on the Dec10.

WalterBright 7 hours ago | parent [-]

> which must have been tricky to emulate on the Dec10

I don't see why it would be tricky. I don't know how Allen's 8080 emulator on the PDP-10 worked, but it seems straightforward to emulate 8080 I/O.

zabzonk 6 hours ago | parent [-]

Well, I found it a bit hard on my Dec10-based emulator. I never got the memory-mapped stuff to work properly - I just mocked up some of the I/O instructions. But it was actually a spare-time project, intended to let my students do stuff like sorting, searching in strings, so I didn't feel too guilty. It had an assembler, debugger and other stuff. And it was portable - completely standard FORTRAN77!