lastdong 9 days ago

I find it incredible that we all now have access to an SGI-level machine at home, thanks to Nvidia. This reminds me of a previous thread on HN: https://news.ycombinator.com/item?id=39945487

ofrzeta 9 days ago | parent | next [-]

It's more like thanks to 3dfx?

dagw 8 days ago | parent | next [-]

3dfx never really competed with SGI since they were never compatible with the commercial and scientific software that people bought SGI machines for. Nvidia on the other hand (mostly) was. I worked at a small animation studio at the time and shortly after the GeForce 2 was launched we'd basically replaced all our expensive SGI and Intergraph machines with cheap generic Wintel boxes at a quarter of the price.

FirmwareBurner 8 days ago | parent | prev | next [-]

There were like a bazillion companies competing for consumer 3D accelerators in the 90s. 3dfx was the most successful thanks to their Glide API and vertical integration, but they weren't the only ones on the market, which is why the cards were so affordable despite the novelty, unlike today.

pjmlp 8 days ago | parent | prev [-]

Not really: they initially went with Glide, and their great boards eventually were no match for Nvidia, which had enough cash to buy 3dfx.

I was disappointed that I couldn't get my newly bought Voodoo card to work on my motherboard due to a PCI connection issue, but the Riva TNT that the shop offered me as a possible alternative did work, and thus Nvidia got one more customer.

flohofwoe 8 days ago | parent [-]

I think what parent means is that 3dfx was founded by three former Silicon Graphics engineers, so I guess that the 3dfx hardware had a lot more SGI DNA than Nvidia's chips.

Nvidia's first 3D chip ~~Riva 128~~ (my bad, it was the 'NV1') was also a weird design, and its successor, the Riva 128, wasn't remarkable performance-wise, especially compared to what 3dfx had to offer (Nvidia's only good decision was that they bet on D3D early on when everybody else was still doing their own 3D APIs - even though early D3D versions objectively sucked compared to Glide or even OpenGL, it turned out to be the right long-term decision).

Nvidia's first remarkable chip was the Riva TNT, which came out in 1998 (hardware progress really was unbelievably fast back then: 3dfx Voodoo in 1996, Riva 128 in 1997, Riva TNT in 1998, and both the Riva TNT2 and GeForce in 1999).

edit: fixed my NV1 vs Riva 128 mistake, somehow I merged those two into one :)

ChrisGreenHeur 8 days ago | parent | next [-]

A better point is that consumer 3D cards were created specifically because SGI was designing 3D machines the wrong way. It became obvious that the way to design a consumer 3D card was to create a small multithreaded chip that you could scale up and down based on the workload. SGI instead created special designs for every computer, sometimes multiple graphics board designs per machine.

Sometimes SGI would also put old processors in odd configurations inside new computers (such as VICE in the O2) in the hope that they could offload things such as DVD decompression, but then drop DVD support before releasing the machine while keeping the hardware.

SGI was just very unfocused around 1998-2004, when consumer 3D chips became realistic, and they just refused to do things in a sane way. They even knew it but did it anyway, betting the company on web servers instead.

pjmlp 8 days ago | parent | prev [-]

Point taken, and as usual there is a lesson to be learnt about first movers losing the market they helped create: 3dfx on the graphics side, and Ad Lib on the audio side of multimedia PC history.

kragen 8 days ago | parent | prev [-]

It's not because of NVIDIA but because of Moore. We have SGI-level five-dollar microcontroller boards now.

https://wiki.preterhuman.net/SGI_Maximum_IMPACT says:

> Maximum Impact graphics are the highest tier of SGI's IMPACT graphics offered both on the SGI Indigo2 and SGI Octane workstations. They include a 27MB frame buffer and have 2 raster engines (i.e. are "2RSS" boards).

...

> Two GE11 Geometry/Image Engines:

>> Power the graphics subsystem

>>> 960 MFLOPS for transforming triangles

>>> 960 MIOPS for processing pixels

>>> 600,000 gates each

>>> Note: The refreshed Octane 'E-series' Geometry engines were capable of 1344 MFLOPS

> Two RE4 Raster Engines:

>> Provide the pixel-fill capabilities

>>> 234 Mpixels/sec Gouraud fill rate

I think the Raspberry Pi 4B has 8000 megaflops https://www.reddit.com/r/raspberry_pi/comments/fsc3fw/perfor... and I think that's just the CPU. That's roughly 5× the performance of the Indigo²'s Maximum Impact card. The Pi 3 CPU came in at 2700 megaflops: https://raspberrypi.stackexchange.com/questions/55862/what-i... and I think the GPU is something like four times that.
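
A quick back-of-the-envelope check of those ratios (a Python sketch; it assumes the wiki's 960 MFLOPS figure is per GE11 engine, i.e. 1920 MFLOPS for the pair, which is my reading and not something the page states explicitly):

    # Rough flops comparison using the figures quoted above.
    max_impact_mflops = 2 * 960  # assumption: 960 MFLOPS per GE11, two engines
    pi4_cpu_mflops = 8000        # Raspberry Pi 4B, CPU-only benchmark figure
    pi3_cpu_mflops = 2700        # Raspberry Pi 3, CPU-only benchmark figure

    print(pi4_cpu_mflops / max_impact_mflops)  # ~4.2x, the "roughly 5x" ballpark
    print(pi3_cpu_mflops / max_impact_mflops)  # ~1.4x, before counting the GPU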

Of course, benchmarks can be misleading, but if anything I'd expect this number to understate the difference, since the SGI card was a fixed-function pipeline. You can do all kinds of crazy visual effects on the Pi's CPU that the Indigo² couldn't touch. And of course the Pi's texture memory and framebuffer are measured in gigabytes now, not megabytes.

Compare the specs on the ESP32-S3 IoT microcontroller: 480 megaflops (counting multiply-accumulates as two flops, as is stupid but traditional) https://www.reddit.com/r/esp32/comments/t46960/whats_the_esp.... It only comes with 320K of RAM, but if you want a 27MB framebuffer, it supports 32MiB of external RAM (PSRAM): https://docs.espressif.com/projects/esp-idf/en/stable/esp32s... Even so, it's still less than half as fast as the Indigo²'s Maximum Impact card. They cost US$2.74 though https://www.digikey.com/en/products/detail/espressif-systems... so you might be able to afford more than one. They're commonly used for things like opening cat flaps in doors so your cat can go outside: https://hackaday.com/2025/06/12/2025-pet-hacks-contest-cat-a...
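
The same arithmetic for the ESP32-S3, under the same per-engine assumption as above:

    esp32s3_mflops = 480  # counting multiply-accumulates as two flops
    print(esp32s3_mflops / (2 * 960))  # 0.25x: "less than half as fast"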

But this web page is about SGIs that long predate the Indigo² (which was circa 01994), such as the 4D/60 from 01987 built around an 8MHz MIPS R2000, and so are dramatically slower than the ESP32. The "G" card described could fill 5500 Gouraud-shaded polygons per second, while the "GTX" could hit 100,000, about 2,000 per frame.
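
And the per-frame arithmetic behind that last figure (a sketch; the refresh rate is my assumption, since the page doesn't state one):

    gtx_polys_per_sec = 100_000  # Gouraud-shaded polygons per second
    print(gtx_polys_per_sec / 60)  # ~1,667 polygons per frame at 60 Hz
    print(gtx_polys_per_sec / 50)  # 2,000 at 50 Hz, matching "about 2,000"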

ddingus 8 days ago | parent | next [-]

Max Impact was the first workstation I used that delivered solid model rotation at 60 fps. I remember literally feeling that smooth movement and how it made a significant difference in assembly and some interactive surfacing/sketching workflows.

The O2's Copper unified/shared memory design made it the first machine I used that could deliver large image and/or video manipulation via surfaces. It was amazing to see a huge satellite image and be able to zoom way in, composite other images to sub-pixel accuracy, or model a product featuring high-resolution reflections at 60 fps.

At the time, PC cards did not yet offer gigabytes of RAM, but they soon would.

The O2 chipset got used in the 320 and 540 Visual Workstations too. The shared memory performed great in some texture-memory-demanding games, but all the cool features went largely unused. There was going to be Linux X Window support, essentially creating an Intel O2-type computer that could be fast, dual-CPU, and big-memory capable, but Microsoft cried about it and basically flexed their ownership of the ARC loader SGI used on those distinctive PCs, and it all got buried. Not even a leak...

Years later, Apple improved on those concepts with the M1, which feels remarkably like what could have been earlier, at least graphically.

I agree a Pi 4 feels like a '90s-era workstation. Faster, but not so fast that the feel of that era is gone.

dagw 8 days ago | parent | prev [-]

My core memory of SGI workstations is not that they were necessarily super fast in pure flops (especially towards the end of their life), but how smooth and solid they were. Even if our Nvidia/Wintel machines were faster on paper and faster at things like rendering, the SGI machines would run buttery smooth no matter what we threw at them. Whether scrubbing through complex composition shots, doing real-time lighting previews, or manipulating large 3D models, the frame rate and latency on the SGI machines were rock solid.