add-sub-mul-div a day ago
If you told a programmer 30 years ago that someday we'd switch from a deterministic to a nondeterministic paradigm for programming computers, they'd ask if we'd put lead back in the drinking water.

munk-a a day ago
We'd just explain that management told us we had to, and then they'd understand.

dg247 a day ago
Been doing this 30 years now. I am asking that question. Everyone talks around it.

reducesuffering a day ago
Right? I get a kick out of how programming went from "put this exact value in this exact register at exactly the right time," and all the tedious exactness that C required, to "pretty please, can you not do that, and fix the bug somewhere a different way?"

georgemcbay a day ago
> they'd ask if we'd put lead back in the drinking water.

With Lee Zeldin heading the EPA, is anyone sure we won't?

com2kid a day ago
It has always been non-deterministic; we just relied on low-level engineers who knew the dark magicks to keep the horrors at bay.

Bit flips in memory are super common. Even CPUs sometimes output the wrong answer for a calculation by random chance. Network errors are common enough that at scale you'll see data corruption across a LAN, and you'll quickly implement application-level retries because somehow the network-level checks still let errors through.

Some memory chips are slightly out of timing spec. That manifests as random crashes, maybe one every few weeks, and you need really damn good telemetry to even figure out what is going on. Compilers do indeed have bugs; native developers working in old, hairy code bases will confirm, often with stories of weeks spent debugging before someone figured out the compiler was emitting incorrect code.

It's just that the randomness has been so rare, or the effects so minor, that it has all been, mostly, an inconvenience. It worries people working on aviation or medical equipment, but otherwise people accept the occasional reboot, or they don't worry about a few pixels in a rendered frame being the wrong color.

LLMs are uncertainty amplifiers. Accept a lot of randomness and in return you get a tool that was pure sci-fi bullshit 10 years ago. Hell, when reading science fiction nowadays I'm literally going "well, we have that now, and that, oh yeah, we got that working, and I think I just saw a paper on that last week."
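
For the curious, a minimal sketch of what those application-level retries tend to look like, assuming the sender publishes a SHA-256 digest alongside the payload; `fetch_verified`, `fetch`, and the other names here are illustrative, not any particular library:

    import hashlib
    import time

    def fetch_verified(fetch, expected_sha256, retries=3, backoff=0.5):
        # TCP's 16-bit checksum is weak enough that, at scale, corrupted
        # payloads occasionally slip through, so verify end to end and
        # retry on mismatch. `fetch` is a hypothetical callable that
        # returns the raw payload bytes.
        for attempt in range(retries):
            payload = fetch()
            if hashlib.sha256(payload).hexdigest() == expected_sha256:
                return payload
            time.sleep(backoff * 2 ** attempt)  # back off before retrying
        raise IOError("payload failed checksum verification after retries")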