rstuart4133 | 5 hours ago
Others are disagreeing with you here, and I do too. The difference is profound, and it takes more than a couple of days to get your head around the implications. I'd summarise it as: "give a computer the same input and it always produces the same output; give a model the same input and it can produce a different output every time". Add to that that the output is often wrong and the model can't reliably follow instructions, and the difference is large enough to break most of your intuitions.

The reward for working with this piece of unreliable jelly is that it can be far smarter than you (think of the difference between a man with a shovel and a 20 ton excavator: it can find bugs in minutes that would take a human hours or days), and it knows far more than you. The engineering challenge is to make this near-random machine produce a reliable product. That isn't easy. The hype you see around these models exists because it's trivially easy to get one to produce a feature-rich but very unreliable product, as Anthropic demonstrates with their vibe-coded claude-cli. I refuse to use it now. Among its other charms, it triggers a BSOD on Windows: https://github.com/anthropics/claude-code/issues/30137

(Granted, it's just another Windows bug: https://learn.microsoft.com/en-ca/answers/questions/5814272/..., but if you are shipping to Windows you should be working around such bugs.)
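To be concrete about the "same input, different output" point: it mostly comes down to sampling with a nonzero temperature. Here's a toy sketch (not a real model; the logits and function are made up for illustration) of why identical inputs yield varying outputs:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Toy illustration: sample one token index from a
    temperature-scaled softmax over the given logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                      # fresh randomness on every call
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# The same "input" every call...
logits = [2.0, 1.9, 0.5]
samples = [sample_token(logits) for _ in range(20)]
print(samples)  # ...but the sampled tokens vary from call to call
```

With temperature near zero the distribution collapses toward the argmax and the output becomes effectively deterministic, which is why "it always produces different output" is really a property of how models are typically run, not an inherent one.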