HNisCIS 13 hours ago
He's not wrong... From an information theory perspective, LLMs are just regurgitating content from a lossily compressed training set. It just turns out that about 95% of the software we write is extremely repetitive, rehashed shit globbed together. We just haven't found ways to abstract most of that redundant code well enough yet, so here we are, stuck with the stupid robot.

The remaining 5% is stuff that's genuinely never been done before. Ask an LLM to come up with a fully new sorting algorithm and it'll give you worthless garbage; maybe it gets lucky if you burn a nuclear power plant's worth of tokens, infinite-keyboard-monkeys style.

All this is to say: if we want the field to actually progress, we still need somebody with some knowledge of how a computer actually works.