HappMacDonald | 2 days ago
I don't think the kind of exponential you're looking for (and especially not "the singularity") can manifest until the product (AI) is at a point where it can meaningfully take over the task of improving itself directly.

We have certainly seen a bottleneck in the ability of LLMs to handle broad abstractions or to master the architecture of coding. That's the hinge of why "vibe coding" is as trashy an approach as it is: the LLM can't cut the mustard on actual software design. So they have nothing close to the deep understanding required to improve their own substrate.

They can be exceptionally good at understanding what humans mean when they say things, far better than poking keywords into a Google search, for example, especially when those keywords are noisy and overloaded. And they can be a very good encyclopedic store of concepts (the more general the idea, the less likely they are to hallucinate it, while details and citations are far more frequently made up on the spot). But they suck at volition, and at state representation (thanks to those limited context windows), which cuts them off at the knees whenever they have to tenaciously search for anything, including creative problem solving.

We do have AI models that can get somewhere on theorem proving, protein folding, or high-level competitive game playing, but those only sometimes even glancingly involve LLMs; they are primarily custom-built amalgams of different kinds of neural networks, each trained on specific tasks in its field. None of that can directly move the needle on actual AI research yet.