ehnto | 4 days ago
I don't think LLMs are building towards an AI singularity, at least. I also wonder whether we could even power an AI singularity; I suppose it depends on what the technology turns out to be. But it already takes more energy than seems reasonable (in my opinion) just to produce and run frontier LLMs.

LLMs are this really weird blend of stunningly powerful yet clearly inadequate in terms of sentient behaviour. The easiest way to demonstrate that: it did not take a human consuming the entirety of humanity's textual knowledge to form a much stronger world model.
michaelhoney | 4 days ago
True, but our "training" has been a billion years of evolution and multimodal input every waking moment of our lives. We come heavily optimised for reality.
ACCount37 | 4 days ago
I see no reason why not. There has been a lot of "LLMs are fundamentally incapable of X" going around, where "X" is something LLMs are promptly demonstrated to be at least somewhat capable of after a few tweaks or some specialized training. That pattern has repeated enough times to make me highly skeptical of any such claims.

It's true that LLMs have a jagged capability profile: less jagged than any AI before them, but much more so than humans. But that just sets up a capability overhang, because if an AI gets to "as good as humans" at its low points, its advantage at its high points is going to be crushing.