jerf 4 hours ago
This, IMHO, puts the "can we keep AIs in a box" argument to rest once and for all. The answer is no, because people will take the AIs out of the box for a bit of light entertainment, let alone for any serious promise of gain.
anonymous908213 4 hours ago
I have little confidence in humanity's capabilities for that scenario, but I don't think this actually indicates much of anything. This happened in the first place because LLMs are so borderline useless (relative to the hype) that people are desperate to find any way to make them useful, and so give them increasingly more power to try to materialize the promised revolution. In other words, because LLMs are not AI, there is no need to try to secure them like AI. If some agency or corporation develops genuine artificial intelligence, it will probably do everything it can to contain it and harness its utility solely for itself, rather than unleashing it as a toy for the public.
| ||||||||
raincole 29 minutes ago
Obviously. I have never seen a product or technology adopted as fast as ChatGPT (yeah, I mean the dumb-af GPT-3.5). Not even smartphones or social media. How could you put this kind of thing back in a box? I feel ChatGPT has probably hit the theoretical ceiling of adoption rate for consumer-oriented products.
ntonozzi 4 hours ago
That argument was dead _at least_ 2 years ago, when we gave LLMs tools. | ||||||||
Traster 4 hours ago
To be honest, I would rather the author be put in a box; he seems grumpy.