ctoth 2 days ago:
> The fundamental challenge in AI for the next 20 years is avoiding extinction.

So nice to see people who think about this seriously converge on this. Yes. Creating something smarter than you was always going to be a sketchy prospect.

All of the folks insisting it just couldn't happen, or ... well, there have just been so many objections. The goalposts have walked from one side of the field to the other, then left the stadium, gone on a trip to Europe, gotten lost in a beautiful little village in Norway, and decided to move there.

All this time, though, the prospect of instantiating something smarter than you (and yes, it will be smarter than you even if it only reaches human level, simply because it runs at electronic speeds) has kept looming. This whole idea is just cursed, and we should not do the thing.
Mawr a day ago:
> Creating something smarter than you was always going to be a sketchy prospect.

Sure, but I'm not so sure this has any relevance to the topic at hand. You seem to be taking for granted the assumption that LLMs can ever reach that level.

It may be that all it takes is scaling up, and that at some point a threshold gets crossed past which intelligence emerges. Maybe. Personally, I'm more on board with the idea that since LLMs display approximately zero intelligence right now, no amount of scaling will help, and we need a fundamentally different approach if we want to create AGI.
cheschire 2 days ago:
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." | ||