▲ | dragonwriter a day ago
> When we get to the point where an LLM can say "oh, I made that mistake because I saw this in my training data, which caused these specific weights to be suboptimal, let me update it", that'll be AGI.

While I believe we are far from AGI, I don't think the standard for AGI is an AI doing things a human absolutely cannot do.
▲ | redeux a day ago | parent | next [-]
All that was described here is learning from a mistake, which is something I hope all humans are capable of.
▲ | no_wizard a day ago | parent | prev [-]
We're far from AI. There is no intelligence. The fact that the industry decided to move the goalposts and re-brand AI for marketing purposes doesn't mean they had a right to hijack a term with decades of understood meaning. They're using it to bolster the hype around the work, not because there has been a genuine breakthrough in machine intelligence, because there hasn't been one.

Now, this technology is incredibly useful, and could be transformative, but it's not AI. If anyone really believes this is AI, and that somehow moving the goalposts to AGI is better, please feel free to explain. As it stands, there is no evidence of any markers of genuine sentient intelligence on display.