▲ | jonahx 3 days ago
Thanks for the reply. I am not sure vision is the failing point here, but logic. I routinely try to get these models to solve difficult puzzles or coding challenges (the kind that a good undergrad math major could probably solve, but that most would struggle with). They fail almost always, even with help. For example, the Jane Street monthly puzzles. Surprisingly, the new o3 was able to solve this month's (previous models were not), which was an easier one. Believe me, I am not trying to minimize the overall achievement -- what it can do is incredible -- but I don't believe the phrase AGI should even be mentioned until we are seeing solutions to problems that most professional mathematicians would struggle with, including solutions to unsolved problems. Even that might not be enough, but it should be the minimum bar for having the conversation at all.
▲ | og_kalu 3 days ago | parent
> what it can do is incredible -- but I don't believe the phrase AGI should even be mentioned until we are seeing solutions to problems that most professional mathematicians would struggle with, including solutions to unsolved problems.

But why? Why should Artificial General Intelligence require things a good chunk of humans wouldn't be able to do? Are those people no longer general intelligences? I'm not saying this definition is 'wrong', but you have to realize that at this point, the individual words of that acronym no longer mean anything.