▲ santadays 2 hours ago
> I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.

I think this is the new Turing test. Once it's been passed, we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test obviously, but neither was the Turing test.) If it fails to pass, we will still have what jdthedisciple pointed out:

> a non-farmer, is doing professional farmer's work all on his own without prior experience

I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and it grows corn? Never? In 2027? In 2035? In the year 3000? HN seems rife with strong opinions on this, but does anybody really know?
▲ cevn an hour ago | parent | next
I think it will happen once we get off LLMs and find something that more closely maps to how humans think, which is still not known AFAIK. So either never, or once the brain is figured out.
▲ bayindirh an hour ago | parent | prev
Researchers love to reduce everything to formulae and believe that once they have the right set of formulae, they can simulate something as-is. Hint: it doesn't work that way. Another hint: I'm a researcher.

Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit the right set of tokens that makes sense, or search the internet the right way and emit those search results, but AGI is more than that.

There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs, and our own internal noise. AI models don't work with those. LLMs consume language and emit language. The information embedded in these languages is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.

It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.). AI is missing the magic grains we can't put into words or numbers or anything else. The magic smoke, if you pardon the term. This is why no amount of documentation can replace a knowledgeable human.

...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal. Because like it or not, intuition is real, and AI lacks it, regardless of how we derive or build that intuition.