mjg2 6 hours ago
> Biggest issue that stuck out seems to have been that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation"

> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like that could actually work.

It's a fetishistic cargo cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, most of the signal is noise.