embedding-shape 8 hours ago
And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them effectively. The biggest issue that stuck out to me is that they seem to think the LLM could somehow have an inner dialogue with itself to discover "its reasoning and motivation":

> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.’”

In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like that could really work.
cortesoft 7 hours ago
> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

Well, it really depends on what you mean here. Models aren't 100% deterministic; there is random chance involved. Ask the exact same question twice and you will get two slightly different answers. If you have the AI record the random selections it makes, it can persist those random choices as factors in future decisions. At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection among existing human tastes, but why can't that be considered the AI's taste?
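To make that concrete, here's a minimal sketch of where the randomness enters, assuming a standard softmax-with-temperature sampler (the function and values are illustrative, not any particular vendor's API). Fixing the seed is exactly the kind of "persisting the random choices" I mean:

    import numpy as np

    # Fixing the seed persists the "random choices": reuse it with the
    # same prompt and you get the same continuation; drop it and you don't.
    rng = np.random.default_rng(seed=42)

    def sample_next_token(logits, temperature=0.8):
        # Temperature rescales the logits: higher values flatten the
        # distribution, so less likely tokens get picked more often.
        logits = np.asarray(logits, dtype=float)
        scaled = (logits - logits.max()) / temperature
        probs = np.exp(scaled)
        probs /= probs.sum()
        # The token is drawn at random from this distribution, which is
        # why two runs on an identical prompt can diverge.
        return rng.choice(len(logits), p=probs)

Save the seed (or the sampled tokens themselves) alongside the conversation and those once-arbitrary draws become stable inputs to everything downstream.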
famouswaffles 7 hours ago
Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make this sort of comment understand LLMs the least.
mjg2 7 hours ago
> The biggest issue that stuck out to me is that they seem to think the LLM could somehow have an inner dialogue with itself to discover "its reasoning and motivation"

> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like that could really work.

It's a fetishistic cargo cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, most of the signal is noise.
antonvs 8 hours ago
The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.