troupo · a day ago
> That's exactly how long-term memory works in humans as well. Where this is applicable is when you go away from a problem for a while.

And yet I don't lose the entire context and have to rebuild it from scratch when I go for lunch, for example. Models have to rebuild the entire world from scratch for every small task.

> This is akin to opposing calling a processor next tier because it still needs RAM and a bus to communicate with it, and an SSD as well.

You're so lost in your own metaphor that it makes no sense.

> You think it should have everything in cache to be worthy of calling it next tier.

No, I don't. "Next tier" implies something significantly and observably better. And here you are trying to tell me "if you use all the exact same tools that you have already used before with 'previous tier models', you will see it is somehow next tier". If your "next tier" needs an equator-length list of caveats and all the same tools, it's not next tier, is it?

BTW, I'm literally coding with this "next tier" tool with "long memory just like people". After just doing the "plan/execute/write notes" bullshit incantations, I had to correct it:

"You're right, I fucked up on all three counts:"
So next tier. So long memory. So person-like.

Oh, and within about 10 seconds after that it started compacting the "non-crippled" context window and immediately forgot most of what it had just been doing. So I had to clear out the context and teach it the world from the start again.

Edit: and now this amazing next-tier model completely ignored that code to discover network interfaces already exists, and wrote bullshit code calling CLI tools from Rust. So once again it needed to be reminded of this.

> It's fine to have your own standards for applying words. But expect further confusion and miscommunication with other people if you don't intend to realign.

I mean, just like the crypto bros before them, AI bros sure do love to invent their own terminology and their own realities that have nothing to do with anything real and observable.
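For context on the anecdote above: discovering network interfaces from Rust is normally done in-process through a library rather than by spawning CLI tools like `ip` or `ifconfig` and parsing their output. Below is a minimal sketch of the library route, assuming the third-party `if-addrs` crate (added to Cargo.toml); it is not necessarily what the commenter's existing code uses.

```rust
// Enumerate network interfaces via the `if-addrs` crate instead of
// shelling out to `ip`/`ifconfig` and parsing text output.
use if_addrs::get_if_addrs;

fn main() -> std::io::Result<()> {
    // get_if_addrs() yields one entry per (interface, address) pair.
    for iface in get_if_addrs()? {
        // Skip loopback entries; print the interface name and address.
        if !iface.is_loopback() {
            println!("{}: {}", iface.name, iface.ip());
        }
    }
    Ok(())
}
```

Whether the project's existing code uses this crate, something like `pnet`, or raw netlink/ioctl calls is not stated in the thread; the sketch only illustrates that no external CLI process is needed for this task.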
scotty79 · a day ago
> "You're right, I fucked up on all three counts:" It very well might be that AI tools are not for you, if you are getting such poor results with your methods of approaching them. If you would like to improve your outcomes at some point, ask people who achieve better results for pointers and try them out. Here's a freebie, never tell AI it fucked up. | ||