| ▲ | wps 14 hours ago |
| Could someone explain the appeal of account-wide memory to me? Anthropic’s marketing indicates that nothing bleeds over, but I’m so protective of my context that I can’t imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for certain preferences like code styling or response length, these are all a good fit for custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools. |
|
| ▲ | jjmarr 14 hours ago | parent | next [-] |
| Most normal people want the LLM to remember their interests and favourite things, so they don't have to manually re-explain when asking for advice. They also don't know what "context" is or that the LLM has a limited number of tokens it can understand at any given time. They just believe it knows everything at once. |
| |
| ▲ | deaux 14 hours ago | parent [-] | | Do you have example prompts where this would be useful? Why would you want an LLM to know your favorite type of cheese? Now that I say that, I guess if you use it for recipes, then it's useful if it remembers things like dietary restrictions. And even then, a project seems like the better option. I can't think of much else though, so I'm still curious what you or others use it for. | | |
| ▲ | peteforde 12 hours ago | parent | next [-] | | ChatGPT knows what's in my bar and what types of base liquors I love and/or can't drink. It knows what fruit, syrups and mixes are in my fridge. It knows that my friend is allergic to mint. It knows that when I ask for recommendations, I tend to want a choice between spirit-forward, tiki, martini and herbaceous. ChatGPT knows the broad strokes of the 3-4 main hardware projects I have on the go, and depending on the questions I'm asking, it will often structure its responses in a way that differentiates based on which one I'm thinking about. It knows what resistor and capacitor values I have on my pick and place machine, and when I ask for divider ratios it will do its best to calculate based on those values to the degree that it will chain 1-2 resistors together to achieve those ratios. It knows what kind of solder I use, and has warned me about components with sensitive reflow temperature concerns. It's an extraordinarily useful feature for engineering and drinking, two things that are commonly found in the same Venn diagram. | |
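As a rough sketch of the divider-ratio arithmetic being described (Vout/Vin = R_bottom / (R_top + R_bottom)), here is a minimal example; the parts list, target ratio, and helper names are assumptions invented for illustration, not the commenter's actual inventory or the model's output:

```python
# Minimal sketch of a divider-ratio search over a fixed parts list.
# The stock values and target below are made-up examples.
from itertools import combinations_with_replacement

STOCK = [100, 220, 470, 1_000, 2_200, 4_700, 10_000, 22_000, 47_000, 100_000]  # ohms

def candidates(parts):
    """Single resistors plus two-resistor series chains from the same stock."""
    singles = [(r, (r,)) for r in parts]
    pairs = [(a + b, (a, b)) for a, b in combinations_with_replacement(parts, 2)]
    return singles + pairs

def best_divider(target_ratio, parts=STOCK):
    """Find top/bottom legs so that r_bot / (r_top + r_bot) is closest to target_ratio."""
    best = None
    for r_top, top in candidates(parts):
        for r_bot, bot in candidates(parts):
            ratio = r_bot / (r_top + r_bot)
            err = abs(ratio - target_ratio)
            if best is None or err < best[0]:
                best = (err, top, bot, ratio)
    return best

if __name__ == "__main__":
    err, top, bot, ratio = best_divider(0.3125)  # e.g. tapping ~1.0 V from a 3.2 V rail
    print(f"top={top} bottom={bot} ratio={ratio:.4f} (error {err:.4f})")
```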
| ▲ | lkbm 5 hours ago | parent | next [-] | | > It knows what resistor and capacitor values I have on my pick and place machine, and when I ask for divider ratios it will do its best to calculate based on those values to the degree that it will chain 1-2 resistors together to achieve those ratios. Also relevant: it knows that you know what a resistor and a capacitor are, and is able to tune responses to your level of knowledge. (It's not great at this, in my experience, since domain knowledge is still so jagged, but I think it's better than nothing.) | |
| ▲ | deaux 10 hours ago | parent | prev | next [-] | | Thank you! That helped me understand: hobbies you pursue regularly, where an LLM is continuously helpful, benefit from memory. Personally, I would still be wary of the black-box aspect (not knowing what it does and doesn't remember), so I would probably still use projects to make it more deterministic. But that's probably overcautious and unnecessary in most common cases. | |
| ▲ | Peaches4Rent an hour ago | parent | prev [-] | | Is it just me who's getting freaked out by this? I know it's a boiling-frog situation, but seeing it spelled out feels so icky compared to how Google ads feel. I really want my personality data deleted from big tech... Sigh |
| |
| ▲ | foogazi 5 hours ago | parent | prev | next [-] | | I will say graciously that seeing this question asked here is absolutely stunning to me. If I ask a question about vehicles, it knows what cars I have and what I like in cars. If I ask a question about vacation spots, it knows my party's composition and preferences. Things like that. | |
| ▲ | Mashimo 12 hours ago | parent | prev | next [-] | | I asked ChatGPT a car-related question in a fresh chat, and it answered it specifically with my car in mind. Turns out a few months before, I had told it in a prompt what car I was driving. I turned memory off that day. | |
| ▲ | IanCal 14 hours ago | parent | prev | next [-] | | Can projects overlap? If not, there's general context information that's often useful: my job, my kids and time preferences around those things, my preferred tech setup and way of working, and the types of tech I'm better at. Things I already have (Home Assistant, a little NUC, etc.). I can throw out a random question and not have to add this kind of information or manage it. | | |
| ▲ | deaux 13 hours ago | parent [-] | | I get that those are the things that go into memory. What I don't get is what kind of prompt your job and kids are useful information for. Especially on the regular. | | |
| ▲ | IanCal 11 hours ago | parent [-] | | Let’s see, recently:
- Home automation fixing
- Proposed integrations with some services locally
- Science experiments explained at a few levels, finding good background info and where to read up about some safety information
- Maths help for specific areas my kids are looking at, and proposed games for that
- Evaluation of coding options for my kids
- How to link up some ideas on coding, electronics and using the home automation side as some fun outputs
- LED strip info and work, again integrating with smart homes and what’s good around the kids
- Framework evaluations for automation at work and home
- Crystal identification
- Looking up local council info
- Relevant music suggestions for kids to play on the piano
Here some things cross over: I’m happy writing code, I typically want easy open source options, I have languages and tech I prefer, I’m moving things to Matter, I have Home Assistant, my son is excellent at maths given his age but I’m working more on comprehension of problems, and a lot more. All of those are things that, with a bit of background info, change the types of answers I get and make it more useful. |
|
| |
| ▲ | tikotus 14 hours ago | parent | prev | next [-] | | I had the same question a few days ago here: https://news.ycombinator.com/item?id=47162828 I didn't receive an answer besides "that's what people like", but I still can't think of (m)any situations where anyone would prefer it. | | |
| ▲ | deaux 13 hours ago | parent [-] | | The reply about knowledge of their job and family made me think. The only thing I can now think of is using it as a personal therapist, or asking how to approach their kids. And they're a bit embarrassed about it, because it's still outside the Overton window (especially on HN), which is why they aren't sharing it. If someone has different use cases, please do prove me wrong! Maybe I just lack imagination. | |
| ▲ | 0_____0 8 hours ago | parent | next [-] | | Such an incredible amount of personal, intimate knowledge to share with a company. Sure, Google can figure out where I live and who I visit because I have an Android phone, but they'll never know the contents of those relationships. I have a line in the sand with the AI vendors. It's a work relationship. If I wouldn't share it with a colleague I didn't know super well, I'm not telling it to an AI vendor. | |
| ▲ | lkbm 5 hours ago | parent | prev | next [-] | | I recently asked about baby-led weaning. If my baby were 2 months old, it would have been smart to mention "not yet!" but it knows she's 8 months old and was able to give contextual advice. | |
| ▲ | randrus 11 hours ago | parent | prev [-] | | I ask gpt a lot of questions about plants and gardening - I’m happy that it remembers where I live and understands the implications. I could remind it in every question, but this is convenient. |
|
| |
| ▲ | damontal 5 hours ago | parent | prev | next [-] | | I broke my ankle and have multiple chats related to medicine, physical therapy, pain management, lawyer questions, and how to handle messaging to my boss and HR. | |
| ▲ | vishnugupta 12 hours ago | parent | prev | next [-] | | I use it for my work, so I want it to remember everything about my business: the website, the domain, which country we operate in, and on and on. It’s a ton of context which I don’t want to repeat each time. | |
| ▲ | Kye 10 hours ago | parent [-] | | That's what projects are for. All the major chatbot companies have some equivalent and all support a standard instruction where you can include anything you need automatically. |
| |
| ▲ | ssl-3 8 hours ago | parent | prev [-] | | Sure. ChatGPT "knows" (has context that includes) some of the things I'm good at, and some of the things I'm not good at. I have my own tolerances for communication and it has context about that, too. I use the bot for mostly techy things. So, for instance, I'm alright with using tools, and building electronics, and punting around on a Linux box, so I don't need my hand held for that. But I'm terrible at writing code, so baby steps and detailed explanations there help me a lot. I strongly prefer pragmatism and verifiable facts. I despise sycophantic speech, the empty positivity of corpo-speak, assumptions, false praise, superfluous verbosity, and apologies and/or the implication of feelings from bots. Through a combination of some deliberate training (custom instructions, memory), and just using it (shared context), it mostly does what I want in the way that I want it done -- the first time. I don't have to steer it in the right direction with every new session. There was a time when that was necessary, but it is no longer that way. Adjustments happen increasingly automatically these days. That saves me time and frustration, and enhances the utility of the bot. Meanwhile: Others have their own skills and preferences that may be very different in comparison to my own. That's OK. We each get to have our own experience. |
|
|
|
| ▲ | AllegedAlec 14 hours ago | parent | prev | next [-] |
| In the Claude web UI I often use incognito mode precisely because I don't want results to be influenced by what we talked about earlier. It's getting rather annoying, to be honest. |
| |
| ▲ | visarga 2 hours ago | parent | next [-] | | I'm switching from Claude Web to Claude Code. Local files give me memory I actually control, unlike Anthropic's implementation. CC doesn't carry state between sessions — you just put whatever project context it needs in a file. | |
| ▲ | qwertox 14 hours ago | parent | prev | next [-] | | Keep your user prefs minimal and use project memory instead: create a new project, it will only have access to your user prefs, everything else is fresh. | | |
| ▲ | hbarka 14 hours ago | parent | next [-] | | I did /init and now CLAUDE.md is spread across several layers. I wish there were a reverse init, and a minimal, as-needed init. | |
| ▲ | AllegedAlec 13 hours ago | parent | prev | next [-] | | I'll have to try projects I guess, but I just want to sometimes ask questions without it bringing up shit I asked about in the past which isn't relevant to what I'm asking this time. | | | |
| ▲ | 14 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | KellyCriterion 14 hours ago | parent | prev [-] | | exactly! |
| |
| ▲ | Mashimo 12 hours ago | parent | prev [-] | | Why not turn it off then? |
|
|
| ▲ | bouzouk 6 hours ago | parent | prev | next [-] |
| On the contrary, I cannot understand how people seriously use LLMs outside of software engineering without account-wide memory.
When I ask things like "what do you think John should do next on project A?", I don’t want to have to explain in detail who John is, what project A is, and what John was working on before. |
|
| ▲ | 7734128 13 hours ago | parent | prev | next [-] |
| The few times I've switched over to ChatGPT I've been dumbfounded by lines like "...since you already are using SQLite...", referring to projects from months ago. I know the "memory" function can be disabled, but I have a hard time seeing that it would ever really be useful. |
| |
| ▲ | cedws 11 hours ago | parent | next [-] | | Yeah for me it only ever polluted the context. Irrelevant information tends to oversteer the LLM and produce worse output. | |
| ▲ | astrange 2 hours ago | parent | prev [-] | | Gemini is terrible with personalization. It brings up everything in my bio nonstop no matter what the topic is. |
|
|
| ▲ | gverrilla 7 hours ago | parent | prev | next [-] |
| It all depends on your use case(s). For me, "account-wide" memory holds only: (a) a short description of my hardware/OS/display setup, etc.; (b) my mobile hardware and OS version; and (c) my age, gender, city/country of residence, and health conditions. |
|
| ▲ | pfix 14 hours ago | parent | prev | next [-] |
| I can try! I currently use ChatGPT for random insights and discussions about a variety of topics. The memory is basically an accumulated context about me, my preferences, and my interests, and ChatGPT uses it to tailor responses to my knowledge so I can relate to them better. For me this is far more natural and easier than either crafting a default prompt preset or setting up each conversation individually; that would be way too much overhead for discussing random shower thoughts between real-life stuff. That's my use case, and I've discovered that memory can be detrimental for specific questions and prompts, where carefully written one-off prompts can be more beneficial. But my usage is really ad hoc, without the time for that. At least for ChatGPT. When coding, this fails fast; there, regular context resets seem to be the more viable strategy. |
| |
| ▲ | wps 14 hours ago | parent [-] | | I see what you mean, but I like having a clean slate even for those one-off questions. I don’t want a different answer to a philosophical inquiry just because the LLM remembers a prior position I’ve written about, you know? | |
| ▲ | Retr0id 10 hours ago | parent | next [-] | | I have all the history settings off for this reason, but something that worries me is that there's a fair bit of information about me trained right into the model weights. I'm not "famous" by any stretch but claude has awareness of some of my HN-front-page-hitting projects, etc., which I think should be enough to bias responses (although I haven't tried to measure it). I set my name to "User" in the settings, so in a clean-slate chat it has nothing to go on, but the moment claude code does something like `git log` it knows who I am again. I've even considered writing some kind of redaction proxy. | |
| ▲ | e1g 14 hours ago | parent | prev [-] | | FWIW, both OpenAI and Anthropic have a toggle to do a “Temporary/Incognito Chat” that does not use or update memory. I too wish this was the default, and then you could opt in at the end of the chat to save some long term aspects into memory. | | |
| ▲ | pfix 14 hours ago | parent [-] | | That would be interesting, also at the start, as an option for what to pull in. ChatGPT memory "improved" and now you normally don't even see what it commits to memory anymore! |
|
|
|
|
| ▲ | jtokoph 14 hours ago | parent | prev | next [-] |
| I've told the LLMs that, when traveling, I don't care about nightlife and alcohol. Because they have a memory of this, when I ask for a sample itinerary for a two-day stay in a new city, it won't waste hours of the day on the party street, wine tasting, etc. For example, instead of recommending a popular night club, it will recommend a stroll along the river to view the lit-up skyline, or a visit to the night market instead. It knows other preferences as well (exploring quirky neighborhoods, trying local fast food joints and markets). |
| |
| ▲ | cyrusmg 14 hours ago | parent [-] | | So it's because they want to be more like ChatGPT instead of more like Claude Code. I guess that makes sense: bigger market. | |
| ▲ | echelon 14 hours ago | parent [-] | | Is it? Isn't there much more money in automating business processes than in answering consumer questions (sans ads)? Automating software development has to be a multi-trillion dollar market. And that doesn't account for future growth. | | |
| ▲ | bluGill 11 hours ago | parent [-] | | Maybe. Software is big, but it is only a tiny percentage of the economy. They need to help a lot more than software to justify their datacenter investments; even if we add in all engineering, that still isn't a large percentage. How can they help (or eliminate, I don't care either way) insurance agents, plumbers, zookeepers, and every other job in my city? For some the answer might be that they can't, but whether they can is a question worth asking. |
|
|
|
|
| ▲ | Panoramix 3 hours ago | parent | prev | next [-] |
| Think of things like your preferred units (meters, kg, cups, tablespoons, milliliters). Or, do not suggest recipes with x ingredient. Language preferences. Etc etc etc. |
|
| ▲ | bmurphy1976 9 hours ago | parent | prev | next [-] |
| "Stop asking me to apply the plan. I will tell you when I'm ready." That alone drives me batty. I can easily spend a couple hours and multiple revisions iterating on a plan. Asking me me every single time if I want to apply it is obnoxious. |
|
| ▲ | joenot443 7 hours ago | parent | prev | next [-] |
| I own a lot of dirt bikes, boats, snowmobiles, mowers, and blowers. It's much easier for me to ask about "My Polaris" than it is to ask about my "2011 Polaris Switchback Assault". Similarly, it remembers the dimensions of my truck, so towing/loading questions don't need extra clarification. It's the small things. |
|
| ▲ | __alexander 9 hours ago | parent | prev | next [-] |
| The appeal for me is not having to constantly repeat instructions. Imagine having to repeat dietary restrictions every time you ask for a recipe. |
|
| ▲ | gbalduzzi 14 hours ago | parent | prev | next [-] |
| > it seems to cater to how the masses use these tools. Are you suggesting that they should ignore the needs of the vast majority of their users? I mean, of course they do, it would be worse otherwise |
| |
| ▲ | wps 14 hours ago | parent [-] | | Well, the masses are wrong. See: insane amounts of compute wasted on “thank you”, “haha true”, “redo it”, etc. I think the UI should be designed to avoid misuse, and I think an ever-growing distillation of your most common traits is not a good use of context length. If you want it, specify it. Maybe even hard limits on chat length: why are we 20 replies deep in a single chat? A user-friendly option could be a single button that distills that chat down and opens a new one with prebuilt instructions to continue the conversation. I’m no product designer though, just some thoughts. |
|
|
| ▲ | MagicMoonlight 8 hours ago | parent | prev | next [-] |
| Because I can say “do what you did before, but about the Romans this time”, and it will give me a complete rundown of Roman life, because it knows what I was interested in before. Or you can ask a tax question and it will know you’re an organic rice farmer or whatever. Claude has the best implementation because it has both memory and previous-chat searching, so it will actually read through relevant chats rather than guessing based on memories. |
|
| ▲ | CGamesPlay 14 hours ago | parent | prev [-] |
| Sure, it's for those customers who don't have any idea what a "context window" is. |
| |
| ▲ | wps 14 hours ago | parent [-] | | This seems to imply that customers assume by default that the LLM remembers their past chats? I feel like the UI makes it incredibly obvious it’s a clean slate every time? But then again people ask ridiculous meta questions all the time to these chatbots expecting a correct answer. | | |
| ▲ | CGamesPlay 11 hours ago | parent [-] | | Yeah, but then they went and added "memories" and in particular automatic memory management, and now it isn't a clean slate each time. And that's exactly what this is importing: those automatically curated memories that make the chat bot "feel like" it knows you. |
|
|