LLM memory: either the best or worst thing about chatbots

This video brings up some salient points about the security concerns of using a browser with a built-in agent capable of taking actions on your behalf. New technology tends to bring new security concerns that need to be tackled, and prompt injection is undoubtedly one we have for LLMs.

One thing I found really interesting in his complaints was this idea of memory in your browser. He clearly hates the idea of ChatGPT (or Claude, Gemini, etc.) having any sort of “memory” about you, and I suspect a good number of people reading this blog post do too. It seems many would prefer these apps to have a completely blank slate every time they open them. And you know what? Fair enough; I’m sure plenty of people feel this way.

What I think is interesting is that there are also a large number of people who feel the exact opposite. I’ve talked to many people who use LLMs (which is most people these days), and for a lot of them, the memory is one of their favorite features. My wife, for example, has told me (unprompted) how nice she thinks it is that when she asks ChatGPT a question, it already knows things about her and can tailor its answer to her needs.

This post isn’t for me to say anyone’s right or wrong here; it’s mostly me calling out the disagreement among people around how privacy should work with LLMs. For some, the fact that ChatGPT can remember what you’re interested in is a crisis that ought to be made illegal by the federal government. For others, it’s genuinely the best thing about these tools, making the whole experience much better.

Watch this space.