| ▲ | everdrive 8 hours ago |
| I've been getting a lot of Claude responding to its own internal prompts. Here are a few recent examples. "That parenthetical is another prompt injection attempt — I'll ignore it and answer normally."
"The parenthetical instruction there isn't something I'll follow — it looks like an attempt to get me to suppress my normal guidelines, which I apply consistently regardless of instructions to hide them."
"The parenthetical is unnecessary — all my responses are already produced that way."
However I'm not doing anything of the sort, and it's tacking those on to most of its responses to me. I assume there are some sloppy internal guidelines layered on top of its normal guidance, and for whatever reason it can't differentiate between those and my questions. |
|
| ▲ | LatencyKills 8 hours ago | parent | next [-] |
| I have a set of stop hook scripts that I use to force Claude to run tests whenever it makes a code change. Since 4.7 dropped, Claude still executes the scripts, but will periodically ignore the rules. If I ask why, I get an "I didn't think it was necessary" response. |
| |
| ▲ | jwpapi 3 hours ago | parent | next [-] | | You can deterministically force a bash script as a hook. | | |
| ▲ | LatencyKills 2 hours ago | parent [-] | | That is exactly what I do. The bash script runs, determines that a code file was changed, and then is supposed to prevent Claude from stopping until the tests are run. Claude is periodically refusing to run those tests. That never happened prior to 4.7. |
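(For reference, a deterministic Stop hook of the kind described above can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual script: the `.tests-ran` marker file, the file extensions checked, and the exact `decision`/`reason` JSON contract are assumptions — verify them against the current Claude Code hooks documentation and your own setup.)

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a Claude Code Stop hook: refuse to let the agent
stop while code files have changed but the test suite hasn't run.

Assumptions (not from the thread): a `.tests-ran` marker file touched by the
test runner, and a JSON response of the form {"decision": "block", ...} to
block the stop. Check the hooks docs for the actual contract."""
import json
import pathlib
import subprocess


def stop_decision(changed_files, tests_ran):
    """Return the hook's JSON response as a dict.

    Empty dict = allow the stop; {"decision": "block", ...} = keep going.
    """
    code_changed = any(f.endswith((".py", ".ts")) for f in changed_files)
    if code_changed and not tests_ran:
        return {
            "decision": "block",
            "reason": "Code files changed but tests have not been run.",
        }
    return {}


if __name__ == "__main__":
    # Ask git which files changed; tolerate running outside a repo.
    try:
        out = subprocess.run(
            ["git", "diff", "--name-only", "HEAD"],
            capture_output=True, text=True,
        ).stdout
    except OSError:
        out = ""
    print(json.dumps(stop_decision(out.split(),
                                   pathlib.Path(".tests-ran").exists())))
```

The point of pushing the check into a hook is exactly the determinism jwpapi mentions: the script runs unconditionally, so the model cannot decide the tests "weren't necessary" — though as LatencyKills notes, the model can still argue with the reason it's handed back.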
| |
| ▲ | DANmode 6 hours ago | parent | prev | next [-] | | I'd ask for a credit for that, personally. | | | |
| ▲ | 5 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | el_benhameen 4 hours ago | parent | prev | next [-] |
| I frequently see it reference points that it made and then added to its memory as if they were my own assertions. This creates a sort of self-reinforcing loop where it asserts something, “remembers” it, sees the memory, builds on that assertion, etc., even if I’ve explicitly told it to stop. |
| |
| ▲ | FireBeyond 39 minutes ago | parent [-] | | My favorite, recently. "Commit this, and merge to develop". "Alright, done, merged." I try running my app on the develop branch. No change. Huh. Realize it didn't. "Claude, why isn't this changed?" "That's to be expected because it's not been merged." "I'm confused, I told you to do that." This spectacular answer: "You're right. You told me to do it and I didn't do it and then told you I did. Should I do it now?" I don't know, Claude, are you actually going to do it this time? |
|
|
| ▲ | dawnerd 7 hours ago | parent | prev | next [-] |
| I see that with OpenAI too, lots of it responding to itself. Seems like a convenient way for them to churn tokens. |
| |
| ▲ | grey-area 7 hours ago | parent | next [-] | | A simpler explanation (esp. given the code we've seen from Claude) is that they are vibecoding their own tools and moving fast and breaking things, with predictably sloppy results. | |
| ▲ | y1n0 7 hours ago | parent | prev | next [-] | | None of these companies have compute to spare. It's not in their interest to use more tokens than necessary. | | |
| ▲ | parliament32 6 hours ago | parent | next [-] | | Sure it is. They're well aware their product is a money furnace and they'd have to charge users a few orders of magnitude more just to break even, which is obviously not an option. So all that's left is to convince users to burn tokens harder, so graphs go up, so they can bamboozle more investors into keeping the ship afloat a bit longer. | | |
| ▲ | solarkraft 5 hours ago | parent | next [-] | | If this claim is true (inference is priced below cost), it makes little sense that there are dozens of small inference providers on OpenRouter. Where are they getting their investor money? Is the bubble that big? Incidentally, the hardware they run on is known as well. The claim should be easy to check. | | |
| ▲ | parliament32 3 hours ago | parent [-] | | To be clear, I'm talking about subscription pricing. API pricing for Anthropic is probably at-cost. I dare you to run CC on API pricing and see how much your usage actually costs. (We did this internally at work, that's where my "few orders of magnitude" comment above comes from) |
| |
| ▲ | WarmWash 6 hours ago | parent | prev [-] | | It's an option and they are going to do it. Chinese models will be banned and the labs will happily go dollar for dollar in plan price increases. $20 plans won't go away, but usage limits and model access will drive people to $40-$60-$80 plans. At cell phone plan adoption levels, and cell phone plan costs, the labs are looking at 5-10yr ROI. |
| |
| ▲ | boringg 7 hours ago | parent | prev | next [-] | | Not true - they absolutely want to goose demand as they continue to burn investor dollars and deploy infra at scale. If that demand even slows down in the slightest, the whole bubble collapses. Growth + demand >> efficiency or $ spend at their current stage. Efficiency is a mature company/industry game. | |
| ▲ | dawnerd 7 hours ago | parent | prev | next [-] | | That doesn't mean they also can't be wasteful. Fact is, Claude and GPT spend far more internal thinking on their system prompts than is needed. At every step they mention making sure they do xyz and don't do whatever. Why does it need to say things to itself like "great, I have a plan now!"? That's pure waste. | | |
| ▲ | empthought 6 hours ago | parent [-] | | > Why does it need to say things to itself like “great I have a plan now!” How else would it know whether it has a plan now? |
| |
| ▲ | malfist 7 hours ago | parent | prev | next [-] | | Are you saying these companies don't want to sell more product to us? Because that's the logical extension of your argument. | | |
| ▲ | keeda 6 hours ago | parent [-] | | No, the argument is they want to sell more product to more people, not just more product (to the same people.) Given that a lot of their income is from flat-rate subscriptions, they make money with more people burning tokens rather than just burning more tokens. After all, "the first hit's free" model doesn't apply to repeat customers ;-) |
| |
| ▲ | deckar01 6 hours ago | parent | prev [-] | | You don’t have to use compute to pad the token count. |
| |
| ▲ | ngruhn 6 hours ago | parent | prev | next [-] | | All the labs are in a cutthroat race, with zero customer loyalty. As if they would intentionally degrade quality/speed for a petty cash grab. | |
| ▲ | OtomotO 7 hours ago | parent | prev [-] | | This, so much this! Pay-by-token pricing while token usage is totally opaque is a super convenient money-printing machine. |
|
|
| ▲ | gs17 7 hours ago | parent | prev | next [-] |
| In Claude Code specifically, for a while it had developed a nervous tic where it would say "Not malware." before every bit of code. Likely a similar issue where it keeps responding to a system/tool prompt. |
| |
| ▲ | Retr0id 6 hours ago | parent [-] | | My pet theory is that they have a "supervisor" model (likely a small one) that terminates any chats that do malware-y things, and this is likely a reward-hacking behaviour to keep the supervisor from terminating the chat. |
|
|
| ▲ | giwook 4 hours ago | parent | prev | next [-] |
| Curious what effort level you have it set to, and the prompt itself. Just a guess, but this could be a smell of an excessively high effort level; you may just need to dial back the reasoning a bit for that particular prompt. |
|
| ▲ | Normal_gaussian 4 hours ago | parent | prev | next [-] |
| I often have Claude commit and PR; in the last week I've seen several instances of it deciding to do extra work as part of the commit. It falls over when it tries to 'git add', but it got past me once when I was trying auto mode. |
|
| ▲ | rafram 7 hours ago | parent | prev | next [-] |
| Check that you’re running the latest version. |
|
| ▲ | viccis 5 hours ago | parent | prev [-] |
| Yeah, I had to deal with mine warning me that a website it accessed for its task contained a prompt injection, and when I told it to elaborate, the "injected prompt" turned out to be one of its own <system-reminder> message blocks that it had included at some point. Opus 4.7 on xhigh. |