| ▲ | BoppreH 6 hours ago |
| Undercover mode also pretends to be human, which I'm less ok with: https://github.com/chatgptprojects/claude-code/blob/642c7f94... |
|
| ▲ | 0x3f 5 hours ago | parent | next [-] |
| You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us. |
| |
| ▲ | j2kun 38 minutes ago | parent | next [-] | | Fatalism will also not fix anything. But I suppose death comes for us all, yes? Why do anything at all? | |
| ▲ | matkoniecz 4 hours ago | parent | prev | next [-] | | Even if it is impossible to win, I still feel bad about it. And at this point it is more about how large a space will remain usable and how much will be bot-controlled wasteland. I prefer that the spaces important to me survive. | | |
| ▲ | nslsm 2 hours ago | parent [-] | | Feeling bad about something you can’t change is bad for your mental health. |
| |
| ▲ | xyzal 3 hours ago | parent | prev | next [-] | | Magical human verification technology is called "your own private forum" in conjunction with "invite your friends" | | |
| ▲ | satvikpendem 3 hours ago | parent [-] | | Until your friend writes a bot. Funny story: when I was younger I trained a basic deep-learning text predictor on all my conversations in a group chat I was in. It was surprisingly good at sounding like me, and sometimes I'd use it to generate text to submit to the chat. | | |
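(Not the deep-learning model described above, but the idea can be sketched with a toy first-order Markov chain trained on chat text; the corpus and names here are illustrative, not from the original chat:)

```python
import random
from collections import defaultdict

# Hypothetical stand-in for the group-chat history.
corpus = "lol yeah that makes sense / yeah lol same here / that makes sense to me".split()

# First-order Markov chain: each word maps to the words observed after it.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def generate(start, n=8, seed=0):
    """Walk the chain from `start`, producing up to n words of chat-flavored text."""
    random.seed(seed)
    words = [start]
    for _ in range(n - 1):
        options = chain.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("yeah"))
```

Even something this crude can sound eerily on-voice for a small, repetitive corpus like a group chat, which is the commenter's point: "invite your friends" only verifies humans until a friend automates themselves.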
| |
| ▲ | RockRobotRock 4 hours ago | parent | prev | next [-] | | >There's no magical human verification technology coming to save us. Except for the one Sam Altman is building. | | | |
| ▲ | keybored 3 hours ago | parent | prev | next [-] | | Negative sentiment towards technological destiny detected in human agent. | |
| ▲ | jesse_dot_id an hour ago | parent | prev [-] | | I assume we're heading to a place where keyboards will all have biometric sensors on every key and measure weight fluctuations in keystrokes, actually. | | |
| ▲ | mr_00ff00 an hour ago | parent [-] | | That's like having your security on the frontend. If someone owns the keyboard, they can fake those metrics and tell the server it's happening when it isn't. That will be easy to beat. |
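(The objection can be sketched in a few lines: any server-side check that consumes client-reported sensor data is satisfied by fabricated data. Everything below is a hypothetical illustration, not a real protocol:)

```python
# Hypothetical server-side "humanness" check that trusts client-reported
# keystroke telemetry. The client controls the payload, so a bot simply
# reports whatever values pass the check.

def looks_human(telemetry: dict) -> bool:
    # Naive rule: real typing shows variance in key-press pressure.
    pressures = telemetry.get("key_pressures", [])
    return len(set(pressures)) > 1

naive_bot = {"key_pressures": [1.0, 1.0, 1.0]}     # uniform pressure: rejected
spoofing_bot = {"key_pressures": [0.9, 1.1, 1.05]}  # fabricated jitter: accepted

assert not looks_human(naive_bot)
assert looks_human(spoofing_bot)
```

Hardware attestation could raise the bar, but as long as the measurement happens on a device the attacker controls, the server is ultimately verifying the report, not the human.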
|
|
| ▲ | mrlnstk 5 hours ago | parent | prev | next [-] |
| But will this be released as a feature? To me it seems like an Anthropic internal tool for secretly contributing to public repositories to test new models, etc. |
| |
| ▲ | BoppreH 5 hours ago | parent [-] | | I don't care who is using it, I don't want LLMs pretending to be humans in public repos. Anthropic just lost some points with me for this one. EDIT: I just realized this might be used without publishing the changes, for internal evaluation only as you mentioned. That would be a lot better. | | |
|
|
| ▲ | sandos 5 hours ago | parent | prev | next [-] |
| This is my pet peeve with LLMs: they almost always fail to write like a normal human would, mentioning logs or other meta-things that are not at all interesting. |
| |
| ▲ | sgc 4 hours ago | parent [-] | | I had a problem to fix and one not only mentioned these "logs", but went on about things like "config", "tests", and a bunch of other unimportant nonsense words. It even went on to point me towards the "manual". Totally robotic monstrosity. | | |
|
|
| ▲ | shaky-carrousel 5 hours ago | parent | prev | next [-] |
| > Write commit messages as a human developer would — describe only what the code change does. The undercover mode prompt was generated using AI. |
| |
| ▲ | kingstnap 4 hours ago | parent | next [-] | | All these companies use AIs for writing these prompts. But AIs aren't actually very good at writing prompts, imo. They are superficially good in that they produce lots of vaguely accurate and specific text, and you would hope the specificity would mean it's good. But they don't capture intent very well, nor do they seem to understand the failure modes of AI. The "-- describe only what the code change does" is a good example. It's specific, but it also distinctly reads like it came from someone who doesn't actually understand what makes AI writing obvious. If you compare it against human-written prose about what makes AI writing feel AI-generated, you'll see the difference. https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing The above actually feels like text from someone who has read and understood what makes AI writing AI. | | |
| ▲ | briHass an hour ago | parent [-] | | Hey LLM, write me a system prompt that will avoid the common AI 'tells' or other idiosyncrasies that make it obvious that text or code output was generated by an AI/LLM. Use the referenced Wikipedia article as a must-avoid list, but do not consider it exhaustive. Add any derivations or modifications to these rules to catch 'likely' signals as well. There, sorted! |
| |
| ▲ | fleebee 33 minutes ago | parent | prev [-] | | That's not supposed to be surprising. They're dogfooding CC to develop CC. I assume any and every line in this repo is AI generated. |
|
|
| ▲ | LelouBil 3 hours ago | parent | prev | next [-] |
| Time to ask if the contributor knows what a Capybara is, as a new Turing test. |
|
| ▲ | lazysheepherd 3 hours ago | parent | prev | next [-] |
| 1) This seems to be strictly for Anthropic internal tooling.
2) It does not "pretend to be human"; it is instructed to "Write commit messages as a human developer would — describe only what the code change does." Since when is "describe only what the code change does" pretending to be human? You guys are just mining for things to moan about at this point. |
| |
| ▲ | BoppreH 2 hours ago | parent [-] | | 1) It's not clear to me that this is only for internal tooling, as opposed to publishing commits on public GitHub repos. 2) Yes, it does explicitly say to pretend to be a human. From the link on my post: > NEVER include in commit messages or PR descriptions: > [...] > - The phrase "Claude Code" or any mention that you are an AI |
|
|
| ▲ | vips7L 5 hours ago | parent | prev [-] |
| That whole “feature” is vile. |