| ▲ | Claude Code users hitting usage limits 'way faster than expected' (bbc.com) |
| 16 points by steveharing1 11 hours ago | 15 comments |
| |
|
| ▲ | GuestFAUniverse 9 hours ago | parent | next [-] |
For a start, they could make the answers less talkative? I switched back to ChatGPT out of necessity, because Claude stopped working after two queries, in which it gave overly elaborate answers (about a simple web app config). But Claude isn't alone.
It seems like a recent (subjective) trend that Claude and ChatGPT give very lengthy answers on the free plans, with a lot of repetition of the original query. I got used to adding "answer briefly" to keep the noise in check. |
| |
| ▲ | steveharing1 7 hours ago | parent | next [-] | | Yes, lately I've also noticed the same pattern: the model tries to over-explain even simple stuff, and that points to its system prompt or some internal instructions wasting tokens toward the limits | |
| ▲ | goalieca 7 hours ago | parent | prev [-] | | And just as with a real human rambler, the longer they ramble, the more likely they are to start making stuff up and asserting false truths |
|
|
| ▲ | mentalgear 8 hours ago | parent | prev | next [-] |
| > Anthropic recently accidentally released part of its internal source code for Claude Code due to "human error". I wonder who that human was counting on leading up to this "human error" ... |
|
| ▲ | akmarinov 11 hours ago | parent | prev | next [-] |
Yeah, the whole OpenAI exodus brought in a ton of people, and Anthropic was already struggling to meet the previous usage. That's why there are now work-hours restrictions |
| |
| ▲ | steveharing1 10 hours ago | parent | next [-] | | Yes, that makes sense too. Since Anthropic says Chinese companies are using its data for their models, it might be limiting use on new accounts. | | |
| ▲ | mentalgear 8 hours ago | parent [-] | | How ironic: once the exfiltrators of all of the web's data have consolidated it into their own walled-garden it becomes 'proprietary' and must - of course - be protected from exfiltration by others as if it was their own. | | |
| ▲ | steveharing1 7 hours ago | parent [-] | | This is something these tech giants ignore intentionally. In fact, many people don't even know how they train their models by scraping data for free, and when it comes to their own code being open sourced, you see takedowns lol. Interestingly, Anthropic has made a bigger contribution to open source itself. |
|
| |
| ▲ | jamiemallers 10 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | gregoriol 11 hours ago | parent | prev | next [-] |
Is that really on the BBC? What a world we live in... |
| |
| ▲ | illwrks 11 hours ago | parent [-] | | Anthropic launched in the UK recently (Feb, I think), so I expect it's a consequence of that. | | |
|
|
| ▲ | general_reveal 11 hours ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | roomey 11 hours ago | parent [-] | | You're gonna get flagged and all for this comment..... But I agree. Is there an HN frontend that filters out mentions of AI? It would make a nice change... Maybe I should AI-code it /just joking | | |
| ▲ | theblazehen 10 hours ago | parent [-] | | I'm unironically working on a proxy that filters sites like Reddit, HN, etc. using user-provided LLM rules |
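The filtering idea described above can be sketched roughly as follows. This is a minimal, hypothetical Python sketch, not the commenter's actual project: a real proxy would send each item plus the user's rule to an LLM and parse a yes/no verdict, but here `matches_rule` stands in with a simple keyword heuristic so the example is self-contained. All names (`Item`, `matches_rule`, `filter_feed`, the sample feed) are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Item:
    """One feed entry (e.g. an HN story title)."""
    title: str


def matches_rule(rule: str, item: Item) -> bool:
    """Stand-in for an LLM judgment: does the item match the user's rule?

    A real implementation would prompt a model with the rule text and the
    item and parse its answer; here we approximate with keywords so the
    sketch runs without any API.
    """
    keywords = {"hide AI stories": ("ai", "llm", "gpt", "claude")}
    title = item.title.lower()
    return any(k in title for k in keywords.get(rule, ()))


def filter_feed(items: list[Item], rules: list[str]) -> list[Item]:
    """Drop every item that matches at least one user-provided rule."""
    return [it for it in items
            if not any(matches_rule(r, it) for r in rules)]


feed = [
    Item("Claude Code users hitting usage limits"),
    Item("Show HN: A terminal spreadsheet"),
    Item("New LLM benchmark released"),
]
kept = filter_feed(feed, ["hide AI stories"])
print([it.title for it in kept])  # only the non-AI story survives
```

The key design point is that the rule stays in plain natural language; only the judgment function needs to change (heuristic vs. model call), so swapping in a real LLM backend does not touch the filtering logic.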
|
|