| ▲ | jjcm 6 hours ago |
| No amount of valuation can fix the global supply issues for inference GPUs, unfortunately. I suspect they're highly oversubscribed, which is why we're seeing them do other things to cut down on inference cost (i.e. changing their default thinking length). |
|
| ▲ | natpalmer1776 6 hours ago | parent | next [-] |
| Remember when OpenAI wasn’t allowing new subscriptions to their ChatGPT Pro plans because they were oversubscribed? Pepperidge Farm remembers. |
| |
| ▲ | andai 5 hours ago | parent [-] |
| Wouldn't that be good? I remember back in the day you could only get Gmail through an invite; it was an awesome strategy. "Currently closed for applications" creates FOMO. They'd just need the GPUs to actually be in relatively short supply. They could do it in bursts, right? "Now accepting applications for a short time." I'm not an internet marketer, but that sounds like a win-win to me: people feel special, there's extra hype, and the service isn't broken. | | |
| ▲ | hirako2000 5 hours ago | parent | next [-] |
| In the case of Gmail that was fake scarcity. In the case of Anthropic it's fake availability. Sam Altman explained the idea is to scale the thing up and see what happens. He never claimed to offer a solution to the supply problem that would unfold. | | |
| ▲ | bruckie 3 hours ago | parent | next [-] |
| Are you sure it was fake scarcity for Gmail? IIRC they did it because they were worried about systems falling over if it grew too fast, and discovered the marketing benefits as a side effect. | |
| ▲ | iainmerrick 3 hours ago | parent | prev [-] |
| Are you mixing up Anthropic and OpenAI here? |
| |
| ▲ | joquarky 3 hours ago | parent | prev | next [-] |
| Google Wave demonstrated that this doesn't always work. | |
| ▲ | the_gipsy 5 hours ago | parent | prev [-] |
| Yes, "Pepperidge Farm remembers" is usually about how something used to be good. | | |
| ▲ | CoastalCoder 3 hours ago | parent [-] |
| Yeah, but there was a spoof on that (in Family Guy?). It was a tie-in to the movie "I Know What You Did Last Summer", IIRC. |
|
| ▲ | scratchyone 5 hours ago | parent | prev | next [-] |
| maybe, but the response to GPU shortages being increased error rates is the concern imo. they could implement queuing or delayed response times. it's been long enough that they've had plenty of time to implement things like this, at least on their web-ui where they have full control. instead it still just errors with no further information. |
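The queuing idea could look roughly like this: admit a bounded number of requests, make the rest wait or get an explicit retry hint instead of a bare error. A hypothetical Python asyncio sketch; all names are illustrative, not Anthropic's actual stack:

```python
import asyncio

# Hypothetical sketch of "queue instead of error": admit up to
# max_concurrent jobs, let a bounded number wait in line (waiting
# counts the running jobs too), and reject the rest with an explicit
# retry hint rather than an uninformative error.

class InferenceGate:
    def __init__(self, max_concurrent: int, max_queued: int):
        self.sem = asyncio.Semaphore(max_concurrent)
        self.max_queued = max_queued
        self.waiting = 0  # jobs admitted (queued or running)

    async def run(self, job):
        if self.waiting >= self.max_queued:
            # Informative rejection instead of "something went wrong".
            return {"status": "busy", "retry_after_s": 30}
        self.waiting += 1
        try:
            async with self.sem:  # blocks here while all slots are busy
                return {"status": "ok", "result": await job()}
        finally:
            self.waiting -= 1


async def _demo():
    gate = InferenceGate(max_concurrent=1, max_queued=2)

    async def job():
        await asyncio.sleep(0.01)  # stand-in for an inference call
        return "done"

    # Four simultaneous requests against one slot plus one queue spot.
    return await asyncio.gather(*(gate.run(job) for _ in range(4)))

results = asyncio.run(_demo())
```

With one slot and a queue depth of two (counting the running job), two of the four requests complete and two get the explicit "busy, retry later" response instead of an opaque failure.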
| |
| ▲ | skeledrew 5 hours ago | parent | next [-] |
| I've been experiencing a good amount of delays (it says it's taking extra time to really think, etc.), and I'm using it during off-peak times. | | |
| ▲ | scratchyone 5 hours ago | parent [-] |
| i notice that as well. most of the time when i see those there's also a retry counter and i can see it trying and failing multiple requests haha. it almost never succeeds in producing a response when i see those though, it eventually just errors out completely. |
| |
| ▲ | hirako2000 5 hours ago | parent | prev [-] |
| Coding is a solved problem. Claude writes the code. I edit it. I code around it. Engineer roles dead in 6 months. | | |
| ▲ | post-it 4 hours ago | parent | next [-] |
| > I edit it. I code around it. You're never gonna guess what software engineers do. | |
| ▲ | bulbar 3 hours ago | parent | prev | next [-] |
| Given the context I'd assume this is sarcasm, but I'm not sure. | |
| ▲ | 3 hours ago | parent | prev [-] | | [deleted] |
|
| ▲ | AlecSchueler 3 hours ago | parent | prev | next [-] |
| > thus the reason why we're seeing them do other things to cut down on inference cost (i.e. changing their default thinking length). Funnily enough, the dynamic thinking and response length is the best upgrade I've experienced with the service in more than a year. I really appreciate that when I say or ask something simple, the answer now just comes back as a single sentence, without my having to manually toggle "concise" mode on and off again. |
|
| ▲ | zachncst 5 hours ago | parent | prev | next [-] |
| Sure, but we don't need GPUs to log in. |
|
| ▲ | sobellian 5 hours ago | parent | prev | next [-] |
| Their issues seem to extend well beyond inference into services like auth. |
| |
| ▲ | ryandrake 5 hours ago | parent [-] |
| Yes. Whenever these outages happen, it always seems to be their login system that breaks. | | |
| ▲ | bostik 3 hours ago | parent [-] |
| That implies either that auth is too heavyweight (possible, ish) or that their systems don't degrade gracefully enough, so many different types of failure propagate up and out all the way to their outermost layer, i.e. auth (more plausible). Disclosure: I have scars from a distributed system where errors propagated outwards and took down auth... |
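The second failure mode described above (downstream errors cascading into unrelated layers like auth) is classically mitigated with a circuit breaker: after enough failures, the caller "opens" and serves a fast degraded fallback instead of piling onto the broken dependency. A minimal sketch; all names and thresholds here are made up for illustration, not anything Anthropic actually runs:

```python
import time

# Illustrative circuit breaker: after repeated downstream failures it
# opens and serves a fast fallback instead of hammering the broken
# dependency, so e.g. an inference outage doesn't take auth down too.

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                return fallback()      # open: fail fast, don't pile on
            self.opened_at = None      # half-open: let one call through
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0              # any success closes the breaker
        return result
```

While the breaker is open, callers get the fallback immediately; after the reset window one probe request is allowed through to test whether the dependency has recovered.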
|
|
|
| ▲ | paulddraper 3 hours ago | parent | prev [-] |
| A. These aren’t rate limit errors from the API. B. Everything is down, even auth. |