| |
| ▲ | DrewADesign 10 hours ago | parent | next [-] | | It appears that your company experienced an incident during which a blog entry was made available in which readers became informed about certain information about a server condition that resulted in certain users receiving a barrage of indirect clauses etc. etc. etc. Be more direct. Be concise. This blog post sounds like a cagey customer service CYA response. It defeats the purpose of publishing a blog post showing that you’re mature, aware, accountable, and transparent. | |
| ▲ | codechicago277 11 hours ago | parent | prev [-] | | The problem is that these visible errors make us wonder what other errors in the post are less visible. Fixing them doesn’t fix the process that led to them. | | |
| ▲ | slopinthebag 11 hours ago | parent [-] | | I'm pretty sure it's AI. https://x.com/JustJake/status/2007730898192744751 I wouldn't be surprised if most of Railway's infra is running on Claude at this point. | | |
| ▲ | antics 10 hours ago | parent | next [-] | | The CEO says it's not: https://x.com/JustJake/status/2038799619640250864 A lot of people are confident enough in their ability to spot AI infra that they are willing to dismiss a firsthand source on this, and I admit I have no idea why. There isn't any upside to making this claim, and anyway, I assure you that people need no help at all from AI to make these kinds of mistakes. | | |
| ▲ | slopinthebag 8 hours ago | parent [-] | | Their reply doesn't make much sense; they're supposedly SOC 2 compliant. How are they compliant while letting a single engineer push out a change like that? I'm sure Claude didn't literally ship the feature itself with no oversight, but I also find it hard to believe that their approach to adopting AI didn't factor in at all. Even just the mental overhead of moving faster and adopting AI code with less stringent review, leading to an increase in codebase complexity, could cause it. Couple that with an AI hallucinating an answer to the engineer who shipped this change, and I'm not sure why people are so quick to discount this as a potential source of the issue. Surely none of us want our infra to become less secure and reliable, and part of preventing that from happening is being honest about the challenges of integrating AI into our development processes. | |
| ▲ | antics 7 hours ago | parent [-] | | > I'm not sure why people are so quick to discount [AI] as a potential source of the issue. Because (per the link above) the CEO said that (1) it was their fault, and (2) it had nothing to do with AI. I understand that on this forum statements like this are inevitably greeted with some amount of skepticism, but right now I'm seeing no particular reason to disbelieve Jake, and the reason that "if they did use AI they'd deny it" should frankly not be considered good enough to fly around here. Like probably everyone in this comment section I'm open to evidence that they used AI to slop-incident themselves, but until we can reach that standard let's please calm down and focus on what we actually know to be true. | | |
| ▲ | hihicoderhi an hour ago | parent | next [-] | | During this whole incident, Railway have made a wide range of misleading and outright false claims to cover themselves, so them saying it wasn't AI is pretty much meaningless. | | | |
| ▲ | slopinthebag 7 hours ago | parent | prev [-] | | Come on man, their CEO is a massive vibe coding proponent and his company spent $300,000 on Claude this month. But yeah, I'm sure Claude had nothing to do with any of it. I bet they don't use it to write any code. https://xcancel.com/JustJake/status/2030063630709096483#m | | |
| ▲ | stingraycharles 5 hours ago | parent [-] | | Both things can be true: they’re doing a lot of vibe coding, and this was a human error that didn’t involve AI. | | |
|
|
|
| |
| ▲ | stingraycharles 10 hours ago | parent | prev [-] | | It's fine that they use AI; it's not fine that they don't proofread things. |
|
|
|