| ▲ | NinjaTrance 3 hours ago |
| Engineers have been vibe coding a lot recently... |
|
| ▲ | jsheard 3 hours ago | parent | next [-] |
| The featured blog post where one of their senior engineering PMs presented an allegedly "production grade" Matrix implementation, in which authentication was stubbed out as a TODO, says it all, really. I'm glad a quarter of the internet is in such responsible hands. |
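| For anyone who hasn't seen it, the anti-pattern was roughly this shape (a hypothetical sketch reconstructed from the description, not the actual code; all names invented): |

    # Hypothetical sketch of the anti-pattern; names invented for illustration.
    def authenticate(headers: dict) -> bool:
        """Check the caller's Matrix access token."""
        # TODO: actually validate the bearer token
        return True  # stub: every caller is "authenticated"

    def handle_sync(headers: dict) -> dict:
        if not authenticate(headers):
            return {"errcode": "M_UNKNOWN_TOKEN", "error": "Invalid token"}
        return {"next_batch": "..."}  # privileged data, served to anyone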
| |
| ▲ | gtowey 2 hours ago | parent | next [-] |
| It's spreading and only going to get worse. Management thinks AI tools should make everyone 10x as productive, so they're all trying to run lean teams and load up the remaining engineers with all the work. This will end about as well as the great offshoring of the early 2000s. |
|
| ▲ | blibble 3 hours ago | parent | prev | next [-] |
| There was also a post here where an engineer was parading around a vibe-coded OAuth library he'd made as a demonstration of how great LLMs were, at which point the CVEs started to fly in. |
|
| ▲ | ranger_danger 28 minutes ago | parent | prev | next [-] |
| Matrix doesn't actually define how one should do authentication, though... every homeserver implementation is free to handle it however it wants. |
|
| ▲ | dana321 3 hours ago | parent | prev [-] |
| That's a classic Claude move; even the new Sonnet 4.6 still does this. |
|
| ▲ | bonesss 3 hours ago | parent | next [-] |
| It's almost as classic as short-circuiting tests in lightly obfuscated ways. I could be quite the kernel developer if making the test green were the only criterion. |
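| The kind of thing I mean, as an invented illustration (not from any real codebase): |

    # Invented illustration: a test that can only ever pass.
    def allocate(size: int) -> int:
        return -1  # the buggy code under test: allocation always fails

    def test_allocate_succeeds():
        result = allocate(4096)
        # Lightly obfuscated short-circuit: allocate() always returns an
        # int, so this guard is never true and the assert never runs.
        if not isinstance(result, int):
            assert result >= 0
|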
|
| ▲ | dakiol 3 hours ago | parent | prev [-] |
| No joke. In my company we "sabotaged" the AI initiative led by the CTO. We used LLMs to deliver features as requested by the CTO, but we intentionally introduced a couple of bugs here and there. As a result, the quarter ended with more time allocated to fixing bugs and tons of customer complaints. The CTO is now undoing his initiative, and we've all bought ourselves some more time to keep our jobs. |
| |
| ▲ | samrus 2 hours ago | parent | next [-] |
| That's actively malicious. I understand not going out of your way to catch the LLMs' bugs so as to show the folly of the initiative, but actively sabotaging it is legitimately dangerous behavior. It's acting in bad faith. And I say this as someone who would mostly oppose such an initiative myself. |
| I would go so far as to say that you shouldn't be employed in the industry. Malicious actors like you will contribute to an erosion of trust that'll make everything worse. |
|
| ▲ | sp00chy 2 hours ago | parent | next [-] |
| Maybe, but sometimes you have no other choice when employers are enforcing AI tools that have no "feeling" for the context of the business processes involved, processes built up over years by human workers. Mostly by people who put a lot of love and energy into them, and who are now forced to compete against an inferior but overpowered workforce. Don't stop sabotaging AI efforts. |
|
| ▲ | tovej an hour ago | parent | prev [-] |
| Forcing developers to use unsafe LLM tools is also malicious. This is completely ethical to me. I'm not commenting on legality, but ethically, this is correct. |
| |
| ▲ | hypeatei 2 hours ago | parent | prev | next [-] |
| That's extremely unethical. You're being paid to do something and you deliberately broke it, which not only cost your employer additional time and money but also cost your customers time and money. If I were you, I'd probably just quit and find another profession. |
|
| ▲ | renegade-otter 2 hours ago | parent | prev | next [-] |
| I see someone is not familiar with the joys of the current job market. |
|
| ▲ | logicchains 3 hours ago | parent | prev [-] |
| That's not "sabotaged", that's sabotaged, if you intentionally introduced the bugs. Be very careful admitting something like that publicly unless you're absolutely sure nobody could map your HN username to your real identity. |
|