| ▲ | Kiro 4 hours ago |
| LLMs definitely write more robust code than most. They don't take shortcuts or resort to ugly hacks. They have no problem writing tedious guards against edge cases that humans brush off. They also keep comments up to date and obsess over tests. |
|
| ▲ | BoorishBears 2 hours ago | parent | next [-] |
| I had 5.3-Codex take two tries to satisfy a linter on TypeScript type definitions. It gave up, removed the code it had written that directly accessed the correct property, and replaced it with a new function that did a BFS over every single field in the API response object, applying a "looksLikeHttpsUrl" regex and hoping the first https:// URL it found belonged to the correct key. On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing! |
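(For readers unfamiliar with the pattern being described: a minimal sketch of that kind of fallback, reconstructed from the comment. The names `findFirstHttpsUrl` and `looksLikeHttpsUrl` and the exact regex are assumptions; the point is the hack of breadth-first searching every field of a response instead of reading the known property.)

```typescript
// Hypothetical reconstruction of the described hack: instead of
// accessing the known property, BFS over every field of the API
// response and return the first string that looks like an https URL.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

function looksLikeHttpsUrl(value: string): boolean {
  return /^https:\/\/\S+$/.test(value);
}

function findFirstHttpsUrl(root: Json): string | undefined {
  const queue: Json[] = [root];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (typeof node === "string" && looksLikeHttpsUrl(node)) {
      return node; // hope the first match is the right key
    }
    if (node !== null && typeof node === "object") {
      // enqueue array elements or object values alike
      queue.push(...(Array.isArray(node) ? node : Object.values(node)));
    }
  }
  return undefined;
}
```

The fragility is plain: any other https URL anywhere in the response (an avatar, a docs link) that the BFS reaches first silently wins.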
|
| ▲ | devmor 4 hours ago | parent | prev [-] |
Interesting and completely wrong statement. What gave you this impression? |
| ▲ | Kiro 3 hours ago | parent | next [-] |
| The discourse around LLMs has created this notion that humans are not lazy and write perfect code. LLMs get compared to an ideal programmer instead of to real devs. |
| ▲ | joe_mamba 2 hours ago | parent | next [-] |
| This. The hacks, shortcuts and bugs I saw in our product code after I got hired were things every LLM would tell you not to do. |
| ▲ | gxs 2 hours ago | parent | prev | next [-] |
| Amen. On top of that, especially now, with good prompting you can get closer to that ideal than you might think. |
| ▲ | salawat 3 hours ago | parent | prev [-] |
| LLMs at best asymptotically approach a human doing the same task. They are trained on the best and the worst. Nothing they output deserves faith beyond what you can prove with your own eyes and tooling. I'll say the same thing to anyone vibe coding that I'd say to the programmatically illiterate: trust this only insofar as you can prove it works, and only while you can stay ahead of the machine. Dabble if you want, but to rely on something safely, you need to be 10% smarter than it is. |
| ▲ | dylanowen 3 hours ago | parent | prev | next [-] |
| I know, right? I kept waiting for a sarcasm tag at the end. |
| ▲ | majorchord 3 hours ago | parent | prev [-] |
| Right and wrong don't exist when evaluating subjective quantifiers. |