| ▲ | BrenBarn 21 hours ago |
> Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Use of an AI to write your code is also a form of dependency. When the LLM spits out code and you just dump it in your project with limited vetting, that's not really that different from vendoring a dependency. It has a different set of risks, but it still has risks.
|
| ▲ | cortesoft 20 hours ago | parent | next [-] |
Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features. Not including the unused features makes the code you are adding easier to read and understand, and it may also be more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
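For the blob-util example upthread, the tailored version can be tiny. Just as a rough sketch of what I mean, using nothing but the standard FileReader API (the function name mirrors what I remember the library calling it, so treat it as illustrative):

    function blobToDataURL(blob) {
      // Resolve with a data: URL for this one blob; no other conversions needed.
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result);
        reader.onerror = () => reject(reader.error);
        reader.readAsDataURL(blob);
      });
    }

That's the entire "dependency": one function you can read in a few seconds, with no code paths for conversions you never perform.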
| |
| ▲ | bloomca 20 hours ago | parent | next [-] |
But in a lot of cases you can't fully know all your dependencies, so you lean on the community, trusting that a package solves the problem well enough that you can treat it as an abstraction. You can pin the dependency and review the changes for security reasons, but fully grasping the logic is non-trivial. Smaller dependencies are fine to copy at first, but at some point the codebase becomes too big, so you abstract it, and at that point it becomes a self-maintained dependency. Which is a fair decision, but it is all about tradeoffs and is sometimes too costly.
| ▲ | mkj 18 hours ago | parent [-] |
You'd get those benefits from traditional dependencies if you copy them in and never update. Is an AI dependency going to have the equivalent of "upstream fixes"?
| ▲ | cortesoft 13 hours ago | parent [-] |
Probably? LLMs will train on the upstream fixes, so if you run the code through the LLM again it can fix it.
|
| |
| ▲ | lmm 13 hours ago | parent | prev [-] |
> Part of the benefit over a dependency is that the code added will (hopefully) be narrowly tailored to your specific need, rather than the generic implementation from a library that likely has support for unused features.
In decent ecosystems there should be low or zero overhead to that.
> Not including the unused features makes the code you are adding easier to read and understand, and it may also be more efficient for your specific use case, since you don't have to take into account all the other possible use cases you don't care about.
Maybe. I find generic code is often easier to read than specialised custom implementations, because there is necessarily a proper separation of concerns in the generic version.
|
|
| ▲ | nolanl 20 hours ago | parent | prev | next [-] |
| Right, but you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?" or "will the author be compromised in a supply-chain attack, or do a deliberate protestware attack?" etc. As for performance, a lot of npm packages don't have proper tree-shaking, so you might be taking on extra bloat (or installation cost). Your point is well-taken, though. |
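(To make the tree-shaking point concrete: whether a named import like the one below pulls in a single function or the whole package depends on the package shipping tree-shakeable ES modules; with a CommonJS-only build, bundlers generally include everything. The import name is from memory, so treat it as illustrative.)

    // Only stays small if the package publishes tree-shakeable ES modules;
    // otherwise the bundler will typically include the entire library.
    import { blobToDataURL } from 'blob-util';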
| |
| ▲ | rcxdude 19 hours ago | parent | next [-] |
You can avoid all those worries by vendoring the code anyway. You only 'need' to update it if you are pulling it in as a separate dependency.
| ▲ | KPGv2 18 hours ago | parent | prev [-] |
> you do avoid worries like "will I have to update this dependency every week and deal with breaking changes?"
This is not a worry with NPM. You can just specify a specific version of a dependency in your package.json, and it'll never be updated ever. I have noticed for years that the JS community is obsessed with updating every package to the latest version no matter what. It's maddening. If it's not broke, don't fix it!
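For example, pinning an exact version instead of a range in package.json (the version number here is just for illustration):

    "dependencies": {
      "blob-util": "2.0.2"
    }

With the default caret range ("^2.0.2") npm is still free to pull in newer minor and patch releases; the bare version pins it exactly.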
|
|
| ▲ | danelski 18 hours ago | parent | prev | next [-] |
| Wouldn't call it a risk in itself, but part of the benefit of using a library, a good and tailored one at least, is that it'll get modernised without my intervention. Even if the code produced for you was state-of-the-art at the moment of inclusion, will it remain that way 5 years from now? |
|
| ▲ | ronbenton 20 hours ago | parent | prev [-] |
> and you just dump it in your project with limited vetting
Well, yes, there's your problem. But people have been doing this with random snippets found on the internet for a while now. The truth is that irresponsible developers will produce irresponsible code, with or without LLMs.
| |
| ▲ | fullofideas 20 hours ago | parent [-] |
> The truth is that irresponsible developers will produce irresponsible code, with or without LLMs
True. But the difference is the scale and ease of doing this with code generators. With a few clicks you can add hundreds of lines of code that supposedly do the right thing. In the past, you would get code snippets for a particular aspect of the problem you were trying to solve; you still had to figure out how to add them to your code base and somehow make them “work”.
| ▲ | ninalanyon 20 hours ago | parent [-] |
Surely in any responsible development environment those hundreds of lines of code still have to be reviewed. Or don't people do code review any more? I suppose one could outsource the code review to an AI, preferably not the one that wrote it though. But if you do that surely you will end up building systems that no one understands at all.
| ▲ | fullofideas 19 hours ago | parent [-] |
Agree. Any reasonable team should have code reviews in place, but an irresponsible coder will push the responsibility for code quality and correctness onto the reviewers. They were doing that before too, but the scale and scope were much smaller.