testdelacc1 | 6 hours ago
I hate to be the guy saying AI will solve it, but this is a case where AI can help. I think in the next couple of years we'll see people writing small functions with Claude/codex/whatever instead of pulling in a dependency. We might or might not like the quality of software we see, but it will be more resistant to supply chain attacks.
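(Illustrative sketch, not from the comment: the kind of small, reviewable utility someone might generate and vendor in-repo instead of installing a one-function npm package. The name and code are my own.)

    // chunk.ts - hypothetical example of a tiny generated utility,
    // reviewed and kept in the repo instead of pulled in as a dependency.
    export function chunk<T>(items: T[], size: number): T[][] {
      if (!Number.isInteger(size) || size <= 0) {
        throw new RangeError("size must be a positive integer");
      }
      const out: T[][] = [];
      for (let i = 0; i < items.length; i += size) {
        out.push(items.slice(i, i + size));
      }
      return out;
    }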
elondaits | 24 minutes ago
I don’t think I’ll live long enough to trust AI coding assistants with something like schema validation, just to name one thing I use dependencies for.
jeromegv | 3 hours ago
When there's a dependency, it's typically not for a small function. If you want to replace a full dependency package with your own generated code, you'll need to review hundreds or even thousands of lines of code. Will you trust that the AI didn't introduce its own set of security issues, and will you have the capacity to review that much code?
delaminator | 4 hours ago
For sure. I don't think the software ecosystem has come to terms with how things are going to change. Libraries will provide the raw tools: sockets, regex engines, cryptography, syscalls, specific file-format libraries. LLMs will build the next layer on top. I have built successfully running projects in Erlang, Scheme, and Rust; I know the basic syntax of two of those, but I couldn't have written my deployed software in any of them within the couple of hours of prompting it took. For the Scheme one it had to write a lot of code from first principles and warned me how laborious it would be: "I don't care, you are doing it." I now have tools I could not have imagined building in a reasonable amount of time.
viraptor | 6 hours ago
I wonder what the actual result will be. LLMs can generate functions quickly, but they're also keen to include packages without asking. I've had to add a "don't add new dependencies unless explicitly asked" instruction to a few project configs.
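(The commenter's actual config isn't shown; a hedged sketch of what such an instruction might look like in an agent instruction file such as a CLAUDE.md, purely as an illustration:)

    # CLAUDE.md (illustrative)
    - Do not add new dependencies (package.json, Cargo.toml, etc.)
      unless explicitly asked to.
    - Prefer writing a small local helper over importing a new package.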
short_sells_poo | 6 hours ago
How is this going to solve the supply chain attack problem at all though? It just obfuscates things even more, because once an LLM gets "infected" with malicious code, it'll become much more difficult to trace where it came from. If anything, blind reliance on LLMs will make this problem much worse.
scotty79 | 2 hours ago
Then your dependency will be "AI getting it right every single time".
brigandish | 6 hours ago
An approach I learnt from a talk posted to HN (I forget the talk, not the lesson) is to not depend on the outside project for its code: lift that code directly into your own project, but rely on the project for its tests, requiring/importing it etc. when running your own tests. That protects you from a lot of things (this kind of attack wasn't mentioned, as far as I recall) while still ensuring that bugs found by the other project don't get missed.
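(A minimal sketch of one way to approximate this in a TypeScript project, not taken from the talk: the lifted code lives in the repo, and the upstream package, here a hypothetical "some-chunk-lib", stays a devDependency used only as a reference when running your own tests.)

    // test/chunk.test.ts
    // src/vendored/chunk.ts holds the lifted, reviewed copy of the code.
    // The upstream package is a devDependency only, imported here so its
    // behaviour (and any bug fixes it ships) still gets checked against our copy.
    import { describe, it, expect } from "vitest";
    import { chunk } from "../src/vendored/chunk";            // our vendored copy
    import { chunk as upstreamChunk } from "some-chunk-lib";  // hypothetical upstream

    describe("vendored chunk", () => {
      it("matches the upstream implementation", () => {
        const input = [1, 2, 3, 4, 5];
        for (const size of [1, 2, 3, 5, 10]) {
          expect(chunk(input, size)).toEqual(upstreamChunk(input, size));
        }
      });
    });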