jzelinskie 13 hours ago

Security has always been a game of how much money your adversary is willing to commit. The conclusions drawn in many of these articles are already well-understood systems-design concepts, but for some reason people are acting as though they are novel, or as though LLMs have changed anything besides the price.

For example from this article:

> Karpathy: Classical software engineering would have you believe that dependencies are good (we’re building pyramids from bricks), but imo this has to be re-evaluated, and it’s why I’ve been so growingly averse to them, preferring to use LLMs to “yoink” functionality when it’s simple enough and possible.

Anyone who's heard of "leftpad" or is a Go programmer ("A little copying is better than a little dependency" is literally a "Go Proverb") knows this.
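To make the "yoink" concrete: the entire leftpad incident was over functionality about this small, which can be copied into your own code in a few lines instead of pulled in as a dependency (a sketch; the function name and signature here are mine, not the original package's):

```go
package main

import (
	"fmt"
	"strings"
)

// leftPad pads s on the left with pad until it is at least n runes long.
// This is roughly everything the infamous npm package did.
func leftPad(s string, n int, pad rune) string {
	if d := n - len([]rune(s)); d > 0 {
		return strings.Repeat(string(pad), d) + s
	}
	return s
}

func main() {
	fmt.Println(leftPad("7", 3, '0')) // 007
}
```

Vendoring ten lines like this trades a supply-chain attack surface for a tiny maintenance burden, which is exactly the trade the proverb is pointing at.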

Another recent set of posts on HN had a company closed-sourcing its code for security, but "security through obscurity" has been a well-understood fallacy in open-source circles for decades.

lelanthran 3 hours ago | parent | next [-]

> Another recent set of posts on HN had a company closed-sourcing its code for security, but "security through obscurity" has been a well-understood fallacy in open-source circles for decades.

I dunno about that quoted bit; "defense in depth" is a good thing, and obscurity can be just one of those layers.

"Security through obscurity" is indeed wrong if the obscurity is a large component of the security, but it helps if it is just another layer of defense in the stack.

IOW, harden your system as if it were completely transparent, and only then make it opaque.

pmontra 4 hours ago | parent | prev [-]

Yes, there is nothing novel in "to harden a system we need to spend more tokens discovering exploits than attackers spend exploiting them." That's what security has always looked like, physical security included (burglars, snipers, etc.). So when AI is available, you have to throw more AI at securing your system than your adversaries do. What a surprise.

Maybe we could start with the prompts for the code generation models used by developers.