tmoertel a day ago

> But I delegate checks to tools all the time. e.g. I could spend my time checking whether locks are all used correctly in our code, or I could use libraries designed to force correctness[0].

Do you believe that because you can delegate some responsibilities without sacrificing important requirements, it follows that you can delegate all responsibilities without sacrificing them? Do you not see the difference between delegating to the computer proofs it can discharge faithfully and without error, such as type checking, and delegating something as wide and perilous as security to something as currently flawed as AI?

> An LLM isn't an ideal solution to linting, but if you're stuck with a language with a weak type system maybe that's all you can reasonably do.

No. In such a situation you can add LLM-based checks on top of your own responsibility for security. But you can’t delegate that responsibility away to LLMs and still say you care about security. AI ain’t there yet.

> The actual problem is that you're using strings at all.

What percentage of the world’s existing code do you believe does not use strings at all? Tragically, that is the world we live in.

> Basically, work in the domain, not in the serialized representation (strings).

Sure, but you can’t do all your work in the domain. At some point you must take data from the outside world as input or emit data as output. And, even if you are lucky enough to work in a domain where someone has done the parsing and serialization and modeling work for you so that you have the luxury of a semantic model to work with instead of strings, who had to write that domain library? What rules did that person have to know to write that library without introducing security holes?
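Here is a toy sketch of what such a domain library looks like, and why its author is exactly the person who has to know the escaping rules. The `SafeHtml` type and its methods are hypothetical names invented for illustration; `html.escape` is real Python stdlib.

```python
import html
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeHtml:
    """Hypothetical domain type: a string already safe to embed in HTML."""
    markup: str  # invariant: escaping has been applied (or text is a trusted literal)

    @classmethod
    def from_text(cls, untrusted: str) -> "SafeHtml":
        # The ONE place the escaping rules live. The library author had to
        # know them; downstream callers work in the domain and never do.
        return cls(html.escape(untrusted, quote=True))

    def __add__(self, other: "SafeHtml") -> "SafeHtml":
        # Combining two safe values stays safe; there is no path back
        # to "just concatenate raw strings".
        return SafeHtml(self.markup + other.markup)

def render_comment(author: str, body: str) -> SafeHtml:
    # Untrusted input crosses the boundary through from_text; the raw
    # constructor is used only for trusted literals.
    return (SafeHtml("<p><b>")
            + SafeHtml.from_text(author)
            + SafeHtml("</b>: ")
            + SafeHtml.from_text(body)
            + SafeHtml("</p>"))

page = render_comment("mallory", "<script>alert(1)</script>")
print(page.markup)
# <p><b>mallory</b>: &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Even this toy has a hole (nothing stops a caller from invoking `SafeHtml(...)` directly on untrusted data), which is precisely the kind of rule the library’s author has to know and guard against.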

> [ChatGPT] did also several times indicate there was more to the story.

Great. Then show me how a person who didn’t know of the existence of the rules I shared with you in my previous post would naturally arrive at them by continuing your conversation with ChatGPT.

> security (which is an active area of improvement to sell to professionals) will probably be "solved" before ease-of-use.

I think this is a naive hope. Security is different from virtually every other responsibility in computing, including ease of use, because getting it right 99.99% of the time isn’t good enough. In security, there is no “happy path”: a single vulnerability can thoroughly sink a system. Security is also different because you must assume adversaries exist who will search unceasingly for vulnerabilities, using increasingly novel and clever methods. Users won’t probe your system looking for ease-of-use failures in the UI. So if you think AIs are going to get security right before ease of use, I think you are likely to be mistaken.