collinmcnulty 7 hours ago
As a mechanical engineer by background, this article feels weak. Yes, it is common to "throw more steel at it", to use a modern version of the sentiment, but that is still based on knowing in detail the many different ways a structure can fail. The lethal trifecta is a failure mode; you put your "steel" into making sure it doesn't occur. You would never say "this bridge vibrates violently, how can we make it safe to cross a vibrating bridge" - you'd change the bridge so it doesn't vibrate out of control.
scuff3d 4 hours ago
Sometimes I feel like the entire world has lost its god damn mind. To use their bridge analogy, it would be as if hundreds of years ago we developed a technique for building bridges that technically worked, but occasionally and totally unpredictably the bottom just dropped out and everyone on the bridge fell into the water. And instead of saying "hey, maybe there is something fundamentally wrong with this approach, maybe we should find a better way to build bridges", we just said "fuck it, just invest in nets and other mechanisms to catch the people who fall". We are spending billions to build infrastructure on top of technology that is inherently deeply unpredictable, and we're just slapping on all the guard rails we can. It's fucking nuts.
switchbak 6 hours ago
When a byline starts with "coders need to" I immediately start to tune out. The analogy felt a bit off to me, and it sounds like that's true for someone with knowledge of the actual domain.

"If a company, eager to offer a powerful ai assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time" - that's quite the "if", and therein lies the problem. If your company is so eager to offer functionality that it does so at the cost of security (often knowingly), then you're not taking the situation seriously. And that describes a great many companies at present.

"Unlike most software, LLMs are probabilistic ... A deterministic approach to safety is thus inadequate" - that's a complete non sequitur. Why would a deterministic approach be inadequate just because the system it constrains is non-deterministic? It doesn't even pass the sniff test. That's like saying a virtual machine is inadequate to sandbox a process because the process does non-deterministic things - which is not a sensible argument.

As usual, these contrived analogies get taken beyond any reasonable measure and end up leaving the whole article with very little value. Skipping the analogies and using terminology relevant to the domain would be a good start - but that's probably not as easy to sell to The Economist.
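To be concrete about what a deterministic approach can look like here - a rough sketch of my own, with made-up names, not anything from the article:

    # Hypothetical sketch: a deterministic capability gate for an LLM agent.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentConfig:
        reads_untrusted_input: bool   # e.g. web pages, inbound email
        reads_private_data: bool      # e.g. secrets, internal documents
        can_communicate_out: bool     # e.g. outbound HTTP, sending email

    def validate(cfg: AgentConfig) -> None:
        # The check itself is deterministic even though the LLM is not:
        # an agent gets at most two of the three trifecta capabilities.
        if (cfg.reads_untrusted_input
                and cfg.reads_private_data
                and cfg.can_communicate_out):
            raise ValueError("lethal trifecta: drop at least one capability")

    validate(AgentConfig(True, True, False))  # fine
    validate(AgentConfig(True, True, True))   # raises before deployment

The model's behaviour can be as probabilistic as it likes; the gate either grants the third capability or it doesn't, the same way a VM either exposes a syscall or doesn't.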