| ▲ | godelski 6 hours ago | |
Does anyone else find it concerning how we're just shipping alpha code these days? I know it's really hard to find all bugs internally and you gotta ship, but it seems like we're just outsourcing all bug finding to people, making them vulnerable in the meantime. A "bug" like this seems like one that could have and should have been found internally. I mean it's Google, not some no-name startup. And companies like Microsoft are ready to ship this alpha software into the OS? Doesn't this kinda sound insane? I mean regardless of how you feel about AI, we can all agree that security is still a concern, right? We can still move fast while not pushing out alpha software. If you're really hyped on AI then aren't you concerned that low hanging fruit risks bringing it all down? People won't even give it a chance if you just show them the shittest version of things | ||
| ▲ | funnybeam 5 hours ago | parent [-] | |
This isn’t a bug, it is known behaviour that is inherent and fundamental to the way LLMs function. All the AI companies are aware of this and are pressing ahead anyway - it is completely irresponsible. If you haven’t come across it before, check out Simon Willisons “lethal trifecta” concept which neatly sums up the issue and explains why there is no way to use these things safely for many of the things that they would be most useful for | ||