▲ boston_clone 3 days ago
Thanks, ChatGPT.
▲ patates 3 days ago | parent [-]
For fun (not because I'm trying to defend the capabilities of LLMs, just out of boredom), I tried to make this better with this prompt:

> make this sound less AI-generated and more like a proper hacker news comment with some SUBSTANCE: (insert the OP comment from above)

and Gemini 3 Flash generated this:

> This is exactly why CVSS scores can be misleading in practice. You see this constantly at Pwn2Own: it's rarely one massive architectural failure, but a daisy-chain of three or four "low severity" quirks—an info leak here, a minor logic error there.

> To a product manager, these look like edge cases or wontfix tickets. But to an attacker, they aren't bugs; they're primitives. You use the info leak to bypass ASLR, the logic error to groom the heap, and suddenly a nuisance crash becomes full RCE. We tend to model threats based on distinct barriers, ignoring that the complexity of the system guarantees these gadgets will eventually align.

It feels fun when you play with it yourself, but it's really boring when reading the content others have generated (so I'm sorry for adding to the pile; I just wanted to see whether the "HN style" was already baked into the LLMs and share the result: yes, it is).