dkdcio 13 hours ago
I was pointing out one screenshot from twitter isn't proof of anything, just to be clear; it's a silly way to make a point.

yes, AI makes leaking keys on GH more prevalent, but so what? it's the same problem as before with roughly the same solution.

I'm saying neural networks being probabilistic doesn't matter — everything is probabilistic. you can still practically use the tools to great effect, just like we use everything else that has underlying probabilities.

OpenAI did not have to describe it as sycophancy; they chose to, and I'd contend it was a stupid choice.

and yes, you can explain what went wrong just like you can with CPUs. we don't (usually) talk about quantum-level physics when discussing CPUs; talking about neurons in LLMs is the wrong level of abstraction.
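to be concrete about "roughly the same solution": the standard mitigation predates LLMs entirely. a minimal sketch of a pre-commit secret scan (the hook layout and regex patterns here are my own illustrative assumptions, not an exhaustive or canonical list):

    #!/usr/bin/env python3
    # pre-commit hook sketch: refuse to commit if the staged diff looks
    # like it contains credentials. patterns are illustrative, not exhaustive.
    import re
    import subprocess
    import sys

    PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
        re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
        re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}"),
    ]

    def staged_diff() -> str:
        # only inspect what is about to be committed
        return subprocess.run(
            ["git", "diff", "--cached", "--unified=0"],
            capture_output=True, text=True, check=True,
        ).stdout

    def main() -> int:
        diff = staged_diff()
        hits = [p.pattern for p in PATTERNS if p.search(diff)]
        if hits:
            print("possible secret in staged changes, refusing to commit:")
            for h in hits:
                print("  matched:", h)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

drop something like that into .git/hooks/pre-commit (or run equivalent scanning in CI) and it catches the leak whether a human or an AI wrote the code — same failure mode, same guardrail as before.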
rvz 12 hours ago
> I was pointing out one screenshot from twitter isn't proof of anything just to be clear; it's a silly way to make a point.

Versus your anecdote being proof of what? A skill issue for vibe coders? Someone else prompting it wrong? You do realize you are proving my entire point?

> yes AI makes leaking keys on GH more prevalent, but so what? it's the same problem as before with roughly the same solution

Again, that reinforces my point: it makes an existing issue even worse. Additionally, that wasn't even the only point I made on the subject.

> I'm saying neural networks being probabilistic doesn't matter — everything is probabilistic.

When you scale neural networks into, say, production-grade LLMs, then it does matter, just as it matters for CPUs to be reliable when you scale them into production-grade data centers. But your earlier (fallacious) comparison ignores the reliability differences between CPUs and LLMs: determinism is a hard requirement for that kind of reliability, and LLMs are not deterministic.

> OpenAI did not have to describe it as sycophancy, they chose to, and I'd contend it was a stupid choice

For the press, they had to, but no one knows the real reason, because it is unexplainable; which goes back to my other point on reliability.

> and yes, you can explain what went wrong just like you can with CPUs. we don't (usually) talk about quantum-level physics when discussing CPUs; talking about neurons in LLMs is the wrong level of abstraction

It is indeed the wrong level for LLMs, because not even the researchers can practically explain why a single neuron (and this holds for every neuron in the network) takes different values on every fine-tune or training run. Even if the model is "good enough", it can still go wrong at inference time for unexplainable reasons beyond "it overfitted". CPUs, on the other hand, have formal verification methods that check the design against its specification, so we can trust that a CPU works as intended and diagnose problems accurately without going into atomic-level details.
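To make the training-run point concrete, here is a toy sketch (my own illustrative example, not anyone's production setup): the same tiny network trained twice, with nothing changed but the random seed, ends up with different values in the "same" neuron, and there is no practical per-neuron account of why each value is what it is.

    # sketch: identical tiny network, trained twice with different seeds,
    # yields different values for the "same" weight. illustrative only.
    import numpy as np

    def train_tiny_net(seed: int, steps: int = 500, lr: float = 0.1) -> np.ndarray:
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(64, 3))              # toy inputs
        y = (X.sum(axis=1) > 0).astype(float)     # toy labels
        W1 = rng.normal(scale=0.5, size=(3, 4))   # hidden-layer weights
        W2 = rng.normal(scale=0.5, size=(4, 1))   # output weights
        for _ in range(steps):
            h = np.tanh(X @ W1)                   # hidden activations
            p = 1 / (1 + np.exp(-(h @ W2)))       # sigmoid output
            err = p - y[:, None]                  # gradient of BCE loss wrt logits
            gW2 = h.T @ err / len(X)
            gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
            W2 -= lr * gW2                        # plain gradient descent
            W1 -= lr * gW1
        return W1

    run_a = train_tiny_net(seed=0)
    run_b = train_tiny_net(seed=1)
    # the "same" neuron's weight, two runs, two unrelated values:
    print(run_a[0, 0], run_b[0, 0])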