Terr_ 3 hours ago

I'd emphasize that prompting LLMs to generate code isn't just metaphorical gambling in the sense of "taking a risk"; the scary part is the more literal gambling, involving addictive behaviors and how those affect the way the user interacts with the machine and the world.

Heck, this technology also offers a parasocial relationship at the same time! Plopping tokens into a slot-machine which also projects a holographic "best friend" that gives you "encouragement" would fit fine in any cyberpunk dystopia.

RhythmFox an hour ago | parent | next [-]

Having used agents some, I think 'addictive behavior' really is the closest description of the feeling it gives me as well. I don't find it engages my critical-thinking brain; in fact it often subverts that in favor of 'get the next dopamine hit faster' behavior (i.e. just rerun it, leading to the metaphor the OP is using). It takes a conscious effort for me to get back out of that cycle and start thinking about the fine details of what the code really does, or why I wanted it to do that in the first place. I have called it 'smoking vibes' and 'chasing rAInbows' in my sillier moments. It really does feel good... too good :P

interestpiqued 2 hours ago | parent | prev [-]

I think AI literally makes even being wrong feel like getting something done. And that is the addictive part for people.

rsoto2 2 hours ago | parent | next [-]

Look at all this text I have! It can't be worthless right?!

cyanydeez an hour ago | parent | prev | next [-]

"Near-Miss" effect: https://harprehab.com/blogs/the-psychology-of-risk-why-gambl...

I believe that's the strongest pattern in LLM gambling. I was listening to Syntax and they described it like this: "Even though the LLM did it wrong 4 times, that 5th time could be right, so why not just go!"; paraphrased, of course.

It also explains the meta-LLM business, where all these CEO types put in some question and, because the LLM knows all these words, they believe it's valuable because it's "almost" correct, even when that last correction might be forever elusive: these machines aren't thinking, they're patterning a highly regularized language beneath the looser descriptions.

There'll definitely be a winner in the AI bubble, but it'll be seen after it pops.
