ramraj07 3 hours ago

I used to think that the people who keep saying (in March 2026) that AI does not generate good code are just not smart and ask stupid prompts.

I think I've amended that thought. They are not necessarily lacking in intelligence. I hypothesize that LLMs pick up on optimism, pessimism, and other sentiments in the incoming prompt: someone prompting with no hope that the result will be useful ends up with useless garbage output, and vice versa.

oh_my_goodness an hour ago | parent | next [-]

Exactly. You have to manifest at a high vibrational frequency.

jplusequalt an hour ago | parent [-]

Thanks for the laugh.

bitwize 2 hours ago | parent | prev | next [-]

This is kinda like that thing about how psychic mediums supposedly can't medium if there's a skeptic in the room. Goes to show that AI really is a modern-day ouija board.

ctrust 2 hours ago | parent [-]

The accurate inferences that can be drawn from subtle linguistic attributes should freak you out more than they do.

alchemism 37 minutes ago | parent [-]

Swapping in one synonym can send the model off in an entirely different direction in its response, or so I’ve observed.

robbbbbbbbbbbb 18 minutes ago | parent | prev | next [-]

Don’t know why you’re getting downvoted, this is a fascinating hypothesis and honestly super believable. It makes way more sense than the intuitive belief that there’s actually something under the human skin suit understanding any of this code.

cyanydeez 21 minutes ago | parent | prev [-]

It's probably more to do with the intelligence required to recognize when a specific type of code will lead to poor integrations and large-scale implementation problems down the line.

It's pretty clear that people think greenfield projects can constantly be slopified and that AI will always be able to dig them out with another logical connection, so it doesn't matter which abstraction the AI chose this time; it can always be fixed later.

This is akin to people who think we can just keep burning oil to fuel technological growth because it'll somehow improve technology's ability to solve climate problems.

It's akin to the techno-capitalist cult of "effective altruism," which assumes there's no way you could f'up the world that you can't fix with "good deeds."

There's a lot of hidden context in evaluating the output of LLMs, and if you're just looking at today's successes, you'll come away with a much different view than if you're looking at next year's.

Optimism, in this case, is just the belief that the AI will keep getting more powerful, so it'll always be able to clean up today's mess.

I call this techno-magic, indistinguishable from religious "optimism."