raincole 7 hours ago

> AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort

I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent: artists think generating code is perfectly OK but have an acute stress response when someone suggests generating art assets.

joshuaissac 7 hours ago | parent | next [-]

AI-generated code is meant for the machine, or for the author/prompter. AI-generated text is typically meant for other people. I think that makes a meaningful difference.

ripe 7 hours ago | parent | next [-]

Code can be viewed as design [1]. By this view, generating code using LLMs is a low-effort, low-value activity.

[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...

acedTrex 7 hours ago | parent | prev | next [-]

Compiled code is meant for the machine; written code is for other humans.

gordonhart 7 hours ago | parent | next [-]

For better or worse, a lot of people seem to disagree with this, believing that humans reading code is only necessary at the margins, similar to debugging compiler output. Personally I don't believe we're there yet (and may not get there for some time), but this is where comments like GP's come from: human legibility is a secondary or tertiary concern, and it's fine to give it up if the code meets its requirements and can be maintained effectively by LLMs.

threetonesun 6 hours ago | parent | next [-]

I rarely see LLMs generate code that is less readable than the rest of the codebase it's been created for. I've seen humans who are short on time or economic incentive produce some truly unreadable code.

Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.

gordonhart 5 hours ago | parent [-]

The main coding agent failure modes I've seen:

- Proliferation of utils/helpers when there are already ones defined in the codebase. Particularly a problem for larger codebases

- Tests with bad mocks and bail-outs due to missing things in the agent's runtime environment ("I see that X isn't available, let me just stub around that...")

- Overly defensive off-happy-path handling, returning null or the semantic "empty" response when the correct behavior is to throw an exception that will be properly handled somewhere up the call chain

- Locally optimal design choices with very little "thought" given to ownership or separation of concerns

All of these can pretty quickly turn into a maintainability problem if you aren't keeping a close eye on things. But broadly I agree that, line for line, frontier LLM code is generally better than what humans write, and miles better than what a stressed-out human developer with a short deadline usually produces.
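The off-happy-path point above can be sketched in a few lines of Python. This is a hypothetical illustration, not from any real codebase; `fetch_user` and `UserNotFoundError` are invented names:

```python
class UserNotFoundError(Exception):
    """Raised when a user id has no record; handled somewhere up the call chain."""

USERS = {1: "alice", 2: "bob"}

def fetch_user_defensive(user_id):
    # Agent-style version: swallows the problem and returns a semantic "empty".
    # The caller now can't tell "user doesn't exist" from "lookup silently failed".
    return USERS.get(user_id)

def fetch_user(user_id):
    # Preferred version: fail loudly so an upstream handler can deal with it.
    try:
        return USERS[user_id]
    except KeyError:
        raise UserNotFoundError(f"no user with id {user_id}")
```

The defensive version looks safer locally, which is presumably why agents reach for it, but it just pushes the failure further from its cause.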

hinkley 7 hours ago | parent | prev [-]

And Sturgeon tells us 90% of people are wrong, so what can you do.

philipp-gayret 7 hours ago | parent | prev [-]

Compiled natural language is meant for the machine; written natural language is for other humans.

CivBase 7 hours ago | parent [-]

If AI is the key to compiling natural language into machine code like so many claim, then the AI should output machine code directly.

But of course it doesn't do that, because we can't trust it the way we trust a traditional compiler. Someone has to validate its output, which means it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.

jvanderbot 7 hours ago | parent | prev | next [-]

This is precisely correct IMHO.

Communication is for humans. It's our super power. Delegating it loses all the context, all the trust-building potential from effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.

ginsider_oaks 5 hours ago | parent | prev | next [-]

> Programs must be written for people to read, and only incidentally for machines to execute.

from the preface of SICP.

everforward 7 hours ago | parent | prev | next [-]

A lot of writing (maybe most) is almost the same. Code is a means of translating a process into semantics a computer understands. Most non-fiction writing is a means of translating information or an idea into semantics that allow other people to understand that information or idea.

I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.

askvictor 3 hours ago | parent | prev [-]

At the same time, AI-generated code has to be correct and precise, whereas AI-generated text doesn't. There's often no 'correct solution' in AI-generated text.

elxr 2 hours ago | parent | prev | next [-]

Considering the rise of transformer-based upscaling today (like the newest DLSS), lots of game devs are already indirectly ok with generated art. Sure the assets themselves might be handmade, but if the render pipeline involves a generated, upscaled image at the end then the line between AI and not AI is obviously very blurry.

Also, is a hand modeled final asset built based on AI-generated concept art still "AI"?

Who cares if a bush or a tree is fully AI-generated anyway? These "no AI whatsoever on any game" people virtue signal too much to make a fair argument for whatever they're preaching about. Sure, I agree with the value of human creativity, but I also want people to be able to use whatever tools they like.

acedTrex 7 hours ago | parent | prev | next [-]

Yeah, I hate the idea that there's a difference. Code to me has always been as expressive of a person as normal prose. With LLMs you lose a lot of vital information about the programmer's personality. It leads to worse outcomes because it makes the failures less predictable.

jama211 7 hours ago | parent [-]

Code _can_ be expressive, but it also can not be; it depends on its purpose.

Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.

I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.

acedTrex 6 hours ago | parent [-]

All code is expressive: if a person emitted it, it is expressive of their state of mind, their values, and their context.

pseudosavant 5 hours ago | parent | prev | next [-]

I think there’s an uncanny valley effect with writing now.

Yesterday I left a code review comment, and someone asked if AI wrote it. The investigation and reasoning were 100% me. I spent over an hour chasing a nuanced timezone/DST edge case, iterating until I was sure the explanation was correct. I did use Codex CLI along the way, but as a power tool, not a ghostwriter.

The comment was good, but it was also “too polished” in a way that felt inorganic. If you know a domain well (code, art, etc.), you start to notice the tells even when the output is high quality.

Now I’m trying to keep my writing conspicuously human, even when a tool can phrase it perfectly. If it doesn’t feel human, it triggers the whole ai;dr reaction.

mrisoli 5 hours ago | parent | prev | next [-]

We had a junior engineer do some research on a handful of different solutions for a technical design and present them to the team. He came up with a 27-page document with 70+ references (2/3 of which were Reddit threads), no more than a few hours after the task was assigned.

I would have been more okay with AI-generated code; it would likely have been more objective and less verbose. I refused to review something he obviously hadn't put enough effort into himself, not even a POC. When I asked for his own opinion on the different solutions evaluated, he didn't have one.

It's not about the document per se, but the actual value of this verbose AI-generated slop. Code that is executable, even if poorly reviewed, is still executable and likely to produce output that satisfies the functional requirements.

Our PM is now evaluating tools to generate documentation for our platform by interpreting source code. It includes descriptions of things like what the title is and what the back button is for, but wouldn't tell you the valid inputs for creating a new artefact. This AI-generated doc is in addition to our human-made Confluence docs, so it's likely to add spam and reduce the quality of search results for useful information.

renato_shira 4 hours ago | parent | prev | next [-]

the gamedev version of this is wild. i'm working on a mobile game right now and the internal calculus is genuinely confusing: using AI to help write networking code feels totally normal, using it to generate placeholder UI feels fine, but using it for the actual visual identity of the game feels like cheating, even though technically it's all "content creation."

i think the real line is about whether the AI output is the product or a tool to build the product. AI-generated code that ships isn't really the product, the behavior it creates is. but AI-generated art that ships is the product in a way the user directly perceives. the uncanny valley isn't in the quality, it's in the relationship between the creator and the output.

nkrisc 4 hours ago | parent [-]

Because your users don’t see the network code or the GUI framework.

But to your users, the visual identity is the identity of the game. Do you really want to outsource that to AI?

hinkley 7 hours ago | parent | prev | next [-]

A flavor of the Fundamental Attribution Error, perhaps? It's not a snug fit, but it's close.

dgxyz 5 hours ago | parent | prev | next [-]

My perspective as an eng lead is it’s all shit. Words, code, the lot. It’s literally an enabler for the worst characteristics of humanity: laziness and disinterested incompetence.

People are happy to shovel shit if they can get away with it.

stock_toaster 2 hours ago | parent [-]

Same here.

In addition, I feel like there has been an overall drop in software quality alongside the rise of AI-driven code development. Perhaps there are other driving factors (socioeconomic, psychological, etc.), and perhaps I am misattributing it to AI. Then again, it could also just be all the slop.

dgxyz an hour ago | parent [-]

I would agree. The stuff that hasn’t been updated in the last 3-4 years seems to be pretty solid still. Almost nothing else is.

HarHarVeryFunny 7 hours ago | parent | prev | next [-]

> Everyone thinks their use of AI is perfectly justified while the others are generating slop

No doubt, but I think there's a bit of a difference between AI generating something utilitarian vs. something expected to at least have some taste/flavor.

AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.

Writing articles and posts is a bit different - it's not just about the content, it's about how it's expressed and did someone bother to make it interesting to read, and put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part of it works better if it keeps the reader engaged and displays good theory of mind as to where the average reader may be coming from.

So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.

dfxm12 6 hours ago | parent | prev | next [-]

The author is a blogger (creator and consumer) and a coder, though. They are speaking from experience in both cases, so your metaphor isn't apt.

It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...

jama211 7 hours ago | parent | prev | next [-]

Generating art is worse than generating code though IMO. It’s more personal. Everything exists on a spectrum, even slop.

elxr 2 hours ago | parent [-]

Code can be art. Art can be formulaic and lazy and disposable.

The context in which both the code or art is used matters more than whether or not what you're AI-generating is "art".

Blackthorn 7 hours ago | parent | prev [-]

Turns out it's only slop if it comes from anyone else, if you generated it it's just smart AI usage.