antirez 4 days ago

To understand why this is too optimistic, you have to look at the areas where AI is already almost human-level. Translations are more and more done exclusively with AI or with massive AI help (with the effect of destroying many jobs anyway) at this point. Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if this is most of the time NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively (and even better 90% of the time, since most humans doing a given job are not excellent at what they do) by AI. This will be fine if governments react immediately and the system changes. Otherwise there will be a lot of people to feed without a job.

keiferski 4 days ago | parent | next [-]

I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.

But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.

AI art seems to be viable basically only when it can't be identified as AI art. That might not matter if the intention is to replace cheap graphic design work. But it's certainly nowhere near developed enough to create anything more sophisticated: work sophisticated enough both to read as human-made and to carry the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist are a key part of the work's reception. There are relatively few totally anonymous artists.

Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.

int_19h 3 days ago | parent | next [-]

It's kinda hilarious to see "simple task ... like translation". If you are familiar with the history of the field, or if you remember what automated translation looked like even just 15 years ago, it should be obvious that it's not simple at all.

If it were simple, we wouldn't need neural nets for it - we'd just code the algorithm directly. Or, at the very least, we'd be able to explain exactly how the nets work by looking at their weights. But now that we have our Babelfish, we still don't know how it really works in detail. This is ipso facto evidence that the task is very much not simple.

oinfoalgo 3 days ago | parent | prev | next [-]

I use AI as a tool to make digital art but I don't make "AI Art".

Imperfection is not the problem with "AI Art". The problem is that it is really hard to get the models not to produce the same visual motifs and clichés. People can spot AI art so easily because of those motifs.

I think Midjourney took this to another level with their human feedback. It became harder and harder not to produce the same visual motifs in the images, to the point that it is basically useless for me now.

dekimir 3 days ago | parent | prev | next [-]

> any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct

I hope you're right, but when I think about all those lawyers caught submitting unproofread LLM output to a judge... I'm not sure humankind is wise enough to avoid the slopification.

Davidzheng 4 days ago | parent | prev | next [-]

Isn't "But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills." only true if the false positive rate of the verifier is not much higher than the failure rate of the AI? At some point it's like asking a human to double check a calculator

crote 4 days ago | parent | prev | next [-]

> But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct.

The usual solution is to specify one language as binding, with that language taking priority if there turn out to be discrepancies between the versions.

griffzhowl 4 days ago | parent | prev [-]

You might still need humans in the loop for many things, but it can still have a profound effect if the work that used to be done by ten people can now be done by two or three. In the sectors that you mention, legal, graphic design, translation, that might be a conservative estimate.

There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?

Wowfunhappy 4 days ago | parent | prev | next [-]

> Now ebook reading is switching to AI.

IMO these are terrible; I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.

evanelias 4 days ago | parent | prev | next [-]

> Book and music album covers are often done with AI (even if this is most of the time NOT advertised)

This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".

It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.

spenrose 4 days ago | parent | prev | next [-]

Look at your examples. Translation is a closed domain; the LLM is loaded with all the data and can traverse it. Book and music album covers _don't matter_ and have always been arbitrary reworkings of previous ideas. (Not sure what “ebook reading” means in this context.) Math, where LLMs also excel, is a domain full of internal mappings.

I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.

apwell23 4 days ago | parent | prev | next [-]

> Book and music album covers are often done with AI

These suck. Things made with AI just suck big time. Not only are they stupid, they have negative value for your product.

I cannot think of a single purely AI-made video, song, or any other form of art that is any good.

All AI has done is falsely convince people that they can now create things that they had no skill to create before AI.

antirez 4 days ago | parent | next [-]

This is not inherent to AI, but a result of how the models have recently been trained (by preference agreement among many random users). Look for the latest Krea / Black Forest Labs paper on AI style. The "AI look" can be removed.

Songs right now are terrible. For videos, things are going to be very different once people can create full movies on their computers. Many will have access to the ability to create movies, a few will be very good at it, and this will likely change many things. Btw, this stupid "AI look" is only transient and not needed anywhere. It will be fixed, and AI image/video generation will be impossible to stop.

nprateem 4 days ago | parent [-]

The trouble is, I'm perfectly well aware I can go to the AI tools, ask them to do something, and they'll do it. So there's no point in me wasting time e.g. reading AI blog posts, as they'll probably just tell me what I've already read. The same goes for any media.

It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.

And unless it's immediately obvious to consumers that a high level of work has gone into it, it'll all be tarred with the same brush.

Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.

Either people will spaff out easy-to-generate media (which will therefore have no value due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but unless it's immediately obvious this has been done, it will suffer the same tarring by association.

The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song produced by a megastar is a hit, while the same song performed by an unsigned artist isn't. Or, as in the famous experiment, the same world-class violinist gets paid about $22 while busking versus selling out a concert hall at $100 per seat that same week.

This is the issue AI, no matter how good, will have to overcome.

HDThoreaun 4 days ago | parent | prev | next [-]

I've made a ton of songs I enjoy with Suno. They're not the greatest, but they're definitely not the worst either.

apetresc 4 days ago | parent | prev | next [-]

I mean, test after test has shown that the vast, vast majority of humans are woefully unable to distinguish good AI art made by SOTA models from human art, and in many/most cases actively prefer it.

Maybe you’re a gentleman of such discerningly superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it and therefore you say you always do. I dunno.

apwell23 4 days ago | parent | next [-]

You could've given me an example instead of this butthurt comment :)

apetresc 2 days ago | parent [-]

Sure! Here are the results of Scott Alexander's fairly large-scale "AI Art Turing Test": https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-ar...

Note that this was already almost a year ago, and the results should be even more one-sided now.

jordanpg 4 days ago | parent | prev [-]

Of course, your opinion may be subject to selection bias (i.e., you are only judging the art that you became aware was AI generated).

WesleyLivesay 4 days ago | parent [-]

Reminds me of the issue with bad CGI in movies: the only CGI you notice is the bad CGI; the good stuff just works. Same for AI-generated art: you see the bad stuff but don't realize it when you're looking at a good piece.

apwell23 4 days ago | parent [-]

Care to give me some examples from YouTube? I am talking about videos that people on YouTube connected with for the content of the video (not AI demo videos).

crote 4 days ago | parent | prev [-]

> Translations are more and more done exclusively with AI or with massive AI help

As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.

Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.

An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly positioning themselves as anti-AI.

int_19h 3 days ago | parent [-]

As someone whose native language isn't English, I disagree. SOTA models are scary good at translations, at least for some languages. They do make mistakes, but at this point it's the kind of mistake that someone who is non-native but still highly proficient in the language might make - very subtle word order issues or word choices that betray that the translator is still thinking in another language (which for LLMs almost always tends to be English because of its dominance in the training set).

I also disagree that it's "not even remotely close to reaching human-like quality". I have used these models to translate large chunks of books into languages I know, and the results are often better than what commercial translators produce.