| ▲ | zarzavat 4 days ago |
| The assumption here is that LLMs will never pass the Turing test for copywriting, i.e. AI writing will always be distinguishable from human writing. Given that models that produce intelligible writing didn't exist a few years ago, that's a very bold assumption. |
|
| ▲ | keiferski 4 days ago | parent | next [-] |
| No, I’m sure they will at some point, but I don’t think that eliminates the actual usefulness of a talented writer. It just makes unique styles more valuable, raises the baseline acceptable copy to something better (in the way that Bootstrap increased website design quality), and shifts the role of writer to more of an editor. Someone still has to choose what to prompt and I don’t think a boilerplate “make me a marketing plan then write pages for it” will be enough to stand out. And I’d bet that the cyborg writers using AI will outcompete the purely AI ones. (I also was just using it as a point to show how being identified as AI-made is already starting to have a negative connotation. Maybe the future is one where everything is an AI but no one admits it.) |
| |
| ▲ | zarzavat 4 days ago | parent [-] | | Why couldn't an AI do all of that? > And I’d bet that the cyborg writers using AI will outcompete the purely AI ones. In the early days of chess engines there were similar hopes for cyborg chess, whereby a human and engine would team up to be better than an engine alone. What actually happened was that the engines quickly got so good that the expected value of human intervention was negative: the engine could crunch far more information than the human ever could. Marketing is also a kind of game. Will humans always be better at it? We have a poor track record so far. | | |
| ▲ | CuriouslyC 4 days ago | parent | next [-] | | Chess is objective, stories and style are subjective. Humans crave novelty, fresh voices, connection and layers of meaning. It's possible that the connection can be forged and it can get smart enough to bake layers of meaning in there, but AI will never be good at bringing novelty or a fresh voice just by its very nature. | | |
| ▲ | dingnuts 4 days ago | parent [-] | | LLMs are frozen in time and do not have experiences so there's nothing to relate to. I'd pay extra for writing with some kind of "no AI used" certification, especially for art or information |
| |
| ▲ | cobbzilla 4 days ago | parent | prev | next [-] | | No matter what you ask AI to do, it’s going to give you an “average” answer. Even if you tell it to use a very distinct, specific voice and write in a very specific tone, it’s going to give you the “average” version of the specific voice and tone you’ve asked for. AI is the antithesis of creativity and originality. This gives me hope. | | |
| ▲ | IX-103 4 days ago | parent | next [-] | | That's mostly true of humans though. They almost always give average answers. That works out because
1) most of the work that needs to be done is repetitive rather than new, so average answers are fine
2) the solution space humans have explored is not convex, so an average of known answers can still land in unexplored territory most of the time | | |
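The non-convexity point can be made concrete with a toy sketch (every name and number here is invented for illustration): if all the explored answers lie on a ring, a non-convex set, then their average falls at the center of the ring, a point nobody has explored.

```python
import math

def explored_answers(n=12, radius=1.0):
    """n 'explored' answers spaced evenly on a circle of the given radius."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def average(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

points = explored_answers()
avg = average(points)
dist_from_origin = math.hypot(*avg)

# Every explored answer sits at distance 1.0 from the origin, but their
# average sits at the origin, off the explored ring entirely.
print(round(dist_from_origin, 6))  # ~0.0
```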
| ▲ | cobbzilla 4 days ago | parent [-] | | Absolutely! You can communicate without (or with minimal) creativity. It’s not required in most cases. So AI is definitely very useful, and it can ape creativity better and better, but it will always be “faking it”. |
| |
| ▲ | chuckadams 4 days ago | parent | prev [-] | | What is creative or original thought? You are not the first person to say this after all. | | |
| ▲ | cobbzilla 4 days ago | parent [-] | | Not being 100% algorithmically or mathematically derived is a good start. I’m certain there’s more but to me this is a minimum bar. | | |
| ▲ | int_19h 3 days ago | parent [-] | | If your brain is not running algorithms (which are ultimately just math regardless of the compute substrate), how do you imagine it working then, aside from religious woo like "souls"? | | |
| ▲ | chuckadams 2 days ago | parent | next [-] | | I dunno, I think artificiality is a pretty reasonable criterion to go by, but it doesn't seem at all related to originality, nor does originality really stack up when we too are also repeating and remixing what we were previously taught. Clearly we do a lot more than that as well, but when it comes to defining creativity, I don't think we're any closer to nailing that Jello to the tree. | |
| ▲ | cobbzilla 2 days ago | parent | prev [-] | | Sure, and physics is all strings and pure math at the most fundamental level; that kind of misses the point. | | |
| ▲ | int_19h a day ago | parent [-] | | If that's your take, then you need to explain how the gap between this fundamental level and the level that you're concerned with is different in those two cases. |
|
|
|
|
| |
| ▲ | slowlyform 4 days ago | parent | prev | next [-] | | I tried asking ChatGPT for brainrot speech and all the examples it gave me sound very different from what the new kids on the internet are using. Maybe language will always evolve faster than whatever amount of data OpenAI can train their models with :). | |
| ▲ | oblio 4 days ago | parent | prev [-] | | Intellectuals have a strong fetish for complete information games such as chess. Reality and especially human interaction are basically the complete opposite. |
|
|
|
| ▲ | wybiral 4 days ago | parent | prev | next [-] |
AI will probably pass that test. But art is about communicating the more subtle things that we humans experience. AI will not be out in society living as a person and gaining experience to train on. So unless we write those experiences down somewhere for it to regurgitate... it will always feel lacking in the subtlety of a real human writer. It depends on us creating content with context for it to mimic someone who lived those stories. EDIT: As in, it can make really good derivative works. But it will always lag behind a human who has been in the real-life situations of the time and experienced being human throughout them. It won't be able to hit the subtle notes that we crave in art. |
| |
| ▲ | int_19h 3 days ago | parent [-] | | > AI will not be out in society being a person and gaining experience to train on. It can absolutely do that, even today - you could update the weights after every interaction. The only reason why we don't do it is because it's insanely computationally expensive. |
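A toy sketch of that "update the weights after every interaction" idea, i.e. online learning (every name and number here is invented; real continual training of an LLM works on the same principle at vastly greater scale and cost): a one-parameter model takes a single gradient step after each example in a stream.

```python
def sgd_step(w, x, y, lr=0.1):
    """One gradient step on squared error for the model y_hat = w * x."""
    y_hat = w * x
    grad = 2 * (y_hat - y) * x   # d/dw of (w*x - y)^2
    return w - lr * grad

# Stream of "interactions"; the underlying relationship is y = 3x.
stream = [(1.0, 3.0), (2.0, 6.0), (0.5, 1.5), (1.5, 4.5)] * 10

w = 0.0
for x, y in stream:
    w = sgd_step(w, x, y)   # learn a little from each interaction

print(round(w, 2))  # converges toward 3.0
```

The expensive part for LLMs isn't the idea but the scale: each such step would touch billions of parameters rather than one.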
|
|
| ▲ | j45 4 days ago | parent | prev [-] |
| Today’s models are tuned to output the average quality of their corpus. This could change with varying results. What is average quality? For some it’s a massive upgrade. For others it’s a step down. For the experienced it’s seeing through it. |
| |
| ▲ | zarzavat 4 days ago | parent [-] | | You're absolutely right, but AIs still have their little quirks that set them apart. Every model has a faint personality, but since that personality is “mass produced”, any distinctive personality or writing style makes it easier to detect as AI rather than harder (e.g. em dashes). But reducing personality doesn't help either, because then the writing becomes insipid: slop. Human writing has more variance, but it's not "temperature" (i.e. token-level variance), it's per-human variance: every writer has their own individual style. While it's certainly possible to achieve a unique writing style with LLMs through fine-tuning, it's not cost-effective for something like ChatGPT, so the only control is the system prompt, which is a blunt instrument. | | |
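The temperature-vs-per-writer distinction can be sketched with toy numbers (the vocabulary and logits below are invented for illustration): temperature only flattens or sharpens one model's next-token distribution, while a different writer amounts to a different distribution altogether.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, rescaled by a sampling temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["moreover", "also", "plus"]
model_logits = [2.0, 1.0, 0.1]    # one "average" model's preferences
writer_logits = [0.1, 1.0, 2.0]   # a writer with a different voice

cool = softmax(model_logits, temperature=0.5)
hot = softmax(model_logits, temperature=2.0)
writer = softmax(writer_logits)

# Temperature changes how peaked the distribution is...
print(max(cool) > max(hot))                           # True
# ...but not which token the model favours.
print(cool.index(max(cool)) == hot.index(max(hot)))   # True
# A different writer favours a different token outright.
print(vocab[writer.index(max(writer))])               # plus
```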
| ▲ | j45 4 days ago | parent [-] | | It’s not a personality. There is no concept of replicating a person, personality, or behaviours, because the software is not a simulation of a living being. It is a query/input and response format, which can be shaped to simulate a conversation. It can act like a search engine that responds to the inputs provided, plus the system, account, project, and user prompts (as constraints/filters) applied before the current turn. The result can sure look like magic. It’s still a statistically likely response derived from the average of its training corpus. Take that average, add a user with their own varying range on top, and the quality of the result varies with them. LLMs can have many ways to explain the same thing; sometimes more than one is valid, other times not. |
|
|