| ▲ | rorylaitila 4 hours ago | |
Yes, I noticed this as well. I was recently writing up a landing page for our new studio. Emotion-filled. Telling a story. I sent it through Grok to improve it, and it removed all of the character no matter what prompt I gave. I'm not a great writer, but I think those rough edges are necessary to convey the soul of the concept. I think AI writing is better used for ideation and "what have I missed?", and then you write out the changes yourself.
| ▲ | gnutrino 2 hours ago | parent | next [-] | |
I've found LLMs to be terrible at ideation. I've been using GPT 5.x to come up with ideas and plot lines for a Dungeon World campaign I've been running. I'm no fantasy author, and my prose leaves much to be desired, but the stuff the LLM comes up with is so mind-numbingly bland and generic. I've given up on having it write descriptions of any characters or locations; I just use it for very general ideas and plot lines, then come up with the rest of the details on the fly myself. I mainly do it to save time, but I throw away 50% of the "ideas" because they make no sense or are really lame.

What I have found LLMs helpful with is writing up fun post-session recaps I share with the adventurers. I recap in my own words what happened during the session, then have the LLM structure it into a "fun to read" narrative style. ChatGPT seems to prefer a Sanderson-style jokey tone, but I could probably tailor this. Then I go through it and tweak some of the boring/bland bits. The end result is really fun to read, and it took 1/20th the time it would have taken me to write it all out myself. The LLM would never have been able to come up with the unique and fun story lines, but it is good at giving an existing story some narrative flair in a short amount of time.
| ▲ | co_king_5 4 hours ago | parent | prev [-] | |
> I think AI writing is better used for ideation

It shocks me when proponents of AI writing for ideation aren't concerned with *Metaphoric Cleansing* and *Lexical Flattening* (to use two of the terms defined in the article).

Doesn't it concern you that the AI's explanation of a concept may be only a highly distorted caricature of how that concept is actually understood by those who use it fluently?

Don't get me wrong, I think LLMs are very useful as a sort of search engine for yet-unknown terms. But once you know *how* to talk about a concept (meaning you understand enough jargon to do traditional research), I find I'm far better off tracking down books and human-authored resources than trying to get the LLM to regurgitate its training data.