| ▲ | thadt 3 days ago |
| Yesterday my wife burst into my office: "You used AI to generate that (podcast) episode summary, we don't sound like that!" In point of fact, I had not. After the security reporting issue, the next problem on the list is "trust in other people's writing". |
|
| ▲ | bob1029 3 days ago | parent | next [-] |
| I think one potential downside of using LLMs or exposing yourself to their generated content is that you may subconsciously adopt their quirks over time. Even if you aren't actively using AI for a particular task, prior exposure to their outputs could be biasing your thoughts. This has additional layers to it as well. For example, I actively avoid using em dashes or anything that resembles them right now. If I had no exposure to the drama around AI, I wouldn't even be thinking about this. I am constraining my writing simply to avoid the implication. |
| |
| ▲ | jerf 3 days ago | parent | next [-] | | I didn't make heavy use of it, but I did sometimes use "It's not X, it's Y" or some closely related variant. I've had to strike that from my writing, because whether or not it makes anyone else cringe, it's making me cringe now. My usage doesn't even match the ones the LLMs favor, my X & Y were typically full clauses with many words rather than the LLM's use of short, punchy X & Ys... but still. Close enough. Can't write it anymore. I'm still using bullet lists sometimes, as they have their place, and I'm hoping LLMs don't totally nuke them. | | |
| ▲ | ziml77 3 days ago | parent [-] | | I wasn't too averse to using the construction until I saw the latest Eddy Burback video where he had an AI encouraging the delusions he was faking for the sake of the video. The sheer number of times the AI said "it's not just X, it's Y" absolutely broke any willingness I had to ever say anything like that again. |
| |
| ▲ | code51 3 days ago | parent | prev | next [-] | | Exactly and this is hell for programming. You don't know whose style the LLM would pick for that particular prompt and project. You might end up with Carmack or maybe that buggy, test-failing piece of junk project on Github. | | |
▲ | Taek 3 days ago | parent | next [-] | | You can tell it whose style to copy; it's actually decent at following instructions like that. | |
▲ | noduerme 3 days ago | parent | next [-] | | It's not bad at following my own style. I have longstanding quirks like naming any string that will end up in a DB query with a "q_" in front of the variable name, and shockingly Claude picks up on those and mimics them. I wouldn't trust it to write anything without thorough review, but it's great at syntax. | |
| ▲ | dingnuts 3 days ago | parent [-] | | this isn't shocking, they are very good at repeating patterns in the immediate context. they're just not very good at anything else. your quirk is part of the immediate pattern |
| |
| ▲ | alchemism 3 days ago | parent | prev [-] | | My first experiments with LLM chat was to ask to produce text mimicking the style of a distinct, well-known author. It was also quite good at producing hybrid fusions of unique fictional styles, A + B = AB. |
| |
| ▲ | taneq 3 days ago | parent | prev [-] | | Can you just tell it it’s Carmack? :P |
| |
| ▲ | imiric 3 days ago | parent | prev | next [-] | | Isn't the alternative far more likely? These tools were trained on the way people write in certain settings, which includes a lot of curated technical articles like this one, and we're seeing that echoed in their output. There's no "LLM style". There's "human style mimicked by LLMs". If they default to a specific style, then that's on the human user who chooses to go with it, or, likely, doesn't care. They could just as well make it output text in the style of Shakespeare or a pirate, eschew emojis and bulleted lists, etc. If you're finding yourself influenced by LLMs—don't be. Here's why: • It doesn't matter. • Keep whatever style you had before LLMs. :tada: | | |
| ▲ | jerf 3 days ago | parent [-] | | There is no "LLM style". There is a "default LLM style", which is why I call it that. Or technically, one per LLM, but they seem to have converged pretty hard since they're all convergently evolving in the same environment. It's trivial to prompt it out of that style. Word about how to do it and that you should do it has gotten around in the academic world where the incentives to not be caught are high. So I don't call it "the LLM style". But if you don't prompt for anything in particular, yes, there is a very very strong "default LLM style". | | |
| |
| ▲ | keybored 3 days ago | parent | prev | next [-] | | Out of the mountains of content, one single symbol would provoke the ire of non-ascii reactionaries. https://news.ycombinator.com/item?id=44072922 https://news.ycombinator.com/item?id=45766969 https://news.ycombinator.com/item?id=45073287 | |
| ▲ | riskable 3 days ago | parent | prev [-] | | I suddenly have the urge to reply to this with a bulleted list where the bullets are emoji. |
|
|
| ▲ | jobigoud 3 days ago | parent | prev | next [-] |
| Already a big problem in art, where people go on witch hunts over what they think are signs of AI use. It's sad because people who are OK with AI art still enjoy the human art just the same. Somehow their visceral hate of AI art has managed to ruin human art for them as well. |
| |
▲ | dingnuts 3 days ago | parent | next [-] | | AI visual output mimics art sufficiently well that it is now more difficult to identify authenticity and humanity, which are important for the human connection audiences want from art. AI outputs mimicking art rob audiences of the ability to appreciate art on its own in the wild, without further markers of authenticity, which steals joy from a whole generation of digital artists who grew up sharing their creativity with each other. If you lack the empathy to understand why AI art-like outputs are abhorrent, I hope someone wastes a significant portion of your near future with generated, meaningless material presented to you as something valuable and time-consuming to make, and you gain nothing from it, so that you can understand the problem for yourself first hand. | |
▲ | whywhywhywhy 3 days ago | parent | prev | next [-] | | Ultimately this will only ever harm the human artists accused of it. AI artists can just say "yeah, I did, so what," defusing the criticism. | |
| ▲ | robby_w_g 3 days ago | parent | prev [-] | | If there wasn't global-scale theft of art and content or if LLMs could produce something better than an inferior facsimile, I bet there would be less backlash. But instead we had a 'non-profit' called 'Open'AI that irresponsibly unleashed this technology on the world and lied about its capabilities with no care of how it would affect the average person. |
|
|
| ▲ | acedTrex 3 days ago | parent | prev | next [-] |
| I blogged about this fundamental demolition of trust a few months ago. HN discussed it here https://news.ycombinator.com/item?id=44384610 The responses were a surprisingly mixed bag. What I thought was a very common sense observation had some heavy detractors in those threads. |
| |
| ▲ | gdulli 3 days ago | parent [-] | | You're on a forum full of people trying to profit from this tech. In that context the pushback is obvious. |
|
|
| ▲ | riskable 3 days ago | parent | prev [-] |
| Exposure to AI leads to people writing like AI. Just like when you're hanging out in certain circles, you start to talk like those people. It's human nature. |