circuit10 6 days ago

Is it just me, or does the formatting of this feel like ChatGPT (numbered lists, "Key Takeaways", and just the general phrasing of things)? It's not necessarily an issue if you checked over it properly, but if you did use it, it might be good to mention that for transparency, because people can tell anyway and it might feel slightly otherwise

(or maybe you just have a similar writing style)

segfault22 6 days ago | parent | next [-]

Yeah, the core was written by me; I just used an LLM to fix my broken English.

benreesman 6 days ago | parent | next [-]

Don't pay any attention to people giving you shit for using translation software. A lot of us sometimes forget that the whole world knows a little English, and most of us native speakers have the ridiculous luxury of getting away with being too lazy to learn a few other languages.

Fade_Dance 6 days ago | parent | next [-]

I think it's good form to mention it as a little disclaimer, just so people don't take it the wrong way. Just write: (this post was originally written by me, but formatted and corrected with an LLM since English is not my primary language).

From what I've seen, people generally do not like reading generated content, but every time I've seen the author come back and say "I used it because English isn't my main language", the community takes back the criticism. So I'd just be upfront about it and get ahead of it.

6 days ago | parent | next [-]
[deleted]
drewbitt 6 days ago | parent | prev [-]

That was already added before this reply.

benatkin 6 days ago | parent | prev | next [-]

> using translation software

It's clear that this isn't what OP was doing. The LLM was writing, not merely translating. dang put it well:

> we want people to speak in their own voice

https://news.ycombinator.com/item?id=44704054

Latty 6 days ago | parent | prev | next [-]

Part of the problem with using LLMs for translation is precisely that they alter the tone and structure of what you give them, rewriting it in the LLM's clichés and style, and it's unsurprising that people see that and just assume it's completely generated slop. It's unfortunate, and I would probably try to use LLMs if English weren't my first language, but I don't think it's as simple as "using translation software". I've not seen people called out that way for dodgy Google Translate translations, for example; it's a problem specific to LLMs and the fundamental issues with the output they produce.

bogwog 6 days ago | parent | next [-]

LLM writing style does to the brain what Microsoft Sam does to the ears.

goopypoop 6 days ago | parent | prev [-]

my nipples explode with delight!

6 days ago | parent [-]
[deleted]
circuit10 6 days ago | parent | prev | next [-]

I wasn't annoyed about it; I just said it might be good to mention because people will notice anyway, and at this point there's enough AI slop around that it can make people automatically ignore it, so it would be good to explain that. I'm surprised I got downvotes and pushback for this. I thought it was a common view that it's good to disclose this kind of thing, and I thought I was polite about it

circuit10 6 days ago | parent [-]

To be clear, I think this has good information and I upvoted it. It's just that, as someone else said, it's good to get ahead of anyone who won't like it by explaining why, and it can also feel a little disingenuous otherwise (I don't like getting other people to phrase things for me either, for that reason, but maybe that's just me)

throwaway83977 6 days ago | parent | prev | next [-]

God forbid people actually learn the language they're trying to communicate in. I'd much rather read someone's earnest but broken English than LLM slop anyway.

lynndotpy 6 days ago | parent | prev [-]

It's disingenuous to call LLMs "translation software", and it's bad advice to say "don't pay attention to those people".

Even if you don't agree with it, publishing AI-generated content will exclude from one's audience the people who won't read AI-generated content. It is a tradeoff one has to decide whether or not to make.

I'm sympathetic to someone who has to decide whether to publish in "broken English" or to run it through the latest in grammar software. For my time, I far prefer the former (and have been consuming "broken English" for a long while; it's one of the beautiful things about the internet!)

atdt 6 days ago | parent | prev | next [-]

Your content is great, and the participation of non-native English speakers in this community makes it better and richer.

userbinator 6 days ago | parent | prev [-]

I'd rather you write in broken English than filter it through an LLM. At least that way I know I'm reading the thoughts of a real human rather than something that may have its meaning slightly perturbed.

marksomnian 6 days ago | parent | prev | next [-]

> might be good to mention that for transparency, because people can tell anyway and it might feel slightly otherwise

Devil's advocate: why does it matter (apart from "it feels wrong")? As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?

chrisnight 6 days ago | parent | next [-]

It is relevant because it wastes time and adds nothing of substance. An AI can only output as much information as was put into it. Using it to write a text therefore just makes the text unnecessarily verbose.

The last few sections could have been cut entirely and nothing would have been lost.

Edit: While I was writing this comment, the author removed the two sections I referred to above (and added an LLM acknowledgement). To the author: thank you for reducing the verbosity.

6 days ago | parent [-]
[deleted]
lynndotpy 6 days ago | parent | prev | next [-]

AI-generated content is rarely published with the intention of being informative.* Something being apparently AI-generated is a strong heuristic that it isn't worth reading.

We've been reading highly informative articles with "bad English" for decades. It's okay and good to write in English without perfect mastery of the language. I'd rather read the source than the output of a txt2txt model.

* edit: I want to clarify that I don't mean to imply the author has ill will or intent to misinform. Rather, I intend to describe the pitfalls of using an LLM to adapt one's text, inadvertently adding a very strong flavor of spam to something that is not spam.

davrosthedalek 6 days ago | parent [-]

True, but there are many more people who speak no English, or speak it so badly that an article would be hard to understand. I face this problem now with the classes I teach. It's an electronics lab for physics majors. They have to write reports about the experiments they are doing. For a large fraction, this task is extraordinarily hard, not because of the physics, but because of writing in English. For them, LLMs can be a gift from heaven. On the other hand, how do I make sure that the text is not fully LLM-generated? If anyone has ideas, I'm all ears.

lynndotpy 6 days ago | parent [-]

I don't have any ideas to help you there. I was a TA at a university, but that was before ChatGPT, and students were expected to provide answers in English. For non-native English speakers, one of the big reasons to attend an English-speaking university was to get experience speaking and reading English.

But I also think it's a different thing entirely. It's different being the sole reader of text produced by your students (with responsibility to read the text) compared to being someone using the internet choosing what to read.

jdiff 6 days ago | parent | prev | next [-]

Because AI use is often a strong indicator of a lack of soundness. Especially if it's used to the point where its structural quirks (like a love for lists) shine through.

KapKap66 6 days ago | parent | prev | next [-]

I just wanna read stuff written by people and not bots

simple as

IncreasePosts 6 days ago | parent | prev | next [-]

Because AI isn't so hot on the "I" yet, and if you ask it to generate this kind of document it might just make stuff up. And there is too much content on the internet to delve deep into whatever you come across to understand its soundness. Obviously you need to do it at some point with some things, but few people do it all the time with everything.

Pretty much everyone has heuristics for content that feels like low quality garbage, and currently seeing the hallmarks of AI seems like a mostly reasonable one. Other heuristics are content filled with marketing speak, tons of typos, whatever.

pessimizer 6 days ago | parent | prev | next [-]

> As long as the conclusions are sound

I can't decide to read something because the conclusions are sound. I have to read the entire thing to find out if the conclusions are sound. What's more, if it's an LLM, it's going to try its gradient-following best to make unsound reasoning seem sound. I have to be an expert to tell that it is a moron.

I can't put that kind of work into every piece of worthless slop on the internet. If an LLM says something interesting, I'm sure a human will tell me about it.

The reason people are smelling LLMs everywhere is because LLMs are low-signal, high-effort. The disappointment one feels when a model starts going off the rails is conditioning people to detect and be repulsed by even the slightest whiff of a robotic word choice.

edit: I feel like we discovered the direction in which AGI lies but we don't have the math to make it converge, so every AI we make goes completely insane after being asked three to five questions. So we've created architectures where models keep copious notes about what they're doing, and we carefully watch them to see if they've gone insane yet. When they inevitably do, we quickly kill them, create a new one from scratch, and feed it the notes the old one left. AI slop reads like a dozen cycles of that. A group effort, created by a series of new hires, silently killed after a single interaction with the work.

farmeroy 6 days ago | parent [-]

I want this to be the plot of Blade Runner: Deckard must hunt down errant replicants before they go completely insane due to context limits

Alifatisk 6 days ago | parent | prev | next [-]

Because it helps me decide if I should skim through or actually read it

unethical_ban 6 days ago | parent | prev | next [-]

Theory: Using AI and having an AI voice makes it less likely that the conclusions are sound.

circuit10 6 days ago | parent | prev | next [-]

Looks like I missed a word here (probably “disingenuous”)

slacktivism123 6 days ago | parent | prev [-]

> As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?

TL;DR: Because of the bullshit asymmetry principle. Maybe the conclusions below are sound, have a read and try to wade through ;-)

Let us address the underlying assumptions and implications in the argument that the provenance of a report, specifically whether it was written with the assistance of AI, should not matter as long as the conclusions are sound.

This position, while intuitively appealing in its focus on the end result, overlooks several important dimensions of communication, trust, and epistemic responsibility. The process by which information is generated is not merely a trivial detail, it is a critical component of how that information is evaluated, contextualized, and ultimately trusted by its audience. The notion that it feels wrong is not simply a matter of subjective discomfort, but often reflects deeper concerns about transparency, accountability, and the potential for subtle biases or errors introduced by automated systems.

In academic, journalistic, and technical contexts, the methodology is often as important as the findings themselves. If a report is generated or heavily assisted by AI, it may inherit certain limitations, such as a lack of domain-specific nuance, the potential for hallucinated facts, or the unintentional propagation of biases present in the training data. Disclosing the use of AI is not about stigmatizing the tool, but about providing the audience with the necessary context to critically assess the reliability and limitations of the information presented. This is especially pertinent in environments where accuracy and trust are paramount, and where the audience may need to know whether to apply additional scrutiny or verification.

Transparency about the use of AI is a matter of intellectual honesty and respect for the audience. When readers are aware of the tools and processes behind a piece of writing, they are better equipped to interpret its strengths and weaknesses. Concealing or omitting this information, even unintentionally, can erode trust if it is later discovered, leading to skepticism not just about the specific report, but about the integrity of the author or institution as a whole.

This is not a hypothetical concern, there are numerous documented cases (eg in legal filings https://www.damiencharlotin.com/hallucinations/) where lack of disclosure about AI involvement has led to public backlash or diminished credibility. Thus, the call for transparency is not a pedantic demand, but a practical safeguard for maintaining trust in an era where the boundaries between human and machine-generated content are increasingly blurred.

MangoCoffee 6 days ago | parent | prev [-]

Who cares? It's like using a spell checker. Why does it matter?

happycodinggg 4 days ago | parent [-]

Yeah man, next time VSCode crashes and recovers your unsaved work, just remember: it knows way too much about you.

This kind of overreaction is exactly why real privacy concerns get ignored. It misses the point and just ends up misleading devs who actually care about meaningful issues.