jaysonelliot 10 hours ago

You should use your own words. It might seem that a tool like Grammarly is just an advanced spellcheck, but what it's really doing is replacing your personal style of writing with its own.

It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

bruckie 9 hours ago | parent | next [-]

My elementary school kid came home yesterday and showed me a piece of writing that he was really proud of. It seemed more sophisticated than his typical writing (like, for example, it used the word "sophisticated"). He can be precocious and reads a ton, though, so it was still plausible that he wrote it. I asked him some questions about the writing process to try to tease out what happened, and he said (seemingly credibly) that he hadn't copied it from anywhere or referenced anything. He also said he didn't use any AI tools. After further discussion, I found out that Google Docs Smart Compose (suggested-next-few-words feature) is enabled by default on his school-issued Chromebook, and he had been using it. The structure of the writing was all his, but he said he sometimes used the Smart Compose suggestions (and sometimes didn't). He liked a lot of the suggestions and pressed tab to accept them, which probably bumped up the word choice by several grade levels in some places.

So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.

edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed.

Terr_ 9 hours ago | parent | next [-]

To rationalize my gut-feelings on this, I think it comes down to the spectrum between:

1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result.

2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP.

The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common.

zahlman 8 hours ago | parent | next [-]

The analogy with tab-completion of code seems apt. At first you blindly accept something because it has at least as good a chance of working as what you would have typed. Then you start to pay attention, and critically evaluate suggestions. Then you quickly if not blindly accept most suggestions, because they're clearly what you would have written anyway (or close enough to not care).

The phenomenon was observed in religious philosophy over a millennium ago (https://terebess.hu/zen/qingyuan.html).

abustamam 8 hours ago | parent [-]

Tab completion was so novel back when full e2e AI tooling was not really effective.

Now that it is, I just turn tab completion off totally when I write code by hand. It's almost never right.

skydhash 7 hours ago | parent [-]

Emacs has completion (but you can bind it to tab). The nice thing is that you can change the algorithm that selects which options come up. I haven't set it to trigger automatically, but by the time I press the shortcut, there's either only one option or a small set.

bruckie 8 hours ago | parent | prev | next [-]

From his description, it sounded like this was more of #1. He cared a lot about the topic he was writing about, and has high standards for himself, so it's very likely that he would have considered and rejected poor suggestions.

I have mixed feelings about it. On the one hand, you're right: carefully considering suggestions can be a learning opportunity. On the other hand, approval is easier than generation, and I suspect that without frequently flexing the "come up with it from scratch" muscle, his mind won't develop as much.

yellowapple 5 hours ago | parent | prev [-]

#1 would be a net improvement over the status quo IMO. Seems like a great way for people to expand their vocabularies organically.

lossyalgo 5 hours ago | parent [-]

That reminds me of one of the biggest missing features in Wordle, IMO: they never give a definition of the word after the game is finished! I usually do end up googling words I don't know (which is quite often), but I'm guessing I'm one of the few who goes to the trouble. I've even written to The New York Times a couple of times to suggest adding a short definition at the end, as I honestly feel a ton of people could up their vocabulary game, and it surely could be added with minimal effort (considering they even added a Discord multiplayer mode).

Terr_ 2 minutes ago | parent | next [-]

[delayed]

yellowapple an hour ago | parent | prev [-]

That's a brilliant idea and now that you've mentioned it it seems like a rather glaring omission.

comboy 9 hours ago | parent | prev | next [-]

Oh how I despise these suggestions. You sometimes look for a way to express something and you are on the verge of giving the world something truly original, but as soon as your brain sees the suggestion it goes "oh yeah that fits"

SchemaLoad 7 hours ago | parent | next [-]

I disabled them immediately, it feels like the tech version of the ADHD person who keeps interrupting you with what they think you are trying to say. Even if the suggestion is correct, it saves you at most 2 seconds at the cost of interrupting you constantly.

Terr_ 9 hours ago | parent | prev | next [-]

True! There's an important cybernetic aspect to all this, where an automatic suggestion can be an interruption, sometimes worse if the suggestion is decent.

A certain amount of friction is necessary, at least if the goal is to help the person learn or make something original.

lossyalgo 4 hours ago | parent | prev | next [-]

I look forward to reading studies in 10 years how we all became stupider thanks to this "feature". One step closer to the movie Idiocracy.

TimTheTinker 9 hours ago | parent | prev | next [-]

GK Chesterton would have something brilliant to say about the inauthenticity of it all or something.

jrockway 9 hours ago | parent | prev | next [-]

I see the suggestions and then choose something different anyway. I don't want to use one of the top 3 most popular responses to an email from a friend. Even if it's something transactional.

JumpCrisscross 9 hours ago | parent | prev [-]

> I despise these suggestions

As an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them.

Gibbon1 9 hours ago | parent [-]

A friend of mine was an English teacher. She quit because she wasn't going to waste her time 'grading' 30 essays written by AI.

Anyway before that she HATED the thesaurus. And she could tell when students were using it to make their writing more fancy pants.

zahlman 8 hours ago | parent | next [-]

One problem I see is that LLMs have a more nuanced... well, model of how words and their meanings relate to each other than a dead-tree thesaurus could ever present, what with its simplified "synonym" and "antonym" categories. Online versions try to give some similarity metrics, but don't get into the nuance. (It's not as if someone who takes either approach would want to spend the time reading and understanding that, anyway.)

tigen 2 hours ago | parent | prev | next [-]

In-class essays impossible? Pencil to paper?

JumpCrisscross 8 hours ago | parent | prev [-]

> she could tell when students were using it to make their writing more fancy pants

I had two teachers who called us out on this, and actually coached us on our writing, and I remember them fondly. (They were also fans of in-class essaying.)

The others wanted to count big words.

9 hours ago | parent | prev [-]
[deleted]
ma2kx 8 hours ago | parent | prev | next [-]

As a non-native English speaker, my own words wouldn't be in English. If I express myself in English, I soon struggle for the right words. On the other hand, I think when I read English text I'm quite capable of sensing the nuances. So it feels like when I auto-translate my text to English and then read it against the original and make some corrections, I can express my thoughts much better.

comboy 9 hours ago | parent | prev | next [-]

My broken English now officially bumps my comments up instead of down. Sweet.

zahlman 8 hours ago | parent [-]

For what it's worth, I had a quick look through your comment history and your English seems just fine to me as a native speaker (at least for informal communication).

ziml77 7 hours ago | parent [-]

People who don't have English as their first language often seem to underestimate how good their English actually is. I wonder if it's because their reference point is formal English rather than the much more forgiving English we use in casual day-to-day conversation.

lamontcg 9 hours ago | parent | prev | next [-]

Books and newspapers have had editors for centuries. It is just code review for the written word.

[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?]

MeetingsBrowser 9 hours ago | parent [-]

Editors are mostly tasked with maintaining a consistent style and standard.

There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words.

lamontcg 9 hours ago | parent [-]

I don't personally use AI/LLMs for any informal writing here or on Reddit, etc. But I think it's pretty weird to be overly concerned about people, particularly ESL speakers, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM-regurgitated information on topics they personally know nothing about. If the information comes from the human, and the machine is only tweaking the style and tone to make it better received and fix the bugs in it, then I don't understand why you're telling me I need to care, or why you're gatekeeping it. It's also unlikely to be very detectable, and this thread seems to serve only a performative use, giving people something to get offended about.

pseudalopex 9 hours ago | parent [-]

Other tools to clean up writing are allowed. They did not tell you you must care; you told them they must not. The submission's use was to tell you and others that LLM-generated tone is not more acceptable.

lamontcg 8 hours ago | parent [-]

Well good luck detecting it.

davorak 6 hours ago | parent [-]

If it never gets in the way of humans communicating, it probably won't be an issue. That's my reading of the rule and dang's comments:

> HN is for conversation between humans.

If it is enhancing that instead of detracting and wasting people's time, it does not seem to be against the spirit of the rules.

yellowapple 5 hours ago | parent [-]

Except the letter of the rule makes it verboten even “if it never gets in the way of humans communicating”.

davorak 3 hours ago | parent [-]

> HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise.

That is from dang's post in: https://news.ycombinator.com/item?id=47342616

That whole post is clarifying for the intent of the new rule(s).

yellowapple an hour ago | parent [-]

The problem with “spirit-of-the-law” is that having rules be subject to discretion is a pretty clear avenue for discrimination and abuse. Not as big of a deal for an Internet forum as it would be for, say, a country's legal code and the enforcement thereof, but the lack of a clear standard for a rule makes that rule hard to follow and harder to enforce impartially.

NewsaHackO 9 hours ago | parent | prev | next [-]

>It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better."

When it comes to spelling and grammar, it is definitely not true that it is better for a poster to communicate as an individual. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more attention someone has to pay to understand your post, the fewer people will be willing to put in that effort.

RevEng 2 hours ago | parent [-]

Exactly. Tell that to whoever is grading your next paper, or reviewing your resume, or watching your presentation. People are judged by their linguistic ability even in cases where it shouldn't matter. It's a well known heuristic bias. It's no surprise that many of the people here denying it are themselves quite literate.

mjg2 9 hours ago | parent | prev | next [-]

I was just re-reading the passage from Plato's "The Phaedrus" on writing & the "art" of the letter for an essay I'm working on, and your remark is salient for this discussion on LLM-style AI and social media at large.

dbacar 9 hours ago | parent [-]

RIP Robert M.Pirsig.

llbbdd 8 hours ago | parent [-]

Oof, I haven't finished Zen yet. I didn't know he was gone. RIP

davebranton 7 hours ago | parent | prev | next [-]

Precisely. As I wrote in my assessment of AI for my workplace:

"Your unique human voice is more valuable than a thousand prompt-driven LLM doggerels."

jjk166 7 hours ago | parent | prev | next [-]

> It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

This is the opposite of how language works. You want people to understand the idea you're trying to communicate, not fixate on the semantics of how you communicated. Language is like fashion - you only want to break the rules deliberately. If AI or an editor or whatever changes your writing to be more clear and correct, and you don't look at it and say "no, I chose that phrasing for a reason" then the editor's version is much more likely to be understood correctly by the recipient.

Aldipower 9 hours ago | parent | prev | next [-]

That's true, but on the flip side I regularly get downvoted because my English is not the best, to say it mildly. So now I need to be really careful to either a) write in good English or b) not be recognised as an LLM-corrected version of my English. Where is the line? I don't think I should be downvoted for my English, but that is the reality.

Edit: I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*ce isn't ideal at all. :-D Anyway, I have enough karma, so I can last quite a while..

ssl-3 9 hours ago | parent | next [-]

It goes both ways.

The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.

Which is absurd, since I don't use the bot for writing at all.

8 hours ago | parent | prev | next [-]
[deleted]
colpabar 9 hours ago | parent | prev | next [-]

> I shouldn't be downvoted for my English I think, but that is the reality.

How do you know? Is it possible the downvoters just didn't like what you said?

phs318u 9 hours ago | parent [-]

It’s possible of course but reading all the comments from various non-native English speakers here it seems like a common story. It may indicate a subliminal bias in readers (most of whom are presumably American).

yorwba 9 hours ago | parent [-]

Note that those comments are written in perfectly understandable English. Further note how often you come across comments written in perfectly understandable English, but they're downvoted anyway.

It suggests a bias in writers to assume that people would agree with them if only they could express their thoughts accurately.

9 hours ago | parent | prev [-]
[deleted]
Teever 9 hours ago | parent | prev | next [-]

But the problem is that people with poor written language / english skills are 'competing' with people who have superb skills in this domain.

There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies.

Meanwhile you have someone in a developing country who just got off a brutal twelve hour shift doing manual labour in the sun who wants to participate in the conversation with an insightful message that they bang-out on a shitty little cellphone onscreen keyboard while riding on bumpy public transit.

You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

What's the solution for that?

magicalist 8 hours ago | parent | next [-]

> What's the solution for that?

Remember that you're on a message board and you're not actually 'competing' for anything?

Teever 8 hours ago | parent [-]

This is a perfect example of what I'm talking about.

I knew someone was going to comment on my use of that word, despite my putting it in quotes, which was intended to let the reader know I meant it as an approximation of my meaning.

When I say competing, I mean competing in the space of ideas here. There is a ranking system here that raises or lowers the visibility and prominence of your comments, and it's based on upvotes by other users. For better or worse, people penalize comments with grammatical errors over ones without, and that affects how much exposure other users have to the ideas people write and how much interaction they get.

If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct?

davorak 6 hours ago | parent | next [-]

> If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct?

The main problem is that AI editing consistently seems to make things worse. Take a look at the examples linked in dang's comment: https://news.ycombinator.com/item?id=47342616

In the ones I read, the AI editing either hurts or would need to be much, much better to help.

NewsaHackO 8 hours ago | parent | prev [-]

No, I get your point. Unfortunately, a lot of people here act high and mighty, as if they were posting for some altruistic reason. The reason why I, you, and everyone else posts here is the human one: we want others to engage with our posts. To do that, you have to put your best foot forward, which includes making sure the spelling and grammar of your posts are correct. While I do not use an LLM for this, I think it is valid to use these tools to make sure nothing gets in the way of whatever point you are trying to make.

Teever 8 hours ago | parent [-]

> In order to do that, you have to put your best foot forward

In English. You have to put your best foot forward in English. And in your environment with the resources you have at your disposal.

For example, I'm currently engaging with you between steps in a chemistry process that's happening under the fume hood next to me, while wearing a respirator, a muggy plastic chemical-resistant gown, and disposable nitrile gloves.

I am absolutely certain that these conditions are different from the ones I would need to 'put my best foot forward' in this discussion. I'm also quite certain that you and I would both stumble if we were obligated to participate in this forum in a language we're not proficient in, as many users often attempt to do and are unfairly penalized for by other members of the community.

I'm with you on the LLM usage for grammatical issues for non-native speakers. I bet more in this community would feel the same way if Dang whimsically mandated that people had to use a language other than English on certain days of the week.

fragmede 6 hours ago | parent [-]

Oh shit that would be fun. Tuesday, we're going to do it in Mongolian, see how that goes.

12_throw_away 8 hours ago | parent | prev [-]

> You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"?

fragmede 6 hours ago | parent [-]

Yes! If my comment is above yours in a thread, it means I got more upvotes than you did, which means I get special bonuses and more to eat and you go hungry in Internet land. Also it means I'm better than you (obviously) and I get to go to this secret club with all the pretty people and you're not invited. Isn't that how this all works?

fragmede 9 hours ago | parent | prev | next [-]

I disagree. HN is going to bury my raw unedited tirade of a comment about those fucking morons that couldn't code their way out of a paper bag. If I send a comment to ChatGPT and open up the prompt with "this poster is a fucking dumbass, how do I tell them this" and use that to get to a well reasoned response because that's the tool we have available today, we're all better off.

The guidelines state:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine.
> Edit out swipes.
> Don't be curmudgeonly.

On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to try and help me edit out swipes on the bad days, that's not ok?

I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that my "thoughts exactly as they have written them" isn't a winning move.

zahlman 8 hours ago | parent | next [-]

If you see an incompetent coder and wish to communicate that the person responsible is a "fucking moron/dumbass", the tone with which you do so is not the problem. Tell us what is wrong with the code, as objectively as possible. That's what the guidelines are trying to convey.

yorwba 9 hours ago | parent | prev [-]

The guidelines don't say anything about not posting something because an LLM told you that you shouldn't...

drusepth 9 hours ago | parent | prev [-]

I'm not sure I agree with this. I don't really want to see someone else's stylistic "warts".

I just want clean, easy-to-read content, and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and incomprehensible) for many people.

timeinput 9 hours ago | parent [-]

You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

You could even write a plugin for your favorite web browser to do that to every site you visit.

It seems hard to achieve the inverse, that is (would you rather I use i.e.?), to rewrite this paragraph as the original author had it before an AI rewrote it to make it clean (do you like Oxford commas, and em/en dashes? Just prompt your AI) and easier to read.
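For what it's worth, the "plugin" half of this is the easy part. A minimal sketch of what its core might look like, with the AI rephrase step stubbed out by simple heuristics (everything here, including the `.comment` selector, is hypothetical, and a real plugin would call an AI API instead of the stub):

```javascript
// Core of a hypothetical "clean up comments" browser extension.
// The rephrase step is a stand-in: it just normalizes whitespace
// and some capitalization, where a real plugin would call a model.

function defaultRephrase(text) {
  return text
    .replace(/\s+/g, ' ')                         // collapse runs of whitespace
    .replace(/\bi\b/g, 'I')                       // capitalize the lone pronoun "i"
    .replace(/^([a-z])/, (m) => m.toUpperCase()); // capitalize the first letter
}

function cleanComment(text, rephrase = defaultRephrase) {
  return rephrase(text.trim());
}

// In a content script, you would apply this to every comment on the page:
// document.querySelectorAll('.comment').forEach((el) => {
//   el.textContent = cleanComment(el.textContent);
// });
```

The interesting point stands, though: this direction is mechanical, while recovering the author's original voice from the AI-cleaned version is not.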

phs318u 9 hours ago | parent | next [-]

> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

For those coming from a language other than English, you are more likely to lose information by using a tool to “reconstruct” meaning from poorly phrased English as an input, as opposed to the poster using a tool to generate meaningful English from their (presumably) well-written native language.

kazinator 9 hours ago | parent | prev | next [-]

> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

But that creates a private version of the text which the original poster didn't sign off on. You could have fixed something contrary to their intent.

tempestn 9 hours ago | parent | prev [-]

There's a big difference between me running a filter on other people's words, and those people themselves choosing to run one and then approving the results.

I personally don't see a problem with someone using a grammar checker as long as they aren't just blindly accepting its suggestions. That said, if someone actually is using it in that way, it shouldn't be detectable anyway, so it probably doesn't matter all that much whether or not it's included in the letter of the rule.