hamdingers 7 hours ago

> LLM output is expressly prohibited for any direct communication

I would like to see this more. As a heavy user of LLMs, I still write 100% of my own communication. Do not send me something an LLM wrote; if I wanted to read LLM outputs, I would ask an LLM.

adastra22 7 hours ago | parent | next [-]

I’m glad they have a carve-out for using LLMs to translate, or fix up, English communications. LLMs are a great accessibility tool that is making open source development truly global. Translation and grammar fix-ups are something LLMs are very, very good at!

But that is translation, not “please generate a pull request message for these changes.”

SchemaLoad 7 hours ago | parent | next [-]

"I just used it to clean up my writing" seems to be the usual excuse when someone has generated the entire thing and copy pasted it in. No one believes it and it's blatantly obvious every time someone does this.

pixl97 5 hours ago | parent | next [-]

Not sure what you're talking about. Quite often I've written out a block of information and found chunks of repetition, or passages that would be hard for others to interpret, stuck here or there. I'll stick it in an LLM and have it suggest changes.

Simply put, you seem to live in a different world where everyone around you has elegant diction. There are people I work with who, if I could, I would demand take what they write and ask themselves, "would this make sense to any other human on this planet?"

There is no shortage of people being lazy with LLMs, but at the same time it is a tool with a valid and useful purpose.

ChadNauseam 6 hours ago | parent | prev | next [-]

Sometimes I ramble for a long time and ask an LLM to clean it up. It almost always slopifies it to shreds. Can't extract the core ideas, matches everything to the closest popular (i.e. boring to read) concept, etc.

username223 2 hours ago | parent | prev | next [-]

Machine translation is best used on the receiving end. Let me decide if I want to run your message through a machine, or read it with my own skills.

newsclues 6 hours ago | parent | prev | next [-]

Using software for translation is fine as long as the original source is also present for native speakers to check, and any important information that is machine translated should be reviewed by a human.

adastra22 5 hours ago | parent [-]

It doesn’t hurt, but honestly machine translation (using LLMs) is so insanely good now. It usually does a better job than people.

Gigachad 7 hours ago | parent | prev | next [-]

Better to use Google Translate for this than ChatGPT. Either ChatGPT massively changes the text and slopifies it, or people are lying about using it only for translation, because the outputs are horrendous. Google Translate won't fluff out the output with garbage or reformat everything with emoji.

embedding-shape 6 hours ago | parent | prev | next [-]

"Translate this from X to X, don't change any meaning or anything else, only translate the text with idiomatic usage in target language: X"

Using Google Translate probably means you're using a language model behind the scenes anyway. The Transformer was initially researched and published as an improvement for machine translation, which eventually led to LLMs. Using them for translation is pretty much exactly what they excel at :)
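
For anyone who wants to script that, here's a minimal sketch of the same prompt. This assumes the OpenAI Python SDK, and the model name is just a placeholder, use whatever you have access to:

    # Minimal sketch of a translation-only prompt, per the wording above.
    # Assumes the OpenAI Python SDK; OPENAI_API_KEY is read from the environment.
    from openai import OpenAI

    client = OpenAI()

    def translate(text: str, source: str, target: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": (
                    f"Translate this from {source} to {target}, don't change "
                    "any meaning or anything else, only translate the text "
                    f"with idiomatic usage in the target language: {text}"
                ),
            }],
        )
        return resp.choices[0].message.content

    print(translate("Here is my PR, it does x, can you please review it?",
                    "English", "Japanese"))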

adastra22 5 hours ago | parent | prev | next [-]

Google Translate uses transformer models under the hood; the Transformer came out of Google’s machine translation research. I think you are misunderstanding my point.

habinero 6 hours ago | parent | prev [-]

Yep. If you don't know the language, it's best not to pretend you do.

I've done this kind of thing, even if I think it's likely they speak English. (I speak zero Japanese here.) It's just polite and you never know who's going to be reading it first.

> Google翻訳を使用しました。問題が発生した場合はお詫び申し上げます。貴社のウェブサイトにコンピュータセキュリティ上の問題が見つかりました。詳細は下記をご覧ください。ありがとうございます。

> I used Google Translate; I apologize for any errors. I have found a computer security issue on your website. Details are below. Thank you.

mort96 7 hours ago | parent | prev [-]

Why would you want to use a chat bot to translate? Either you know the source and destination language, in which case you'll almost certainly do a better job (certainly a more trustworthy job), or you don't, in which case you shouldn't be handling translations for that language anyway.

Same with grammar fixes. If you don't know the language, why are you submitting grammar changes??

denkmoon 7 hours ago | parent | next [-]

For translating communications like "Here is my PR, it does x, can you please review it", not localisation of the app.

MarsIronPI 7 hours ago | parent | prev [-]

No, I think GP means grammar fixes to your own communication. For example if I don't speak Japanese very well and I want to write to you in Japanese, I might write you a message in Japanese, then ask an LLM to fix up my grammar and check my writing to make sure I'm not sounding like a complete idiot.

mort96 7 hours ago | parent [-]

I have read a lot of bad grammar from people who aren't very good at the language but are trying their best. It's fine. Just try to express yourself clearly and we'll figure it out.

I have read text where people who aren't very good at the language try to "fix it up" by feeding it through a chat bot. It's horrible. It's incredibly obvious that they didn't write the text, the tone is totally off, it's full of obnoxious ChatGPT-isms, etc.

Just do your best. It's fine. Don't subject your collaborators to shitty chat bot output.

habinero 6 hours ago | parent | next [-]

Agreed. Humans are insanely good at figuring out intent and context, and running stuff through an LLM breaks that.

The times I've had to communicate IRL in a language I don't speak well, I do my best to speak slowly and enunciate and trust they'll try their best to figure it out. It's usually pretty obvious what you're asking lol. (Also a lot of people just reply with "Can I help you?" in English lol)

I've occasionally had to email sites in languages I don't speak (to tell them about malware or whatever), and I write up a message in the simplest, most basic English I can. I run that through machine translation, prefix the result with "This was generated by Google Translate", and include both versions in the email.

Just do your best to communicate intent and meaning, and don't worry about sounding like an idiot.

adastra22 5 hours ago | parent [-]

> Humans are insanely good at figuring out intent and context

I wish that were true.

pessimizer 6 hours ago | parent | prev [-]

You seem to be judging business communications by weird middle-class aesthetics while the people writing the emails are just trying to be clear.

If you think that every language level is always sufficient for every task (a fluency truther?), then you should agree that somebody who writes an email in a language they are not confident in, puts it through an LLM, and decides the result explains the idea they were trying to convey better than they had managed to, is always correct in that assessment. Why are you second-guessing them and indirectly criticizing their language skills?

mort96 6 hours ago | parent [-]

Running your words through ChatGPT isn't what makes you clear. If your own words are clear enough to be understood by ChatGPT, they're clear enough to be understood by your peers. Adding ChatGPT into the mix only creates opportunities for meaning to be mangled. And text that's ambiguous enough may be translated into perfectly clear text that reflects the wrong interpretation of your words, risking misunderstandings that wouldn't happen if the ambiguity were preserved instead of eliminated.

I have no idea what you're talking about with regard to being a "fluency truther", I think you're putting words into my mouth.

pixl97 5 hours ago | parent [-]

Eh, na dawg, I'll have to reject a lot of what you've typed here.

LLMs can do a lot of proofreading on what you've written. I'll ask one to check for logical contradictions in what I've stated and such. It will catch where I've forgotten something like a 'not' in one statement, so that one sentence unintentionally gives a negative response while another gives a positive one. This kind of error is quite often hard for me to pick up on, yet the LLM seems to catch it reliably.
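
That kind of check is easy to script, too. A rough sketch, under the same assumptions as the translation example above (OpenAI Python SDK, placeholder model name, illustrative prompt wording):

    # Rough sketch: ask a model to flag contradictions in a draft
    # without rewriting it.
    from openai import OpenAI

    client = OpenAI()

    def check_contradictions(draft: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": (
                    "List any logical contradictions in the following text, "
                    "such as a missing 'not' making one statement contradict "
                    "another. Do not rewrite the text:\n\n" + draft
                ),
            }],
        )
        return resp.choices[0].message.content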

epiccoleman 5 hours ago | parent | prev | next [-]

I completely agree. I let LLMs write a ton of my code, but I do my own writing.

It's actually kind of a weird "of two minds" thing. Why should I care that my writing is my own, but not my code?

The only explanation I have is that, on some level, the code is not the thing that matters. Users don't care how the code looks, they just care that the product works. Writing, on the other hand, is meant to communicate something directly from me, so it feels like there's something lost if I hand that job over to AI.

I often think of this quote from Ted Chiang's excellent story The Truth of Fact, the Truth of Feeling:

> As he practiced his writing, Jijingi came to understand what Moseby had meant: writing was not just a way to record what someone said; it could help you decide what you would say before you said it. And words were not just the pieces of speaking; they were the pieces of thinking. When you wrote them down, you could grasp your thoughts like bricks in your hands and push them into different arrangements. Writing let you look at your thoughts in a way you couldn’t if you were just talking, and having seen them, you could improve them, make them stronger and more elaborate.

But there is obviously some kind of tension in letting an LLM write code for me but not prose - because can't the same quote apply to my code?

I can't decide if there really is a difference in kind between prose and code that justifies letting the LLM write my code, or if I'm just ignoring unresolved cognitive dissonance because automating the coding part of my job is convenient.

IggleSniggle 3 hours ago | parent [-]

To me, you are describing a fluency problem. I don't know you or how fluent you are in code, but what you have described is the case where I have no problem with LLMs: translating from a native language to some other language.

If you are using LLMs to precisely translate a set of requirements into code, I don't really see a problem with that. If you are using LLMs to generate code that "does something" and you don't really understand what you were asking for nor how to evaluate whether the code produced matched what you wanted, then I have a very big problem with that for the same reasons you outline around prose: did you actually mean to say what you eventually said?

Of course something will get lost in any translation, but that's also true of translating your intent from brain to language in the first place, so I think affordances can be made.

Kerrick 7 hours ago | parent | prev | next [-]

Relevant: https://noslopgrenade.com

IggleSniggle 3 hours ago | parent [-]

What do you recommend if I've been regularly producing blog-length posts in Slack for years, no LLM present? It's where I write, man... should I cut that out? I try to be information dense...

giancarlostoro 7 hours ago | parent | prev | next [-]

Yeah, I use LLMs to show me how to shorten my emails, because I can type for days. It helps a lot when I feel like I just need a short, concise email, but I still write it all myself.

willio58 3 hours ago | parent [-]

Yeah, I do the same. I’ve seen great results from writing out a long Slack message, copying it into ChatGPT, and saying “write this more succinctly”.

Then, of course, I review the output and make some manual edits here and there.

That last step is the key, in both written communication and in code: you HAVE to review the output and make manual edits if needed.

dawnerd 6 hours ago | parent | prev | next [-]

I see this on Reddit a lot. Someone will vibe code something then spam a bunch of subreddits with LLM marketing text. It’s all low effort low quality sooo.

pixl97 5 hours ago | parent [-]

I mean, this is how spammers have always worked. If spam were high quality and useful, people wouldn't complain about it.

gllmariuty 6 hours ago | parent | prev | next [-]

yeah, you could ask an LLM, but are you sure you know what to ask?

like in that joke about the mechanic who charges $100 for hitting the car once with his wrench: $1 for the hit, $99 for knowing where to hit

gonzalohm 7 hours ago | parent | prev [-]

The same can go for LLM code: I don't want to review your code if it was written by an LLM.

I only use LLMs to write text/communication, because that's the part of my work I don't like.