Havoc 9 hours ago

I think there's a wider social-norms piece missing here as well, around AI use in a knowledge-work context.

Someone forwarded an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted ChatGPT.

For, say, the HN gang that thinks in terms of context shifts, information load, and things on THAT wavelength, the problem with that situation is obvious, but I realised then that it is not at all obvious to the average member of the public. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

There is zero understanding of, or consensus on, acceptable practices around that sort of thing baked into societal norms right now.

erentz 5 hours ago | parent | next [-]

Seems AI has made it cheap to produce information, but now you have to spend more time parsing that information. And it's the less competent/useful people spending less time producing more information, while the more useful people spend more of their valuable time parsing it. This is why I'm skeptical of LLMs ever becoming a net benefit in most organizations.

anonymars an hour ago | parent | next [-]

Intellectual denial of service

scruple 3 hours ago | parent | prev | next [-]

LLMs are Brandolini's Law taken to an entirely different plane of existence.

jimbokun 2 hours ago | parent | prev | next [-]

Calling it “information” is generous.

trollbridge 4 hours ago | parent | prev [-]

Well, you can use LLMs to parse LLM-generated slop. They make nice summaries. I have taken this approach with people who send me obviously LLM-generated text; I simply run it through an LLM, paste the summary back, ask "Is this an accurate summary?", and then ask them for their original prompt.

dodu_ an hour ago | parent | next [-]

Ah yes, take my single sentence, blow it up to 3 paragraphs with LLMs, and then the person reading it can have an LLM summarize it in a single sentence.

What the fuck are we even doing anymore?

ua709 3 minutes ago | parent [-]

I wonder if that even works. Kinda like when kids play telephone, I think it's unlikely the input and output sentences actually match.

stoorafa 2 hours ago | parent | prev | next [-]

LLMs are great at decompression [1]

[1] https://jabde.com/2026/02/02/utilizing-llms-as-a-data-decomp...

Sgt_Apone 3 hours ago | parent | prev | next [-]

Might as well donate money to the AI companies at this point.

erentz 3 hours ago | parent | prev [-]

But now even this is just producing more information, and it requires more work both from you and from the original sender.

Avicebron 6 hours ago | parent | prev | next [-]

My default is that I won't copy and paste anything that's AI generated in communications. I kind of think that's the line. Use whatever you want in the background, but I want to communicate with the synthesis of your thoughts.

I think this is a reasonable standard to hold; otherwise, like many before have said: send me the prompt. It's actually more interesting/better if I know a coworker is struggling to communicate about something.

threecheese 5 hours ago | parent | next [-]

I follow the same strategy, but loosely - I need those em-dashes to signal that I'm using the tools.

rdtsc 3 hours ago | parent [-]

That’s my latest joke — that we’ll have to pretend like we used the tools so they can feel validated they’ve spent all this money on hyped up technology. So, yes, it’s em-dashes and “it’s not just this, it’s that …” so they can hopefully leave us alone.

xp84 3 hours ago | parent [-]

I remember feeling embarrassed one time that I used a very early GPT thing to help organize perf reviews for employees from the various bullet points I had written for each (I had a lot of direct reports). But in current world, I assume I’d be praised for doing so.

jrumbut 15 minutes ago | parent | prev | next [-]

My typical practice is to write a reply using my own brain and whatever practices are called for, then attach any interesting chatbot responses that were generated as documents.

So there's a clear separation, a reply from me which I stand by and then some interesting chatbot stuff if you're into that.

pinkgolem 3 hours ago | parent | prev [-]

I mean, I struggle with spelling/wording and ask the LLM to proofread a lot.

I often send out the LLM version, but still check if it contains the original thoughts correctly.

It's not a bad way to extend your vocabulary and catch spelling mistakes.

mkl an hour ago | parent | next [-]

You don't need a fake extended vocabulary. Just communicate directly and honestly. Underlining spelling errors as you type has been a standard feature of email software for nearly three decades.

stingraycharles 3 hours ago | parent | prev [-]

> I often send out the LLM version, but still check if it contains the original thoughts correctly.

Please don’t do this. You probably aren’t aware of how badly this can land. It’s not just about containing your original thoughts; it’s about the verbosity, repetitiveness, and absurdity of it all.

Grammarly is a much better tool for these kinds of purposes, and it actually guides and teaches you to improve your writing along the way.

adastra22 2 hours ago | parent [-]

Grammarly the honeypot?

stingraycharles an hour ago | parent [-]

You seem to be referring to something specific I’m not aware of, could you elaborate?

A Google search didn’t reveal anything specific other than them using famous author names for expert review.

adastra22 31 minutes ago | parent [-]

It's the nature of the product itself. It's keylogger software. That's literally what it does -- take every input on your computer and route it to their servers.

nlawalker 5 hours ago | parent | prev | next [-]

You have to call it out when you see it, politely and charitably.

"Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."

stingraycharles 3 hours ago | parent [-]

This is what I do, slightly more explicitly saying “just be the real you”. About 50% of colleagues take it well. The other 50% don’t understand the problem, and don’t understand when (and when not) to use AI.

They are at high risk.

Employees using ChatGPT to renegotiate their salary are showing a serious lack of cognitive awareness.

milkshakes 5 hours ago | parent | prev | next [-]

you could always do this: https://marketoonist.com/wp-content/uploads/2023/03/230327.n...

Aurornis 6 hours ago | parent | prev | next [-]

> She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

This is the root frustration spreading across workplaces everywhere. Before AI, there was no way for someone to generate a design document, Jira ticket, or pull request without investing a lot of their own time and effort into producing what you saw.

LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.

For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: they can now generate the appearance of doing a lot of work with nothing more than a few lines asking an LLM to produce documents.

If anyone spends the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send another document over with the fixes. Now they've even captured you into doing their work for them!

For teams or even entire companies that were relying on the appearance of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail-job worker in the world just received a tool that will generate the appearance of doing their job for them, and possibly even be plausibly correct most of the time. One person can generate volumes of design documents, Jira tickets, and even copy-and-paste witty responses into the company Slack, and appear to be the most engaged and dedicated employee by volume while doing less actual work than ever before.

I think teams that already had good review cultures, with managers who cared about the output rather than the metrics, are doing fine, because anyone even a little bit engaged can spot the AI copy-and-paste employees with even a little inspection. The lazy managers who relied on skimming documents and plotting the number of PRs or lines of code changed are in for a rude awakening when they discover that the employees dominating their little games are the ones doing the most damage to the team.

ceejayoz 5 hours ago | parent | next [-]

> LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible.

Oh, we know. It's pretty clear in many cases.

Terr_ 3 hours ago | parent | next [-]

Perhaps a less-brittle version would be to replace "we don't know X" with "we can't easily prove X to the extent needed to deter it."

2wdfsd 5 hours ago | parent | prev [-]

lol yeah... it's obvious as hell.

And frankly the best signal now is: the shorter it is, the greater the likelihood it was at least expensive for the human to produce. Said another way: a shorter thing is easier to make sense of completely, and if it's garbage, it's garbage. At least the cost borne by you was minimised!

xp84 2 hours ago | parent | prev | next [-]

Insightful take.

What’s funny to me is your last paragraph. A lot of companies are so gung-ho about “AI ALL the things!” that I’m not sure, as a manager, if I’d get in trouble for “spotting the AI copy-paste” junk. I’m supposed to make sure everyone is using AI as much as possible, after all. So rejecting someone’s output for being low-effort AI slop and asking for a “less AI” version of it might mark me as a silly, old-fashioned guy who doesn’t believe in AI.

alexandre_m 5 hours ago | parent | prev [-]

> This is the root frustration spreading across workplaces everywhere. Before AI, there was no way for someone to generate a design document, Jira ticket, or pull request without investing a lot of their own time and effort into producing what you saw.

That’s not really the point. Engineering has always operated on trust networks, not just artifacts.

Your review naturally adapts based on the level of trust you have in the author. If someone has consistently produced high-quality work, whether they used AI or not becomes mostly irrelevant.

gumby271 7 hours ago | parent | prev | next [-]

I've run into a similar thing where I'll be cc'd on support tickets with one of our customer support agents, and they'll then reply to me with what is clearly an AI summary of the single email from the customer that I can already read. I do think they're trying to be helpful, but it's hard not to feel like they think I'm a child or an idiot. Back in the day we agreed that Googling something for someone was rude (letmegooglethatforyou.com being a good example); I don't know why AI summaries and slop aren't understood in the same way.

asib 7 hours ago | parent | next [-]

That’s not the intent of letmegooglethatforyou. It’s a pointed way of telling the recipient they should do the bare minimum research on their own before asking someone else for help. It’s not about being angry that someone told you something they found from a cursory Google search.

notatoad 3 hours ago | parent [-]

You’re right, but Lmgtfy links are incredibly similar in tone to sending somebody ai output.

Lmgtfy was a passive-aggressive (but not really passive) way to say “hey, are you too dumb to google this?”. Sending somebody ai output feels the same to me - the message you’re sending to the recipient is “here, you’re obviously too dumb to ask an LLM about this yourself”. Except some people don’t seem to realize that’s the message they’re sending

furyofantares 5 hours ago | parent | prev [-]

letmegooglethatforyou.com was to let someone know that not searching for themselves is rude - it was not because it was rude to search for someone else (it wasn't, and isn't).

figassis 5 hours ago | parent | prev | next [-]

And it’s too soon to have these norms. Employers today are willing to part with them at the hint of the slimmest efficiency gains, so you’ll waste your time. I think the correct response today is to wait for it to settle. Norms will form on their own.

scruple 3 hours ago | parent | prev | next [-]

Yeah I write prompts asking it to misspell a few words, break a few grammar rules, forget to capitalize once in a while, miss some punctuation once in a while. No one will ever catch on.

analog31 2 hours ago | parent | prev | next [-]

In an ideal workplace, one could sit down with the colleague and have her experience untangling the slop, perhaps by a process akin to pair programming.

Sometimes I wonder if we're letting people graduate from school with no real grasp of the purpose of written communication. School strips writing of purpose, and creates artificial purposes such as using AI to combine words in order for AI to assign it a good grade. Even before the AI era, most human generated text was not worth reading.

otabdeveloper4 10 minutes ago | parent | prev | next [-]

You can use an LLM to fix spelling and grammar errors. You don't need to generate slop. (Cloud providers sell LLMs as "robot information workers" when they're actually "calculators for text".)

Mars008 3 hours ago | parent | prev | next [-]

I've seen a manager in meetings obviously reading Copilot's suggestions out as his own thoughts.

adastra22 2 hours ago | parent [-]

copilot?!

fg137 an hour ago | parent [-]

Microsoft Copilot is used as the default "general" AI tool at most companies you have heard of.

Forgeties79 5 hours ago | parent | prev | next [-]

My current bar is “if you know I’m expecting to hear from a person don’t paste unedited ChatGPT outputs and hit send.” Everybody wants to send out the efforts of their corner-cutting, but nobody wants to receive them.

Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.

notatoad 3 hours ago | parent | next [-]

I’d go a step further and say there is never a good reason to share unedited ai output.

The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the ai output is almost always more useful than the output itself.

jimbokun an hour ago | parent | prev | next [-]

A lot of the time I will just say “Gemini/Claude is telling me…”, just like I would for a Google search result. Sometimes it's helpful to use the common wisdom embedded in the LLMs as a starting point for the discussion.

somewhatgoated an hour ago | parent [-]

As soon as I read this phrase my eyes glaze over and I skip everything that comes after it.

If I want the LLM answer I freaking ask it myself

bandrami 5 hours ago | parent | prev [-]

The asymmetry is that lots of people want to use LLMs to produce things, and nobody wants to consume the things LLMs produce.

The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.

2wdfsd 4 hours ago | parent | next [-]

Not quite. Ultimately the lion's share of model producers' income is coming from firms.

Firms are only going to pay model producers if they are getting returns in excess of the cost of financing projects over time. If a firm does not see this happen, it reduces its spend on tokens. Simple.

It's a whole lot more nuanced than some shitty game theory.

somewhatgoated an hour ago | parent [-]

“Firms are only making perfectly rational decisions that result in meaningful real outcomes” - not my experience.

Firms waste literally billions on some bullshit that gets them nothing.

Forgeties79 4 hours ago | parent | prev [-]

>the market always finds a way

That may be the case, but every day LLMs feel less like the next big thing and more like 3D printing: here to stay, but not nearly as ubiquitous and earth-shattering as people made them out to be.

If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.

bandrami an hour ago | parent | next [-]

I've thought the 3D-printing analogy is pretty apt for about a year now. It had a lot of promise at first, but it never quite had the impact people thought it would. There are still 3D printers for sale, and people still prototype with them, but nobody's printing out a dustpan when they need one.

trollbridge 4 hours ago | parent | prev [-]

I'd say that's a pretty accurate analysis. Something that is easily generated by an LLM obviously has low value and there is no moat.

Agentic coding is a bit different, particularly if a great deal of effort and intelligence goes into it, but that's a quite different thing than just cranking out slop apps.

Forgeties79 2 hours ago | parent [-]

Yeah, there is no doubt that some companies are going to radically change their operations because of agentic coding in particular. But the revolution that is being promised, and the investment that has gone along with it, is going to smash against some pretty nasty shoals of reality sooner rather than later.

bandrami an hour ago | parent [-]

Some are going to radically change their operations, but we have yet to actually see if the ROI on that comes through for them. It will be an interesting thing to watch.

stavros 6 hours ago | parent | prev | next [-]

Well, sure, it's very new. Soon we'll adapt and it'll be just another tool we're using.
