hattmall 8 hours ago
I find that highly unlikely; coding is AI's best-value use case by far. Right now office workers see marginal benefits, but it's not an order-of-magnitude difference. AI drafts an email, you have to check and edit it, then send it. In many cases it's a toss-up whether that actually saved time, and even when it did, the pace of work isn't breakneck anyway, so the benefit is that some office workers have a bit more idle time at the desk, because you always hit some wall that's out of your control. Maybe AI saves you a Google search or a doc lookup here and there. You still need to check everything, and it can cause mistakes that take longer too.

Here's an example from today. An assistant is dispatching a courier to pick up medical records. AI auto-completes the message to include the address. Normally they wouldn't put the address in (the courier knows who we work with), but AI added it, so why not? Except it's the wrong address, because it belongs to a different doctor with the same name. At least they knew to verify it, but mistakes like this happening at scale make the other time savings pretty close to a wash.
majormajor 4 hours ago
Coding is a relatively verifiable and strict task: it has to pass the compiler, it has to pass the test suite, it has to meet the user's requests. There are a lot of white-collar tasks with far lower quality and correctness bars: "researching" by plugging things into Google, writing reports summarizing how a trend an exec saw in a report could be applied to the company, generating new values to share at a company all-hands. Tons of these never touch the "real world."

Your assistant story is like a coding task: maybe someone ran some tests, maybe they didn't, but it was verifiable. There's no shortage of "the tests passed, but they weren't the right tests, this broke some customers and had to be fixed by hand" coding stories out there like it. But there are also pages and pages of unverifiable bullshit that people are sleepwalking through. Nobody knows whether those things helped or hurt, so nobody will ever even notice a hallucination. And yet everyone in all those fields is going to try really, really hard to enumerate the reasons their field is special and AI won't work well for them.

The "management says do more, workers figure out ways to be lazier" see-saw is ancient, but this could skew far toward the "management demands more from fewer people" end for a while.

sanex 6 hours ago
Not all code generates economic value. See Slack's, Jira's, etc., constant UI updates.

vrighter 3 hours ago
Code is much, much harder to check for errors than an email. Consider, for example, the following Python code:

    x = 5

vs

    x = 5,

One is a literal 5, and the other is a single-element tuple containing the number 5. But more importantly, both are valid code. Now imagine trying to spot that one stray comma among the 20kloc of code one so proudly claims AI helped them "write", especially if it's in a cold path. You won't see it.
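The trap above can be demonstrated directly; a minimal sketch (variable names are illustrative):

```python
# Both assignments are valid Python; the trailing comma changes the type.
a = 5    # the int 5
b = 5,   # the one-element tuple (5,)

print(type(a).__name__)  # int
print(type(b).__name__)  # tuple
print(a == b)            # False -- 5 and (5,) never compare equal
```

A static type checker will usually flag the mismatch, but only where the value's type is actually constrained downstream; in a cold path the tuple can sit unnoticed until something finally tries to do arithmetic on it.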

nradov 7 hours ago
LLMs might not save time, but they certainly increase quality for at least some office work. I frequently use one to check my work before sending it to colleagues or customers, and it occasionally catches gaps or errors in my writing.