PhantomHour 3 days ago

Consider the kinds of jobs that are popular with outsourcing right now.

Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite: people rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products, services, and your own organisation, never mind empowering them to actually do things, is much harder.)

They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical the way outsourced development is when it breaks your ability to ship new features and fixes.

AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that demand precision, nor can they be empowered to make configuration changes. (Consider: prompt injection leading to SIM hijacking and other such messes.)

But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand to lose nothing by using AI.

thewebguyd 3 days ago

> But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you stand to lose nothing by using AI.

I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. Beyond those, the requests that did require action on an account could be handled by an LLM with some guardrails (roughly sketched below), if we can secure against prompt injection.

Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.
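As a rough illustration of what "an LLM with some guardrails" could mean in practice, here's a minimal Python sketch. Everything in it (the action names, the policy table, the verification flag) is hypothetical, not any vendor's API; the point is only that the model proposes actions while plain code, not the prompt, decides what is allowed to run:

```python
# Minimal sketch (hypothetical names throughout) of the "LLM with guardrails"
# idea: the model may only *propose* actions; code validates each proposal
# against an allowlist and hard caps before anything executes.
from dataclasses import dataclass

# Actions the bot is ever allowed to trigger, with limits enforced in code,
# not in the prompt -- a prompt-injected model still can't exceed them.
ALLOWED_ACTIONS = {
    "send_reset_link": {"requires_verified_identity": True},
    "issue_credit":    {"max_amount": 20.0},
}

@dataclass
class ProposedAction:
    name: str              # action the model wants to take
    params: dict            # model-supplied parameters
    customer_verified: bool  # set by the IVR/auth flow, not by the model

def guardrail_check(action: ProposedAction) -> tuple[bool, str]:
    """Return (allowed, reason). Anything not explicitly allowlisted is denied."""
    policy = ALLOWED_ACTIONS.get(action.name)
    if policy is None:
        return False, f"action '{action.name}' is not allowlisted; escalate to a human"
    if policy.get("requires_verified_identity") and not action.customer_verified:
        return False, "customer identity not verified"
    max_amount = policy.get("max_amount")
    if max_amount is not None and action.params.get("amount", 0) > max_amount:
        return False, f"amount exceeds the {max_amount} cap"
    return True, "ok"

if __name__ == "__main__":
    # e.g. an injected prompt convinces the model to propose a SIM swap:
    ok, reason = guardrail_check(
        ProposedAction("swap_sim", {"target": "attacker-controlled"}, True)
    )
    print(ok, reason)  # False -> routed to a human agent, never executed
```

The design choice that matters is that the allowlist and caps live in code the model can't edit, so a prompt-injected transcript can at worst get a request escalated to a human, not a SIM swap executed.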

gausswho 3 days ago

Also consider the mental health crisis among outsourced content moderation staff who have to appraise all kinds of depravity on a daily basis. This got some heavy reporting a year or two ago, in particular around Facebook. These folks, for all their suffering, are probably being culled right now.

You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans handling only the uncertain cases.

Still brain damage, but less. A good thing?