doorhammer 3 days ago

Sentiment analysis, nuanced categorization by issue, detecting new issues, tracking trends, etc., are the bread and butter of any data team at an F500 call center.

I'm not going to say every project born out of that data makes good business sense (big enough companies have fluff everywhere), but in my experience, projects grounded in that kind of data are typically some of the most straightforward to tie concretely to a dollar-value outcome.

la_fayette 3 days ago | parent | next [-]

Yes, those sound like important and useful use cases. However, these have been solved by boring old-school ML models for years...

williamdclt 3 days ago | parent | next [-]

I think what they're saying is that you need the summaries to do these things.

esafak 3 days ago | parent | prev | next [-]

It's simpler to use an LLM service than to maintain those ad hoc models. Many teams have replaced their old NLP pipelines with LLMs.
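
As a concrete illustration of why: the whole pipeline can collapse into one API call. This is just a minimal sketch, not anyone's production setup; the model name and prompt are assumptions.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify(ticket_text: str) -> str:
        # One chat-completion call stands in for the old NLP pipeline.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any capable model works
            messages=[
                {"role": "system",
                 "content": "Label the customer message with exactly one "
                            "word: positive, negative, or neutral."},
                {"role": "user", "content": ticket_text},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    print(classify("I've been on hold for two hours."))  # likely "negative"

There's no feature engineering, tokenizer versioning, or model retraining to maintain; the trade-off is per-call cost and latency.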

prashantsengar 3 days ago | parent [-]

Where I work, we replaced our old NLP pipelines with LLMs because they are easier to maintain and reach the same level of accuracy with much less work.

We are not running a call centre ourselves, but we are a SaaS offering call centre data analysis services.

aaomidi 3 days ago | parent | prev | next [-]

Sentiment analysis was not solved and companies were paying analyst firms shit tons of money to do that for them manually.

doorhammer 3 days ago | parent | prev [-]

So, I wouldn't be surprised if someone in charge of a QA/ops department chose LLMs over similarly effective existing ML models in part because the AI hype is hitting so hard right now.

Two things _would_ surprise me, though:

- That they'd integrate it into any meaningful process without having done actual analysis of the LLM-based perf vs their existing tech

- That they'd integrate the LLM into a core process their department is judged on knowing it was substantially worse when they could find a less impactful place to sneak it in

I'm not saying those are impossible realities. I've certainly known call center senior management to make more harebrained decisions than that, but barring more insight, I personally default to assuming OP isn't among the harebrained.

shortrounddev2 3 days ago | parent [-]

My company gets a bunch of product listings from our clients, and we try to group them together (so that if you search for a product name, you can see all the retailers selling that product). Since there aren't reliable UPCs for the kinds of products we work with, we need to generate embeddings (vectors) for the products from their name/brand/category and do a nearest-neighbor search. This problem has many, many "old school" ML solutions, and when I was asked to design this system, I came up with a few implementations and proposed them.

Instead of doing any of those (we have the infrastructure to do it), we are paying OpenAI for their embeddings APIs. Perhaps OpenAI is just doing old-school ML under the hood, but there is definitely an instinct among product managers to reach for shiny tools from shiny companies instead of considering more conservative options.
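
For the curious, the shape of the system is roughly this. A hedged sketch only: the embedding model name and the example listings are made up, and a real deployment would use a vector index (FAISS, pgvector, etc.) rather than brute force.

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def embed(texts):
        # One call returns an embedding vector per input string.
        resp = client.embeddings.create(model="text-embedding-3-small",
                                        input=texts)  # assumed model
        return np.array([d.embedding for d in resp.data])

    # Each listing flattened to "name / brand / category" text (toy data).
    catalog = ["Acme Tumbler 20oz / Acme / Drinkware",
               "Acme Travel Mug 20 oz / Acme / Mugs",
               "Widgetco Desk Lamp / Widgetco / Lighting"]
    vectors = embed(catalog)

    def nearest(query, k=2):
        # Brute-force cosine similarity; fine for a sketch, not at scale.
        q = embed([query])[0]
        sims = vectors @ q / (np.linalg.norm(vectors, axis=1)
                              * np.linalg.norm(q))
        return [catalog[i] for i in np.argsort(sims)[::-1][:k]]

    print(nearest("20 oz acme tumbler"))

The "old school" alternatives I proposed had the same structure, just with embeddings we'd compute and host ourselves.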

doorhammer 3 days ago | parent [-]

Yeah, I don't want to downplay the reality of companies making bad decisions.

I think for me, the way the GP phrased things just made me want to give them the benefit of the doubt.

Given my experience, the people I've worked with, and how the GP phrased things, in my mind it's more likely than not that they're not making a naive "chase-the-AI" decision, and that a lot of the replies didn't come from people with much call center experience.

The department I worked with when I did work in call centers was particularly competent and also pretty org savvy. Decisions were always a mix of pragmatism and optics. I don't think it's hard to find people like that in most companies. I also don't think it's hard to find the opposite.

But yeah, when I say something would be surprising, I don't mean it's impossible. I mean that the GP sounds informed and competent, and if I assume that, it'd be surprising to me if they sacrificed long-term success for an immediate boost by slotting LLMs into something so core to their success metrics.

But, I could be wrong. It's just my hunch, not a quantitative analysis or anything. Feature-factory product influence is a real thing, for sure. It's why the _main_ question I ask in interviews is for everyone to describe the relationship between product and eng, so I definitely self-select toward a specific dynamic that probably unduly influences my perspective. I've been places where the balance tilts hard toward product, and it sucks working somewhere like that.

But yeah, for deciding whether more standard ML techniques are worth replacing with LLMs, I'd ultimately need to see actual numbers from someone concretely comparing the two approaches. I just don't have that context.

adrr 3 days ago | parent | prev [-]

Those have been done for 10+ years. We were running sentiment analysis on email support to determine prioritization back in 2013. We also ran Bayesian categorization to offer support reps quick responses/actions. You don't need expensive LLMs for it.
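
For anyone who hasn't seen it, that kind of categorizer is a few lines of scikit-learn. The categories and emails below are made up for illustration, not what we actually ran.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled support emails.
    emails = ["I was double charged this month",
              "How do I reset my password?",
              "Please cancel my subscription"]
    labels = ["billing", "account", "cancellation"]

    # TF-IDF features feeding a multinomial naive Bayes classifier.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(emails, labels)

    # Route an incoming email toward a canned response/action.
    print(model.predict(["charged twice on my card"])[0])  # expect "billing"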

doorhammer 3 days ago | parent [-]

Yeah, I was a QA data analyst supporting three multi-thousand-agent call centers for an F500 in 2012, and we were using phoneme matching for transcript categorization. It was definitely good enough for pretty nuanced analysis.
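
(The real systems matched phoneme sequences against the audio itself; purely as a toy illustration of the phonetic-keying idea on text, here's a bare-bones Soundex. It's not what we ran, just the flavor of pre-LLM matching.)

    def soundex(word: str) -> str:
        # Classic Soundex: keep the first letter, code the rest by
        # sound class, collapse repeats, pad/truncate to four chars.
        codes = {**dict.fromkeys("bfpv", "1"),
                 **dict.fromkeys("cgjkqsxz", "2"),
                 **dict.fromkeys("dt", "3"), "l": "4",
                 **dict.fromkeys("mn", "5"), "r": "6"}
        word = word.lower()
        out, prev = word[0].upper(), codes.get(word[0], "")
        for ch in word[1:]:
            code = codes.get(ch, "")
            if code and code != prev:
                out += code
            if ch not in "hw":  # h/w don't separate same-coded letters
                prev = code
        return (out + "000")[:4]

    # Misheard or variant spellings key to the same bucket:
    print(soundex("Robert"), soundex("Rupert"))  # R163 R163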

I'm not saying any given department should, by some objective measure, switch to LLMs and I actually default to a certain level of skepticism whenever my department talks about applications.

I'm just saying I can imagine plausible realities where an intelligent and competent person would choose to switch toward using LLMs in a call center context.

There are also a ton of plausible realities where someone is just riding the hype train gunning for the next promotion.

I think it's useful to talk about alternate strategies and how they might compare, but I'm personally just defaulting to assuming the OP made a reasonable decision and didn't want to write a novel to justify it (a trait I don't suffer from, apparently), vs assuming they just have no idea what they're doing.

Everyone is free to decide which assumed reality they want to respond to. I just have a different default.