joshribakoff 9 hours ago
I think this is a pretty narrow take. Without going into too much detail, I can imagine many use cases. For example, there is a whole class of predictive algorithms whose main limitation is that data has to be cleaned, ingested, and feature engineered; clustering is only as good as your vectorization. With an LLM, it's easy to imagine predictive use cases that skip entire ETL pipelines and operate directly on less structured inputs, not just summarizing those inputs but actually making decisions or predictions (rough sketch below).

You're already seeing frameworks like BERTopic integrating LLMs (for labeling topics), which is already far removed from the "3 use cases" listed here. By fine-tuning LLM-based predictive systems we might unlock entirely new use cases, and prediction is just one thing I can imagine; there are many others.

And it's not just that frameworks like BERTopic are integrating LLMs. If you zoom out, the architecture looks a lot like the architecture of an LLM: text -> embedding -> text. LLMs can be, and already are being, used to build recommendation systems like the ones at YouTube and Netflix, because they capture more semantics than older techniques (second sketch below).
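To make the first point concrete, here's a minimal sketch of clustering raw text directly on transformer embeddings, skipping the hand-built feature/ETL step entirely. It assumes sentence-transformers and scikit-learn; the model name and the toy documents are just placeholders, not a real pipeline.

```python
# Sketch: cluster unstructured text straight from embeddings,
# no manual cleaning or feature engineering in between.
# (sentence-transformers + scikit-learn; model name is just an example.)
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

docs = [
    "refund still not processed after two weeks",
    "app crashes whenever I open the settings page",
    "love the new dark mode, great update",
    "charged twice for the same order",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works here
embeddings = model.encode(docs)                  # text -> dense vectors, no hand-made features

labels = KMeans(n_clusters=2, n_init="auto").fit_predict(embeddings)
for doc, label in zip(docs, labels):
    print(label, doc)
```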
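And the recommendation idea in the same spirit: score candidate items against a user's history purely by embedding similarity. The titles, the mean-pooled "user profile", and the model choice are illustrative assumptions on my part, not how YouTube or Netflix actually build theirs.

```python
# Sketch: embedding-based recommendation scoring.
# Candidates are ranked by cosine similarity to a crude user profile
# built as the mean of the user's history embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

watched = ["slow-burn scandinavian crime drama", "detective thriller set in the 90s"]
candidates = ["nordic noir murder mystery", "stand-up comedy special", "cold-case police procedural"]

user_vec = model.encode(watched).mean(axis=0)  # mean of history embeddings as a user profile
cand_vecs = model.encode(candidates)

# cosine similarity between the user profile and each candidate
scores = cand_vecs @ user_vec / (
    np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(user_vec)
)
for title, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {title}")
```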