cjbgkagh 2 days ago
Granted that capital costs are a barrier to entry and that barriers to entry lead to imperfect competition, but the exploitability is limited in the case of LLMs because they sit on a sub-linear utility scale: an LLM at 2x the price is not 2x as useful. This means a new entrant can enter the low end of the market and work its way up, and the only way to prevent that is for the incumbent to keep prices as close to marginal cost as possible. There is a natural monopoly aspect in the ability to train and data mine on private usage data, but in general improvements in algorithms and training seem to be dominating that advantage. Microsoft's search engine Bing paid an absolute fortune for access to usage data and was unable to capitalize on it. LLMs have the unusual property that a lot of value can be extracted from fine-tuning for specialized purposes, which opens the door to a million little niches, providing fertile ground for future competitors. This is one area where being a fast follower makes a lot of sense.
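The sub-linear utility point can be sketched with a toy curve. The log shape and the numbers below are assumptions chosen purely to illustrate diminishing returns, not a claim about any real model's pricing:

```python
import math

def utility(spend):
    # Toy diminishing-returns curve (log-shaped). The exact functional
    # form is an assumption; any concave curve makes the same point.
    return math.log1p(spend)

u1 = utility(100)   # utility at some base spend
u2 = utility(200)   # utility at 2x the spend
ratio = u2 / u1

# On any concave curve, doubling spend buys well under 2x the utility,
# which is the opening a cheaper entrant can exploit at the low end.
print(round(ratio, 3))
```

Under these assumptions the ratio comes out a little over 1, far short of 2, which is why the incumbent can't defend the low end with premium pricing alone.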
mlyle 2 days ago | parent
Almost anything has a diminishing utility scale. But we still see MR=MC pricing in industries with barriers to entry (IPR, capital costs). TSMC and Mercedes don't price cheap to avoid giving others a toehold.

> There is a natural monopoly aspect given the ability to train and data mine on private usage data but in general improvements in the algorithms and training seem to be dominating advancements.

There are pretty big economies of scale in inference: the magic of routing correctly across experts so you can batch requests while keeping latency low. It's an expensive technology to create, and there's a large minimum scale at which it works well.
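The economies-of-scale claim about batched inference can also be made concrete with a toy cost model. Every constant here (the fixed per-forward-pass cost, the per-request cost, the batch cap) is an illustrative assumption, not real serving data:

```python
# Toy model of inference economies of scale: each forward pass carries a
# fixed cost (weights resident in memory, kernel launch overhead) that is
# amortized across the batch, but only a high-traffic operator can fill
# large batches within a tight latency window.

FIXED_COST_PER_PASS = 8.0   # assumed fixed cost per forward pass
COST_PER_REQUEST = 1.0      # assumed marginal compute per request
MAX_BATCH = 32              # assumed batch cap within the latency budget

def cost_per_request(traffic_per_window):
    # Requests arriving within one latency window share a forward pass.
    batch = min(traffic_per_window, MAX_BATCH)
    return (FIXED_COST_PER_PASS + COST_PER_REQUEST * batch) / batch

small = cost_per_request(2)    # low-traffic entrant: (8 + 2) / 2  = 5.0
large = cost_per_request(32)   # at-scale incumbent: (8 + 32) / 32 = 1.25
```

Under these assumed numbers the at-scale operator's per-request cost is a quarter of the entrant's, which is the "large minimum scale" effect in miniature.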