etaioinshrdlu | 20 hours ago
I've operated a top ~20 LLM service for over two years, very comfortably profitable with ads. As for the raw costs: you can compare the cost of getting an LLM answer from, say, OpenAI against the cost of an equivalent search query from Bing/Google/Exa, and the search query will cost over 10x more...
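A rough back-of-the-envelope sketch of that comparison, using illustrative per-query prices (assumed for the sake of the example, not current list prices):

    # Rough per-query cost comparison; all prices are illustrative assumptions.
    llm_in_tokens, llm_out_tokens = 500, 500    # assumed tokens per LLM answer
    llm_in_price, llm_out_price = 0.15, 0.60    # assumed $ per 1M tokens (cheap-tier model)
    search_price_per_1k = 15.0                  # assumed $ per 1,000 search API queries

    llm_cost = llm_in_tokens / 1e6 * llm_in_price + llm_out_tokens / 1e6 * llm_out_price
    search_cost = search_price_per_1k / 1000

    print(f"LLM answer : ${llm_cost:.6f}/query")          # ~$0.000375
    print(f"Search API : ${search_cost:.6f}/query")        # $0.015000
    print(f"Ratio      : {search_cost / llm_cost:.0f}x")   # ~40x under these assumptions

Under those assumed prices the gap is well over 10x; the exact ratio obviously moves with model choice, answer length, and the search API tier.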
johnecheck | 14 hours ago
So you don't have any real info on the costs. The question is what OpenAI's profit margin is here, not yours. The theory is that these costs are subsidized by a flow of money from VCs and big tech as they race. How cheap is inference, really? What about 'thinking' inference? What are the prices going to be once growth starts to slow and investors start demanding returns on their billions?
| ||||||||||||||
throwawayoldie | 12 hours ago
So you're not running an LLM, you're running a service built on top of a subsidized API.
| ||||||||||||||
clarinificator | 19 hours ago
Profitably covering R&D or profitably using the subsidized models?