yakhinvadim · a day ago

I know it's not going to be popular, but to cover the cost of running ChatGPT on that many articles, I made it a part of a premium subscription: https://www.newsminimalist.com/premium#rss
DrPhish · a day ago · parent
Do you need realtime results, or is an ongoing queue of article analysis good enough? Have you considered running your own hardware with a frontier MoE model like DeepSeek V3? It can be done at relatively low cost on CPU, depending on your inference-speed needs. Maybe a hybrid approach could at least reduce your API spend?

Source: I run inference locally and built the server for around $6k. I get upwards of 10 t/s on DeepSeek V3.

PS: Thank you for running this service. I've been using it casually since launch, and I find it much better for my mental health than any other source of news I've tried in the past.
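To make the hybrid idea concrete, here's a minimal sketch: queued articles go to a local OpenAI-compatible server first (e.g. llama.cpp serving DeepSeek V3), falling back to the paid API only when the local box is busy or down. The endpoint URL, model names, and the rate_article helper are all illustrative assumptions, not anything News Minimalist actually runs:

    # Hypothetical hybrid router: prefer cheap local inference, fall back to the
    # hosted API. Assumes a llama.cpp-style server exposing an OpenAI-compatible
    # endpoint on localhost:8080; adjust URLs and model names for your setup.
    from openai import OpenAI

    LOCAL = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
    HOSTED = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rate_article(text: str) -> str:
        messages = [
            {"role": "system", "content": "Rate this article's significance from 0 to 10."},
            {"role": "user", "content": text},
        ]
        try:
            # Local CPU inference is slow (~10 t/s) but costs nothing per token,
            # which is fine when articles are processed from a queue.
            resp = LOCAL.chat.completions.create(
                model="deepseek-v3", messages=messages, timeout=120
            )
        except Exception:
            # Fall back to the hosted API so the queue keeps draining.
            resp = HOSTED.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return resp.choices[0].message.content

Since the workload is a queue rather than interactive chat, throughput matters more than latency, which is exactly where slow-but-cheap local hardware pays off.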