echelon 2 hours ago
I don't even need "open weights" to run on hardware I own. I'm fine renting an H100 (or whatever), as long as I theoretically have access to and own everything that's running. I do not want my career to become dependent upon Anthropic.

Honestly, the best thing for "open" might be for us to build open pipes, services, and models where we can rent cloud compute. Large models will outpace small models: LLMs, video models, "world" models, etc.

I'd even be fine time-sharing a running instance of a large model in a large cloud, as long as all the constituent pieces are open so that I could (in theory) distill it, run it myself, spin up my own copy, etc.

I don't deny that big models are superior. But I worry about the power the large hyperscalers are accumulating while we focus on small "open" models that can't match the big ones. We should focus on competing with large models, not on artisanal homebrew stuff that is irrelevant.
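For a concrete sense of what "spin up my own copy" could look like, here is a minimal sketch, assuming a rented GPU, the vLLM library, and an arbitrary open-weights checkpoint (all illustrative choices, not anything the comment names):

  # Minimal sketch: run an open-weights model yourself on a rented GPU.
  # Assumes vLLM; the checkpoint name is illustrative, not prescribed.
  from vllm import LLM, SamplingParams

  llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # any open checkpoint
  params = SamplingParams(temperature=0.7, max_tokens=128)

  # Offline batch inference: no hosted API in the loop.
  out = llm.generate(["Summarize why open weights matter."], params)
  print(out[0].outputs[0].text)

The point is portability: because the weights and the serving stack are open, the same setup works on any box you rent or own.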
Aurornis an hour ago
> I do not want my career to become dependent upon Anthropic

As someone who switches between Claude and ChatGPT depending on the month, and who has dabbled with other providers and some local LLMs, I think this fear is unfounded. It's really easy to switch between models. Each one has quirks you notice over time, but the techniques you learn with one provider won't lock you in anywhere.
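In practice, switching is often little more than a config change: many hosted and self-hosted model servers speak an OpenAI-compatible API. A minimal sketch, assuming the official openai Python client; the local endpoint and model name are hypothetical:

  # Minimal sketch: provider switching as a config change. Assumes the
  # openai Python client; base URLs and model names are illustrative.
  from openai import OpenAI

  providers = {
      "hosted": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
      "local":  {"base_url": "http://localhost:8000/v1", "model": "my-open-model"},  # hypothetical
  }

  cfg = providers["local"]  # flip this key to switch providers
  client = OpenAI(base_url=cfg["base_url"], api_key="sk-...")
  resp = client.chat.completions.create(
      model=cfg["model"],
      messages=[{"role": "user", "content": "Hello"}],
  )
  print(resp.choices[0].message.content)

The prompting habits transfer too; the base URL is the only part that's provider-specific.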