KronisLV | 4 days ago
> What's our AI strategy?

In most cases, probably giving OpenAI a bunch of money. For whatever reason, the full stack hasn't been commoditized yet to a degree where you could self-host it easily. For example, I can put the paid or free version of GitLab on my servers and get repo management, issue tracking, CI/CD, a wiki, and a bunch of other stuff. It covers most use cases and works out of the box, even if not always in the ways I want.

As for AI... there's OpenAI and GitHub Copilot, and even JetBrains has their AI solutions. You pay for access to the back-end component, and there are IDE plugins that integrate with it, even custom IDEs and editors like Cursor. But what if you want an editor/plugin that talks to models running on your own servers? Sure, you can grab models off HuggingFace and run them locally on a machine that has the hardware to take advantage of them... but then what? What about integrating with merge requests in the aforementioned GitLab instance?

Obviously it's all possible, but somehow I haven't seen many solutions that offer something similar to GitLab but for AI. Even GitLab's own solution talks to their servers: https://about.gitlab.com/solutions/code-suggestions/

> Code Suggestions is available to self-managed GitLab instances via a secure connection to GitLab.com.

I'm guessing CodeGPT is probably a piece of that puzzle, or maybe the Tabnine enterprise setup.
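To be fair, the "talks to models running on your own servers" half is the one piece that is fairly commoditized: most self-hosted runtimes (vLLM, llama.cpp's llama-server, Ollama) expose an OpenAI-compatible API, so the client side can be a few lines. A rough sketch, where the server URL and model name are placeholders for whatever you actually deploy:

    # Minimal sketch: point the official OpenAI Python client at a
    # self-hosted, OpenAI-compatible endpoint. Assumes a runtime like
    # vLLM or llama-server is already running at the URL below.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://models.internal:8000/v1",  # hypothetical in-house server
        api_key="unused",                           # many local servers ignore the key
    )

    response = client.chat.completions.create(
        model="qwen2.5-coder-7b-instruct",          # whatever model you serve locally
        messages=[{"role": "user", "content": "Review this diff: ..."}],
    )
    print(response.choices[0].message.content)

That covers raw model access, though; the glue around it (MR review bots, IDE plugins pointed at your own endpoint) is still the part you'd have to assemble yourself.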
ffsm8 | 4 days ago | parent
The core of the issue is that you need beefy GPUs to really run these models at production workloads. So I think what you're currently imagining won't happen until GPU prices come down massively.