toofy, 4 hours ago:
This is one of the reasons I'm hearing that more and more people are using open/locally hosted models: so we don't have to waste time redoing everything when a company inevitably pulls the rug out from under us and changes or removes something integral to our flow. We've seen that countless times over the years, and it seems to be getting more common. Products disappearing entirely or changing significantly will only become more frequent in the LLM arena as companies shut down, bubbles deflate, brand priorities drastically shift, etc.

I think we're at, or at least close to, the point where it's worth putting real thought into which pieces of your flow could be done entirely with an open/local model, and being honest with ourselves about which pieces truly need SOTA or closed models that may disappear or change out from under you. In the long run, a little thought now will save a lot of headache later.
thraxil, 2 hours ago:
Yeah. Back when Gemma 2 came out we benchmarked it and looked seriously at open models. For our use case, though, while the tasks are pretty simple, we need a fairly large context window, and Gemini had a big lead there over the open models for quite a while. I'll probably evaluate the current batch of open models in the near future, though.
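(Tangent, but the context-window part is easy to pre-screen before doing any real benchmarking. A rough sketch in Python — the window sizes here are illustrative figures, and the 4-chars-per-token rule is a crude approximation; check each model card and use the model's actual tokenizer for anything serious:)

```python
# Pre-screen candidate models by context window before benchmarking quality.
# CONTEXT_WINDOWS holds illustrative figures (tokens) -- verify against each
# model card; rough_tokens uses a ~4-chars-per-token rule of thumb, which is
# only a ballpark estimate, not a real tokenizer.
CONTEXT_WINDOWS = {
    "gemma-2-9b": 8_192,
    "gemini-1.5-pro": 1_000_000,
}

def rough_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits(model: str, text: str) -> bool:
    """True if the text plausibly fits in the model's context window."""
    return rough_tokens(text) <= CONTEXT_WINDOWS[model]

big_input = "x" * 100_000  # ~25k estimated tokens
print(fits("gemma-2-9b", big_input))      # too big for an 8k window
print(fits("gemini-1.5-pro", big_input))  # fits a 1M window
```

It won't tell you anything about output quality, but it cheaply rules out models that can't even hold your inputs.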
jimbokun, 4 hours ago:
What's interesting about this is that with previous technologies you could define a standard and demonstrate compliance with its interfaces and behavior. But with LLMs, how do you know that switching from one to another won't change some behavior your system was implicitly relying on?
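The closest thing I've seen to a standard here is a "golden set" regression suite: capture the prompts your system depends on, encode the behavior you rely on as checks (not exact-match strings, which are too brittle across models), and run them against any candidate replacement. A minimal sketch, with hypothetical names (`call_model` here is a canned stub standing in for a real API or local-server call):

```python
# Golden-set regression sketch: encode the behaviors you implicitly rely on
# as predicate checks, then run them against a candidate model before
# switching. All names here are illustrative, not a real library's API.
from typing import Callable

def call_model(model: str, prompt: str) -> str:
    """Stub for a real model call (hosted API, local llama.cpp, etc.)."""
    canned = {
        ("old-model", "Extract the year: 'Founded in 1998.'"): "1998",
        ("new-model", "Extract the year: 'Founded in 1998.'"): "The year is 1998.",
    }
    return canned[(model, prompt)]

# Each check asserts the property you depend on, not the exact wording,
# since different models phrase answers differently.
CHECKS: list[tuple[str, Callable[[str], bool]]] = [
    ("Extract the year: 'Founded in 1998.'", lambda out: "1998" in out),
]

def regression_report(candidate: str) -> dict[str, bool]:
    """Map each golden prompt to whether the candidate still passes."""
    return {prompt: check(call_model(candidate, prompt))
            for prompt, check in CHECKS}

print(regression_report("new-model"))
```

It's weaker than interface compliance — you only catch the behaviors you thought to write checks for — but it at least turns "will this break us?" into something you can run before migrating.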