| ▲ | smlacy 7 days ago |
| Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad. Thankfully, this may just leave more room for other open source local inference engines. |
|
| ▲ | mchiang 7 days ago | parent | next [-] |
| we have always been building in the open, and so is Ollama. All the core pieces of Ollama are open. There are areas where we want to be opinionated on the design to build the world we want to see. There are areas where we will make money, and I wholly believe that if we follow our conscience we can create something amazing for the world while keeping it fueled for the long term. Part of the idea behind Turbo mode (completely optional) is to serve the users who want a faster GPU, and to add capabilities like web search. We loved the experience so much that we decided to give web search to non-paid users too. (Again, it's fully optional.) Now, to prevent abuse and make sure our costs don't get out of hand, we require login. Can't we all just work together and create a better world? Or does it have to be so zero sum? |
| |
| ▲ | xiphias2 7 days ago | parent | next [-] | | I wanted to try web search to improve my privacy, but it required a login. For Turbo mode I understand the need to pay, but the main point of running a local model with web search is browsing from my own computer without going through any LLM provider. I also want to get rid of the latency to US servers from Europe. If Ollama can't do it, maybe a fork can. | | |
| ▲ | mchiang 7 days ago | parent [-] | | login does not mean payment. It is free to use. It costs us to perform the web search, so we want to make sure it is not subject to abuse. |
| |
| ▲ | dcreater 7 days ago | parent | prev [-] | | I'm sorry but your words don't match your actions. |
|
|
| ▲ | shepardrtc 7 days ago | parent | prev | next [-] |
| I think this offering is a perfectly reasonable option for them to make money. We all have bills to pay, and this isn't interfering with their open source project, so I don't see anything wrong with it. |
| |
| ▲ | Aeolun 7 days ago | parent [-] | | > this isn't interfering with their open source project Wait until it makes significant amounts of money. Suddenly the priorities will be different. I don’t begrudge them wanting to make some money off it though. | | |
|
|
| ▲ | smeeth 7 days ago | parent | prev | next [-] |
| Their FOSS local inference service didn't go anywhere. This isn't Anaconda; they didn't do a bait and switch to screw their core users. It isn't sinful for devs to try and earn a living. |
| |
| ▲ | kermatt 7 days ago | parent | next [-] | | Another perspective: If you earn a living using something someone else built, and expect them not to earn a living, your paycheck has a limited lifetime. “Someone” in this context could be a person, a team, or a corporate entity. Free may be temporary. | |
| ▲ | blitzar 7 days ago | parent | prev | next [-] | | Yet. Their FOSS local inference service hasn't gone anywhere ... yet. | |
| ▲ | dcreater 7 days ago | parent | prev [-] | | You can build this and then go build something else as well. You don't need to morph the thing you already built. That's underhanded. |
|
|
| ▲ | TuringNYC 7 days ago | parent | prev | next [-] |
| >> Watching ollama pivot from a somewhat scrappy yet amazingly important and well designed open source project to a regular "for-profit company" is going to be sad. If I could have consistent and seamless local-to-cloud dev, that would be a nice win. Everyone has to write things 3x over these days depending on their garden of choice, even with LangChain/LlamaIndex. |
|
| ▲ | mark_l_watson 6 days ago | parent | prev | next [-] |
| I don't blame them. As soon as they offer a few more models with Turbo mode, I plan on subscribing to their Turbo plan for a couple of months - a buying-them-a-coffee, keeping-the-lights-on kind of thing. The Ollama app with the signed-in-only web search tool is really pretty good. |
|
| ▲ | satvikpendem 7 days ago | parent | prev | next [-] |
| > important and well designed open source project It was always just a wrapper around the real well designed OSS, llama.cpp. Ollama even messes up model names by giving distilled models the name of the original, as it did with DeepSeek. Ollama's engineers created Docker Desktop, and you can see how that turned out, so I don't have much faith in them to stay open given what a rugpull Docker Desktop became. |
| |
| ▲ | Philpax 7 days ago | parent [-] | | I wouldn't go as far as to say that llama.cpp is "well designed" (there be demons there), but I otherwise agree with the sentiment. |
|
|
| ▲ | user- 7 days ago | parent | prev | next [-] |
| I remember them pivoting from being infra.hq |
|
| ▲ | dangoodmanUT 7 days ago | parent | prev | next [-] |
| It was always a company |
|
| ▲ | mythz 7 days ago | parent | prev | next [-] |
| Same, I was just after a small, lightweight solution where I can download, manage, and run local models. Really not a fan of boarding the enshittification train with them. I always had a bad feeling when they didn't give ggerganov/llama.cpp their deserved credit for making Ollama possible in the first place. A true OSS project would have; it makes more sense through the lens of a VC-funded project looking to grab as much market share as possible without raising awareness of the OSS alternatives it depends on. Together with their new closed-source UI [1], it's time for me to switch back to llama.cpp's cli/server. [1] https://www.reddit.com/r/LocalLLaMA/comments/1meeyee/ollamas... |
|
| ▲ | colesantiago 7 days ago | parent | prev | next [-] |
| Ollama is YC- and VC-backed, so this was inevitable and not surprising. All companies that raise outside investment follow this route. No exceptions. And yes, this is how Ollama will fall: enshittification, for lack of a better word. |
|
| ▲ | otabdeveloper4 7 days ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | dang 7 days ago | parent | next [-] | | "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something." https://news.ycombinator.com/newsguidelines.html | | | |
| ▲ | api 7 days ago | parent | prev | next [-] | | > Repackaging existing software while literally adding no useful functionality was always their gig. Developers continue to be blind to usability and UI/UX. Ollama lets you just install it, install models, and go. The only other thing really like that is LM Studio. It's not surprising that the people behind it are Docker people. Yes, you can do everything Docker does with the Linux kernel and shell commands, but do you want to? Making software usable is often many orders of magnitude more work than making software work. | | |
| ▲ | otabdeveloper4 7 days ago | parent [-] | | > Ollama lets you just install it, just install models, and go. So does the original llama.cpp. And you won't have to deal with mislabeled models and insane defaults out of the box. | | |
| ▲ | lxgr 6 days ago | parent [-] | | Can it easily run as a server process in the background? To me, not having to load the LLM into memory for every single interaction is a big win of Ollama. | | |
| ▲ | otabdeveloper4 6 days ago | parent [-] | | Yes, of course it can. | | |
| ▲ | lxgr 6 days ago | parent [-] | | I wouldn't consider that a given at all, but apparently there's indeed `llama-server`, which looks promising! Then the only thing missing seems to be a canonical way for clients to instantiate it, ideally in some OS-native way (systemd, launchd, etc.), and a canonical port they can connect to. |
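| For now, a client could bootstrap the server process itself. A rough sketch of that idea in Python (not the canonical mechanism I'm asking for, just an assumption of how it could work today; the model path is a placeholder and 8080 is llama-server's default port): |
|     import socket, subprocess, time |
|  |
|     HOST, PORT = "127.0.0.1", 8080        # llama-server's default port |
|     MODEL = "/path/to/model.gguf"         # placeholder model path |
|  |
|     def server_running(): |
|         # Probe the port to see if a llama-server instance is already up. |
|         try: |
|             with socket.create_connection((HOST, PORT), timeout=0.5): |
|                 return True |
|         except OSError: |
|             return False |
|  |
|     if not server_running(): |
|         # Start llama-server in the background; the model stays loaded |
|         # across requests, so the load cost is only paid once. |
|         subprocess.Popen(["llama-server", "-m", MODEL, "--port", str(PORT)]) |
|         while not server_running(): |
|             time.sleep(0.5) |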
|
|
|
| |
| ▲ | llmtosser 7 days ago | parent | prev | next [-] | | This is not true. No other inference engine does all of: - Model switching - Unload after idle - Dynamic layer offload to CPU to avoid OOM | | |
| ▲ | ekianjo 7 days ago | parent [-] | | This can be added to llama.cpp with llama-swap currently, so even without Ollama you are not far off. |
| |
| ▲ | mchiang 7 days ago | parent | prev | next [-] | | sorry that you feel the way you feel. :( I'm not sure which package we use that is triggering this. My guess is llama.cpp, based on what I see on social? Ollama has long since shifted to using our own engine. We do use llama.cpp for legacy and backwards compatibility. I want to be clear that it's not a knock on the llama.cpp project either. There are certain features we want to build into Ollama, and we want to be opinionated about the experience we build. Have you supported our past gigs before? Why not be happier and more optimistic about seeing everyone build their dreams (success or not)? If you go build a project of your dreams, I'd be supportive of it too. | | |
| ▲ | Maxious 7 days ago | parent [-] | | > Have you supported our past gigs before? Docker Desktop? One of the most memorable private equity rugpulls in developer tooling? Fool me once shame on you, fool me twice shame on me |
| |
| ▲ | dangoodmanUT 7 days ago | parent | prev [-] | | Yes, everyone should just write C++ to call local LLMs, obviously. | | |
| ▲ | otabdeveloper4 7 days ago | parent [-] | | Yes, but llama.cpp already comes with a ready-made OpenAI-compatible inference server. | | |
| ▲ | reverius42 6 days ago | parent [-] | | I think people are getting hung up on the "llama.cpp" name and thinking they need to write C++ code to use it. llama.cpp isn't (just) a C++ library/codebase -- it's a CLI application, server application (llama-server), etc. |
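| Once llama-server is up, any OpenAI-style client can talk to it without touching C++. A minimal sketch, assuming the `openai` Python package and llama-server on its default port 8080 (the model name is essentially a placeholder here, since the server serves whatever model it was launched with): |
|     from openai import OpenAI |
|  |
|     # Point the standard OpenAI client at the local llama-server endpoint. |
|     client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="unused") |
|  |
|     resp = client.chat.completions.create( |
|         model="local",  # placeholder; llama-server uses the model it loaded |
|         messages=[{"role": "user", "content": "Hello from a local model"}], |
|     ) |
|     print(resp.choices[0].message.content) |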
|
|
|