dcreater 10 hours ago

Yeah, it's been a steady pivot to profitable features. Wonderful to see them build a reputation through FOSS and a codebase of free labor, and then cash in.

kergonath 10 hours ago

As long as the software that runs locally gets maintained (and ideally improved, though if it is not I’ll simply move to something else), I find it difficult to be angry. I am more annoyed by software companies that offer a nerfed "community edition" whose only purpose is to coerce people into buying the commercial version.

dcreater 10 hours ago

> software companies that offer a nerfed "community edition" whose only purpose is to coerce people into buying the commercial version.

This is the play. It's only a matter of time until they do it; investors will want their returns.

Imustaskforhelp 8 hours ago

Pardon me, but is Ollama a company, though? I didn't know that, actually.

And are they VC-funded? Are they funded by Y Combinator or anyone else?

I just thought it was someone's project to build something similar to Docker but for LLMs, and that was its pitch for a really, really long time, I think.
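
(For reference, the Docker parallel is pretty literal in the CLI. These are real ollama commands; the model name is just whatever you choose to pull:)

    ollama pull llama3    # fetch a model, like `docker pull`
    ollama run llama3     # start an interactive session, like `docker run`
    ollama list           # show models downloaded locally, like `docker images`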

dcreater 8 hours ago

Yup, that's exactly what I thought as well. I also found out late, and to much surprise, that it's a VC-backed startup: https://www.ycombinator.com/companies/ollama

all2 10 hours ago

What sort of monetization model would you like to see? What model would you deem acceptable?

dcreater 9 hours ago

Ollama, the local inference platform, stays completely local, maintained by a non-profit org with dev time contributed by a for-profit company. That company can be VC-backed and can build its cloud inference platform, and can use Ollama as its backend, as a platform to market, etc. But keep it as a separate product (not named Ollama).

This is almost exactly how DuckDB/MotherDuck functions, and I think they're doing an excellent job.

EDIT: grammar and readability

troyvit 9 hours ago

If I were them I'd go whole-hog on local models and:

* Work with somebody like System76 or Framework to create great hardware systems that come with their ecosystem preinstalled.

* Build out a PaaS, perhaps in partnership with an existing provider, that makes it easy for anybody to do what Ollama search does. I'm more than half certain I could convince our cash-strapped organization to ditch Elasticsearch for that.

* Partner with Home Assistant, get into home automation and wipe the floor with Echo and its ilk (yeah basically resurrect Mycroft but add whole-house automation to it).

Each of those is half-baked, but it also took me 7 minutes to come up with them, and they seem more in line with what Ollama tries to represent than a pure cloud play using low-power models.

Cheer2171 7 hours ago

Have the Ollama server support auth / API keys (a request that was closed as out of scope) and monetize around SSO the way everyone else does.
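
For illustration, here's a minimal sketch of the kind of key-checking proxy people end up bolting on themselves since the server won't do it. Flask/requests, the port, and the key value are my own assumptions, not anything Ollama ships:

    # Hypothetical sketch: bearer-token gate in front of a local Ollama server.
    import requests
    from flask import Flask, Response, abort, request

    app = Flask(__name__)
    VALID_KEYS = {"sk-team-alpha"}          # hypothetical keys; use a real secret store
    OLLAMA = "http://localhost:11434"       # Ollama's default listen address

    @app.route("/<path:path>", methods=["GET", "POST"])
    def proxy(path: str) -> Response:
        # Reject any request that doesn't carry a known bearer token.
        token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        if token not in VALID_KEYS:
            abort(401)
        # Forward the request to the local Ollama server unchanged.
        upstream = requests.request(request.method, f"{OLLAMA}/{path}",
                                    data=request.get_data(), timeout=300)
        return Response(upstream.content, status=upstream.status_code,
                        content_type=upstream.headers.get("Content-Type"))

    if __name__ == "__main__":
        app.run(port=8080)  # clients hit this port with an API key instead of 11434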

Cheer2171 7 hours ago

What reputation? People who actually know how to develop software or work with LLMs know that Ollama is a child's tricycle and that they should run the hell away from what is just a buggy shell around other people's inference engines.

Ollama is beloved by people who know how to write 5 lines of Python and Bash to make API calls but couldn't possibly improve the actual app.
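
(To be fair, those 5 lines really are about all it takes. This uses Ollama's actual local REST API on its default port; the model name is just whatever you've pulled:)

    # Roughly the whole integration for many users: one POST to the local API.
    import requests

    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama3", "prompt": "Why is the sky blue?",
                            "stream": False})
    print(r.json()["response"])  # non-streaming requests return a single JSON object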

dcreater 6 hours ago

That's what I thought as well: that it was for people like me who aren't professional SWEs, which is why I'm sad to see them go this way. But what I've found is that people are using it for "on-prem"-style deployments. I have no idea how common this is, but I wouldn't be surprised, given the reality of AI startups plus the abundance of Ollama in training datasets leading to a relatively greater vibe-coding success rate.