samsolomon 4 days ago

Is Open WebUI something like what you're looking for? The design has some awkwardness, but overall it has incorporated a ton of great features.

https://openwebui.com/

mg 4 days ago | parent [-]

No, I'm looking for an HTML page with a "Select LLM" button. After pressing that button and selecting a local LLM from disk, it would show an input field where you can type your question, and then it would use the given LLM to generate the answer.

I'm not sure what OpenWebUI is, but if it were what I mean, they would surely have the page live rather than ask users to install Docker etc.

tmdetect 4 days ago | parent | next [-]

I think what you want is this: https://github.com/mlc-ai/web-llm
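
Going by the repo's README, wiring it up should be only a few lines of browser-side code (the model id below is just an example from their docs, and the exact API may have shifted):

  // Sketch based on the web-llm README -- treat the model id and API details as approximate.
  import { CreateMLCEngine } from "@mlc-ai/web-llm";

  // Downloads and caches the model weights in the browser, compiles WebGPU kernels.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

  // OpenAI-style chat call, running entirely inside the browser tab.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Why is the sky blue?" }],
  });
  console.log(reply.choices[0].message.content);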

bravetraveler 4 days ago | parent | prev | next [-]

It's both what you want and not: the chat/question interface is as you describe, but the lack of installation is not. The LLM work is offloaded to other software, not done in the browser.

I would like to skip maintaining all this crap, though: I like your approach

Jemaclus 4 days ago | parent | prev [-]

You should install it, because it's exactly what you just described.

Edit: From a UI perspective, it's exactly what you described. There's a dropdown where you select the LLM, and there's a ChatGPT-style chatbox. You just docker-up and go to town.

Maybe I don't understand the rest of the request, but I can't imagine software where a webpage just magically has LLMs available in the browser with no installation?

craftkiller 4 days ago | parent | next [-]

It doesn't seem exactly like what they're describing. The end-user interface matches, but it sounds like they want the actual LLM to run in the browser (perhaps via WebGPU compute shaders). Open WebUI seems to rely on an external executor like ollama/llama.cpp, which can of course still be self-hosted, but the model is not executing INSIDE the browser.
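
To make the distinction concrete: with an external executor the page is just an HTTP client, roughly like this (assuming Ollama's OpenAI-compatible endpoint on its default port; llama.cpp's server exposes a similar one):

  // Sketch of the "external executor" pattern: the browser only sends a request;
  // inference happens in a separate local process (Ollama or a llama.cpp server).
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // whatever model the local server has pulled
      messages: [{ role: "user", content: "Why is the sky blue?" }],
    }),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);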

Jemaclus 4 days ago | parent [-]

Does that even exist? It's basically what they described but with some additional installation? Once you install it, you can select the LLM on disk and run it? That's what they asked for.

Maybe I'm misunderstanding something.

craftkiller 4 days ago | parent [-]

Apparently it does, though I'm also learning about it for the first time in this thread. Personally, I just run llama.cpp locally in docker-compose with AnythingLLM for the UI, but I can see the appeal of having it all just run in the browser.

  https://github.com/mlc-ai/web-llm
  https://github.com/ngxson/wllama
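
If you do try the in-browser route, the main gotcha is WebGPU support: web-llm needs it, while wllama is WASM-based as I understand it. A quick capability check looks something like:

  // Rough feature check before trying to load a model in the browser
  // (navigator.gpu needs WebGPU type definitions in TypeScript).
  if (navigator.gpu) {
    const adapter = await navigator.gpu.requestAdapter();
    console.log(adapter ? "WebGPU available" : "No suitable GPU adapter");
  } else {
    console.log("No WebGPU -- fall back to a WASM runner like wllama");
  }
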
Jemaclus 4 days ago | parent [-]

Oh, interesting. Well, TIL.

andsoitis 4 days ago | parent | prev [-]

> You should install it, because it's exactly what you just described.

Not OP, but it really isn't what they're looking for. Needing to install stuff vs. simply going to a web page are two very different things.