simonw 4 days ago

Transformers.js (https://huggingface.co/docs/transformers.js/en/index) is this. Some demos (should work in Chrome and Firefox on Windows, or Firefox Nightly on macOS and Linux):

https://huggingface.co/spaces/webml-community/llama-3.2-webg... loads a 1.24GB Llama 3.2 q4f16 ONNX build

https://huggingface.co/spaces/webml-community/janus-pro-webg... loads a 2.24 GB DeepSeek Janus Pro model which is multi-modal for output - it can respond with generated images in addition to text.

https://huggingface.co/blog/embeddinggemma#transformersjs loads 400MB for an EmbeddingGemma demo (embeddings, not LLMs)

I've collected a few more of these demos here: https://simonwillison.net/tags/transformers-js/
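Under the hood those demos come down to a few lines of Transformers.js. A rough sketch of the pattern (model ID taken from the Llama demo above; treat the CDN URL and option names as approximate, per the current v3 docs):

  import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";

  // Downloads the ONNX weights (cached by the browser) and runs them on WebGPU.
  const generator = await pipeline(
    "text-generation",
    "onnx-community/Llama-3.2-1B-Instruct-q4f16",
    { device: "webgpu" },
  );

  const messages = [{ role: "user", content: "Tell me a joke about browsers." }];
  const output = await generator(messages, { max_new_tokens: 128 });
  // generated_text is the chat history with the new assistant reply appended.
  console.log(output[0].generated_text.at(-1).content);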

You can also get this working with web-llm - https://github.com/mlc-ai/web-llm - here's my write-up of a demo that uses that: https://simonwillison.net/2024/Nov/29/structured-generation-...
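The web-llm version looks roughly like this - the model ID is just an example from their prebuilt MLC list, so treat it as an assumption:

  import { CreateMLCEngine } from "https://esm.run/@mlc-ai/web-llm";

  // Downloads the weights into the browser cache and compiles WebGPU kernels.
  const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (p) => console.log(p.text),
  });

  // OpenAI-style chat completions API, running entirely in the browser.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Summarize WebGPU in one sentence." }],
  });
  console.log(reply.choices[0].message.content);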

mg 4 days ago

This might be a misunderstanding. Did you see the "button that the user can click to select a model from their file system" part of my comment?

I tried some of the demos of transformers.js, but they all seem to load the model from a server, which is super slow. I would like to have a page that lets me use any model I have on my disk.

simonw 4 days ago

Oh sorry, I missed that bit.

I got Codex + GPT-5 to modify that Llama chat example to implement the "load from local directory" pattern. It appears to work.

First you'll need a local checkout of the model (~1.3GB):

  # Git LFS is needed to fetch the large ONNX weight files
  git lfs install
  git clone https://huggingface.co/onnx-community/Llama-3.2-1B-Instruct-q4f16
Then visit this page: https://static.simonwillison.net/static/2025/llama-3.2-webgp... - in Chrome or Firefox Nightly.

Now click "Browse folder" and select the folder you just checked out with Git.

Click the confusing "Upload" confirmation (it doesn't upload anything, just opens those files in the current browser session).

Now click "Load local model" - and you should get a full working chat interface.

Code is here: https://github.com/simonw/transformers.js-examples/commit/cd...

Here's the full Codex session that I used to build this: https://gist.github.com/simonw/3c46c9e609f6ee77367a760b5ca01...

I ran Codex against the https://github.com/huggingface/transformers.js-examples/tree... folder and prompted:

> Modify this application such that it offers the user a file browse button for selecting their own local copy of the model file instead of loading it over the network. Provide a "download model" option too.

Then later:

> Build the production app and then make it available on localhost somehow

And:

> Uncaught (in promise) Error: Invalid configuration detected: both local and remote models are disabled. Fix by setting `env.allowLocalModels` or `env.allowRemoteModels` to `true`.

And:

> Add a bash script which will build the application such that I can upload a folder called llama-3.2-webgpu to http://static.simonwillison.net/static/2025/llama-3.2-webgpu... and http://static.simonwillison.net/static/2025/llama-3.2-webgpu... will serve the app

(Note that this doesn't allow you to use any model on your machine, but it proves that it's possible.)
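If you're curious about the general shape of the change: Transformers.js gets switched to local-only mode and the selected files are served back to it instead of being fetched from huggingface.co. A rough sketch only - the custom-cache hook is my reading of the Transformers.js env settings, and the real commit above may wire it up differently:

  import { env, pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";

  // The two flags from the error message in the transcript above:
  env.allowRemoteModels = false; // never fetch from huggingface.co
  env.allowLocalModels = true;   // resolve model files "locally" instead

  // `files` is the Map of webkitRelativePath -> File built by the
  // folder-picker sketch earlier in this thread.
  // Answer Transformers.js's file requests out of that Map via a custom cache
  // (env.useCustomCache / env.customCache are assumptions on my part).
  env.useBrowserCache = false;
  env.useCustomCache = true;
  env.customCache = {
    match: async (request) => {
      const url = typeof request === "string" ? request : request.url;
      const path = [...files.keys()].find((p) => url.endsWith(p));
      return path ? new Response(files.get(path)) : undefined;
    },
    put: async () => {}, // nothing to persist - the files are already on disk
  };

  const generator = await pipeline(
    "text-generation",
    "Llama-3.2-1B-Instruct-q4f16", // matches the folder name checked out above
    { device: "webgpu" },
  );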

simonw 4 days ago

Wrote this all up on my blog here, including a GIF demo showing how to use it: https://simonwillison.net/2025/Sep/8/webgpu-local-folder/

mg 4 days ago

Awesome!

Bookmarked. I will surely try it out once Firefox or Chromium on Linux support WebGPU in their default config.