simonw a year ago

I've been trying to find a good option for this for ages. The Deno/Pyodide one is genuinely one of the top contenders: https://til.simonwillison.net/deno/pyodide-sandbox

I'm hoping some day to find a recipe I really like for running Python code in a WASM container directly inside Python. Here's the closest I've got, using wasmtime: https://til.simonwillison.net/webassembly/python-in-a-wasm-s...

5rest 10 months ago | parent | next [-]

The demo looks really appealing. I have a real-world use case in mind: analyzing an Excel file and asking questions about its contents. The current approach (https://github.com/pydantic/pydantic-ai/blob/main/mcp-run-py...) seems limited to running standalone scripts—it doesn't support reading and processing files. Is there an extension or workaround to enable file input and processing?
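One workaround (not part of mcp-run-python itself) is to make the script self-contained by embedding the file's bytes into it, e.g. base64-encoded, so the sandbox never needs filesystem access. A rough sketch, using `sys.executable` as a stand-in for whatever sandboxed interpreter you run:

```python
import base64
import subprocess
import sys
import tempfile
from pathlib import Path

def run_script_with_file(script_body: str, file_path: str) -> str:
    """Embed a file's bytes into a standalone script, then run it.

    Plain `python` stands in for the sandboxed runtime here; the point
    is that the generated script has no filesystem dependency.
    """
    payload = base64.b64encode(Path(file_path).read_bytes()).decode()
    script = (
        "import base64\n"
        f"data = base64.b64decode('{payload}')\n"
        + script_body
    )
    return subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True, text=True, check=True,
    ).stdout

# Example: "analyze" a file by reporting its size from inside the script.
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as f:
    f.write(b"fake spreadsheet bytes")
    tmp = f.name

out = run_script_with_file("print(len(data))", tmp)
```

For a real Excel file you'd parse `data` with a wheel available inside the sandbox; base64 inflates the payload by about a third, so very large files would want a different channel.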

abshkbh a year ago | parent | prev | next [-]

https://github.com/abshkbh/arrakis

macOS support is coming very soon :) It does work on Linux.

Tsarp a year ago | parent [-]

I tried this path and found that Firecracker and similar tools have horrible support on macOS.

abshkbh a year ago | parent [-]

Crosvm (our original Google project) and its child projects, Firecracker and Cloud Hypervisor, are all built on top of "/dev/kvm", i.e. the Linux virtualization stack.

Apple's equivalent is the Apple Virtualization Framework, which exposes KVM-like functionality at a higher level.

singularity2001 a year ago | parent | prev | next [-]

One wasmtime dependency and a self-contained Python file with 100 LOC seems reasonable!

Much better than calling Deno, at least if you have no pip dependencies...

Just had to update to the new API:

    # old API: store.add_fuel(fuel)
    store.set_fuel(fuel)
    fuel_consumed = fuel - store.get_fuel()

and it works!!

Time to hello world:

    hello_wasm_python311.py  0.20s user 0.03s system 97% cpu 0.234 total
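The fuel accounting above can be sketched with a toy stand-in (no wasmtime install needed): `set_fuel`/`get_fuel` follow the current wasmtime-py names mentioned above, while `FakeStore` and `burn` are purely illustrative.

```python
class FakeStore:
    """Toy stand-in for wasmtime.Store fuel accounting.

    In wasmtime-py, the old `store.add_fuel(n)` was replaced by
    `store.set_fuel(n)`; the remaining balance is read back with
    `store.get_fuel()`.
    """

    def __init__(self):
        self._fuel = 0

    def set_fuel(self, n: int) -> None:
        self._fuel = n

    def get_fuel(self) -> int:
        return self._fuel

    def burn(self, n: int) -> None:
        # Simulates the guest executing instructions (each costs fuel).
        self._fuel -= n

fuel = 10_000
store = FakeStore()
store.set_fuel(fuel)      # new API; old code called store.add_fuel(fuel)
store.burn(1_234)         # pretend the guest ran some code
fuel_consumed = fuel - store.get_fuel()
```

Running out of fuel is what lets the sandbox kill runaway guest code deterministically.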

antonvs a year ago | parent | next [-]

I was interested in how this compares in a kind of absolute sense. For comparison, an optimized C hello world program gave these results using `perf` on my Dell XPS 13 laptop:

       0.000636230 seconds time elapsed
       0.000759000 seconds user
       0.000000000 seconds sys
That's 36,800% faster. Hand-written assembly was very slightly slower. Using the standard library for output instead of a syscall brought it down to 20,900% faster.

(Yes I used percentages to underscore how big the difference is. It's 368x and 209x respectively. That's huge.)

Begrudgingly, here are the standard Python numbers:

    real    0m0.019s
    user    0m0.015s
    sys     0m0.004s
About 1230% faster than the sandbox, i.e. 12.3x. About an order of magnitude, which is typical for these kinds of exercises.
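Those ratios fall straight out of the quoted timings (0.234s sandbox total vs. the C and CPython runs):

```python
# Sanity-check the speedups from the timings quoted above.
wasm_sandbox = 0.234        # total time for hello world in the sandbox
c_hello = 0.000636230       # elapsed time for the optimized C version
plain_python = 0.019        # `real` time for standard CPython

c_speedup = wasm_sandbox / c_hello        # ~368x, i.e. "36,800% faster"
py_speedup = wasm_sandbox / plain_python  # ~12.3x, i.e. "1230% faster"
```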
singularity2001 a year ago | parent [-]

Haha, 99% of that is startup time for the sandbox, but yeah, Python via WASM is probably still 10-400 times slower than C.

lopuhin a year ago | parent | prev | next [-]

It's pretty difficult to package native Python dependencies (e.g. lxml) for wasmtime or other WASI runtimes.

Already__Taken a year ago | parent [-]

Yeah, if you can't shove numpy in there it's not really useful.

fzzzy a year ago | parent | prev [-]

Great, thanks for your post! I got it working too. This is going to be incredibly handy.

3abiton a year ago | parent | prev | next [-]

> I'm hoping some day to find a recipe I really like for running Python code in a WASM container directly inside Python.

But what would be the usecase for this?

simonw a year ago | parent [-]

Running Python code from untrusted sources, including code written by LLMs.
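For contrast with the WASM approach, here's a much weaker process-level sketch using POSIX rlimits (Unix-only). This is explicitly *not* a security boundary — the child can still read files and open sockets — it just shows the shape of the "run untrusted code with limits" interface:

```python
import resource
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run Python code in a child process with CPU/memory rlimits.

    NOTE: rlimits are NOT a sandbox. A WASM runtime (as discussed
    above) gives real isolation; this is only an interface sketch.
    """
    def limit():
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))           # 2s CPU
        resource.setrlimit(resource.RLIMIT_AS, (512 << 20,) * 2)  # 512 MB

    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True, text=True, timeout=timeout,
        preexec_fn=limit,
    )
    return proc.stdout

out = run_untrusted("print(sum(range(10)))")
```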

3abiton a year ago | parent [-]

I see. The way I would approach it is by running a client in a specific Python env on an incus instance, with the LLM hosted either on the host or on another separate incus instance. Lately I've been addicted to sandboxing apps in incus, specifically for isolated VPN tunnels and automating certain web access.

Tsarp a year ago | parent | prev [-]

At least on macOS, can't sandbox-exec be used, similar to what Codex is doing?

simonw a year ago | parent [-]

Yeah, I got excited about that option a while back but was put off by the fact that Apple's (minimal) documentation says sandbox-exec is deprecated.

fzzzy a year ago | parent | next [-]

OpenAI's Codex CLI uses it on macOS. It's in TypeScript, but maybe I'll take a look at what they do and port it to Python.

[edit] looks really simple, except I'll have to look into how their raw-exec takes care of writeableRoots: https://github.com/openai/codex/blob/0d6a98f9afa8697e57b9bae...

[edit2] lol, raw-exec doesn't do anything at all with writeableRoots; it's handled in the fullPolicy (from scopedWritePolicy)
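The "deny writes except under some writable roots" pattern can be sketched as a seatbelt (SBPL) profile builder. This is a hypothetical helper illustrating the idea, not Codex's actual policy, and it only constructs the profile and command line (sandbox-exec itself is macOS-only and deprecated):

```python
def build_policy(writable_roots: list[str]) -> str:
    """Build a seatbelt profile: allow everything, deny all file
    writes, then re-allow writes under each writable root. SBPL is
    last-match-wins, so the specific allows override the broad deny.
    """
    rules = "\n".join(
        f'(allow file-write* (subpath "{root}"))' for root in writable_roots
    )
    return (
        "(version 1)\n"
        "(allow default)\n"     # start permissive ...
        "(deny file-write*)\n"  # ... deny all writes ...
        f"{rules}\n"            # ... except under the writable roots
    )

def build_command(policy: str, argv: list[str]) -> list[str]:
    # sandbox-exec -p takes the profile inline as a string.
    return ["sandbox-exec", "-p", policy, *argv]

policy = build_policy(["/tmp/workdir"])
cmd = build_command(policy, ["python3", "script.py"])
```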

fzzzy a year ago | parent | prev [-]

I cleaned up the output of asking Gemini 2.5 Pro to rewrite it in python, and it seems to work well:

https://gist.github.com/fzzzy/319d6cbbdfff9c340d0e9c362247ae...