dotdi 4 days ago

This is not an apples-to-apples comparison. Python needs to load and interpret the whole requests module when you run the above program. The golang linker does dead code elimination, so it probably doesn't run anything and doesn't actually do the import when you launch it.

maccard 4 days ago | parent | next [-]

Sure it's not an apples to apples comparison - python is interpreted and go is statically compiled. But that doesn't change the fact that in practice running a "simple" python program/script can take longer to start up than go takes to run your entire program.

dotdi 4 days ago | parent [-]

Still, you are comparing a non-empty program to an empty program.

tuhgdetzhh 4 days ago | parent | next [-]

Even if you actually use the network package in Go, so the compiler can't strip it away, you would still see startup latency way below 25 ms, in my experience writing CLI tools.

Whereas with Python, even on the latest version, you're already looking at at least 10x that startup latency in practice.

Note: this excludes the actual time taken by the network call, which can of course add quite a few milliseconds, depending on how far away on planet earth your destination is.
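
If you want to see where that Python startup time actually goes, CPython (3.7+) can print a per-module import timing breakdown to stderr:

    python3 -X importtime -c "import requests"

which makes it easy to separate the bare interpreter startup from the cost of the imports themselves.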

maccard 4 days ago | parent | prev [-]

You're missing the point. The point is that python is slow to start up _because_ it's not the same.

Compare:

    import requests
    print(requests.get("http://localhost:3000").text)
to

    package main

    import (
      "fmt"
      "io"
      "net/http"
    )

    func main() {
        resp, _ := http.Get("http://localhost:3000")
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }
I get:

    python3:  0.08s user 0.02s system 91% cpu 0.113 total
    go 0.00s user 0.01s system 72% cpu 0.015 total
(different hardware as I'm at home).

I wrote another that counts the lines in a file, and tested it against https://www.gutenberg.org/cache/epub/2600/pg2600.txt
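
Something like this on the Python side (just a sketch):

    import sys

    # count lines by iterating the file; no imports beyond the stdlib needed
    with open(sys.argv[1], "rb") as f:
        print(sum(1 for _ in f))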

I get:

    python 0.03s user 0.01s system 83% cpu 0.059 total
    go 0.00s user 0.00s system 80% cpu 0.010 total
These are toy programs, but IME these gaps persist as your programs get bigger.
dekhn 4 days ago | parent | prev [-]

It's not interpreting: Python is loading the already byte-compiled version. But it's also stat()ing a bunch of files (it tries various extensions and paths for each import).

I believe in the past people have looked at putting the standard library in a zip file instead of splatting it out into a bunch of files in a dirtree. In that case, I think python would just do a few stats, find the zipfile, load the whole thing into RAM, and then index into the file.

maccard 4 days ago | parent [-]

> In that case, I think python would just do a few stats, find the zipfile, load the whole thing into RAM, and then index into the file.

"If python was implemented totally different it might be fast" - sure, but it's not!

dekhn 4 days ago | parent [-]

No, this feature already exists.
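
Roughly (sketching from memory; `mypkg` here stands for any pure-Python package you've zipped up):

    import sys

    # deps.zip is a zip archive with pure-Python packages at its root,
    # e.g. built with: python -m zipfile -c deps.zip mypkg/
    sys.path.insert(0, "deps.zip")

    import mypkg  # served by zipimport: one read of the zip directory, no per-file stats

The stdlib-in-a-zip idea is just this same mechanism applied to the default sys.path.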

maccard 4 days ago | parent [-]

Great - how do I use it?

kortex 3 days ago | parent [-]

You should look at the self-executing .pex file format (https://docs.pex-tool.org/whatispex.html). The whole python program exists as a single file. You can also unzip the .pex and inspect the dependency tree.

It's tooling agnostic and there are a couple of ways to generate them, but the easiest is to just use pants build.

Pants also does dependency traversal (that's the main reason we started using it, deploying a microservices monorepo) so it only packages the necessary modules.

I haven't profiled it yet for cold starts, maybe I'll test that real quick.

https://www.pantsbuild.org/dev/docs/python/overview/pex
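
(If you just want to kick the tires without Pants, the standalone pex CLI can build one directly; from memory it's roughly:

    pex requests -o requests.pex
    ./requests.pex    # a Python prompt with requests importable, all from one file

and there's an entry-point option when you want it to run your own code instead of a REPL.)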

Edit: just ran it on a hello world with py3.14 on an M3 MacBook Pro: about 100 ±30 ms for `python -m hello` and 300-400 ms (with wild variance) for executing the pex with `./hello/binary.pex`.

I'm not sure if a pants expert could eke out more speed gains and I'm also not sure if this strategy would win out with a lot of dependencies. I'm guessing the time required to stat every imported file pales in comparison to the actual load time, and with pex, everything needs to be unzipped first.

Pex is honestly best when you want to build and distribute an application as a single file (there are flags to bundle the python interpreter too).

The other option is mypyc, though again that seems to mostly speed up runtime rather than startup: https://github.com/mypyc/mypyc

Now if I use `python -S` (disables `import site` on initialization), that gets down to ~15 ms execution time for hello world. But that gain gets killed as soon as you start trying to import certain modules (there's a very limited set of modules you can use and still keep the speedup). So if your whole script is pure Python with no imports, you could probably get a ~20 ms cold start.
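
For anyone who wants to reproduce it, the comparison is roughly:

    time python3 -c 'print("hello world")'
    time python3 -S -c 'print("hello world")'

and a script started with -S can still opt back into the normal site setup later with `import site; site.main()`.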