Light Sleep: Waking VMs in 200ms with eBPF and snapshots (koyeb.com)
65 points by Sadzeih a day ago | 18 comments
mjb 21 hours ago | parent | next [-]

Always nice to see folks talking about VM snapshots - they're an extremely powerful tool for building systems of all kinds. At AWS, we use snapshots in Lambda SnapStart (along with cloning, with snapshots distributed across multiple workers), in Aurora DSQL (where we clone and restore a snapshot of Postgres on every database connection), in AgentCore Runtime, and in a number of other places.

> But Firecracker comes with a few limitations, specifically around PCI passthrough and GPU virtualization, which prevented Firecracker from working with GPU Instances

Worth mentioning that Firecracker supports PCI passthrough as of 1.13.0. But that doesn't diminish the value of Cloud Hypervisor - it's really good to have multiple options in this space with different design goals (including QEMU, which has the most features).

> We use the sk_buff.mark field — a kernel-level metadata flag on packets - to tag health check traffic.

Clever!
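
For anyone curious, here's a rough sketch of what that check could look like on the TC hook. This is not Koyeb's actual program - the mark value, map layout, and counting logic are all made up for illustration:

    // TC eBPF classifier: let health-check packets through without counting
    // them, count everything else as "real" traffic.
    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    #define HEALTHCHECK_MARK 0x4843 /* arbitrary value for illustration */

    struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } real_pkts SEC(".maps");

    SEC("tc")
    int count_real_traffic(struct __sk_buff *skb)
    {
        /* Health checks are tagged by setting skb->mark on the host side;
         * skip them so they don't keep the instance awake. */
        if (skb->mark == HEALTHCHECK_MARK)
            return TC_ACT_OK;

        __u32 key = 0;
        __u64 *count = bpf_map_lookup_elem(&real_pkts, &key);
        if (count)
            *count += 1; /* per-CPU map, so a plain increment is fine */

        return TC_ACT_OK;
    }

    char LICENSE[] SEC("license") = "GPL";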

> Light Sleep, which reduces cold starts to around 200ms for CPU workloads.

If you're restoring on the same box, I suspect 200ms is significantly above the best you can do (unless your images are huge). Do you know what you're spending those 200ms doing? Is it just creating the VMM process and setting up kvm? Device and networking setup? I assume you're mmapping the snapshot of memory and loading it on demand, but wouldn't expect anywhere near 200ms of page faults to handle a simple request.
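
For reference, the lazy-loading approach I have in mind is roughly this (a toy sketch, not any particular VMM's restore path; the snapshot path is made up):

    /* Map the saved guest memory file so pages are only read from disk
     * when the guest first touches them after restore. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        const char *mem_path = "/snapshots/guest-mem.bin"; /* hypothetical */
        int fd = open(mem_path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* MAP_PRIVATE without MAP_POPULATE: nothing is read from disk yet;
         * each page faults in on first access, so restore cost scales with
         * the working set, not the full guest memory size. */
        void *guest_mem = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE, fd, 0);
        if (guest_mem == MAP_FAILED) { perror("mmap"); return 1; }

        /* A real VMM would hand this region to KVM via
         * KVM_SET_USER_MEMORY_REGION; here we just touch one page. */
        volatile char first_byte = ((char *)guest_mem)[0];
        (void)first_byte;

        munmap(guest_mem, st.st_size);
        close(fd);
        return 0;
    }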

tuananh 9 hours ago | parent [-]

> At AWS, we use snapshots in Lambda Snapstart

I'm curious why it's taking so long to add support for different runtimes? I imagine it would be the same for all of them?

> where we clone and restore a snapshot of Postgres on every database connection

This is interesting. Were there any challenges while working on this?

deivid 3 hours ago | parent [-]

From my experience with Firecracker, you need to send a signal to the VMM that can be used to indicate the process is "ready" (and the snapshot can be taken).

I assume that every runtime must be forked to add such a signal right before calling into user code.
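
For context, once that readiness signal arrives, the host side looks roughly like this against Firecracker's documented snapshot API (a sketch in C with libcurl; the socket and file paths are made up):

    #include <curl/curl.h>
    #include <stdio.h>

    static int fc_request(const char *method, const char *path, const char *body)
    {
        CURL *curl = curl_easy_init();
        if (!curl) return -1;

        char url[256];
        snprintf(url, sizeof(url), "http://localhost%s", path);

        struct curl_slist *hdrs =
            curl_slist_append(NULL, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/run/firecracker.sock");
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

        CURLcode rc = curl_easy_perform(curl);
        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : -1;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        /* 1. Pause the vCPUs so guest memory stops changing. */
        fc_request("PATCH", "/vm", "{\"state\": \"Paused\"}");

        /* 2. Write VM state + guest memory to disk. */
        fc_request("PUT", "/snapshot/create",
                   "{\"snapshot_type\": \"Full\","
                   " \"snapshot_path\": \"/snapshots/vmstate\","
                   " \"mem_file_path\": \"/snapshots/mem\"}");

        curl_global_cleanup();
        return 0;
    }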

nevon 4 hours ago | parent | prev | next [-]

I feel like I'm missing something here when this is being used with Nomad. Caveat being that the only comparable technologies I've worked with are k8s and ECS. In the article they mention that they are using a containerd shim to launch microVMs, so from the perspective of the scheduler, whether the VM is actually "sleeping" or not, it looks like it's running, since it continues to respond to health checks. So what exactly is the point of suspending the VMs on idle if the scheduler still thinks they're running? Whatever memory is reserved for that job is still going to be reserved, so you're not able to oversubscribe the host regardless.

nicoche 3 hours ago | parent [-]

Hey!

You got everything right. The advantages are:

- For the end user: not paying, or paying less

- For the hypervisor owner: a sleeping instance uses no CPU, so it reduces the load on the hypervisor

Other than that, it's still possible to oversubscribe, but you're right, we need to override the scheduler. Another cool thing is that in the worst-case scenario, where a hypervisor gets full and goes over capacity, sleeping instances are great candidates for eviction.

nevon 3 hours ago | parent [-]

Ah, I think the part that I didn't consider was that an "idle" VM is not zero CPU cost, unlike a container, so indeed from a hypervisor owner perspective you'd like other active VMs to be able to use that CPU time. But again, doesn't that presuppose oversubscription? If a node is fully reserved, it doesn't matter if all of the running VMs are idle, you're still not going to be able to schedule another job on that node, so your costs remain the same unless you oversubscribe the host and count on the fact that there will be unused capacity available most of the time (similar to AWS Flex instances).

nicoche 3 hours ago | parent [-]

Yes, definitely, as an operator you want to oversubscribe hosts. What I was mentioning is that there are still small benefits when a host is not full: the CPU gains _and_, for users, the fact that they're not paying or paying less (even though the operator is still paying for the full underutilized hypervisor, but hey, that's the game).

epolanski 18 hours ago | parent | prev | next [-]

Slightly OT, but it would be cool if there were a way to run computations in some on-demand VM that cold started in 200ms, did its thing, died, and you only paid for the time you used it. In essence, a Lambda that exposed a full-blown VM rather than a limited environment.

eyberg 18 hours ago | parent | next [-]

There are a few ways to approach this. If you don't mind owning the orchestration layer, this is precisely what Firecracker does.

If you don't even want to pay for that, though, scheduling unikernels on something like EC2 gets you your full VM, is cheaper, has more resources than Lambda, and doesn't have the various limitations such as no GPU, timeouts, or anything like that.

easton 17 hours ago | parent | prev | next [-]

I would kill for this as an AWS service, but I admit all my use cases are around being too frugal to pay for the time it takes to initialize an EC2 instance from zero (like CI workers, where I don't want to pay when idle but the task could possibly run longer than the Lambda timeout).

ianseyler 18 hours ago | parent | prev [-]

Working on that now ;)

cptnntsoobv 19 hours ago | parent | prev | next [-]

> Saves the full VM state to disk

Does this include the RAM for the VM? For auto-idle systems like this, where to park the RAM tends to be a significant concern. If you don't "retire" the RAM too, the idling savings are limited to CPU cycles, but if you do, the overhead of moving RAM around can easily wreck any latency budget you may have.

Curious how you are dealing with it.

deivid 11 hours ago | parent | prev | next [-]

Great post. Not sure 200ms is fast, though; you can definitely boot from zero to PID 1 in <10ms.

I guess it depends on the workload: if you are snapshotting an already-loaded Python program, the time savings are huge, but if it's a program with fast startup, restoring a snapshot is probably slower.

> Waking up instantly on real traffic without breaking clients

Is this only for new TCP connections, or also for connections opened prior to sleep?

stx5 14 hours ago | parent | prev | next [-]

How does this compare to RunD? https://www.usenix.org/conference/atc22/presentation/li-ziju...

newaccount091 20 hours ago | parent | prev | next [-]

> Alongside the eBPF program, we run a lightweight daemon — scaletozero-agent — that monitors those counters. If no new packets show up for a set period, it initiates the sleep process.

> No polling. No heuristics. Just fast, kernel-level idle detection.

Isn't the `scaletozero-agent` daemon effectively polling eBPF map counters...?

markrwilliams 20 hours ago | parent [-]

Nope! There are evented eBPF map types that userspace processes can watch with epoll(2), e.g. https://docs.ebpf.io/linux/map-type/BPF_MAP_TYPE_RINGBUF/#ep...
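
For example, a minimal userspace sketch with libbpf (the pinned map path and event handling are hypothetical):

    #include <bpf/libbpf.h>
    #include <stdio.h>

    static int on_event(void *ctx, void *data, size_t len)
    {
        /* A real agent would reset its idle timer here. */
        printf("saw %zu bytes of traffic metadata\n", len);
        return 0;
    }

    int main(void)
    {
        int map_fd = bpf_obj_get("/sys/fs/bpf/traffic_events"); /* hypothetical pin */
        if (map_fd < 0) { perror("bpf_obj_get"); return 1; }

        struct ring_buffer *rb = ring_buffer__new(map_fd, on_event, NULL, NULL);
        if (!rb) { fprintf(stderr, "ring_buffer__new failed\n"); return 1; }

        /* ring_buffer__poll() blocks in epoll_wait() under the hood: the
         * process sleeps until the kernel side submits a record, so there
         * is no periodic wakeup to check counters. */
        while (ring_buffer__poll(rb, /*timeout_ms=*/ -1) >= 0)
            ;

        ring_buffer__free(rb);
        return 0;
    }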

nikisweeting 16 hours ago | parent | prev [-]

How does this stack up against unikernel-based VM snapshots?