Show HN: Velda – Run jobs with serverless GPUs, without container images (velda.io)
2 points by eagleonhill 9 hours ago | 2 comments

Almost all cloud job frameworks rely on containers, but building, pushing, and pulling images is slow and forces you to maintain an environment separate from your local dev setup. Maintaining those images is its own challenge: version drift, custom libraries and integrations, complex manifests.

We believe containers create too much overhead during development. We built Velda to let you launch jobs directly in the cloud by mirroring your Velda-managed dev environment, all with just a command prefix. That way, you only allocate GPUs while your training jobs are actually running, with no change to your workflow.

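In practice the prefix workflow might look like this (the script and its arguments are placeholders; vrun is the only command the post names, so treat the rest as an illustrative sketch):

```shell
# Run the same command you'd run locally, prefixed with vrun.
# Velda mirrors your dev environment onto a cloud GPU instance,
# so there is no image to build, push, or pull first.
vrun python train.py
```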
How it works:

* No manifests or custom libraries: You don't need to rewrite your code or define YAML. Just prefix your command with vrun.

* Zero Restrictions: Use any tool, binary, or library already on your machine.

* Instant Launch: We built a custom streaming file system. Instead of waiting for a 5GB image to pull, we stream only the necessary bits to the cloud instance, allowing jobs to start in seconds after the machine boots.

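The idea behind such a streaming file system can be sketched in a few lines: fetch file blocks on demand and cache them, so only the bytes a job actually reads cross the network. Everything below (RemoteStore, StreamedFile, the block size) is a hypothetical illustration, not Velda's actual implementation.

```python
BLOCK_SIZE = 4  # tiny for illustration; real systems use KB/MB-scale blocks


class RemoteStore:
    """Stands in for the dev environment's files on the server side."""

    def __init__(self, files):
        self.files = files          # path -> bytes
        self.blocks_served = 0      # how many blocks were actually shipped

    def fetch_block(self, path, index):
        self.blocks_served += 1
        start = index * BLOCK_SIZE
        return self.files[path][start:start + BLOCK_SIZE]


class StreamedFile:
    """Client-side view: pulls blocks lazily and caches them."""

    def __init__(self, store, path, size):
        self.store, self.path, self.size = store, path, size
        self.cache = {}             # block index -> bytes

    def read(self, offset, length):
        end = min(offset + length, self.size)
        out = b""
        for idx in range(offset // BLOCK_SIZE, (end - 1) // BLOCK_SIZE + 1):
            if idx not in self.cache:
                self.cache[idx] = self.store.fetch_block(self.path, idx)
            out += self.cache[idx]
        skip = offset % BLOCK_SIZE
        return out[skip:skip + (end - offset)]


# A 16-byte "library" is 4 blocks, but reading the first 4 bytes
# only pulls block 0; the other 3 blocks never cross the network.
store = RemoteStore({"/env/libfoo.so": b"ABCDEFGHIJKLMNOP"})
f = StreamedFile(store, "/env/libfoo.so", 16)
assert f.read(0, 4) == b"ABCD"
assert store.blocks_served == 1
```

Compare that to pulling a multi-gigabyte image before the first byte of the job runs: with lazy block fetches, startup cost scales with what the job touches, not with the size of the environment.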
The core is open source (https://github.com/velda-io/velda) and can be deployed on AWS, GCP, or Nebius.

If you want to try the hosted version, we’re giving free credits for your first GPU jobs: https://cloud.velda.io

kanglei1130 an hour ago

Setting up development environments is always a pain, happy to see a solution for model training

taoshihan 7 hours ago

Very cool direction. The “tax” is real, especially for fast-moving AI teams where local dev environments drift constantly. Open sourcing the core was also a smart move.