Show HN: RapidFire AI: 16–24x More Experiment Throughput Without Extra GPUs (github.com)
3 points by kamranrapidfire 13 hours ago

We built RapidFire AI, an open-source Python tool that speeds up LLM fine-tuning and post-training with a level of control most tools don't offer: stop, resume, clone-modify, and warm-start configs on the fly, so you can branch experiments while they're running instead of starting from scratch or running them one after another.
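
To make that concrete, here is a rough sketch of what "clone-modify" and "warm-start" boil down to in plain Hugging Face PEFT terms. This is not RapidFire's actual API; the model name and checkpoint path are placeholders, and RapidFire's point is that these operations apply to live runs instead of you re-editing scripts between runs.

    # Hedged illustration only: plain PEFT, not RapidFire's launcher.
    import copy
    from peft import LoraConfig, PeftModel, get_peft_model
    from transformers import AutoModelForCausalLM

    MODEL = "HuggingFaceTB/SmolLM2-135M"  # placeholder base model

    # Config A: the adapter config the first run was launched with.
    cfg_a = LoraConfig(r=8, lora_alpha=16,
                       target_modules=["q_proj", "v_proj"],
                       task_type="CAUSAL_LM")

    # "Clone-modify": branch config B off A, changing only rank/alpha.
    cfg_b = copy.deepcopy(cfg_a)
    cfg_b.r, cfg_b.lora_alpha = 16, 32

    # Cold start for B: fresh LoRA weights on a fresh copy of the base model.
    model_b_cold = get_peft_model(AutoModelForCausalLM.from_pretrained(MODEL), cfg_b)

    # "Warm-start": resume from adapter weights a stopped run checkpointed earlier
    # ("checkpoints/run_a" is a hypothetical path, not a RapidFire convention).
    model_b_warm = PeftModel.from_pretrained(
        AutoModelForCausalLM.from_pretrained(MODEL),
        "checkpoints/run_a",
        is_trainable=True,
    )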

- Works within your OSS stack: PyTorch, Hugging Face TRL/PEFT, MLflow (a single-config example in that stack is sketched after this list).

- Hyperparallel search: launch as many configs as you want together, even on a single GPU.

- Dynamic real-time control: stop laggards, resume them later to revisit, and branch promising configs in flight.

- Deterministic eval + run tracking: metric curves are plotted automatically and are directly comparable across configs.

- Apache License v2.0: no vendor lock-in. Develop in your IDE, launch from the CLI.
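
For reference, a single config in that stack is just a standard TRL + PEFT setup with MLflow reporting, like the sketch below. This is plain TRL rather than RapidFire's launcher, and the model, dataset, and output path are placeholders; RapidFire's job is to run many variations of a config like this concurrently, even on one GPU.

    # Plain TRL/PEFT/MLflow example of one config; names are placeholders.
    from datasets import load_dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    train_ds = load_dataset("trl-lib/Capybara", split="train")  # example dataset

    trainer = SFTTrainer(
        model="HuggingFaceTB/SmolLM2-135M",  # placeholder base model
        train_dataset=train_ds,
        peft_config=LoraConfig(r=8, lora_alpha=16,
                               target_modules=["q_proj", "v_proj"],
                               task_type="CAUSAL_LM"),
        args=SFTConfig(
            output_dir="runs/config_a",      # hypothetical output path
            per_device_train_batch_size=4,
            learning_rate=2e-4,
            max_steps=200,
            report_to="mlflow",              # metrics land in MLflow for comparison
        ),
    )
    trainer.train()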

Repo: https://github.com/RapidFireAI/rapidfireai/

PyPI: https://pypi.org/project/rapidfireai/

Docs: https://oss-docs.rapidfire.ai/

We hope you enjoy the power of rapid experimentation with RapidFire AI for your LLM customization projects! We'd love to hear your feedback, both positive and negative, on the UX and UI, the API, any rough edges, and what integrations and extensions you'd be excited to see.