| ▲ | Show HN: Chart Preview – Preview environments for Helm charts on every PR |
| 20 points by Olu 2 days ago | 9 comments |
I’m a software engineer who accidentally became my team’s Kubernetes person — and eventually the bottleneck for every Helm chart PR. I built Chart Preview so reviewers could see Helm chart changes running without waiting on me.

A few years ago, my team needed to implement HA for an existing product, which meant deploying on Kubernetes and OpenShift. I spent months learning Kubernetes, Helm, and the surrounding ecosystem. After that, Kubernetes largely became “my thing” on the team.

We later published public Helm charts for the product, and customers started submitting PRs. Those PRs would often sit for months — not because the changes were bad, but because testing them meant manually spinning up a Kubernetes cluster, deploying the chart with the proposed changes, running through test scenarios, and coordinating verification with product and QA. Since I was the only one who could reliably set up those environments, everything waited on me.

I kept thinking: what if the PR itself showed the changes working? What if reviewers could just click a link and see it deployed? That idea became Chart Preview.

Chart Preview deploys your Helm chart to a real Kubernetes cluster when you open a PR, provides a unique preview URL for that PR, and cleans everything up automatically when the PR closes.

I started by solving a problem I was personally hitting, rather than surveying the whole market upfront. As I built more of it, I looked at existing preview tools and noticed that while there are solid solutions for previewing container-based applications, Helm-specific workflows introduce different challenges — chart dependencies, layered values files, and opinionated chart structures. That pushed me to focus Chart Preview on being Helm-native first, rather than adapting a container preview workflow to fit Helm.

Under the hood, it’s built in Go using the Helm v3 SDK.
The architecture is an API server with workers pulling jobs from a PostgreSQL queue — no Kubernetes operator, just services talking directly to the Kubernetes API. Each preview runs in its own namespace with deny-all NetworkPolicies, ResourceQuotas, and LimitRanges. GitHub integration is done via a GitHub App for check runs and webhooks, with GitLab supported via the REST API.

There were a few interesting challenges along the way. Injecting preview hostnames into Ingress resources without corrupting manifests took several iterations. Helm uninstall doesn’t always clean everything up, so deleting the entire namespace turned out to be the safest fallback. Handling rapid pushes to the same PR required build numbering so the latest push always wins. And while the Helm SDK is powerful, it’s under-documented — I spent a lot of time reading Helm’s source code.

I’ve been building and testing this for a few months using real charts like Grafana, podinfo, and WordPress to validate the workflow. It’s early, but it works, and now I’m trying to understand whether other teams have the same pain point I did.

You can try it by installing the GitHub App here: https://github.com/apps/chart-preview

I’d love feedback on a few things:

- Does this solve a real problem for your team, or is shared staging “good enough”?
- What’s missing that would make you actually use it?
- Are there Helm charts this wouldn’t work for? (Cluster-scoped resources are intentionally blocked.)

Happy to answer questions about the implementation.
| ▲ | JimBlackwood a day ago | parent | next [-] |
I don’t fully understand the problem this is trying to solve. Or at least, if this solves your problem, then it feels like you have bigger problems? If you have staging/production deployments in CI/CD and have your Kubernetes clusters managed in code, then adding feature deployments is not much different from what you’ve already done. Paying for a third-party app seems (to me) both a waste of money and a problem waiting to happen.

How we do it: for a given Helm chart, we have three sets of values files: prod, staging, and preview. An Argo application exists for each prod, staging, and preview instance. When a new branch is created, a pipeline runs that renders a new preview chart (with some variables based on the branch/tag name), creates a new Argo application, and commits this to the Kubernetes repo. Argo picks it up, deploys it to the appropriate cluster, and that’s it. Ingress hostnames get picked up and DNS records get created. When the branch gets deleted, a job runs to remove the Argo application, and we’re done.

It’s the same for staging and production. I really wouldn’t want a different deployment pipeline for preview environments — that just increases complexity and the chances of things going wrong.
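Roughly, the per-branch Argo application the pipeline commits looks like this (names, paths, and domains are placeholders, not our actual setup):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-preview-feature-x
  namespace: argocd
spec:
  project: previews
  source:
    repoURL: https://git.example.com/org/kubernetes-repo.git
    targetRevision: main
    path: previews/myapp/feature-x
    helm:
      valueFiles:
        - values-preview.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-feature-x
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With `prune` enabled, deleting the Application (when the branch is removed) also cleans up everything it deployed.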
| ▲ | kodama-lens a day ago | parent | prev | next [-] |
Great way to apply your gathered Kubernetes knowledge! But I find the pricing tough, and I don’t like giving third-party tools that level of access to my clusters.

I know it’s early stage, but I see several problems: right now it seems to be GitHub-only, and a lot of people are on self-hosted GitLab. Does it only support Helm, or also Kustomize and raw extra manifests? What about GitOps?

I’ve built similar solutions for clients, mostly CI-based, often with Flux/ArgoCD support. The thing I found difficult was showing the diff of the rendered manifests while also applying the app. Since I’m not a fan of the rendered-manifest pattern, this often involved extra branches. Is this handled by the app?
| ▲ | mrj a day ago | parent | prev | next [-] |
Congrats! I can see the value of this, for sure.

I handle this problem by spinning up a preview environment in a namespace. Each branch gets its own, and a script takes care of setting up namespaces for a couple of shared staging resources (rabbit and temporal). It was a lot of work to set up, though.

Preview environments based on a Helm deploy make sense. I wish this had been available before I did all that.
| ▲ | IntelliAvatar a day ago | parent | prev [-] |
Nice idea. How does this compare to running ephemeral preview environments via ArgoCD or Helmfile today?