In the last few years, microservices have become the go-to architecture for many organizations, for better or for worse. At the same time, containers and Kubernetes have taken the lion's share of the infrastructure market, helping to drive the adoption of microservices even further. While containers can make things easier in many ways, such as through increased modularity, sharing, and reuse, they can also introduce complexities, such as in configuration and maintenance.
Managing development environments is still difficult and time-consuming
Despite the challenges that come with this ever-changing landscape, teams still need to develop and release software efficiently and reliably. The development environment is just one piece of a bigger puzzle, but it plays a crucial role in making developers effective and productive. As with anything else, no one-size-fits-all solution exists, since every company and team has different needs and requirements. In this article, we will introduce tilt.dev as a tool that can help streamline the development process on a local machine (while also checking out its remote cluster capabilities) within the context of a Kubernetes-based workflow.
How it all started
How did it all start? At some point in my career, I found myself hunting for a modern solution to our local development needs, one that would help us handle an increasingly complex set of services that was putting serious strain on our local development boxes. While we were already using Kubernetes in production, we were still relying on a combination of Bash scripts and Docker Compose manifests for our everyday development needs, all on top of Docker Machine and Docker for Mac (which had less-than-stellar performance on Intel Macs, especially when filesystem sharing was involved). This setup created a mismatch between the three environments (dev, staging, prod) and made some kinds of errors more likely, such as updating a service's environment variables and forgetting to update the Helm chart until we noticed staging was broken.
The Promised Land
For all the reasons above, I wanted to find a solution that was:
- based on Kubernetes, thus resembling staging and prod as much as possible
- as fast as possible, possibly close to (perceived) native speed, to improve productivity and reduce frustration
- getting in people's way as little as possible: no forced IDE, OS, or hardware vendor, to accommodate everyone's preferences
- able to run only a customizable subset of modules, to reduce bloat and increase focus and performance
- highly configurable and extensible, to avoid getting stuck on edge cases and to allow potentially complex setups
- allowing for local, hybrid, and remote environments, to further help with performance and maintenance
After much searching, I landed on Tilt.dev, which checked most (if not all) of the boxes on my list. Tilt is a tool that orchestrates your workloads on top of a Kubernetes cluster of your choosing. It allows you to define, run, and iterate on multiple microservices and other workloads, such as databases and static assets, in a single command. It then provides you with real-time feedback and debugging capabilities, as well as the ability to deploy to multiple environments with a single command.
So how do we use Tilt.dev at SIGHUP to streamline the development of our products, and how can you do that, too?
Introducing the Tiltfile
Everything starts from a manifest called the Tiltfile, written in the Starlark language - a dialect of Python born within the Bazel project. It has everything you would expect from a programming language, such as conditionals, loops, and data structures, and it's enriched by a vast library of functions provided by the Tilt project.
Pair all that with Tilt's own extensions system, and you'll find serious power and convenience at your fingertips: actions such as loading YAML files into Kubernetes, building or restarting containers, upgrading Helm charts, or running Kustomize tasks all become a matter of a few lines of code, tied together declaratively and conveniently presented in a nice web UI.
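To illustrate the "real programming language" part before diving into the full example below, here's a minimal, hypothetical Tiltfile sketch that uses a plain Starlark loop and a conditional to wire up a handful of services (the service names and paths are made up for illustration, not taken from a real project):

```starlark
# Hypothetical sketch: service names and paths are illustrative only.
services = ['api', 'worker', 'frontend']

for svc in services:
    # Build one image per service from its own subdirectory...
    docker_build(svc, context='./{}'.format(svc))
    # ...and deploy the matching Kubernetes manifest.
    k8s_yaml('./{}/k8s.yaml'.format(svc))

# Conditionals work too: only expose the frontend on the host.
if 'frontend' in services:
    k8s_resource('frontend', port_forwards='3000')
```

Because the Tiltfile is evaluated as code, a list of services like this can come from anywhere - a hardcoded list, a JSON file, or an environment variable - which is what makes running "only a customizable subset of modules" practical.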
Here's an example from PixelTilt, a demo project living within Tilt's GitHub organization that demonstrates many of Tilt's features:
```starlark
# version_settings() enforces a minimum Tilt version
# https://docs.tilt.dev/api.html#api.version_settings
version_settings(constraint='>=0.22.1')

# load() can be used to split your Tiltfile logic across multiple files
# the special ext:// prefix loads the corresponding extension from
# https://github.com/tilt-dev/tilt-extensions instead of a local file
load('ext://restart_process', 'docker_build_with_restart')
load('ext://uibutton', 'cmd_button')

# k8s_yaml automatically creates resources in Tilt for the entities
# and will inject any images referenced in the Tiltfile when deploying
# https://docs.tilt.dev/api.html#api.k8s_yaml
k8s_yaml([
    'filters/glitch/k8s.yaml',
    'filters/color/k8s.yaml',
    'filters/bounding-box/k8s.yaml',
    'storage/k8s.yaml',
    'muxer/k8s.yaml',
    'object-detector/k8s.yaml',
    'frontend/k8s.yaml',
])

# k8s_resource allows customization where necessary such as adding port forwards
# https://docs.tilt.dev/api.html#api.k8s_resource
k8s_resource("frontend", port_forwards="3000", labels=["frontend"])
k8s_resource("storage", port_forwards="8080", labels=["infra"])
k8s_resource("max-object-detector", labels=["infra"], new_name="object-detector")
k8s_resource("glitch", labels=["backend"])
k8s_resource("color", labels=["backend"])
k8s_resource("bounding-box", labels=["backend"])
k8s_resource("muxer", labels=["backend"])

# cmd_button extension adds custom buttons to a resource to execute tasks on demand
# https://github.com/tilt-dev/tilt-extensions/tree/master/uibutton
cmd_button(
    name='flush-storage',
    resource='storage',
    argv=['curl', '-s', 'http://localhost:8080/flush'],
    text='Flush DB',
    icon_name='delete'
)

# frontend is a next.js app which has built-in support for hot reload
# live_update only syncs changed files to the correct place for it to pick up
# https://docs.tilt.dev/api.html#api.docker_build
# https://docs.tilt.dev/live_update_reference.html
docker_build(
    "frontend",
    context="./frontend",
    live_update=[
        sync('./frontend', '/usr/src/app')
    ]
)

# the various go services share a base image to avoid re-downloading the same
# dependencies numerous times - `only` is used to prevent unnecessary rebuilds
# https://docs.tilt.dev/api.html#api.docker_build
docker_build(
    "pixeltilt-base",
    context=".",
    dockerfile="base.dockerfile",
    only=['go.mod', 'go.sum']
)

# docker_build_with_restart automatically restarts the process defined by
# `entrypoint` argument after completing the live_update (which syncs .go
# source files and recompiles inside the container)
# https://github.com/tilt-dev/tilt-extensions/tree/master/restart_process
# https://docs.tilt.dev/live_update_reference.html
docker_build_with_restart(
    "glitch",
    context=".",
    dockerfile="filters/glitch/Dockerfile",
    only=['filters/glitch', 'render/api'],
    entrypoint='/usr/local/bin/glitch',
    live_update=[
        sync('filters/glitch', '/app/glitch'),
        sync('render/api', '/app/render/api'),
        run('go build -mod=vendor -o /usr/local/bin/glitch ./glitch')
    ]
)

# for the remainder of the services, plain docker_build is used - these
# services are changed less frequently, so live_update is less important
# any of them can be adapted to use live_update by using "glitch" as an
# example above!
docker_build(
    "muxer",
    context=".",
    dockerfile="muxer/Dockerfile",
    only=['muxer', 'render/api', 'storage/api', 'storage/client']
)
docker_build(
    "color",
    context=".",
    dockerfile="filters/color/Dockerfile",
    only=['filters/color', 'render/api']
)
docker_build(
    "bounding-box",
    context=".",
    dockerfile="filters/bounding-box/Dockerfile",
    only=['filters/bounding-box', 'render/api']
)
docker_build(
    "storage",
    context=".",
    dockerfile="storage/Dockerfile",
    only=['storage'],
    entrypoint='/usr/local/bin/storage'
)
```
In just a few lines we're able to set version constraints, load extensions, build Dockerfiles, deploy workloads on Kubernetes, and even create custom UI controls to ease our everyday work.
An image is worth a thousand words, though, so here's what it looks like once you run it:
IaC-ing all the things
Tilt comes with a useful helper tool called ctlptl, which helps codify the creation of development clusters and registries.
It follows Kubernetes' resource declaration format, which should look quite familiar if you already have some experience with our favourite container orchestrator.
When it comes to picking a cluster, Tilt offers a large number of options, the most common and recommended one for starting out being kind.
Here's what a typical setup looks like:
```yaml
---
# Describe the cluster you need for local development.
apiVersion: ctlptl.dev/v1alpha1
kind: Cluster
product: kind
registry: example-kind-registry
kindV1Alpha4Cluster:
  name: example-kind
  nodes:
    - role: control-plane
      image: kindest/node:v1.26.1
      extraPortMappings:
        - containerPort: 30080
          hostPort: 80
          protocol: TCP
        - containerPort: 30443
          hostPort: 443
          protocol: TCP
---
# Setup a local container registry to speed up build and deploy times
apiVersion: ctlptl.dev/v1alpha1
kind: Registry
name: example-kind-registry
port: 5005
listenAddress: 127.0.0.1
...
```
Putting it all together
Once you have all your manifests ready, you can run a few commands to start, and you'll be ready to go!
```shell
# Create the container registry
ctlptl apply -f "kind-registry.yaml"

# Create the kind cluster
ctlptl apply -f "kind-cluster.yaml"

# Configure $KUBECONFIG and kube context
kind get kubeconfig --name example-kind > "kubeconfig"
export KUBECONFIG=kubeconfig
kubectl config set-context example

# Run tilt
tilt up
```
In the next instalment we'll take a deeper look at an example project and show some advanced functionalities and techniques that can help you create your own environment, tailored to your specific needs.
We'll talk about container rebuild and reload strategies for a blazing-fast development experience, mention caching techniques that help minimize image pulls and rebuilds, and touch on remote clusters to offload some heat from our local boxes, finally adding some networking magic to the mix.