
Improve Developer Productivity by Combining a Powerful DSL with GitOps

Dispatch combines a powerful DSL with the GitOps workflow, making the promotion from development to production a breeze.

May 11, 2020

Carter Gawron


Dispatch combines a powerful DSL with the GitOps workflow, making the promotion from development to production a breeze. No longer do you need to stay up late at night thinking of all the ways production can break: “What config isn’t up to date?” “What code didn’t we test?” “Who made that change, and why?” If you can break down the complexity of the release process we’ve been using since the internet began, you can make it something everyone involved in the life cycle can understand. By empowering developers and operators to use the same tools and methodologies, you can spend your time improving the product instead of rolling forward and patching on demand just to stay ahead of the next fire.


The Power of a DSL

The purpose of developing anything is ultimately to land it in production. That becomes a barrier if you can’t sufficiently describe what you want to do and turn it into a build. This is where the Dispatchfile and its domain-specific language (DSL) come in. You can now easily create intuitive pipelines written in languages you are already comfortable with and can easily read. Dispatch’s DSL is flexible enough to build complex projects, yet simple enough that the barrier to learning it and creating with it is low.


Hello World

We’ll walk you through an example of a hello-world Dispatchfile, written in Starlark, that shows how much a small Dispatchfile can do:


# vi:syntax=python
# Catalog import paths shown with @master; pin a revision as appropriate.
load("github.com/mesosphere/dispatch-catalog/starlark/stable/pipeline@master", "gitResource", "pullRequest")
load("github.com/mesosphere/dispatch-catalog/starlark/stable/kaniko@master", "kaniko")
# Declare the git repository as a resource
git = gitResource("helloworld-git")
# Build and push the docker image
simple_docker = kaniko(git, "$YOURDOCKERUSERNAME/helloworld")
# Use the pushed docker image to run CI
task("unit-test-simple",
    inputs=[simple_docker],
    steps=[k8s.corev1.Container(
        name="unit-test-simple",
        # Run the tests inside the image kaniko just pushed
        image="$YOURDOCKERUSERNAME/helloworld",
        command=["go", "test", "./..."])])
simpleTasks = ["unit-test-simple"]
action(tasks=simpleTasks, on=pullRequest())
action(tasks=simpleTasks, on=pullRequest(chatops=["build"]))


In this example, you can see how the Dispatchfile’s structure makes your pipeline easy to write and understand. Let’s start with Actions.

Actions define which tasks are executed when a given condition is met. Here you can see that the tasks in “simpleTasks” are executed on any pull request, as well as when a “build” chatops comment is made. Actions can be triggered by pull requests, tags, and push commits.
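As a sketch of how these triggers compose, a Dispatchfile could wire one task list to several conditions. (The push and tag conditions below are assumed to come from the same Dispatch catalog module as pullRequest; their exact names and parameters may differ by catalog revision.)

```python
# Sketch only: push() and tag() are assumed catalog trigger
# conditions, analogous to pullRequest().
simpleTasks = ["unit-test-simple"]

# Run on every pull request update.
action(tasks=simpleTasks, on=pullRequest())
# Run when someone comments "/build" on a pull request.
action(tasks=simpleTasks, on=pullRequest(chatops=["build"]))
# Run on pushes to master, and on any tag.
action(tasks=simpleTasks, on=push(branches=["master"]))
action(tasks=simpleTasks, on=tag())
```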



The tasks defined in the actions are a set of sequential steps to run. They do the work of the pipeline. In Kubernetes terms, each step is a container in a pod. Steps are a combination of Inputs, Outputs, Dependencies, and Volumes. In the example above, there are two tasks defined. In “simple_docker,” we build and push the docker image to “$YOURDOCKERUSERNAME/helloworld.” Then, we use that as an input for the task “unit-test-simple,” which runs the command “go test ./...”. Finally, we assign the task “unit-test-simple” to the label “simpleTasks” to make it easier to reuse in Actions.
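To make the step-to-container mapping concrete, here is a hedged sketch of a task whose two steps run in order as containers in one pod. (The task name, images, and lint command are illustrative, not part of the original example; only the git input comes from the Dispatchfile above.)

```python
# Sketch: two sequential steps in one task; each step runs as a
# container in the same pod and shares the workspace of the git input.
task("lint-and-test",
    inputs=[git],
    steps=[
        k8s.corev1.Container(
            name="lint",
            image="golangci/golangci-lint:latest",
            workingDir="/workspace/helloworld-git",
            command=["golangci-lint", "run"]),
        k8s.corev1.Container(
            name="test",
            image="golang:1.14",
            workingDir="/workspace/helloworld-git",
            command=["go", "test", "./..."])])
```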



We also have Resources, which define Git repositories, images, and other artifacts consumed or produced by tasks. A resource can be output by at most one task, and any task taking the resource as an input runs after the task that outputs it. This makes it easy to define dependencies and order the sequence of tasks. In this example, we have defined the git resource “helloworld-git,” which the “simple_docker” task consumes to build the docker image “helloworld.”
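As a sketch of how this ordering falls out of inputs and outputs (the “integration-test” task and its command are illustrative, not from the original example):

```python
# kaniko() defines a task that outputs the pushed image resource.
git = gitResource("helloworld-git")
simple_docker = kaniko(git, "$YOURDOCKERUSERNAME/helloworld")

# Because this task lists simple_docker as an input, Dispatch
# schedules it only after the image build task completes.
task("integration-test",
    inputs=[simple_docker],
    steps=[k8s.corev1.Container(
        name="integration-test",
        image="$YOURDOCKERUSERNAME/helloworld",
        command=["go", "test", "-tags=integration", "./..."])])
```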



Lastly, we have Imports. Our Dispatch Catalog holds syntactic sugar for reusing various Starlark functions, which makes your Dispatchfile smaller and lets you focus on the actual testing. In this example, we import “gitResource” to declare the git repository as a resource that can be passed as an input to tasks. Then, we import “pullRequest” to declare a condition that triggers builds whenever a pull request is updated. We also import “kaniko” to build and publish the Docker image.



For hands-on learning, please go through the following tutorial by Tarun Gupta Akirala of D2iQ: Helloworld in Starlark.


The GitOps Workflow

At the heart of GitOps is the concept of configuration as code. The basic premise of GitOps is that every application in production has a known-good, versioned state, and that state starts with the developers themselves. GitOps breaks down the barrier between development and operations so that production can be scripted in a way that makes it seamless to upgrade, downgrade, maintain, and operate. Given sufficient effort, the configurations can account for every state of the application lifecycle. You can easily canary changes, test in staging, then deploy into production. Because GitOps empowers developers with the operational aspects of the lifecycle, you now have a bridge that allows developers to understand and collaborate with operations to improve the quality of the product, the customer experience, and the on-call experience. Operations can feel more confident in the state of production because they can validate what is being pushed, define the acceptance criteria, and safeguard users’ data. When both developers and operators share in the success of production, the overall quality of the product goes up.



Better Together


The combination of Dispatch’s DSL and its native GitOps workflow is the key to unlocking developer productivity. By putting familiar languages in your developers’ hands, they can quickly create a pipeline, test locally, and push to Dispatch to build, test, and promote into production. This saves countless cycles otherwise wasted trying to understand build configurations, getting tests to pass, and working with operations to maintain production. Treating production as a state of configuration files demystifies production and allows developers and operators to work better together.

Ready to get started?