Marathon, a Mesos framework, adds Placement Constraints


Nov 22, 2013

Tobi Knaup


2 min read

In addition to a number of bug fixes, Marathon 0.2 adds constraints, which allow you to control the placement of your app tasks in a cluster.
Constraints give operators control over where apps run, so they can optimize for fault tolerance or locality. Constraints can be set via the REST API or the Marathon gem when starting an app; make sure you have gem version 0.2.0 or later for constraint support. A constraint is made up of three parts: a field name, an operator, and an optional value. The field can be any Mesos slave attribute, or the slave hostname.
UNIQUE operator
UNIQUE tells Marathon to enforce uniqueness of the attribute across all of an app's tasks. For example, the following constraint ensures that at most one of the app's tasks runs on each host:
marathon start -i sleep -C 'sleep 60' -n 3 --constraint hostname:UNIQUE
http POST localhost:8080/v1/apps/start id=sleep cmd='sleep 60' instances=3 constraints:='[["hostname","UNIQUE"]]'
CLUSTER operator
CLUSTER allows you to run all of an app's tasks on slaves that share a certain attribute. This is useful, for example, if an app has special hardware needs, or if you want its tasks on the same rack for low latency.
marathon start -i sleep -C 'sleep 60' -n 3 --constraint rack_id:CLUSTER:rack-1
http POST localhost:8080/v1/apps/start id=sleep cmd='sleep 60' instances=3 constraints:='[["rack_id","CLUSTER","rack-1"]]'
GROUP_BY operator
GROUP_BY can be used to distribute your tasks evenly across racks or datacenters for high availability.
marathon start -i sleep -C 'sleep 60' -n 3 --constraint rack_id:GROUP_BY
http POST localhost:8080/v1/apps/start id=sleep cmd='sleep 60' instances=3 constraints:='[["rack_id","GROUP_BY"]]'
Optionally, you can add a value to limit the number of tasks per group.
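For instance, assuming the same colon-separated --constraint syntax as the examples above (the value 2 here is illustrative), a capped GROUP_BY might look like:

```shell
# Hypothetical example: GROUP_BY with an optional value (2) to cap tasks per group,
# spreading 6 tasks across rack_id groups with at most 2 tasks each.
marathon start -i sleep -C 'sleep 60' -n 6 --constraint rack_id:GROUP_BY:2
```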
Leader Proxying
If you're running multiple instances of Marathon in HA mode, followers previously responded to API requests with an HTTP redirect to the leader. This is problematic for some clients, especially for non-GET requests. Redirection has been replaced with proxying, so followers now transparently forward requests to the leader.
Rate Limiting
You can now configure a maximum number of tasks an app is allowed to launch per second. This prevents misconfigured applications from creating a flood of failing tasks. Just add the taskRateLimit key to the JSON request when starting an app, and set it to the maximum number of new tasks per second, e.g. 1.0.
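As a sketch, mirroring the httpie examples above, a rate-limited app start might look like this (taskRateLimit of 1.0 caps the app at one new task per second):

```shell
# Start an app with a launch rate limit of one task per second.
http POST localhost:8080/v1/apps/start id=sleep cmd='sleep 60' instances=3 taskRateLimit:=1.0
```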
A Dockerfile was added to allow Marathon to be started in a Docker container.
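Assuming a checkout of the Marathon repository and that the container serves Marathon's default HTTP port 8080, a build-and-run sketch could look like the following (the image tag and the --master address are illustrative, not fixed names):

```shell
# Build an image from the Dockerfile at the repository root (tag name is arbitrary).
docker build -t marathon .
# Run it, publishing Marathon's default port; --master points at your Mesos master (illustrative address).
docker run -p 8080:8080 marathon --master zk://localhost:2181/mesos
```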
