Whether you bought it, built it, or adopted it from open source, you're probably already using some sort of software platform to build, deploy, and scale your applications.
A platform is what emerges after years of extracting common functionality out of applications into lower-level abstractions. Do it with deliberate intent and design, and you get a platform. Do it without, and you probably end up with an organic mess on your hands, looking into platforms other people have built for a way out, a ray of hope.
The right platform for you will strike precisely the balance you need between flexibility and simplicity, allowing you to build faster without being too constrained. This piece examines the spectrum of cloud platforms to help you find the one best suited to your situation.
Everyone has their own idea of what the perfect platform looks like, since everyone's use case is a little different. But there are two big things everyone is looking for:
- Increased Development Velocity
- Automated Operational Expertise
These two demands drive the majority of software platform investments. Really, they are the same arguments you might use to automate anything: speed and repeatability.
So knowing that no platform is going to be perfect for all users, should you write your own? If you write your own, do you build on top of an existing platform? How do you choose a base platform to start from? Do you want a platform that is tightly integrated top to bottom or do you want multiple layers of platforms that are loosely connected with robust extension points?
These are all hard questions, and there isn't really a single answer that is right for everyone. The journey to platform bliss is one of discovery, comparisons, and tradeoffs. So let's dig in.
The Platform Spectrum
The rainbow of cloud platforms has a flavor for everyone.
Every vendor will tell you their software is special, unique even. They're all trying to differentiate their product to provide value that is irreplaceable. But if you look hard enough, and tolerate some rough edges, you can group these products by the types of interfaces they provide.
The term Software as a Service, oldest of the bunch, goes back to around the year 2000 and refers to the bundling of packaged software products and support services into a hosted solution to avoid the often unknown cost of implementation and operation. A SaaS product can itself be a platform to build on. Some original uses of the term described solutions that replaced legacy enterprise resource planning (ERP) and customer relationship management (CRM) platforms.
Companies like Salesforce and SAP have been highly successful in this space with customers that don't have large engineering or IT departments to build and manage these complex systems. Even companies with those resources available may consider these things to be outside of their core competency and not worth building or operating themselves. More recently though, almost every category of software can be obtained as SaaS, from email to word processing to content management systems.
On the other end of the spectrum is Infrastructure as a Service.
Infrastructure platforms arrived shortly after SaaS. VMware GSX Server (2001) was an early virtualization platform, and Amazon Elastic Compute Cloud (EC2, 2006) brought virtualized infrastructure into the hosted model. While VMware initially focused on enterprise on-premises installations, Amazon Web Services resonated with a wider market because of the combination of its hosted IaaS and SaaS products. Later, Rackspace and NASA developed OpenStack (2010) as an open source, on-premises competitor to both VMware's vSphere (2009) and Amazon's EC2.
These IaaS products primarily offer a few specific abstractions: virtual machine compute nodes, software-defined networking, and attachable storage. As with SaaS, the primary selling point of a hosted IaaS is the outsourcing of operations and the automation of otherwise manual capacity provisioning; unlike SaaS, hosted IaaS also sells the illusion of near-infinite scale for your own software. For most companies interested in outsourcing infrastructure, AWS provides more capacity than they will ever need, scaling out its datacenters before you even ask for more nodes. For companies unable or unwilling to outsource, infrastructure platforms like OpenStack and vSphere provide the ability to host your own cloud in the datacenter of your choosing.
While managing not just hardware but also an infrastructure platform may seem like more work, it's what enterprise companies were already doing with home-grown platforms. Either that, or they were manually managing hardware without a virtualization layer and were eager to make provisioning more self-service. And so the as-a-service model came full circle: the platform that was hosted became a packaged product, this time with added multitenancy capabilities that let customers operate it for their own internal user groups.
Then along came the application platforms.
Among the first to use the term Platform as a Service were Fotango's Zimki (2006) and Heroku (2007). Later Google App Engine (2008), CloudFoundry (2011), and several others joined the fray. By that point it was clear that these were really application platforms (aPaaS), designed specifically to accelerate developer velocity and reduce operational overhead. Enabling developers to self-provision and manage the applications they developed further compressed the turnaround time from inception to release to feedback to iteration, dovetailing with the growing popularity of agile software development and seeding the fledgling DevOps movement.
But progress never stops. The container platforms are here.
Containerization has been around longer than you may realize (FreeBSD Jails have been around since 2000), but it's probably safe to say that containerization didn't really become widely popular until Docker (2013) combined Linux operating-system-level virtualization with file system images. This made it easy to build and deploy containerized applications, a pattern recognizable to IaaS users who had been building disk images to speed up infrastructure platform provisioning. But unlike VMs, where a couple running simultaneously are enough to bog down your workstation with overhead, containers allow you to deploy full microservice stacks locally, dramatically speeding up the development cycle. Plus, because of the reduced overhead, each microservice could have its own container image, its own release cycle, and its own rolling upgrades, allowing smaller teams to develop them in parallel.
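As a rough sketch of that build-and-deploy pattern, a container image is typically described by a short build file. The base image, file names, and port below are hypothetical, not taken from any particular project:

```dockerfile
# Hypothetical build file for a small Python microservice image.
FROM python:3.11-slim

# Copy the service code and install its dependencies into the image.
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt

# Document the port the service listens on and define how to start it.
EXPOSE 8080
CMD ["python", "service.py"]
```

Build once, and the resulting image runs identically on a laptop or across a cluster, which is what makes the local-full-stack workflow described above practical.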
From container runtimes to container platforms was an obvious evolutionary step. Application platforms like CloudFoundry and cluster resource managers like Apache Mesos have been using container isolation transparently since their inception. The next step was to expose a platform API that allowed developers to deploy the increasingly popular Docker images across a cluster of machines. Like infrastructure platforms, container platforms started on-premises and were later offered as hosted services. Mesosphere's Marathon (2013) was one of the first open source platforms for general purpose container orchestration, but it was predated by internal efforts such as Google's Borg (~2004) and Twitter's Aurora (written in 2010; open sourced as Apache Aurora in 2013).
Container orchestration lies at the heart of container platforms. Like application platforms, container platforms needed to provide declarative constraint-based scheduling. Unlike application platforms, containers aren't constrained to be twelve-factor apps. Stateful services, for example, require persistent volumes, isolation guarantees, domain-specific migration procedures, collocated backup jobs, etc. Because of this flexibility, container platforms can easily become even more complex than application platforms in order to support a wider variety of workloads.
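To make "declarative constraint-based scheduling" concrete, here is a minimal sketch of a Marathon-style app definition. The app id, image, resource numbers, and constraint are illustrative, not from the article; the point is that you declare what should run, and the platform decides where:

```json
{
  "id": "/my-service",
  "instances": 3,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:1.15" }
  },
  "constraints": [["hostname", "UNIQUE"]]
}
```

The scheduler keeps three replicas running, each on a distinct host per the constraint, restarting or relocating containers as machines come and go.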
For added flexibility, and to support legacy workloads without migration, many people run container platforms on top of infrastructure platforms, but this isn't strictly necessary. Containers are close enough to individual machines that almost all workloads are compatible, so not everyone needs this kind of flexibility. Many developers spend all their time in a single layer of the stack. They look for ways to avoid repetitive tasks like hand crafting container images for every new app they build. For these people, function platforms (aka serverless) were made.
Amazon kicked off the "serverless" craze with AWS Lambda (2014), offering light-weight containerized event handling on top of their virtual infrastructure platform. Like other Amazon Web Services, Lambda is hosted-only. So a market quickly sprang up for on-premises alternatives, filled by Iron.io (2014), Apache OpenWhisk (2016), Fission (2016), Galactic Fog's Gestalt (2016), and OpenLambda (2016).
Function platforms operate along the same lines as application platforms, except they also include language-specific frameworks. So instead of writing applications with multiple endpoints, the developer just writes event handlers and maps triggers to handlers with the platform API. Function platforms often come with or integrate with an API Gateway to handle proxying, load balancing, and centralized service discovery. Unlike application platforms, function platforms transparently incorporate load-based auto-scaling, because they control all ingress points and multiplexing.
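The handler-plus-trigger model can be sketched in a few lines of Python. The event shape, handler name, and trigger key here are hypothetical, not any specific platform's API:

```python
# Minimal sketch of a function-platform event handler (hypothetical API).
# The developer writes only the handler; the platform invokes it per event.

def handler(event, context):
    # `event` carries the trigger payload; `context` carries platform metadata.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# The platform, not the developer, maintains the trigger-to-handler mapping:
triggers = {"http:GET /hello": handler}

# On an incoming event, the platform looks up the trigger and dispatches.
response = triggers["http:GET /hello"]({"name": "platform"}, None)
```

Because every invocation flows through the platform's ingress like this, it can count in-flight events per handler and scale handler instances up or down automatically, which is where the transparent auto-scaling comes from.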
Like container platforms, function platforms don't necessarily require an infrastructure platform, but unlike the flexibility provided by container platforms, function platforms just aren't designed to support a wide variety of workloads. So it may be unwise or impossible to run just a function platform. You probably need a lower level container or infrastructure platform as well. Some function platforms are even designed to integrate with container platforms, taking advantage of middle layer automation to reduce the complexity of the higher layers.
Each of these platform layers provides its own distinct abstractions and APIs, some more abstract than others. Some high-level platforms are all or nothing: they have top-to-bottom integration but can only support a fraction of the workloads you want to run. You may be tempted to choose the highest layer of abstraction to maximize developer velocity, but consider that software built on those platforms is the most tightly coupled to the platform, potentially requiring the most rework to re-platform, which increases your risk. On the other hand, the lower-level platforms provide the most flexibility, enabling the widest variety of workloads, including web apps, microservices, legacy monoliths, data pipelines, and data storage services. They allow easier migration and easier infrastructure operations but don't make it any easier to actually develop or operate apps, services, or jobs on top.
This conflict between application platforms and infrastructure platforms is one of the big reasons container platforms are popular. Container platforms are a compromise on both fronts. They allow you to decide on a per-container basis whether your workload needs its own environment or can run as just a binary, supporting a wider variety of workloads. But they also provide declarative configuration, lifecycle management, replication, and scheduling, like an application platform. If you also need a higher level of abstraction, you can easily deploy a thinner application or function platform on top of a container platform, sharing resources and machines with lower level workloads. If you also need a lower level of abstraction, you can easily deploy a container platform on top of an infrastructure platform, instead of directly on bare metal.
DC/OS - The Platform of Choice
At Mesosphere, our mission is to make it insanely easy to build and scale world-changing technology. That means we serve not just the developers or just the operators, but both. Helping you achieve true agility requires enabling both developer velocity and operational flexibility. Developers want to reduce repetitive work, automate elasticity, and build on top of robust platform services. Operators want visibility, to avoid vendor lock-in, and the ability to control costs. So we built DC/OS to provide an infrastructure-agnostic container platform with cloud-like services and an open partner ecosystem. With DC/OS you get a solid container platform plus a catalog of higher level services: databases, queues, test automation, continuous delivery pipelines, logging & metrics stacks, auto-scalers, function platforms, etc.