Getting Konvoy up and running with OpenEBS

Sep 25, 2019

Murat Karslioglu

MayaData / OpenEBS

 
D2iQ recently launched Konvoy, a managed Kubernetes platform for operation and lifecycle management. The solution enables developer agility and faster time-to-market for new Kubernetes deployments.
 
In this blog post, I provide step-by-step instructions on how to quickly configure a stateful-workload-ready Kubernetes cluster using Konvoy and OpenEBS.
 
First let’s take a look at the requirements.
 
Prerequisites
Minimum requirements for a multi-node cluster:
 
Hardware
  • Install Konvoy on AWS
  • Default settings to create 3x t3.large master and 4x t3.xlarge worker instances (total 7)
  • Optional: a demo environment on your laptop using Docker, created by running konvoy provision --provisioner=docker
 
Software 
  • CentOS 7
  • jq
  • AWS Command Line Interface (AWS CLI)
  • kubectl v1.15.0 or newer
  • Konvoy
  • OpenEBS
  • Elasticsearch, Kibana, Prometheus, Velero (or any other stateful workload)
 
Getting Prerequisites Ready (2-5 minutes)
Assuming you are deploying on AWS, you need an AWS account and an AWS user with permission to use the related services.
 
You will also need awscli and kubectl installed before we begin.
 
I will be using Ubuntu 18.04 LTS as my workstation host. If you already have the prerequisites, you can jump to the Installing Konvoy section.
 
Install awscli on your workstation:
 
$ sudo apt-get update && sudo apt-get install awscli jq -y
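 
You can confirm the install with a quick version check:
 
$ aws --version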
 
Configure the AWS CLI to use your access key ID and secret access key:
 
$ aws configure
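 
If you want to make sure the credentials actually work before going further, a handy check is `aws sts get-caller-identity`, which should print your account ID and user ARN:
 
$ aws sts get-caller-identity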
 
Run the commands below to download and install the Kubernetes command-line tool kubectl:
 
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
 
Confirm kubectl version:
 
$ kubectl version --short
Client Version: v1.15.0
Server Version: v1.15.0
 
You need to subscribe to CentOS and accept the agreement on AWS Marketplace to use the images; follow the link here and subscribe to “CentOS 7 (x86_64) – with Updates HVM”.
 
Install D2iQ Konvoy (2-5 minutes)
Reach out to your D2iQ representative or fill out the form here to download the Konvoy binaries. Follow the instructions below to extract Konvoy on your workstation. The filename should be similar to konvoy_vx.x.x_linux.tar.bz2:
 
$ tar -xf konvoy_v0.6.0_linux.tar.bz2
 
Move files under your user PATH:
 
$ sudo mv ./konvoy_v0.6.0/* /usr/local/bin/
 
That’s it. The first time you call Konvoy, it will pull images from the Docker registry. Now check the version: 
 
$ konvoy --version
{
  "Version": "v0.6.0",
  "BuildDate": "Thu Jul 25 04:55:40 UTC 2019"
}
 
Hold on before you try the konvoy up command. That would create a cluster with the default configuration, which puts the stateful components on AWS EBS; we want everything that requires persistent storage to land on cloud native storage instead, so those components need to be disabled first.
 
Deploying a Konvoy cluster (15-20 minutes)
Let’s first stand up stateless components that don’t require persistent storage.
 
Create the cluster.yaml file:
 
$ konvoy init
 
Edit the file. An example file is shown below:
 
$ nano cluster.yaml
---
kind: ClusterProvisioner
apiVersion: konvoy.d2iq.io/v1alpha1
metadata:
  name: ubuntu
  creationTimestamp: "2019-07-31T23:04:29.030542867Z"
spec:
  provider: aws
  aws:
    region: us-west-2
    availabilityZones:
      - us-west-2c
    tags:
      owner: ubuntu
    adminCIDRBlocks:
      - 0.0.0.0/0
  nodePools:
    - name: worker
      count: 4
      machine:
        rootVolumeSize: 80
        rootVolumeType: gp2
        imagefsVolumeEnabled: true
        imagefsVolumeSize: 160
        imagefsVolumeType: gp2
        type: t3.xlarge
    - name: control-plane
      controlPlane: true
      count: 3
      machine:
        rootVolumeSize: 80
        rootVolumeType: gp2
        imagefsVolumeEnabled: true
        imagefsVolumeSize: 160
        imagefsVolumeType: gp2
        type: t3.large
    - name: bastion
      bastion: true
      count: 0
      machine:
        rootVolumeSize: 10
        rootVolumeType: gp2
        type: t3.small
  sshCredentials:
    user: centos
    publicKeyFile: ubuntu-ssh.pub
    privateKeyFile: ubuntu-ssh.pem
version: v0.6.0
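 
The sshCredentials section above references a key pair named ubuntu-ssh. If you don't already have one, you can generate a pair in the expected layout like this (the file names simply match the example file; adjust them to your own):
 
$ ssh-keygen -t rsa -b 4096 -f ubuntu-ssh -N ''
$ mv ubuntu-ssh ubuntu-ssh.pem   # Konvoy expects the private key at the privateKeyFile path
$ chmod 600 ubuntu-ssh.pem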
 
Edit the file: find the addons section and disable all AWS EBS storage options and stateful components.
 
...
addons:
  configVersion: v0.0.47
  addonsList:
    - name: awsebscsiprovisioner
      enabled: false
    - name: awsebsprovisioner
      enabled: false
    - name: dashboard
      enabled: true
    - name: dex
      enabled: true
    - name: dex-k8s-authenticator
      enabled: true
    - name: elasticsearch
      enabled: false
    - name: elasticsearchexporter
      enabled: false
    - name: fluentbit
      enabled: false
    - name: helm
      enabled: true
    - name: kibana
      enabled: false
    - name: kommander
      enabled: true
    - name: konvoy-ui
      enabled: true
    - name: localvolumeprovisioner
      enabled: false
    - name: opsportal
      enabled: true
    - name: prometheus
      enabled: false
    - name: prometheusadapter
      enabled: false
    - name: traefik
      enabled: true
    - name: traefik-forward-auth
      enabled: true
    - name: velero
      enabled: false
version: v0.6.0
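 
Before provisioning, a quick grep is a handy sanity check that only the addons you expect are still enabled (this assumes the two-line name/enabled layout shown above):
 
$ grep -B1 'enabled: true' cluster.yaml | grep 'name:'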
 
Now provision the Konvoy cluster. This step takes around 10 minutes to complete:
 
$ konvoy up
 
Konvoy will bring up CentOS images and configure three master nodes and four worker nodes. It will also deploy the helm, dashboard, opsportal, traefik, kommander, dex, dex-k8s-authenticator, konvoy-ui, and traefik-forward-auth components. After a successful install, you should see a message similar to the following:
 
Kubernetes cluster and addons deployed successfully!

Run `konvoy apply kubeconfig` to update kubectl credentials.

Navigate to the URL below to access various services running in the cluster.
https://a67190d830fd64957884d49fd036ea78-841155633.us-west-2.elb.amazonaws.com/ops/landing
And login using the credentials below.
Username: dazzling_swirles
Password: DGwwyTFNCt6h2JSxTfqzD8PUWSLo1qPLCZJ82kQmMmaNctodWa3gd8W0O2SPFCIc

If the cluster was recently created, the dashboard and services may take a few minutes to be accessible.
 
Configure kubectl:
 
$ konvoy apply kubeconfig
 
Confirm your Kubernetes cluster is ready:
 
$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-129-203.us-west-2.compute.internal   Ready    <none>   4m49s   v1.15.0
ip-10-0-129-216.us-west-2.compute.internal   Ready    <none>   4m49s   v1.15.0
ip-10-0-130-203.us-west-2.compute.internal   Ready    <none>   4m49s   v1.15.0
ip-10-0-131-148.us-west-2.compute.internal   Ready    <none>   4m49s   v1.15.0
ip-10-0-194-238.us-west-2.compute.internal   Ready    master   7m8s    v1.15.0
ip-10-0-194-76.us-west-2.compute.internal    Ready    master   5m27s   v1.15.0
ip-10-0-195-6.us-west-2.compute.internal     Ready    master   6m31s   v1.15.0
 
Setting up Konvoy to use OpenEBS as the default storage provider (15-20 minutes)
Now, let’s stand up the stateful components on OpenEBS persistent storage using the cStor engine.
 
Get your cluster name, either from the prefix of the instance names in your AWS console or using the command below:
 
$ aws --region=us-west-2 ec2 describe-instances | grep ubuntu-
...
"Value": "ubuntu-bb0c" In my case, my CLUSTER name is ubuntu-bb0c (as indicated in the “Value” field2). Replace the values for the first three variables with your settings and set the variables to be used in our script:
export CLUSTER=ubuntu-bb0c # name of your cluster,
export REGION=us-west-2   # region-name
export KEY_FILE=ubuntu-ssh.pem # path to private key fileIPS=$(aws --region=$REGION ec2 describe-instances | jq --raw-output ".Reservations[].Instances[] | select((.Tags | length) > 0) | select(.Tags[].Value | test("$CLUSTER-worker")) | select(.State.Name | test("running")) | [.PublicIpAddress] | join(" ")")
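 
Before running anything over SSH, it's worth echoing the variable to confirm it picked up one public IP per worker node (four in this example):
 
$ echo $IPS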
 
Install the iSCSI client on all worker nodes using the script below:
 
for ip in $IPS; do
  echo $ip
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo yum install iscsi-initiator-utils -y
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo systemctl enable iscsid
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo systemctl start iscsid
done
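 
Optionally, verify that the iSCSI daemon came up on every node; each line of output should read "active". This reuses the same $IPS and $KEY_FILE variables:
 
for ip in $IPS; do
  # is-active prints the unit state and exits non-zero if it is not running
  ssh -o StrictHostKeyChecking=no -i $KEY_FILE centos@$ip sudo systemctl is-active iscsid
done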
 
Set the size of the additional disk to be included in your cStor storage pool. In our example, it is 200 GB.
 
export DISK_SIZE=200
 
Create and attach new disks to the worker nodes:
 
aws --region=$REGION ec2 describe-instances | jq --raw-output ".Reservations[].Instances[] | select((.Tags | length) > 0) | select(.Tags[].Value | test(\"$CLUSTER-worker\")) | select(.State.Name | test(\"running\")) | [.InstanceId, .Placement.AvailabilityZone] | \"\(.[0]) \(.[1])\"" | while read instance zone; do
  echo $instance $zone
  volume=$(aws --region=$REGION ec2 create-volume --size=$DISK_SIZE --volume-type gp2 --availability-zone=$zone --tag-specifications="ResourceType=volume,Tags=[{Key=string,Value=$CLUSTER}, {Key=owner,Value=michaelbeisiegel}]" | jq --raw-output .VolumeId)
  sleep 10
  aws --region=$REGION ec2 attach-volume --device=/dev/xvdc --instance-id=$instance --volume-id=$volume
done
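 
Since the create-volume call above tags each volume with string=$CLUSTER, a filtered describe-volumes query is a quick way to confirm the new disks exist and are attached (state "in-use"):
 
aws --region=$REGION ec2 describe-volumes \
  --filters "Name=tag:string,Values=$CLUSTER" \
  | jq --raw-output '.Volumes[] | "\(.VolumeId) \(.State) \(.Size) GiB"'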
 
To simplify block device selection, I will exclude the system devices from NDM. First, download the latest openebs-operator-1.x.x YAML file (1.1.0 at the time of writing):
 
wget https://openebs.github.io/charts/openebs-operator-1.1.0.yaml
 
Edit the file and append the two devices /dev/nvme0n1,/dev/nvme1n1 to the end of the exclude: list under the path-filter entry in filterconfigs:
 
...
filterconfigs:
  - key: vendor-filter
    name: vendor filter
    state: true
    include: ""
    exclude: "CLOUDBYT,OpenEBS"
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/nvme0n1,/dev/nvme1n1"
...
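 
If you would rather script the edit, a sed one-liner can append the two devices, assuming the stock exclude list shown above is still intact in openebs-operator-1.1.0.yaml:
 
$ sed -i 's#/dev/dm-,/dev/md#/dev/dm-,/dev/md,/dev/nvme0n1,/dev/nvme1n1#' openebs-operator-1.1.0.yaml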
 
Now, install OpenEBS.
 
kubectl apply -f openebs-operator-1.1.0.yaml
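 
Give the OpenEBS control plane a minute to come up, then confirm that the pods in the openebs namespace (maya-apiserver, the provisioners, and the NDM daemonset pods) are all Running:
 
$ kubectl get pods -n openebs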
 
Get the list of block devices attached to our Amazon Elastic Compute Cloud (EC2) worker nodes.
 
$ kubectl get blockdevices -n openebs
NAME                                           SIZE           CLAIMSTATE   STATUS   AGE
blockdevice-16b54eaf38720b5448005fbb3b2e803e   107374182400   Unclaimed    Active   23s
blockdevice-2403405be0d70c38b08ea6e29c5b0a2f   107374182400   Unclaimed    Active   14s
blockdevice-6d9b524c340bcb4dcecb69358054ec31   107374182400   Unclaimed    Active   22s
blockdevice-ecafd0c075fb4776f78bcde26ee2295f   107374182400   Unclaimed    Active   23s
 
Create a StoragePoolClaim using the list of block devices from the output above. Make sure you have replaced the blockDeviceList with yours before you run it:
 
cat <<EOF | kubectl apply -f -
kind: StoragePoolClaim
apiVersion: openebs.io/v1alpha1
metadata:
  name: cstor-ebs-disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
          memory: 2Gi
      - name: PoolResourceLimits
        value: |-
          memory: 4Gi
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
      - blockdevice-16b54eaf38720b5448005fbb3b2e803e
      - blockdevice-2403405be0d70c38b08ea6e29c5b0a2f
      - blockdevice-6d9b524c340bcb4dcecb69358054ec31
      - blockdevice-ecafd0c075fb4776f78bcde26ee2295f
EOF
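 
You can watch the claim being processed; once cStor finishes, you should see one CStorPool per block device. The spc and csp shortnames assume the OpenEBS 1.1.0 CRDs; the long names storagepoolclaims and cstorpools work as well:
 
$ kubectl get spc,csp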
 
Create a default StorageClass using the new StoragePoolClaim.
 
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-cstor-default
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-ebs-disk-pool"
      - name: ReplicaCount
        value: "3"
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: openebs.io/provisioner-iscsi
EOF
 
Confirm that it is set as default:
 
$ kubectl get sc
NAME                              PROVISIONER                                                 AGE
openebs-cstor-default (default)   openebs.io/provisioner-iscsi                                4s
openebs-device                    openebs.io/local                                            26m
openebs-hostpath                  openebs.io/local                                            26m
openebs-jiva-default              openebs.io/provisioner-iscsi                                26m
openebs-snapshot-promoter         volumesnapshot.external-storage.k8s.io/snapshot-promoter   26m
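 
Optionally, before re-enabling the stateful addons, you can sanity-check dynamic provisioning with a throwaway claim; the demo-cstor-claim name here is just for illustration. Since openebs-cstor-default is now the default class, the claim doesn't need to name it explicitly:
 
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-cstor-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
EOF
 
The claim should go Bound within a minute or so; delete it once you've confirmed:
 
$ kubectl get pvc demo-cstor-claim
$ kubectl delete pvc demo-cstor-claim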
 
Edit the cluster.yaml file again: find the addons section and enable all the stateful workload components (pretty much everything except awsebscsiprovisioner, awsebsprovisioner, and localvolumeprovisioner):
 
...
addons:
  configVersion: v0.0.47
  addonsList:
    - name: awsebscsiprovisioner
      enabled: false
    - name: awsebsprovisioner
      enabled: false
    - name: dashboard
      enabled: true
    - name: dex
      enabled: true
    - name: dex-k8s-authenticator
      enabled: true
    - name: elasticsearch
      enabled: true
    - name: elasticsearchexporter
      enabled: true
    - name: fluentbit
      enabled: true
    - name: helm
      enabled: true
    - name: kibana
      enabled: true
    - name: kommander
      enabled: true
    - name: konvoy-ui
      enabled: true
    - name: localvolumeprovisioner
      enabled: false
    - name: opsportal
      enabled: true
    - name: prometheus
      enabled: true
    - name: prometheusadapter
      enabled: true
    - name: traefik
      enabled: true
    - name: traefik-forward-auth
      enabled: true
    - name: velero
      enabled: true
 
Now update the Konvoy cluster. This time the step should only take around 5 minutes to complete:
 
$ konvoy up
 
After a successful update, you should see a message similar to the following:
 
STAGE [Deploying Enabled Addons]
helm [OK]
dashboard [OK]
opsportal [OK]
traefik [OK]
fluentbit [OK]
kommander [OK]
prometheus [OK]
elasticsearch [OK]
traefik-forward-auth [OK]
konvoy-ui [OK]
dex [OK]
dex-k8s-authenticator [OK]
elasticsearchexporter [OK]
kibana [OK]
prometheusadapter [OK]
velero [OK]

STAGE [Removing Disabled Addons]
awsebscsiprovisioner [OK]

Kubernetes cluster and addons deployed successfully!

Run `konvoy apply kubeconfig` to update kubectl credentials.

 Navigate to the URL below to access various services running in the cluster.
https://konvoy.us-west-2.elb.amazonaws.com/ops/landing
And login using the credentials below.
Username: dazzling_swirles
Password: YourPasswordHere

If the cluster was recently created, the dashboard and services may take a few minutes to be accessible.
 
You should now have ten persistent volumes (PVs) on OpenEBS. You can confirm that every stateful workload got a PV using the openebs-cstor-default storage class:
 
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                               STORAGECLASS            REASON   AGE
pvc-0b6031e2-38c7-49e0-98b6-94ba9decda19   4Gi        RWO            Delete           Bound    kubeaddons/data-elasticsearch-kubeaddons-master-2                   openebs-cstor-default            18m
pvc-0fa74e6d-978a-47e6-961d-9ff6601702e6   10Gi       RWO            Delete           Bound    velero/data-minio-3                                                 openebs-cstor-default            15m
pvc-18cfe51f-0a2c-409a-ba89-dbd89ad1c9f5   10Gi       RWO            Delete           Bound    velero/data-minio-2                                                 openebs-cstor-default            15m
pvc-442dad69-56e5-44f1-8c43-cc90028c1d6d   10Gi       RWO            Delete           Bound    velero/data-minio-1                                                 openebs-cstor-default            16m
pvc-69b30e82-6e47-4061-9360-54591a3cb593   30Gi       RWO            Delete           Bound    kubeaddons/data-elasticsearch-kubeaddons-data-0                     openebs-cstor-default            21m
pvc-6d837a95-8607-4956-b3f5-bce4e42f5165   10Gi       RWO            Delete           Bound    velero/data-minio-0                                                 openebs-cstor-default            16m
pvc-83498bcb-6f7e-45b8-8d25-d6f7af46af9a   4Gi        RWO            Delete           Bound    kubeaddons/data-elasticsearch-kubeaddons-master-1                   openebs-cstor-default            19m
pvc-9df994f7-38af-4cb2-b0fc-2d314245a965   50Gi       RWO            Delete           Bound    kubeaddons/db-prometheus-prometheus-kubeaddons-prom-prometheus-0    openebs-cstor-default            21m
pvc-f1c0e0ad-f03e-4212-81b5-f4b9e4b70f10   30Gi       RWO            Delete           Bound    kubeaddons/data-elasticsearch-kubeaddons-data-1                     openebs-cstor-default            19m
pvc-f92ca2d2-b808-4dc5-b148-362b93ef6d20   4Gi        RWO            Delete           Bound    kubeaddons/data-elasticsearch-kubeaddons-master-0                   openebs-cstor-default            21m
 
Find the URL in the output of the konvoy up command above. The URL should look similar to https://yourhost.yourzone.elb.amazonaws.com/ops/landing. Open the link in your browser.
 
Log in using the username and password from the output.
 
If you want to learn more about Konvoy, sign up for a free trial.
