What is Kops? Kubernetes Operations with Kops

Today, we’ll be building on our recent coverage of the Kubernetes ecosystem to talk in more depth about Kops. This post complements our Kubernetes webinar earlier this year and follows previous posts covering deploying applications with Helm and creating and maintaining Kubernetes clusters with Kops. Let’s begin by addressing a basic question: What is Kops?

What is Kops?

Kops is an official Kubernetes project for managing production-grade Kubernetes clusters. Kops is currently the best tool to deploy Kubernetes clusters to Amazon Web Services. The project describes itself as kubectl for clusters.

If you’re familiar with kubectl, then you’ll feel at home with Kops. It has commands for creating clusters, updating their settings, and applying changes. Kops uses declarative configuration, so it’s smart enough to know how to apply infrastructure changes to existing clusters. It also has support for cluster operational tasks like scaling up nodes or horizontally scaling the cluster. Kops automates a large part of operating Kubernetes on AWS.
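
If you’re curious what that looks like in practice, here are a few representative commands (a sketch; the cluster name is illustrative):

$ kops get clusters                        # list clusters in the state store
$ kops edit cluster demo.slashdeploy.com   # open the cluster spec in your editor
$ kops update cluster demo.slashdeploy.com # apply pending configuration changes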

Before moving on to examples, let’s look at its key features:

  • Deploy clusters to existing virtual private clouds (VPC) or create a new VPC from scratch
  • Supports public & private topologies
  • Provisions single-master or multi-master clusters
  • Configurable bastion machines for SSH access to individual cluster nodes
  • Built on a state-sync model for dry-runs and automatic idempotency
  • Manipulates infrastructure directly, or generates CloudFormation or Terraform configuration
  • Rolling cluster updates
  • Supports heterogeneous clusters by creating multiple instance groups

Check out the project’s short asciicast demo for more info.

Now, we’ll tackle a common scenario: creating a cluster and configuring it for your use case.

Creating Your First Kubernetes Cluster on AWS

You’ll need to configure IAM permissions and an S3 bucket for the KOPS_STATE_STORE. The KOPS_STATE_STORE is the source of truth for all clusters managed by Kops, and the IAM permissions let Kops make AWS API calls on your behalf. I won’t cover the details in this post, but you can follow the instructions in the Kops documentation.
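
Here’s a minimal sketch of the state store setup, assuming a hypothetical bucket name (slashdeploy-kops-state) and the eu-west-1 region; enabling versioning is a common safeguard for state files:

$ aws s3api create-bucket \
	--bucket slashdeploy-kops-state \
	--region eu-west-1 \
	--create-bucket-configuration LocationConstraint=eu-west-1
$ aws s3api put-bucket-versioning \
	--bucket slashdeploy-kops-state \
	--versioning-configuration Status=Enabled
$ export KOPS_STATE_STORE=s3://slashdeploy-kops-state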

You’ll also need to configure DNS. Kops supports a variety of configurations. Each has its own setup instructions. AWS Route53 with an existing HostedZone is the easiest. We’ll assume that there is an existing AWS Route53 HostedZone for slashdeploy.com in these examples.
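
Before continuing, it’s worth confirming that the hosted zone’s name servers actually resolve. A quick dig check, assuming the slashdeploy.com zone from these examples:

$ dig ns slashdeploy.com +short
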
Kops cluster names must be valid DNS names. Let’s create the demo.slashdeploy.com cluster. Kops will also create DNS records for the Kubernetes API server at api.demo.slashdeploy.com and for bastion.demo.slashdeploy.com. Keep in mind that DNS labels are limited to 63 characters, so don’t use a base cluster name that is too long.

Everything starts with kops create. You can pass options directly on the command line or write a cluster spec file. We’ll use command line options for this exercise; a dedicated file is better for source control and other forms of configuration management. kops create accepts many options, so we’ll start with the simplest case and supply only the required ones.

$ kops create cluster \
	--yes \
	--zones=eu-west-1a,eu-west-1b,eu-west-1c \
	demo.slashdeploy.com

There are two required values. --zones specifies the availability zones where Kops creates infrastructure; here, eu-west-1a, eu-west-1b, and eu-west-1c are given, instructing Kops to create infrastructure in each eu-west-1 availability zone. This is important because Kops aims to create highly available production clusters, and spreading across multiple availability zones makes the cluster resilient to the failure of any single zone.

You must also specify the cluster name. --yes confirms operations that would normally prompt for confirmation. kops create adds a kubectl configuration entry for the new cluster, so you’re ready to use it right away. The command is asynchronous: it triggers infrastructure creation but does not wait for it to complete. Luckily, Kops includes a command to validate a cluster, and you can rerun it until it succeeds.

$ kops validate cluster demo.slashdeploy.com
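
If you’d rather poll than rerun the command by hand, a simple shell loop works (a sketch; validate exits non-zero while the cluster is still coming up):

$ until kops validate cluster demo.slashdeploy.com; do sleep 30; done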

When everything is complete, you should see something similar to the following:

$ kops validate cluster demo.slashdeploy.com
Validating cluster demo.slashdeploy.com
INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-west-1a       Master  m3.medium       1       1       eu-west-1a
master-eu-west-1b       Master  m3.medium       1       1       eu-west-1b
master-eu-west-1c       Master  m3.medium       1       1       eu-west-1c
nodes                   Node    t2.medium       2       2       eu-west-1a,eu-west-1b,eu-west-1c
NODE STATUS
NAME                                            ROLE    READY
ip-172-20-120-240.eu-west-1.compute.internal    master  True
ip-172-20-50-132.eu-west-1.compute.internal     master  True
ip-172-20-66-106.eu-west-1.compute.internal     master  True
ip-172-20-75-89.eu-west-1.compute.internal      node    True
Your cluster demo.slashdeploy.com is ready

Now, you’re ready to run any kubectl command such as kubectl get pods -n kube-system. The cluster is a bit strange, though: it has three masters and only a single worker. Let’s update the node instance group, after a quick sanity check that kubectl points at the new cluster.
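
A minimal check, assuming Kops set the kubectl context as described above:

$ kubectl config current-context   # should print demo.slashdeploy.com
$ kubectl get nodes                # lists the masters and workers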

Modifying Cluster Infrastructure

Remember that Kops behaves like kubectl. This means you can run kops edit to change the configuration files in your editor. The next step is to run kops update, which applies configuration changes but does not modify running infrastructure. Finally, kops rolling-update updates or recreates infrastructure to match.
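
Sketched out as commands, the workflow looks like this:

$ kops edit instancegroup nodes       # 1. change the spec in your editor
$ kops update cluster --yes           # 2. apply the configuration change
$ kops rolling-update cluster --yes   # 3. replace instances as needed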

This process applies to all sorts of configuration changes: first edit, then update, and finally rolling-update. Let’s take it for a spin by editing the node instance group to increase the number of worker nodes.

$ kops edit instancegroup nodes

That will open a YAML file in your editor. You’ll see something similar to the following:

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-04-05T15:33:52Z"
  labels:
    kops.k8s.io/cluster: demo.slashdeploy.com
  name: nodes
spec:
  image: kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-west-1a
  - eu-west-1b
  - eu-west-1c

All we need to do is replace minSize and maxSize with appropriate values. I’ll set both values to 3 and save the file, which writes the updated spec back to the KOPS_STATE_STORE. Now we need to update the cluster. Again, we’ll supply --yes to confirm the changes.
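
After the edit, the relevant portion of the instance group spec reads:

spec:
  machineType: t2.medium
  maxSize: 3
  minSize: 3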

$ kops update cluster --yes
Using cluster from kubectl context: demo.slashdeploy.com
I0422 07:34:58.458492   26834 executor.go:91] Tasks: 0 done / 114 total; 35 can run
I0422 07:34:59.990241   26834 executor.go:91] Tasks: 35 done / 114 total; 26 can run
I0422 07:35:01.211466   26834 executor.go:91] Tasks: 61 done / 114 total; 36 can run
I0422 07:35:04.215344   26834 executor.go:91] Tasks: 97 done / 114 total; 10 can run
I0422 07:35:04.845173   26834 dnsname.go:107] AliasTarget for "api.demo.slashdeploy.com." is "api-demo-1201911436.eu-west-1.elb.amazonaws.com."
I0422 07:35:05.045363   26834 executor.go:91] Tasks: 107 done / 114 total; 7 can run
I0422 07:35:05.438759   26834 executor.go:91] Tasks: 114 done / 114 total; 0 can run
I0422 07:35:05.438811   26834 dns.go:140] Pre-creating DNS records
I0422 07:35:06.707548   26834 update_cluster.go:204] Exporting kubecfg for cluster
Wrote config for demo.slashdeploy.com to "/home/ubuntu/.kube/config"
Kops has set your kubectl context to demo.slashdeploy.com
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster

Finally, apply the rolling-update.

$ kops rolling-update cluster --yes
Using cluster from kubectl context: demo.slashdeploy.com
NAME                    STATUS  NEEDUPDATE      READY   MIN     MAX     NODES
bastions                Ready   0               1       1       1       0
master-eu-west-1a       Ready   0               1       1       1       1
master-eu-west-1b       Ready   0               1       1       1       1
master-eu-west-1c       Ready   0               1       1       1       1
nodes                   Ready   0               3       3       3       3
No rolling-update required

That’s a bit strange: Kops says that no rolling update is required. This is true because we only changed the minimum and maximum number of instances in the nodes auto scaling group, which does not touch any existing instances. AWS simply launches two additional instances.
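
You can watch the two new workers register with a quick check:

$ kubectl get nodes   # the new nodes appear as they join the cluster
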
Let’s make another change, this time one that does require replacing infrastructure. Imagine that the existing t2.medium instances are not cutting it and we need to scale up to meet workload requirements. To do that, we change the instance type. The same edit, update, and rolling-update process applies: repeat the exercise, replace t2.medium with m4.large, and apply the rolling update. This time, Kops terminates each node so the auto scaling group replaces it with an up-to-date one.
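
For reference, the edited instance group spec now reads:

spec:
  machineType: m4.large
  maxSize: 3
  minSize: 3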

$ kops rolling-update cluster --yes
Using cluster from kubectl context: demo.slashdeploy.com
NAME                    STATUS          NEEDUPDATE      READY   MIN     MAX     NODES
bastions                Ready           0               1       1       1       0
master-eu-west-1a       Ready           0               1       1       1       1
master-eu-west-1b       Ready           0               1       1       1       1
master-eu-west-1c       Ready           0               1       1       1       1
nodes                   NeedsUpdate     3               0       3       3       3
I0422 07:42:31.615734     659 rollingupdate_cluster.go:281] Stopping instance "i-038cbac0aeaca24d4" in AWS ASG "nodes.demo.slashdeploy.com"
I0422 07:44:31.920426     659 rollingupdate_cluster.go:281] Stopping instance "i-046fe9866a3b51fe6" in AWS ASG "nodes.demo.slashdeploy.com"
I0422 07:46:33.539412     659 rollingupdate_cluster.go:281] Stopping instance "i-07f924becaa46d2ab" in AWS ASG "nodes.demo.slashdeploy.com"

A word of caution: current versions (<= 1.6) do not yet perform a true rolling update. Kops simply shuts machines down in sequence with a delay, so there will be downtime (see issue #37). The Kops team has implemented a new, experimental feature that drains and validates nodes; you can enable it by setting export KOPS_FEATURE_FLAGS="+DrainAndValidateRollingUpdate". This should be fixed in a future release.
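
If you want to try the experimental behavior, set the feature flag before running the rolling update:

$ export KOPS_FEATURE_FLAGS="+DrainAndValidateRollingUpdate"
$ kops rolling-update cluster --yes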

This same process applies to other infrastructure and configuration changes (such as kubelet flags or API server flags). The documentation covers the specific cases; as always, refer to it for complete information.

Custom Cluster Infrastructures

Our example covered the simplest case, but that does not fit every scenario. Let’s walk through the options available to kops create cluster.

$ kops create cluster --help
Creates a k8s cluster.
Usage:
  kops create cluster [flags]
Flags:
      --admin-access stringSlice             Restrict access to admin endpoints (SSH, HTTPS) to this CIDR.  If not set, access will not be restricted by IP. (default [0.0.0.0/0])
      --associate-public-ip                  Specify --associate-public-ip=[true|false] to enable/disable association of public IP for master ASG and nodes. Default is 'true'.
      --bastion                              Pass the --bastion flag to enable a bastion instance group. Only applies to private topology.
      --channel string                       Channel for default versions and configuration to use (default "stable")
      --cloud string                         Cloud provider to use - gce, aws
      --dns string                           DNS hosted zone to use: public|private. Default is 'public'. (default "Public")
      --dns-zone string                      DNS hosted zone to use (defaults to longest matching zone)
      --image string                         Image to use
      --kubernetes-version string            Version of kubernetes to run (defaults to version in channel)
      --master-count int32                   Set the number of masters.  Defaults to one master per master-zone
      --master-security-groups stringSlice   Add precreated additional security groups to masters.
      --master-size string                   Set instance size for masters
      --master-zones stringSlice             Zones in which to run masters (must be an odd number)
      --model string                         Models to apply (separate multiple models with commas) (default "config,proto,cloudup")
      --network-cidr string                  Set to override the default network CIDR
      --networking string                    Networking mode to use.  kubenet (default), classic, external, cni, kopeio-vxlan, weave, calico. (default "kubenet")
      --node-count int32                     Set the number of nodes
      --node-security-groups stringSlice     Add precreated additional security groups to nodes.
      --node-size string                     Set instance size for nodes
      --out string                           Path to write any local output
      --project string                       Project to use (must be set on GCE)
      --ssh-public-key string                SSH public key to use (default "~/.ssh/id_rsa.pub")
      --target string                        Target - direct, terraform (default "direct")
  -t, --topology string                      Controls network topology for the cluster. public|private. Default is 'public'. (default "public")
      --vpc string                           Set to use a shared VPC
      --yes                                  Specify --yes to immediately create the cluster
      --zones stringSlice                    Zones in which to run the cluster

A few of these flags deserve special attention; a sketch combining them follows this list.

  • --vpc and --network-cidr can be used when deploying to an existing AWS VPC.
  • --bastion generates a dedicated SSH jump host for SSH access to cluster instances. This is best used with --associate-public-ip=false.
  • --master-zones specifies all of the zones where masters run. This is key for HA setups.
  • --networking sets the cluster’s network provider. Your choice depends on your requirements and must be compatible with the specified --topology.
  • --topology controls whether the cluster uses public or private networking. I prefer --bastion --topology=private --associate-public-ip=false --networking=weave to keep clusters inaccessible from the public internet.
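
Putting those flags together, creating a private-topology cluster might look like this (a sketch using the same example domain; adjust zones and networking to taste):

$ kops create cluster \
	--topology=private \
	--networking=weave \
	--bastion \
	--associate-public-ip=false \
	--master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
	--zones=eu-west-1a,eu-west-1b,eu-west-1c \
	demo.slashdeploy.com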

Next Steps

Kops is one of the best tools we have right now to manage Kubernetes clusters. Like everything else in the Kubernetes ecosystem, it is changing rapidly. The #kops channel on the Kubernetes Slack team is the best place to interact with other users. The people behind Kops are actively fixing bugs, introducing new features, and accepting proposals from the community. They also set aside an hour every other week to offer help and guidance: they work with newcomers, help with PRs, and discuss new features. Anything goes.

They hold office hours on Zoom video conferences on Fridays at 5 p.m. UTC / 9 a.m. US Pacific Time every other week, on odd weeks; feel free to add something to the agenda. I also recommend reading through the issue tracker to get a feel for the known issues and, more importantly, the missing features.

Kops can do a lot, but it may not cover everything in your use case, so be sure to do your research before diving in head first. One notable omission is pre/post install hooks for node configuration, which are needed for things like pre-pulling images or installing software on nodes. This was recently fixed in a pull request, but there is no timeline for the next release at this point.

Kops, Kubernetes, containers, Docker, and more are also discussed in the Cloud Academy 2017 office hours.
Stay tuned on the Cloud Academy blog for more Kubernetes!

Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), Trance addict, Devops, Continuous Deployment advocate. I lead the SRE team at Saltside where we manage ~400 containers in production. I also manage Slashdeploy.
