Kubernetes: The Final Frontier

Created by David F. Alves, managing partner at Global Quality Partners, LLC.

Kubernetes, also known as K8s: what is it?  No, it is not the latest designer drug of choice among tweens, nor something that grammar schoolers imagine passing to each other on the playground.

Kubernetes is the Ultimate DevOps Automation Tool.  It is arguably the final tool you’ll ever need, because it delivers in virtually every way you can imagine (or need), and it continues to grow into the future.  Now, I’m experienced enough to know it isn’t perfect, and some out there will find ways in which it falls short for them, but no competing product I am aware of, commercial or open source, delivers in every way I am about to discuss within the confines of this article.

What is it?

So, what is Kubernetes, and why is it the Final Frontier?

Well, before we engage upon this voyage into Kubernetes, let’s talk about its predecessor, and where it comes from.

Kubernetes (in the form of its predecessor) was created approximately 15 years ago at Google (in its current form, Kubernetes was announced in June 2014, with the v1.0 release following in July 2015).  The original code name at Google was Project Seven, a tribute to the Star Trek fan-favorite character “Seven of Nine”, a friendlier member of the Borg collective, an alien race of cybernetic drones organized into a “Hive Mind” known as “The Collective”.

Seven of Nine

The Borg conquered other races by assimilating their people into the collective and consuming their knowledge and technology.  Eventually, the Google system became known as the Borg system, or simply the Borg.


Star Trek’s Enterprise approaching the Borg Cube; Enterprise Captain Jean-Luc Picard, assimilated into the Borg.

I presume this is due to its “Hive Mind”-like capability to assimilate applications (as containers), and to automate, orchestrate, and scale their deployments.  Google has been using this system to deploy nearly all of its applications, as containers, within the Borg, for over a decade.

So, Kubernetes is well tested and capable; it is not the new kid on the block.

To answer, emphatically and technically, the question of what Kubernetes is:

It is an open-source system for automating the deployment, scaling and management of containerized applications.  It supports a range of container tools, including Docker.  It was originally designed by Google and donated to the Cloud Native Computing Foundation.
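To make “containerized applications” concrete, here is a minimal sketch of how an application is described to Kubernetes declaratively (the names and image are purely illustrative):

```yaml
# A minimal Pod manifest: one container, running one image.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web        # illustrative name
  labels:
    app: hello-web       # label used later to select this Pod
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

Saved to a file and submitted with `kubectl apply -f pod.yaml` (a hypothetical filename), this asks the cluster to run the container; Kubernetes takes care of where and how.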

What makes it so great?

When it comes to the evolution of computer technology, the keys have always been “Abstraction” and “Portability”.  Abstraction and Portability make things easier and quicker for humans, and often foster automation, mobility, configuration flexibility, and scalability, to name just a few benefits.  The Evolutionary Abstraction & Portability Lineage of Computer Technology loosely goes like this:

  • Abstraction of Computer Languages: from assembler to 2nd, 3rd, 4th generation languages; including Object Oriented Languages with Automatic Garbage Collection, etc.
  • Abstraction & Portability of the Compute Machine (cross machine languages): JVM and Java, CLR and .NET Languages.
  • Abstraction & Portability (or Virtualization) of the Physical Machine: VMWare, VirtualBox, Hypervisor, etc.
  • Abstraction & Portability of the Application: Docker and other container technologies encapsulate the Application and everything it requires to operate into a fixed, movable “container” that can be stacked with other beneficial containers.

And that is the state of the art as far as most software professionals are aware.  However, today’s DevOps tools are doing the same thing, in other ways, with Workflows, Models, and the supporting technology that brings these abstractions to reality.  That said, most of these tools are point solutions: they solve one problem, or rely on one language, or one type of infrastructure.  If they come close in capability, they usually cannot compete on price with an open-source solution, or they rely on Kubernetes to cover the last mile, or they run afoul of your internal IT security and support policies due to their proprietary communications, especially when dealing with Cloud or Hybrid-Cloud solutions.

So, to continue, here are the additional abstractions and portability Kubernetes brings to the table, which no other tool I am aware of in today’s market can deliver on 100%:

  • Abstraction & Portability of the Operating System: Pods, which run one or more Application Containers (Docker and others).
  • Abstraction & Portability of Services (and load balancing): Services are a first-class entity in Kubernetes; they can be pointed, at will, at real internal service ports, and can also be exposed to the public.
  • Abstraction & Portability of the Host System (Virtual or Physical): Nodes are managed by the cluster, and contain one or more Pods.  Usually you run one Node per VM or physical box.
  • Abstraction & Portability of the Deployment Environment: The Kubernetes Cluster is a virtual computing cluster that can run on bare-metal machines, VMs, Amazon, Azure, Google Compute Engine, etc.
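As a sketch of the Services abstraction above, here is a hypothetical manifest that puts a stable, addressable front in front of a set of Pods (the names match the illustrative `hello-web` labels used earlier; nothing here is from a real system):

```yaml
# A Service abstracts a set of Pods behind a stable name and port.
apiVersion: v1
kind: Service
metadata:
  name: hello-web-svc
spec:
  selector:
    app: hello-web      # routes traffic to any Pod carrying this label
  ports:
    - port: 80          # the Service's own port
      targetPort: 80    # the container port it forwards to
  type: LoadBalancer    # externalize to the public; ClusterIP keeps it internal
```

Because the Service selects Pods by label rather than by address, the Pods behind it can be replaced, moved, or scaled without clients noticing.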


Kubernetes is one of the latest and greatest evolutionary tools available to us today.

Still not convinced of the power of Kubernetes?  Well, keep in mind the power of all the capabilities listed above, and now let’s combine that power with what is listed below, some of Kubernetes’ built-in features:

  • RESTful API:  The Kubernetes Cluster exposes a RESTful API (over HTTPS, on port 443 by default in many setups) that it uses both internally and externally, allowing complete manual and automated control of your environment in real time.  While I don’t know why you would want to do this, you could literally hot-swap and refactor the system’s architectural design in real time, without ever bringing the system down, while users are using it…. that’s amazing!
  • Replication Controllers:  A ReplicationController ensures that a specified number of pod “replicas” are running at any one time. In other words, a ReplicationController makes sure that a pod, or a homogeneous set of pods, is always up and available. If there are too many pods, it will kill some. If there are too few, the ReplicationController will start more.
  • Replica Sets:  A ReplicaSet is the next-generation Replication Controller.
  • Deployments:  A Deployment provides declarative updates for Pods and ReplicaSets.
  • Daemon Sets:  A DaemonSet ensures that all (or some) nodes run a copy of a pod.  Typically, Daemon Sets are used to run Unix/Linux-style daemons or Windows services.
  • StatefulSets (formerly PetSets): A StatefulSet is a Controller that provides a unique identity to its Pods.  StatefulSets are valuable for applications that require one or more of the following:
    • Stable, unique network identifiers.
    • Stable, persistent storage.
    • Ordered, graceful deployment and scaling.
    • Ordered, graceful deletion and termination.
  • Jobs:  A job creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the job tracks the successful completions. When a specified number of successful completions is reached, the job itself is complete. Deleting a Job will clean up the pods it created.
  • Separate management of:
    • Configuration data
    • Secrets (passwords, etc.)
    • Volumes (storage)
      • Persistent
      • Dynamic
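Several of the features above come together in a single Deployment manifest. The sketch below (all names illustrative, and the referenced ConfigMap and Secret are assumed to exist) shows declarative replica management plus the separate handling of configuration data and secrets:

```yaml
# A Deployment declaratively manages a ReplicaSet of identical Pods,
# while configuration and secrets live outside the container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                        # the underlying ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          envFrom:
            - configMapRef:
                name: hello-web-config   # plain configuration data
            - secretRef:
                name: hello-web-secrets  # passwords etc., managed separately
```

Kill a Pod and the ReplicaSet starts a replacement; change the ConfigMap or Secret and the application picks up new settings without a rebuilt image.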

All of this leads to some incredible possibilities and benefits with Kubernetes.

Below is a diagram depicting what this all looks like in your typical Kubernetes Cluster:

Simple Kubernetes Cluster setup

What are the Benefits?

With all this capability, abstraction, and portability, here are the benefits you can realize, with a single set of scripts (which are easy to write):

  • Write a single set of Kubernetes system deployment scripts in any language/scripting language you want (PowerShell, DOS batch, Unix/Linux shell scripts, Ruby, Java, etc.). NOTE:  You will need to learn the Kubernetes API, which you can drive from the command line with kubectl.
  • Create any architecture you’d like, including individual Development, Testing, Staging, and Production Clusters, or combine them all in one large federated Cluster (not recommended, but possible).
  • Deploy the same exact system architecture to in-house physical machines, Amazon AWS, Microsoft Azure, or Google Compute Engine (the Cloud basically). Again, same script, no changes.
  • Set your systems up for High Availability and specify how much Automatic Replication you want for any system Pod component (run a specific number of Pods; think of them as tiers, or groups of Apps that share a common purpose), from startup, to loss during disaster recovery, to purposely shutting down and redeploying an updated version.
  • Upgrade your environment without ever bringing it down. Similarly, support Canary/Blue-Green deployment environments.
  • Automate incident remediation in Production. If you find a problem using monitoring tools like Dynatrace, fix it automatically, in real time, without bringing the system down or affecting the user experience.
  • 3-Dimensional Scalability: With traditional Vertical Scalability, we separate functionality into tiers.  With traditional Horizontal Scalability, we reproduce physical or virtual machines at the tier level.  With Kubernetes, the functionality, Tiers, VMs, etc., are broken down into containers, Pods, and Nodes, therefore we can duplicate pieces of any tier, VM, or component thereof, and/or break things into individual scalable Services, in essence, scaling diagonally.
  • Kubernetes has security features of its own, but consider this: it is hard to hack vulnerabilities that are plugged every hour, and with the ease of deployment Kubernetes brings to the table, vulnerability patches can be rolled out extremely fast. If that’s not enough, it is also hard to hack something you cannot find, because it shifts and changes underneath you.  With Kubernetes, you can implement shifting configuration and deployment strategies so that Nodes, Pods, or Services rotate locations and ports on a periodic basis.  Talk about proactive security.
  • Never worry about scalability again. Kubernetes will automatically make use of additional Nodes as needed, and instantiate or even move Pods if any Nodes become overwhelmed. I hate to say it, but you literally can throw hardware, or VMs, at it.
  • Support legacy infrastructure with Kubernetes, or state-of-the-art microservices, or both, as you slowly rewrite new, and eventually older services from legacy to microservices.
  • The possibilities are endless.
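The zero-downtime upgrades mentioned above are driven by a rolling-update strategy on the Deployment. A minimal sketch, again with illustrative names and image tags:

```yaml
# Rolling-update settings: replace Pods a few at a time so the
# application is upgraded without ever going down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the rollout
      maxSurge: 1         # at most one extra Pod above the replica count
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.26   # bump the tag and re-apply to roll out the new version
```

Changing the image tag and re-running `kubectl apply` triggers the rollout; Kubernetes swaps Pods in and out within the stated limits while the Service keeps routing traffic to whatever is healthy.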

Wrapping up: if you are concerned about support because Kubernetes is open source, don’t be.  It has by far the largest community of any of its commercial or open-source competitors.  There are a gazillion add-ons, plugins, whatever you want to call them… and lastly, if community support isn’t enough for you, and you want Enterprise Support and some additional Enterprise add-ons, go with OpenShift from Red Hat.  OpenShift is built upon Kubernetes, with additional support from Red Hat, and some additional Enterprise features and capabilities.

So, if you are not yet using Kubernetes (which is illogical), it is time to assimilate: resistance is futile.

Make it so!

If you'd like to contact Mr. David F. Alves, you can do so at LinkedIn or https://www.gqpartners.com.