Kubernetes is the ancient Greek word for the steersman or captain of a ship. In this talk from the OpenStack Summit in Vancouver in 2015, product managers Craig Peters from Mirantis and Kit Merker from Google show how you can take a Kubernetes cluster running on OpenStack and deploy the same cluster unmodified on Google Compute Engine.
Docker has accelerated the development of portable services in the cloud thanks to its slick approach to component lifecycle management. But connecting a bunch of Docker containers into well-behaved aggregate services in the cloud can be more complex than most Docker users realize.
As it turns out, differences in cloud infrastructures and APIs are often invisible to developers until it’s too late. Infrastructure land mines include network and security configuration, and other distributed control problems.
Managing change at scale requires solutions for service discovery, well-specified aggregation, and more.
Of course, OpenStack enables placing, running, and restarting workloads; but someone still has to decide how and where to do that. Google Kubernetes fills this gap by automatically placing pods of Docker containers and orchestrating the restart and replication of these service sets across diverse cloud topologies, enabling a more responsive, lower-intervention approach to elastic workload execution.
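The replication and restart behavior described above can be sketched with a minimal manifest. This is an illustrative example, not from the talk; the names (`nginx-rc`, the `app: nginx` label) are placeholders, and the ReplicationController API shown here is the 2015-era mechanism:

```yaml
# Illustrative ReplicationController manifest (names are placeholders).
# Kubernetes keeps three replicas of the templated pod running,
# restarting or rescheduling containers as needed, regardless of
# which cloud the cluster itself runs on.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Applied with `kubectl create -f` against a cluster, the same manifest works unchanged whether that cluster runs on OpenStack or on Google Compute Engine, which is exactly the portability the talk demonstrates.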