Continuous Infrastructure Automation
Why it's no longer optional
Why Continuous Infrastructure Automation is key
Let’s start with why continuous infrastructure automation is key. We have reached a critical point in the evolution of cloud-based infrastructure, where several factors call for a game-changing approach.
Cloud Native design (containerized micro-services and service orchestration) is disrupting how we build and run apps. Kubernetes is transforming the landscape at an incredible pace and has become the de facto standard for container orchestration, and this within one year of its graduation from the CNCF (Cloud Native Computing Foundation)!
We can now run multiple instances of micro-services and autoscale them with zero downtime. At the same time, multi-cloud has become a proven trend in enterprise infrastructure: over 60% of IaaS users deploy workloads on several clouds.
While very powerful, it is insanely complex machinery, and it takes significant skill to master both how it works and how to maintain it over time.
As complexity now moves to the infrastructure level, continuous infrastructure automation is no longer optional.
Traditional architecture meant using a few simple servers behind a load balancer to host and run our applications. Since then, environments have gradually evolved into orchestrated, containerized architectures. As a consequence, complexity has moved to the infrastructure level.
The logical outcome is that continuous infrastructure automation has become key to proper environment provisioning. Yet many teams still jump to Kubernetes using IaaS consoles or simple deployment scripts, without automating their infrastructure first, putting their organization’s efficiency and security in jeopardy. Unsafe, locked and/or unrepeatable environments are legion.
Indeed, “ClickOps” won’t get you up the steep learning curve of Infrastructure-as-Code, nor will it help you bridge the gap between your needs and a global DevOps skills shortage. Worse, these shortcuts often create additional problems because access-rights management and clear audit trails are entirely missing.
Organizations are continuously trying to achieve better time to value with quicker deployments. Infrastructure-as-Code gives them the best option to provision, kill, and relaunch environments on demand. They can leverage tools and code to provision and deploy servers and applications, and gain greater control over their infrastructure, especially in multi-cloud environments.
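As a minimal sketch of what “environments on demand” looks like in practice, the Terraform configuration below declares a single server as code. The provider, region, AMI ID, and tags are placeholder assumptions for illustration, not a prescribed setup.

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-west-1" # assumed region for the example
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Environment = "on-demand-demo"
  }
}
```

Running `terraform apply` provisions this environment, `terraform destroy` tears it down, and applying again recreates it identically: that repeatability is the whole point.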
So, how do you manage Kubernetes clusters properly with Infrastructure-as-Code?
Leveraging HashiCorp Terraform’s abilities, we are building CloudSkiff with a GitOps approach to manage your Infrastructure-as-Code properly. Think of it as CI/CD for complex infrastructures that you need to provision, autoscale, kill, and relaunch in a seamless, repeatable process. That is what we call continuous infrastructure automation.
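As an illustration of the kind of code such a pipeline would manage, here is a hedged sketch of a managed Kubernetes (GKE) cluster declared with Terraform. The project ID, region, and cluster size are assumptions made up for the example.

```hcl
provider "google" {
  project = "my-sample-project" # placeholder project ID
  region  = "europe-west1"      # assumed region
}

# A managed Kubernetes cluster, declared as code rather than clicked
# together in a console.
resource "google_container_cluster" "main" {
  name               = "demo-cluster"
  location           = "europe-west1"
  initial_node_count = 3
}
```

Committed to a Git repository, this file becomes the single source of truth: a pipeline can run `terraform plan` on every pull request and `terraform apply` on merge, which is the GitOps loop in its simplest form.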
Consequently, the key features of CloudSkiff are:
- Generate proper code for your infrastructure, either by writing it from scratch with an assisted code generator, by reusing complex modules written by others, or through a tool that turns your click-built environments into proper code you can reuse and share.
- Organize collaboration around your code, setting up straightforward processes and boundaries with RBAC (Role-Based Access Control) to avoid trouble. You can’t have a junior dev, or even several separate senior people, push terraform destroy commands without proper control over what’s happening.
- Manage testing environments with a sandbox, to make sure things run smoothly before you deploy to production.
- Post-deployment, use dashboards and monitoring to keep control over performance and costs across accounts, clouds, geographies, and environments, and optimize your setup.
- Duplicate environments and switch seamlessly between cloud providers with our migration assistant.
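On the access-control point above, Terraform itself offers a guardrail that complements RBAC: the `prevent_destroy` lifecycle meta-argument. The sketch below uses a hypothetical production cluster resource to show how it blocks accidental `terraform destroy` runs.

```hcl
resource "google_container_cluster" "production" {
  name               = "prod-cluster"  # placeholder name
  location           = "europe-west1"  # assumed region
  initial_node_count = 3

  lifecycle {
    # Terraform refuses to plan any operation that would destroy this
    # resource until the setting is removed in a reviewed commit.
    prevent_destroy = true
  }
}
```

This is a per-resource safety net, not a substitute for proper role-based access control, but together they make a stray destroy command much harder to execute.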
Why go for Kubernetes as a first step? And why do you need a tool to manage the lifecycle of your infrastructure?
At CloudSkiff, we decided to begin by supporting managed Kubernetes environments. You have to start somewhere, and this framework, which is (again) very powerful but insanely complex, seemed a good place to support teams that might not have all the skills to master and maintain it over time.
A Kubernetes cluster is a living being: killed and reborn several times a day, always moving, always changing, always adapting and autoscaling. It is highly sensitive to version upgrades, and basic configuration mistakes can have drastic consequences, which is why you need a pipeline to manage its lifecycle. We believe DevOps teams need a new generation of dedicated tools to manage the lifecycle of their Kubernetes clusters, and that this is what it takes to help them focus on building and running their apps.
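Because clusters are so sensitive to version upgrades, one common practice is to pin the control-plane version in code, so upgrades only ever happen through a reviewed change. A minimal sketch, again using a made-up GKE cluster:

```hcl
resource "google_container_cluster" "main" {
  name               = "demo-cluster"  # placeholder name
  location           = "europe-west1"  # assumed region
  initial_node_count = 3

  # The control plane is upgraded only when this value changes in a
  # reviewed commit, never silently underneath running workloads.
  min_master_version = "1.16"
}
```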
The question is: how do you implement that within existing workflows?
Indeed, most existing workflows are touchy once set up, and nobody really wants to change them. This is why CloudSkiff is built to integrate fully with existing workflows. It links directly to the GitHub repositories of your choice and can even be called by your existing CI/CD tool through an API.