Andrew Shafer and Patrick Debois came up with this concept, and about a year later it got the name "DevOps." To celebrate the tenth anniversary of DevOps, we are not going to discuss implementation details, tricks, or commercial offerings. Instead, it is time to look back at the incredible DevOps journey: how it originated, its practices, its patterns, and more. DevOps has conquered many frontiers and is still moving towards new horizons.
I believe this article will help beginners understand the legacy of DevOps. For the experienced audience, it is time to sit back and enjoy the DevOps moments. So let us begin!
It might not be entirely correct to call DevOps a new concept, since it is the merger of two existing disciplines. But for us it is no less than a unique and helpful idea, so let us treat it as a new concept for the time being. How did it happen? In 2008, Toronto hosted the Agile 2008 conference, where Andrew Shafer gave a talk. Most of you might know this, but it is worth mentioning that this talk planted the seed of DevOps, a seed that soon grew into a huge tree with the help of the Agile movement. In short, the Agile conference witnessed the birth of DevOps.
Recall a few of the Agile Manifesto's guiding principles:
(1) Continuous delivery of quality software
(2) Delivering working software frequently
(3) Harnessing change … as a competitive advantage
We can now understand why DevOps and the practice of CI/CD were accepted so quickly: DevOps fulfilled these requirements, and the Agile community must have loved it.
The Agile community, the people who support Agile development, loved the idea of codification. But that chunk of codification could only be gulped down with a sip of supporting infrastructure, and IT professionals stepped up to provide it.
That sip of supporting infrastructure began with continuous integration. And what is continuous integration at its core? It is actually the manual testing and code-management process, codified! Infrastructure as code and cloud resources with APIs then codified the manual installation and configuration processes. And what about continuous delivery and deployment? Those are the codified manual release activities. Without codification, DevOps and Agile would not be able to fulfill their big promises. Codification has two applications in general:
For a typical application, the most time-consuming activities after development are testing, setting up the server, and installing the application. Codification abstracts these processes into code, which can then be reused. Overall, it increases productivity and saves time.
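As a minimal sketch of this idea, the manual steps to prepare a server can be captured once as code and replayed for every new machine. The package list, port, and function name below are hypothetical examples, not any particular tool's API:

```python
# Codification sketch: manual provisioning steps captured as reusable code.
# Package names and the port are illustrative assumptions.

def setup_environment(packages, port):
    """Return the shell commands that would provision a server."""
    steps = [f"apt-get install -y {p}" for p in packages]
    steps.append(f"ufw allow {port}/tcp")
    return steps

# The same codified process can be replayed for every new server.
commands = setup_environment(["nginx", "python3"], port=8080)
for c in commands:
    print(c)
```

Once the process lives in code, it can be version-controlled, reviewed, and rerun, which is exactly where the productivity gain comes from.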
Codification also helps address the inherent risk of changes, and thanks to codification even the slowest developers become aware of that risk. Addressing risk up front helps in releasing quality software. For example, suppose Change C is likely to lead to Problem P; the developer can then make an informed decision about whether to opt for Change C at all.
Firstly, automated testing helps catch bugs quickly. Second, infrastructure as code prevents configuration drift. Third, automated rollbacks offer a safety valve for botched releases. Fourth, automation reduces the chance of human error.
Another important advantage of codification is uniformity. The code creates an unofficial shared language, a common platform! Developers and system admins usually like this because communication becomes easier and more transparent. The idea of codifying everything helped flowers blossom on the DevOps tree: an entire industry of amazing products and services, hosted, SaaS, open source, and commercial. So what are these products? Any guesses? Yes, we are talking about Heroku, Cloud Foundry, AWS Elastic Beanstalk, Travis CI, Jenkins, CodeShip, Bamboo, Puppet, Ansible, Terraform, and so on.
We have discussed codification and its advantages at length. Until this stage, everything was just being discussed; it was then time to implement! The Agile Manifesto includes terms like continuous, frequent, and working software, and the implementation of DevOps brought these terms to life. But one promise still wasn't alive, or rather still wasn't understood: harnessing change as a competitive advantage. What kind of changes could benefit us, and how do we know that a particular change will benefit us in every situation? What do customers require, and will they like these changes?
Marketing, UX, and product management people have always been fond of such questions. They fight their battles with weapons like A/B and multivariate testing, test audiences, and short-lived experimental features. But all such questions can only be answered by experimentation. Did you notice that these questions have led us to experimentation? Yes, we are talking about the experimentation mindset. DevOps brought this modern way of experimenting to the back end, and it was very much needed! Earlier, software engineers, architects, and system admins did a lot of predicting: they tried to foresee the future and build the code base and infrastructure for it in advance, a practice also known as specification or estimation. With DevOps, they converted themselves from predictors into adapters.
Many engineers have tried to ditch the prediction and estimation patterns in the past, but this has never been an easy escape. New tools, techniques, patterns, and services have always been there to assist, so engineers started learning them rather than running away from them. Soon they had adapted, and from then on performance has kept improving. So let us go through a few of these patterns, tools, and services.
Blue/green deployment can be considered the great-grandfather of all these patterns. It is by far the crudest but also the most widespread deployment pattern; interestingly, it had already been implemented long before DevOps existed. The pattern is quite simple: it involves two identical environments, accompanied by some form of switch that routes traffic from one environment to the other. At release time, the new version is deployed to the idle environment and traffic is then directed towards it; the other environment remains on standby in case of unexpected issues.
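The switch described above can be sketched in a few lines. This is a toy model, not a real router: the environment names and the class are illustrative assumptions.

```python
# Blue/green sketch: two identical environments and a switch that flips
# which one serves live traffic. Names are illustrative.

class BlueGreenRouter:
    def __init__(self):
        self.live = "blue"      # currently serving traffic
        self.standby = "green"  # idle, receives the next release

    def deploy(self, version):
        # Release to the standby environment first...
        deployed_to = self.standby
        # ...then flip the switch so it becomes live.
        self.live, self.standby = self.standby, self.live
        return deployed_to

router = BlueGreenRouter()
target = router.deploy("v2.0")
print(target, router.live)  # green becomes the live environment
```

Note that the old environment is untouched after the flip, which is what makes an instant rollback possible: just flip the switch back.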
In canary releasing, the granularity is higher than in blue/green deployment, but the principle remains the same. The benefit of this pattern is that it lets you start running proper experiments without putting your whole production environment at risk. The routing component plays an important role here: it has to be smart enough to direct only a sliver of your total traffic to the new version of the application.
The A/B testing pattern is nothing but the continuation of canary releasing in a more statistical format and framework. The targeting options are the same, but with goals added on top: it helps determine how much the new or updated version contributes to achieving a goal.
So who offers A/B testing? It is offered by professional vendors like Optimizely, Visual Website Optimizer (VWO), and Google Optimize. A/B testing is also included as a feature in services like Unbounce, Mailchimp, and Kissmetrics. Companies like Pinterest and Booking.com use A/B testing so heavily that they have built their own tooling and analytics for it.
But all these services work on the front end, so you can test visual and textual variations. What about testing and experimenting with the other parts of the application? Services and products aren't part of the front end; they operate from the back end! For this, Optimizely X Full Stack is a good full-service solution that covers many languages and runtimes. On the open source side, we have options like PlanOut by Facebook, Wasabi by Intuit, and Petri by Wix, and I am sure there are many more available online.
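The core of back-end A/B assignment is giving every user a stable variant. One common technique, which the tools above implement in far richer form, is hashing the user id into a bucket. The experiment name, split, and function below are illustrative assumptions, not any vendor's API.

```python
import hashlib

# Back-end A/B assignment sketch: hash the user id so each user lands
# in a stable, pseudo-random bucket. Names are illustrative.

def assign(user_id, experiment="new-checkout", split=0.5):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return "B" if bucket < split else "A"

# The same user always gets the same variant, with no state stored.
print(assign("user-123"), assign("user-123"))
```

Hashing on the experiment name as well as the user id means different experiments slice the audience independently, which keeps their results from contaminating each other.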
Feature toggling combines canary releasing and A/B testing into a pattern focused on toggles. Application features are turned on and off with toggles, either globally or for target audiences. The key aspect of feature toggling is that it separates the moment of deployment from the moment of release. Companies like LaunchDarkly and Split.io offer feature toggling as a service; for open source, Togglz and Flipper are good options.
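The deploy/release separation can be sketched as a flag registry checked at runtime: the feature's code ships dark and is switched on per audience later. The class, flag name, and audience rule below are illustrative assumptions, not any vendor's API.

```python
# Feature-toggle sketch: code is deployed dark and released later by
# flipping a flag for a target audience. Names are illustrative.

class FeatureToggles:
    def __init__(self):
        self._flags = {}  # flag name -> predicate(user) returning bool

    def register(self, name, predicate):
        self._flags[name] = predicate

    def is_enabled(self, name, user):
        rule = self._flags.get(name)
        return bool(rule and rule(user))

toggles = FeatureToggles()
# Dark launch: the feature is deployed, but enabled only for internal users.
toggles.register("new-dashboard", lambda u: u.endswith("@example.com"))
print(toggles.is_enabled("new-dashboard", "dev@example.com"))    # True
print(toggles.is_enabled("new-dashboard", "visitor@gmail.com"))  # False
```

In production systems the predicates typically live in a remote config service so a release can be widened, or rolled back, without redeploying anything.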
The shadow traffic pattern is also known as dark traffic. In this pattern, the production traffic flow is also exposed to an experimental version of the application. The interesting part is that the end user isn't directly involved: everything happens in the dark. The pattern can be used for load and performance testing as well!
Implementing the shadow traffic pattern isn't easy. One has to think about how network traffic works at the lowest level, in terms of client responses, encryption, and so on. But the payoff for all this hard work is quite rewarding, especially with an integrated platform like Kubernetes.
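Stripped of the networking details, the core idea is simple: mirror every request to the experimental version, but return only the stable version's response. The handlers below are stand-ins for real services, purely for illustration.

```python
# Shadow-traffic sketch: production requests are replayed against an
# experimental version, but only the stable response reaches the user.
# Handlers are illustrative stand-ins for real services.

def stable_handler(request):
    return {"status": 200, "body": f"v1:{request}"}

def shadow_handler(request):
    return {"status": 200, "body": f"v2:{request}"}

shadow_log = []

def mirror(request):
    # Fire-and-forget to the shadow version; its result is only logged.
    try:
        shadow_log.append(shadow_handler(request))
    except Exception:
        pass  # a shadow failure must never affect the real response
    return stable_handler(request)

response = mirror("GET /cart")
print(response["body"])  # the user only ever sees v1
print(len(shadow_log))   # the experiment saw the same traffic
```

Real implementations mirror asynchronously and must also handle the hard parts glossed over here: side effects (you must not charge a customer twice), authentication, and response comparison.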
These experimentation patterns are accepted and implemented across the full IT stack, which is why they need to be analyzed at a granular level. That analysis gives rise to a whole new set of problems. For example, how do you deal with the data output of experiments when not every company has a dedicated data science team? Next, how do you match the infrastructure to the needs of the experiments, given that successful experiments require more resources? And how do you do all this at scale while maintaining your sanity? Some of these problems are related to automation, and DevOps practices can solve them. Let us look at the first products and services that aim to tackle these issues.
Machine learning and artificial intelligence are not only meant for approximating human behavior. They can also be used to understand the deluge of logs and metrics in a typical modern IT stack. This hasn't been a complete success yet, but it looks quite promising and certainly has a future. Here, pattern recognition, filtering, and predictive analysis are the key aspects, and these are well-understood mathematical problems.
DevOps-oriented products and services like SignifAI and Logz.io have used these techniques. They aim to cut down the noise that is inherent in data streams, helping you focus on the important things that lie underneath. Do you know Elasticsearch, the open source search engine widely used to analyze log data? It has also added machine learning as a feature, giving users an easy path into it. The next step is leveraging machine learning and artificial intelligence for making decisions about deployments, scaling, feature toggling, and ongoing experiments. Surprisingly, this field is still largely unexplored, and there is a lot of scope for improvement.
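To make the "cutting down the noise" idea concrete, here is a toy version of the filtering such tools perform: flag metric points that deviate strongly from the recent baseline. The threshold and the latency numbers are illustrative assumptions; real products use far richer models than a simple deviation test.

```python
from statistics import mean, stdev

# Toy anomaly filter: flag points far from the series' own baseline.
# Threshold and sample data are illustrative.

def anomalies(series, threshold=2.0):
    mu, sigma = mean(series), stdev(series)
    return [x for x in series if abs(x - mu) > threshold * sigma]

latencies_ms = [102, 98, 101, 99, 103, 100, 97, 450]  # one bad spike
print(anomalies(latencies_ms))  # only the spike survives the filter
```

The point of the sketch is the shape of the problem: out of thousands of metric points, only the handful that break the pattern should ever reach a human.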
Next, let us discuss how modern container platforms like Kubernetes, Docker Swarm, and Mesosphere DC/OS fit in. My point is that these platforms are emerging as the new foundation for DevOps patterns. They offer unparalleled portability, scalability, and quick deployment, and on top of that, these patterns are standardized. So we can consider them the ideal base infrastructure for undertaking experiments:
(1) Codified and standardized runtimes, storage, and networking through containers (for now, mostly Docker containers and related technologies).
(2) A codified deployment process through the platform's API.
(3) Smart routing via overlay networks, integrated API gateways, and service discovery/meshes.
Fortunately, teams now understand the importance of DevOps and are moving to cloud-native infrastructures. In fact, we can already see numerous applications enabled by the cloud-native paradigm.
For example, products and frameworks like Istio and Vamp have taken smart routing to a new level, implementing it in such a way that routing decisions become malleable, real-time application attributes. Integrated with monitoring, these smart routing solutions have become the backbone of patterns like A/B testing and shadow traffic as implemented on DC/OS and Kubernetes.
There is another amazing technique worth discussing: bin packing. In this technique, numerous application processes share a physical host up to its capacity, ensuring that unused capacity isn't wasted. Kubernetes and DC/OS have already implemented it.
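The classic first-fit heuristic captures the idea: place each process on the first host with enough spare capacity, and provision a new host only when nothing fits. The capacities and process sizes below are illustrative (think CPU units); real schedulers weigh many more dimensions.

```python
# First-fit bin-packing sketch: pack processes onto as few hosts as
# possible. Capacities and sizes are illustrative CPU units.

def first_fit(processes, host_capacity):
    hosts = []      # remaining free capacity per host
    placement = []  # which host each process landed on
    for p in processes:
        for i, free in enumerate(hosts):
            if free >= p:
                hosts[i] -= p
                placement.append(i)
                break
        else:  # no existing host fits: provision a new one
            hosts.append(host_capacity - p)
            placement.append(len(hosts) - 1)
    return placement, len(hosts)

placement, used = first_fit([4, 3, 2, 5, 1], host_capacity=8)
print(used)  # 2 hosts instead of one per process
```

Packing five processes onto two hosts instead of five is exactly the unused-capacity saving the pattern is after.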
Further, there are chances to combine this with AWS Spot Instances to increase cost efficiency, and it fosters potential advances by tying the cost level to higher-level business goals. Many companies depend on internal cost accounting to structure their IT costs; this could help them make a top-notch business case for experimental applications.
Most of us know that DevOps is being talked about and implemented everywhere. But where do the greatest opportunities lie? We can't predict anything precisely, but the ongoing trends in DevOps are pointing in these directions:
DevOps will keep improving day-to-day activities. Not just big companies but small companies too will implement DevOps to perform well. Being flexible, it will enable teams to develop new patterns and new ways of doing things. It will also help integrate testing earlier in the process, with the tests themselves and their feasibility monitored with the help of DevOps.
Customer loyalty will be won or lost on phenomenal CX. When a customer leaves, there is a 60% chance that they won't ever come back, so there is just one opportunity to provide an outstanding CX, and DevOps will help deliver innovation faster. In the future, there might not even be a need for a developer to make infrastructure changes, nor a need to build and manage that infrastructure by hand. Small teams will be able to build scalable systems with fewer members. Everyone will be doing DevOps, with countless benefits popping up.
Containers are already popular and will become even more so. The need for and rate of code changes, deployments, and updates keep growing, and with that, security and DevSecOps will see massive growth. DevOps will help improve the security of all code and applications, with databases and security falling under the same policies, something DevOps practices are just beginning to achieve. More companies will migrate to the cloud, pretty soon!
We have already discussed artificial intelligence and machine learning. Tools will adopt artificial intelligence and soon start finding patterns in logs and events, which will also lead to self-healing systems. There will be more tests and experiments run by leaner teams, and this will also lead to autoscaling.
So what is the "moral" of this DevOps saga? Quite simple: DevOps is expanding. Accompany DevOps on its journey and grow big with it!
For more on DevOps and an in-depth study, I recommend this book: https://level-up.one/devops-pdf-book/