DevOpsCon Munich 2021: 100% online, 100% smart!



After our success at DevOps D-Day in France a few weeks ago, the Cycloid team continued to conquer the world of DevOps - this time at DevOpsCon Munich. The event, initially planned as a hybrid, fell at an unfortunate time and, due to new restrictions in Germany, had to switch to 100% online at the last minute.

Despite the difficulties, the organizers did an impressive job, and our brilliant team of Céline Stenson, Meggie Juton, Rob Gillam, Olivier de Turckheim, and your humble servant, Chamseddine Saadoune, worked hard to make the most of it.

So what are the most pressing topics in the world of DevOps right now? What did we learn from last year, and what are we bringing into the next? Answers below.

General trends and small discoveries

 

There were 26 talks and 6 AMAs (ask me anything sessions) on the first day (November 30), and 24 talks, an online raffle (which I, unfortunately, didn’t win), and 3 AMAs on the second (December 1). Out of the 13 sessions I was able to listen to, participate in, and tweet about (follow me for live updates), here are the event highlights.

Remember, DevOps is not (just) about technology

 

Since 2007, the term DevOps has been growing in popularity; its understanding, however, has not kept pace. Vague definitions make adopting a DevOps culture complicated. This is the definition we like to stick to at Cycloid (inspired by the one provided by Microsoft):

“DevOps is a philosophy centered on the adoption of cultures, practices, and tools intended to continuously deliver added value through the union of people, processes, and technologies linked to the worlds of development (Dev) and operations (Ops).”

And I particularly like this definition because it clearly shows that adopting a DevOps culture is not just a matter of focusing on modern tech tools, but also supporting teams and upskilling people.

Galia Diez and Andriy Samsoyuk summed it up very well in their keynote "It Takes a Team to Achieve the Full Value of DevOps: DevOps practices go far beyond technological aspects". They demonstrated the importance of cultural change through their analysis of managing change in a large-scale DevOps transformation program. Their concept of agile change management follows a few key steps that help them stay on top of constant change and meet rapidly evolving business challenges.

This process includes:

    1. Identify the need for change
    2. Qualify changes and assess their impact
    3. Plan, organize, and manage change actions
    4. Communicate and implement
    5. Review and evaluate

These steps recall The Three Ways that underpin the DevOps philosophy (the principles of flow, feedback, and continuous learning). With this process, plus other elements borrowed from DevOps principles (start small and keep it simple, for example), they gained better control of their change management without focusing solely on the purely technical aspects (and that's beautiful!).

In another take, Erol Zavidic, in his click-bait of a talk "Retrospective: Where DevOps nearly caused havoc", explained how implementing only the technical aspects of DevOps caused a significant negative business impact. How? A succession of changes that weren’t closely monitored by the team left several important servers unavailable for a client.

These overlapping changes could have been avoided if the technical, business, and project teams had communicated and collaborated better to understand the issues and their possible impact. Erol also insisted on the importance of feedback provided by KPIs and appropriate monitoring, as well as automated tests that validate (or invalidate) a recent change. These are all good practices which, even if they are purely technical, must be discussed, validated, and of course followed up on by the people who interact with them daily.
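
To make that last point a little more concrete, here is a minimal sketch of the kind of automated post-change check Erol was alluding to. It is my own illustration rather than anything shown in the talk, and the health-check URL is a placeholder assumption:

```python
# An illustrative post-change smoke test: after a deploy, fail fast if a key
# endpoint stops answering, instead of finding out from users.
# The URL below is a placeholder, not a real endpoint.
import sys
from urllib.request import urlopen


def smoke_test(url="http://localhost:8080/health", timeout=3):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, ...
        return False


if __name__ == "__main__":
    if not smoke_test():
        print("smoke test failed: roll back or investigate before proceeding")
        sys.exit(1)
    print("smoke test passed")
```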

Microservices design & management, common pitfalls & solutions

 

Do you know about microservice architectures? They enable very fast development cycles by splitting an application into independent services that are ultimately linked to each other, via API calls, for example. Why are they useful? Microservices allow better application distribution, scaling, collaboration, deployment, security, and more. In short, good news!
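
To give a rough idea of what "independent services linked via API calls" means, here is a minimal sketch of two toy services where one only knows the other through its HTTP API. The service names, port, and data are made up for illustration; a real setup would run these as separate deployments:

```python
# A minimal sketch (illustrative only) of two "microservices" talking over HTTP.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class InventoryService(BaseHTTPRequestHandler):
    """Stands in for an independent service that owns its own data."""

    def do_GET(self):
        body = json.dumps({"sku": "abc-123", "in_stock": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass


def order_service_checks_stock():
    """A second service depends on the first only through its HTTP API."""
    with urlopen("http://localhost:8081/stock/abc-123") as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    server = HTTPServer(("localhost", 8081), InventoryService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(order_service_checks_stock())  # {'sku': 'abc-123', 'in_stock': 7}
    server.shutdown()
```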

But not so fast! Be aware that there are some obvious pitfalls during microservice design or execution. This was precisely the topic that Magnus Kulke and Lothar Schulz covered during their talk "Microservice Pitfalls".

In summary, the speakers described microservices as, above all, a social tool. Conway's famous law states that “any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure”. Microservices thus allow teams (even more than other application architecture models do) to make autonomous decisions. Communication between teams, and agreeing on what stakeholders call contracts, become even more important goals. The danger here is underestimating this!

Another pitfall is that defining the domain of each microservice attached to an application is often complicated. Common traps include having a single database shared by all microservices, or not thinking about your application in distributed terms.

Also, when implementing these new techniques, teams tend to reinvent the wheel in the way they connect services, without necessarily thinking of a unified solution that can be used across one or more environments (such as a service mesh). This was covered by Denis Jannot during his talk "The challenges of exposing and connecting microservices". Unfortunately, the "build-it-yourself" problem is all too common in IT.

Finally, and by way of conclusion, a point raised by both talks is the need to monitor the use of your microservices: watch the metrics, traces, and logs they emit so you can learn from the feedback and react quickly, while keeping in mind that failure is not just an option but inevitable. So, as much as possible, please don't fall into the trap of not being proactive enough!

Observability & chaos engineering to prepare for the unexpected

 

Observability and chaos engineering, both highlighted at DevOpsCon Munich, are in some ways two sides of the same coin. Let me explain.

On the one hand, as Andre Pietsch said very well in his talk “Observabiliwhaaaat and why now? But open source for sure …”, microservices help a lot but add an extra layer of complexity, both in their design and in their daily execution and management (hello, Kubernetes!). Classic monitoring keeps an eye on the things we already know can go wrong, while observability focuses on finding the unexpected and explaining why it happened. That is where the three pillars of observability come in: metrics, logs, and traces, as mentioned above.
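
To make the three pillars a bit more tangible, here is a deliberately library-free sketch of one request emitting all three: a counter you could graph (metric), discrete events (logs), and an ID plus timing that could tie the request together across services (a stand-in for a trace). In real life you would reach for tools like Prometheus, an ELK stack, or OpenTelemetry rather than rolling your own; everything below is illustrative:

```python
# A minimal, library-agnostic sketch of the three pillars around one request.
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
REQUEST_COUNT = {"checkout": 0}  # metric: an aggregate number you can graph


def handle_checkout():
    trace_id = uuid.uuid4().hex   # trace: an ID that follows the request around
    REQUEST_COUNT["checkout"] += 1
    start = time.monotonic()
    try:
        time.sleep(0.05)  # pretend to do real work / call other services
        logging.info("checkout ok trace_id=%s", trace_id)  # log: a discrete event
    finally:
        duration_ms = (time.monotonic() - start) * 1000
        logging.info("checkout latency_ms=%.1f trace_id=%s", duration_ms, trace_id)


if __name__ == "__main__":
    handle_checkout()
    print("metrics:", REQUEST_COUNT)
```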

On the other hand, explaining why the unexpected happened doesn't tell us when it will happen again or how we will react when it does. This is why chaos engineering, an increasingly widespread practice initially popularized by Netflix (and its famous Chaos Monkey), emphasizes continuous experimentation and learning (values very important to the DevOps philosophy and to Cycloid).

Closely connected to the SRE philosophy, the practice focuses on deliberately creating chaos in your environments through tools and automated actions, in order to validate certain assumptions and expectations, test the resilience of your architectures, or improve your capacity to respond to (this time unintentional) incidents, for example.
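
For a flavor of what that can look like at the smallest possible scale, here is a toy fault-injection sketch (my own illustration, not a real chaos engineering tool like Chaos Monkey): it randomly slows down or fails a call, and the "experiment" is checking whether the caller's retry logic holds up under that level of chaos:

```python
# A toy fault-injection sketch: randomly add latency or failures to a call and
# check that the caller's retry logic still copes. Illustrative only.
import random
import time


def chaotic(func, failure_rate=0.3, max_delay=0.2):
    """Wrap a function so it sometimes fails or slows down, on purpose."""
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(0, max_delay))    # inject latency
        if random.random() < failure_rate:          # inject failure
            raise ConnectionError("chaos: injected fault")
        return func(*args, **kwargs)
    return wrapper


@chaotic
def fetch_price(sku):
    return {"sku": sku, "price": 42}


def resilient_fetch(sku, retries=5):
    """The hypothesis under test: our retries absorb this level of chaos."""
    for attempt in range(retries):
        try:
            return fetch_price(sku)
        except ConnectionError:
            time.sleep(0.05 * (attempt + 1))  # simple backoff between retries
    raise RuntimeError("hypothesis falsified: service not resilient enough")


if __name__ == "__main__":
    print(resilient_fetch("abc-123"))
```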

Also know that you can bring chaos anywhere, and ultimately via any means (you can even use Cycloid to do it!), even into the security of your systems, as Adriana Petrich and Francesco Sbaraglia explained very well in the talk “Let the Chaos Begin - SRE Chaos Engineering Meets Cybersecurity”. However, without an observability-focused approach and tooling, analyzing the results of your experiments in any detail becomes much more complex.

Lastly, the final presentation by Michiel Rook, "Learning in production (or why the Apollo 11 landing nearly failed)", showed that even in spaceflight there has been a great deal of experimentation and learning from (un)intentional errors. In today’s IT landscape, we can make the most of the union of observability and chaos engineering to prepare and test.

Final words

 

The two days of this DevOpsCon were completely packed with information and yet felt like just a few hours. This year the speakers truly delivered: insightful topics, heated discussions, and interesting opportunities.

If you enjoyed this review and would like to see more updates from me, follow me on Twitter for live tweets from the events Cycloid will be attending next year, or subscribe to our newsletter to keep up with new reviews on the blog.

 


