Hidden Lessons of Incident Management
By Jason Hand

One of the most common early goals of implementing DevOps principles is a deep understanding of our systems in a stable state. However, this objective is not a “once and done” effort. It is important to continuously circle back, in some form of feedback loop, as changes are introduced. It’s an ongoing exercise for the entire organization as processes, tools, and teams improve over time.

In many cases during these early stages of a DevOps transformation, much of our time is spent simply agreeing on a starting point. An unfortunate consequence is that, without confidence about where to start, we often never start at all. Analysis paralysis is a very real thing, especially for big organizational changes, and those who are risk-averse fall victim to it far too easily.


Be wary if R.O.I. is creeping into the conversation about adopting DevOps tools and processes. This is an immediate indication that the collective mindset of management, or at least of the decision-makers, has not yet placed continuous improvement and learning as the highest priority. The decision-making ways of the past will not empower an organization to adapt and thrive in today’s competitive business environment.

(Should you disagree with this position, the rest of this article will provide you no real value.)

“Establishing a deep understanding of our current systems to formulate a baseline and feedback loop is the foundation. From there, we improve.”

Measuring confidence (or the lack of it) in current methods of software delivery and maintenance, as reflected in anomalies in development and operations work, helps shed light on where to start. By shifting focus toward a deeper understanding of our infrastructure and codebase, a starting point begins to appear. Decision-making paralysis begins to ease, and the “managing from a distance” behaviors, such as R.O.I. mentions, stop carrying weight.
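As a concrete illustration of what a baseline and feedback loop can look like in practice, here is a minimal, hypothetical sketch (not any specific product's approach): track a metric's recent history and flag values that deviate sharply from the established baseline.

```python
from collections import deque
from statistics import mean, stdev

class RollingBaseline:
    """Track a metric's recent history and flag observations that
    deviate sharply from the established baseline."""

    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)  # recent observations
        self.threshold = threshold           # std-deviations-from-mean cutoff

    def observe(self, value):
        """Record a value; return True if it looks anomalous."""
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) > self.threshold * sigma
        else:
            anomalous = False  # not enough history for a baseline yet
        self.history.append(value)
        return anomalous

baseline = RollingBaseline()
for latency_ms in [100, 102, 98, 101, 99, 103, 97, 100, 250]:
    if baseline.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms")  # only the 250 ms spike is flagged
```

Real monitoring systems use far more sophisticated models, but even a toy like this shows the principle: you cannot recognize an anomaly until you have invested in understanding "normal."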


From there, small incremental goals or what are known as “Target Conditions” can be set to begin the process of improvement. This focus on improvement is the key to unlocking so many of the concepts brought up in DevOps conversations. Continuous Integration and Continuous Delivery are possible only as results of a focus on understanding current conditions while placing a company-wide effort on striving towards Continuous Improvement.

Thus, a good place for any organization to dip a first toe in the DevOps pool is with improvements to on-call, incident management, and monitoring. Understanding your organization’s existing methods of identifying and responding to abnormalities is one of the easiest and most instructive first steps.

The immediate benefits of modern on-call practices are easy to identify and agree on:

– Anomalies are detected in real-time.

– The correct operators and engineers are alerted to actionable issues as quickly as possible.

– Critical context on what’s taking place gives responders exactly what they need in that moment, shaving time and cognitive load.

– A collaborative space to discuss context, diagnosis, and repair efforts means reduced Time to Repair and increased situational awareness across teams and the organization of what is happening and the current “state of systems”.
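The routing piece of those benefits can be sketched in a few lines. This is a hypothetical illustration (the `ON_CALL` schedule, service names, and runbook URL are invented for the example, not any real platform's API): actionable alerts reach the right engineer with context attached, and noise pages no one.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Alert:
    service: str
    severity: str            # e.g. "critical", "warning"
    message: str
    runbook_url: str = ""    # context handed to the responder
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical on-call schedule: service -> currently paged engineer.
ON_CALL = {"payments": "alice", "search": "bob"}

def route(alert: Alert) -> Optional[str]:
    """Page the on-call engineer only for actionable alerts,
    attaching the context they need in the moment."""
    if alert.severity != "critical":
        return None  # don't wake anyone for noise
    engineer = ON_CALL.get(alert.service, "default-responder")
    return (f"[{alert.timestamp:%H:%M}] page {engineer}: {alert.service} - "
            f"{alert.message} (runbook: {alert.runbook_url or 'n/a'})")

print(route(Alert("payments", "critical", "error rate > 5%",
                  "https://wiki.example.com/payments-errors")))
```

The design choice worth noting is the early return for non-critical severity: filtering before paging is what turns "alerting" into "alerting the correct people to actionable issues."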

However, what about the benefits that aren’t obvious or immediate? What else is gained simply by improving the way we monitor and manage on-call and incident management?

Opportunity To Learn
Waiting until much later to identify or be notified of a problem makes it difficult to learn. Understanding the contributing factors becomes increasingly problematic as time passes. The trail to identifying everything involved in a service disruption goes cold as operators, engineers, and the systems themselves move on to new tasks, and the opportunity for improvement is missed.


Snowball effect
What may seem like a small or non-critical problem can quickly become a large one if left alone. As time ticks away, seemingly insignificant issues accumulate and grow into large, complicated or complex problems that have dangerous negative impacts and are much more difficult to diagnose and repair. In some situations, this can happen very quickly and a minor incident may become a “Sev-1” outage in no time at all.

Stay on Track
Many of us follow Agile development principles and operate in short development cycles. Sprints are short and tightly planned, so disruptions and context switching can be very detrimental to our efficiency. At the same time, sprint planning establishes targets and goals with the caveat that we can quickly change course when the need arises. By responding to disruptions right away, we have the greatest chance of achieving those goals.

Waiting to deal with a problem until you’ve finished the code or configuration you are currently working on may very well result in the realization that those efforts (and code) were wasted. The feedback about your system (in the form of a problem) is likely full of information indicating that the piece of code you are writing won’t work under the current conditions of your system. Or worse, that it doesn’t provide value to the service you are building.

Leveraging monitoring, alerting, and incident management means having a pulse on your systems. That feedback loop is essential to staying on track for the greater good of the services you are engineering, even if that means changing course and activities quickly and often. That is, after all, what Agile and DevOps are meant to provide.

Consistency
The quality of your service is extremely important not only to your end users but to the business as a whole. The service you provide IS the brand of the company, and not placing quality of service as a top priority can have extremely negative consequences. System resiliency and reliability, as a means toward “high availability”, are paramount in establishing credibility. Consumers of your product have very little tolerance for regular or lengthy outages. Telling your end users that quality of service is extremely important to you, yet not responding to problems as they occur, is saying one thing and doing another.

The message you are sending is inconsistent at best, and it indicates trouble within the organization (likely at the management level): priorities are not in alignment. Being consistent is one of the most important things any organization can focus on. Your customers are paying attention to that consistency. Are you?

Downstream consequences
Many of us are aware of the benefits of loosely coupled and independent processes or systems. The arguments for a microservices architecture are hard to ignore. Its approach means degraded performance in one service can have little to no impact on others. If there is a problem in one small area of the system, it doesn’t have a negative consequence for the system as a whole.


However, unless your entire service is part of a distributed microservice ecosystem, services are, in fact, tightly coupled, and a problem in one area can quickly lead to problems elsewhere. The idea of a rarely used, non-value-adding part of your infrastructure or codebase crumbling your entire service is frustrating for some, and it keeps many in Operations roles from sleeping well at night. Not being aware of or alerted to an issue can mean catastrophic failure when a small, less significant service takes out a large, critical one.
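One common way to keep a small, non-critical dependency from dragging down a critical path is to wrap calls to it in a circuit breaker. Below is a minimal sketch of the pattern (the `flaky_recommendations` service and the thresholds are invented for illustration): after repeated failures, the breaker "trips" and serves a fallback without even calling the failing dependency until a cooldown expires.

```python
import time

class CircuitBreaker:
    """Stop calling a failing dependency for a cooldown period
    so its problems don't cascade into the critical path."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after   # seconds before retrying
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, func, fallback):
        # Circuit open: skip the dependency until the cooldown expires.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            self.opened_at = None        # half-open: try the dependency again
            self.failures = 0
        try:
            result = func()
            self.failures = 0            # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()

breaker = CircuitBreaker()

def flaky_recommendations():
    raise TimeoutError("recommendation service unresponsive")

# The page still renders: once the breaker trips, the fallback is served
# without calling the flaky service at all.
for _ in range(5):
    items = breaker.call(flaky_recommendations, fallback=lambda: [])
print(items)  # []
```

The point is not this particular implementation but the coupling decision it encodes: a non-critical dependency is allowed to fail, and the failure is absorbed at the boundary instead of propagating.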

The approach you and your organization take to managing incidents, and to those tasked with responding to them, is a key indicator of how much you value continuous improvement. If the culture of your team or company does not place a high value on continuously learning and striving for improvements in processes, tools, and individuals, then any effort to roll out DevOps will fail. This is why the culture piece of DevOps conversations comes up so frequently, and why it frustrates many who hold tightly to “old-view” methods of managing.

Continuous improvement is at the heart of it all. Empathizing with our end users and those involved in engineering and maintaining our systems means that nothing is ever “done” or “good enough”. Everything must continuously get better. Establishing a deep understanding of your systems provides insight on where to focus efforts of improvement.

Failing to place understanding and learning as the highest priority means imminent failure of the organization and the products or services it provides.

The post Hidden Lessons of Incident Management appeared first on VictorOps.

More Stories By VictorOps Blog

VictorOps is making on-call suck less with the only collaborative alert management platform on the market.

With easy on-call scheduling management, a real-time incident timeline that gives you contextual relevance around your alerts and powerful reporting features that make post-mortems more effective, VictorOps helps your IT/DevOps team solve problems faster.
