
The complexity of modern infrastructure makes it difficult to avoid silos

Break Down the Silos: Correlate Data Between Vendors
By Chris Riley

Thanks to the DevOps movement, we now understand why software delivery chains that consist of a series of silos are bad. They complicate communication between different teams, leading to delivery delays, backtracking, and bugs.

When it comes to incident management, there is another type of silo to contend with: the kind that separates incident management data from one vendor or product to another. These silos hamper incident resolution because they make it more difficult to collect and analyze monitoring data from multiple sources.

How do you break down these silos to keep incident management operations flowing efficiently?

Identify the Silos
The first step in working past incident management silos is to understand why silos exist in the first place.

The reason is simple: Modern infrastructure consists of diverse hardware and software. Most components have special monitoring needs. They output information in a certain format, according to a certain rhythm, and they require data to be collected in a certain way. The monitoring information associated with each part of the infrastructure, therefore, lives in a silo, because it is not readily comparable to data from other parts of the infrastructure.

As a basic example, take a datacenter that consists of ten bare-metal servers running Windows and another ten bare-metal servers that run Linux. In this scenario, the company would require different monitoring tools for its Windows and Linux servers. Although some of the monitoring information for each type of operating system (such as whether the host is up) would be the same, other data would not be. And either way, the data would need to be collected by tools that are compatible with the operating system in question. Each context, therefore, becomes a distinct silo, with its own miniature ecosystem of monitoring tools and data.
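To make the example concrete, here is a minimal Python sketch of how platform-specific collectors might sit behind a shared interface: each collector reports the common "host is up" signal, while OS-specific metrics stay in their own bucket. All class and field names here are hypothetical, not taken from any real monitoring tool.

```python
# Sketch: two platform-specific collectors behind one common interface.
# Class names, metric names, and values are illustrative only.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class HostSample:
    host: str
    is_up: bool   # common to every platform
    extra: dict   # platform-specific fields live here


class Collector(Protocol):
    def collect(self) -> HostSample: ...


class LinuxCollector:
    def __init__(self, host: str):
        self.host = host

    def collect(self) -> HostSample:
        # A real collector might parse /proc or query an agent.
        return HostSample(self.host, True, {"load_avg_1m": 0.42})


class WindowsCollector:
    def __init__(self, host: str):
        self.host = host

    def collect(self) -> HostSample:
        # A real collector might query WMI or performance counters.
        return HostSample(self.host, True, {"cpu_queue_length": 3})
```

The shared `is_up` field is comparable across both silos, but everything in `extra` only makes sense to someone who knows that platform's tooling, which is exactly how the silo forms.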

This is just a simple example, by the way. Things are much more complicated in most real-world settings, where you would have not just two different types of bare-metal servers to monitor, but also virtual servers running on top of one or more types of hypervisors, workstations running different desktop operating systems, and mobile devices powered by a widely varying array of mobile operating systems and versions.

Break Down Silos
How do you eliminate the silos that separate each monitoring context within your infrastructure so that you get seamless and holistic monitoring visibility? The solution has two parts.

Step 1: Centralize Data Collection
The first step is to implement an incident management solution that can collect information from diverse types of environments, then forward that information to a central location. This way, engineers can monitor the entire infrastructure from a single vantage point. They don't need to go looking inside individual silos to monitor different parts of the infrastructure.

Centralized data collection requires an incident management solution that is smart enough to aggregate monitoring information from multiple sources. This is no trivial task; supporting a wide range of environments and endpoints requires integration with many different types of monitoring systems, sometimes even custom tooling.
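As a rough illustration of that aggregation step, the sketch below merges per-tool event lists into a single chronological timeline, tagging each event with its source. The tool names and events are made up; a real incident management platform would ingest from agents and APIs rather than in-memory lists.

```python
# Sketch: aggregating events from several monitoring sources into one
# central, time-ordered stream. Sources and payloads are hypothetical.
def merge_event_streams(*streams):
    """Merge per-source event lists of (timestamp, payload) pairs into
    one chronological list, tagging each event with its source name."""
    tagged = []
    for source, events in streams:
        for ts, payload in events:
            tagged.append((ts, source, payload))
    # One timeline instead of N silos to inspect separately.
    return sorted(tagged)


zabbix = ("zabbix", [(100, "disk 90% full"), (140, "disk 95% full")])
nagios = ("nagios", [(120, "HTTP check failed")])
timeline = merge_event_streams(zabbix, nagios)
# timeline interleaves events from both tools by timestamp
```

Even this toy version shows the payoff: an engineer scanning `timeline` sees the disk alerts and the HTTP failure in sequence, which makes cross-tool correlation possible at all.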

Step 2: Translate the Data
The second step is one that is easy to overlook. In addition to aggregating data from many monitoring tools and exposing it in a central location, incident management teams also need to translate all of the data into a consistent format.

Data translation is the only way to ensure that every engineer is able to interpret and react to alerts from any source. If data is not translated, engineers would need special expertise in a particular type of monitoring system, or knowledge of a certain vendor's schema, in order to understand data that originated from that system. Making all of the data available in a central location would therefore be of little help in breaking down silos, because tall barriers would still separate the different monitoring contexts.

Consider, for example, the different ways in which Zabbix and Nagios use the term "alias." In Zabbix, an alias essentially serves as shorthand for any type of configuration term. In Nagios, by contrast, an alias is the display name given to a host; its meaning is much more specific. If you don't understand this difference and you see data from both Zabbix and Nagios aggregated in a centralized dashboard, things can easily get confusing.
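One way a translation layer might disambiguate a term like "alias" is a per-vendor field map that renames ambiguous keys into explicit ones. The sketch below is illustrative only; the mappings are not the actual Zabbix or Nagios schemas.

```python
# Sketch: renaming vendor-specific field names into unambiguous ones.
# The target field names here are assumptions, not a real standard.
FIELD_MAP = {
    "zabbix": {"alias": "config_shorthand"},
    "nagios": {"alias": "host_display_name"},
}


def translate(vendor, raw_event):
    """Rename known ambiguous keys for this vendor; pass others through."""
    mapping = FIELD_MAP.get(vendor, {})
    return {mapping.get(key, key): value for key, value in raw_event.items()}


translate("nagios", {"alias": "web-frontend-01"})
# -> {"host_display_name": "web-frontend-01"}
```

After translation, no one reading the central dashboard has to know which tool's notion of "alias" they are looking at.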

For effective incident management, then, you need a solution that can translate vendor- and platform-specific terminology into a single, consistent language. Only with event normalization, such as that enabled by the PagerDuty Common Event Format, can responders easily and accurately interpret data from multiple sources.
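As a final sketch, the function below normalizes a raw Nagios-style alert into a common event shape loosely modeled on the kinds of fields the PagerDuty Common Event Format describes (a summary, a source, a severity, and a bag of original details). Treat the exact field names and the input structure as assumptions for illustration, not the official specification.

```python
# Sketch: normalizing a raw Nagios-style alert into a common event
# shape. Field names are assumptions loosely inspired by PD-CEF.
def normalize_nagios(raw):
    severity_map = {"CRITICAL": "critical", "WARNING": "warning"}
    return {
        "summary": f'{raw["service"]} is {raw["state"]} on {raw["alias"]}',
        "source": raw["alias"],  # in Nagios, alias names the host
        "severity": severity_map.get(raw["state"], "info"),
        "custom_details": raw,   # keep the original for drill-down
    }


event = normalize_nagios(
    {"service": "HTTP", "state": "CRITICAL", "alias": "web-frontend-01"}
)
```

A matching `normalize_zabbix` (or any other per-tool adapter) would emit the same four fields, so any responder can triage the resulting events without tool-specific knowledge.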

The complexity of modern infrastructure makes it difficult to avoid silos. Yet, that does not mean that monitoring information has to live within those silos, as information is only useful when it can be understood and acted upon. By aggregating monitoring information from diverse sources and translating it into a language that anyone on the on-call team can understand, incident management teams can break down the silos that exist within their infrastructure. They will then enjoy seamless communication and agile, real-time response to incidents.


The post Break Down the Silos: Correlate Data Between Vendors appeared first on PagerDuty.


