DevOps & SDN | @DevOpsSummit [#DevOps]

Whether it's DevOps or SDN, a key goal is the reduction of variation (complexity) in the network

Kirk Byers at SDN Central writes frequently on the topic of DevOps as it relates (and applies) to the network, and recently introduced a list of seven applicable DevOps principles in an article entitled "DevOps and the Chaos Monkey." On that list is the notion of reducing variation.

This caught my eye because reducing variation is a key goal of Six Sigma; in fact, its entire formula is based on measuring the impact of variation on results. The thinking is that by measuring deviation from a desired outcome, you can immediately recognize whether changes to a process improve the consistency of that outcome. Quality is achieved by reducing variation, or so the methodology goes.
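
To make that arithmetic concrete, here is a minimal sketch (not from the original post) of the standard Six Sigma calculation: express deviation from the desired outcome as defects per million opportunities, then map the resulting yield onto the normal distribution with the conventional 1.5-sigma long-term shift. The deployment numbers and the sigma_level function name are illustrative assumptions.

```python
# A minimal sketch of the Six Sigma arithmetic described above: express
# deviation from the desired outcome as defects per million opportunities
# (DPMO), then translate the resulting yield into a sigma level using the
# conventional 1.5-sigma long-term shift. Numbers below are invented.
from statistics import NormalDist

def sigma_level(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Approximate process sigma for an observed defect rate."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    process_yield = 1 - dpmo / 1_000_000
    # Short-term sigma = z-score of the yield plus the conventional 1.5 shift.
    return NormalDist().inv_cdf(process_yield) + 1.5

# Example: 7 failed changes out of 1,000 pushed to the network.
print(round(sigma_level(defects=7, units=1000), 2))  # ~3.96
```

The same arithmetic applies whether the "defect" is a scrapped part or a failed change window; what matters is that the deviation is measured consistently.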

[Figure: six-sigma-with-legend]

This stems from Six Sigma's origins in lean manufacturing, where automation and standardization are commonly used to improve product quality, usually by reducing the number of defective units produced.

This is highly applicable to DevOps and the network, where errors, and the troubleshooting they require, are commonly cited as a significant source of lag in application deployment timelines. It is easy enough to see the relationship: defective products are not all that different from defective services, regardless of the cause of the defect.

Number four on Kirk's list addresses this point directly:

#4: Reduce variation.

Variation can be good in some contexts, but in the network, variation introduces unexpected errors and unexpected behaviors.

Whether you manage dozens, hundreds, or thousands of network devices, how much of your configuration can be standardized? Can you standardize the OS version? Can you minimize the number of models that you use? Can you minimize the number of vendors?

Variation increases network complexity, testing complexity, and the complexity of automation tools. It also increases the knowledge that engineers must possess.

Obviously, there are cost and functional trade-offs here, but reducing variation should at least be considered.
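
Those questions lend themselves to measurement. As a purely illustrative sketch (the inventory data and field names below are invented, not tied to any particular tool), you can quantify the variation Kirk is describing by counting how many distinct vendors, models, and OS versions your device inventory actually contains:

```python
# A hedged sketch of quantifying variation in a network device inventory:
# count the distinct vendors, models, and OS versions you are carrying.
# The inventory records and attribute names are hypothetical examples.
from collections import Counter

inventory = [
    {"hostname": "core-sw-01", "vendor": "vendorA", "model": "X9500", "os": "12.4(1)"},
    {"hostname": "core-sw-02", "vendor": "vendorA", "model": "X9500", "os": "12.4(3)"},
    {"hostname": "edge-rtr-01", "vendor": "vendorB", "model": "R200", "os": "7.1"},
    {"hostname": "edge-rtr-02", "vendor": "vendorB", "model": "R210", "os": "7.3"},
]

for attribute in ("vendor", "model", "os"):
    counts = Counter(device[attribute] for device in inventory)
    print(f"{attribute}: {len(counts)} distinct value(s) -> {dict(counts)}")
```

A falling count over time is a simple, trackable signal that standardization efforts are actually reducing variation.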

What Kirk is saying without saying is that standardization improves consistency in the network. That's no surprise, as standardization is a key method of reducing operational overhead. Standardization (or "reducing variation," if you prefer) achieves this by addressing the network complexity that contributes heavily to operational overhead and to variation in outcomes (aka errors).

That's because a key contributor to network complexity is the sheer number of boxes that make up the network and complicate its topology. These boxes are provisioned and managed according to their own unique paradigms, and thus increase the burden on operations and network teams by requiring familiarity with a large number of CLIs, GUIs and APIs. Standardization on a common platform relieves this burden by providing a common CLI, GUI and set of APIs that can be used to provision, manage and control critical services. The shift to a modularized architecture based on a standardized platform increases flexibility and the ability to rapidly introduce new services without incurring the additional operational overhead associated with new, single-service solutions. It reduces variation in provisioning, configuration and management (aka architectural debt).
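
As a hypothetical sketch of what "a common set of APIs" over heterogeneous boxes looks like in practice, the snippet below hides two invented vendor CLI dialects behind one provisioning call; the class and method names are assumptions for illustration, not any real platform's API:

```python
# A hypothetical sketch of the "standardized platform" idea: one provisioning
# entry point, vendor-specific drivers behind it. Class names, method names,
# and CLI strings are invented for illustration.
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    @abstractmethod
    def add_vlan(self, vlan_id: int, name: str) -> str: ...

class VendorADriver(DeviceDriver):
    def add_vlan(self, vlan_id: int, name: str) -> str:
        return f"vlan {vlan_id}\n name {name}"           # vendor A's CLI dialect

class VendorBDriver(DeviceDriver):
    def add_vlan(self, vlan_id: int, name: str) -> str:
        return f"set vlans {name} vlan-id {vlan_id}"     # vendor B's CLI dialect

def provision_vlan(driver: DeviceDriver, vlan_id: int, name: str) -> str:
    """The single interface operations teams learn, regardless of vendor."""
    return driver.add_vlan(vlan_id, name)

print(provision_vlan(VendorADriver(), 100, "app-tier"))
print(provision_vlan(VendorBDriver(), 100, "app-tier"))
```

Operations teams learn the single provision_vlan entry point once; the per-vendor variation is contained inside the drivers instead of spread across runbooks.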

SDN, on the other hand, tries to standardize network components through the use of common APIs, protocols, and policies. It seeks to reduce variation in interfaces and policy definitions so that the components comprising the data plane can be managed as if they were standardized. That's an important distinction, though one best left for another day to discuss. Suffice it to say that standardization at the API or model layer can leave organizations with significantly reduced capabilities, as standardization almost always commoditizes functions down to the lowest common set of capabilities.
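
To illustrate that "lowest common set of capabilities" effect (with invented vendor names and feature flags, purely as an example): if a common API can only expose what every element in the data plane supports, the usable surface is the intersection of the capability sets, and everything vendor-specific falls away.

```python
# Invented capability sets for three hypothetical vendors. A common API that
# must work identically across all of them can only safely expose the
# intersection of what they support.
vendor_capabilities = {
    "vendorA": {"vlan", "acl", "qos", "tcp_optimization", "waf"},
    "vendorB": {"vlan", "acl", "qos", "netflow"},
    "vendorC": {"vlan", "acl"},
}

common_api_surface = set.intersection(*vendor_capabilities.values())
print(sorted(common_api_surface))  # ['acl', 'vlan'] -- the rest is lost to the abstraction
```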

That is not to say that standardization at the API or protocol layer isn't beneficial. It certainly can and does reduce variation and introduce consistency. The key is to standardize on APIs or protocols that are supportive of the network services you need.

What's important is that standardization on a common service platform can also reduce variation and introduce consistency. Applying one or more standardization efforts should then, ostensibly, net higher benefits.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
