The Next Evolution of DevOps in the Enterprise: 'Hardening' DevOps

In today's digital age, much of business innovation is driven by software. To win, serve and retain their customers, enterprises must release application updates at an increasingly faster pace. A great idea, killer functionality and robust technology are all as important as ever - but they do not mean much if you can't get your code to your end users quickly, predictably and with high quality.

Your "Pathway to Production" is the path that your code takes from developer check-in all the way to a successful Release. It spans the entire organization - comprising all the different stakeholders, teams, processes, tools and environments involved in your software delivery. This is, essentially, how your organization delivers value to the market.

Increasingly, we see that organizations that become better at streamlining and accelerating their Pathway to Production are better equipped to compete and win in today's economy. The maturity, speed and quality of your software release processes have become a key differentiator and a competitive advantage for businesses today.

DevOps and ARA: Paving a Better Pathway to Production
DevOps and Application Release Automation (ARA) have emerged to help organizations become better at delivering software - allowing for greater speed and agility while mitigating the risk of software releases.

DevOps has huge business benefits: statistics show that organizations practicing DevOps outperform the S&P 500 over a three-year period, that high-performing IT organizations have 50% higher market-cap growth, and so on.

In order to remain competitive and meet consumer demands, enterprises across the board are adopting DevOps to optimize their Pathway to Production. Just as you would invest in designing the right functionality for your product, or defining a winning go-to-market plan, organizations now invest in optimizing and (re)designing their Pathway to Production to enable innovation.

The implementation of DevOps in large organizations comes with a unique set of challenges. Enterprises often need to support large volumes of distributed teams and multiple applications/product releases. In addition, regulatory and governance requirements, supporting legacy systems, tool variety, infrastructure management, and complex internal processes further compound these challenges.

I'd like to discuss the evolution of DevOps adoption in the enterprise, and what I see as the next phase of the DevOps revolution.

DevOps in the Enterprise: Starting Small, Dev Is Leading.
Agile methodologies, adopted by many software organizations, have been largely focused on development, QA and product management functions, and less on the Pathway to Production once the software has been authored.

As a continuation of Agile, DevOps also started as a very Dev-driven movement (despite the 'Ops' in the name). Dev teams were quicker to adopt these practices, as they were eager to find a way to get their code into Production faster. Ops teams were traditionally more hesitant to adopt DevOps, seeing the increased velocity and speed as possible risks.

The majority of DevOps implementations today still start as grass-roots initiatives in small teams. And that's OK and is a good way to show early success and then scale. Increasingly, alongside these bottom-up efforts, we're seeing a shift towards DevOps being a company-wide initiative, championed both at the executive-level, as well as at the team-level.

The Next Phase: Scaling DevOps, Ops Takes Center Stage.
One of the biggest challenges for large enterprises is the "silo-ing" of people, processes and toolsets. Oftentimes, one or more of these silos may be quite adept at understanding and automating their piece of the puzzle, but there is no end-to-end visibility or automation of the entire Pathway. This leads to fragmented processes, manual handoffs, delays, errors, lack of governance, etc.

Since the Pathway to Production spans the entire organization, enterprises are realizing that optimizing it is not a disparate set of problems, but requires a system-level approach. The evolution of DevOps is towards scaling adoption across the entire enterprise to cover the end-to-end Pathway to Production. This removes friction by automating all aspects of your delivery pipeline, in the pursuit of creating predictable, repeatable processes that can be run frequently with less and less human intervention. By achieving consistency of processes and deployments (into QA, Staging and Production) throughout the entire lifecycle, you're in fact always 'practicing' for game-day, and hardening your DevOps practices as you optimize them.

As part of this process - as DevOps matures and becomes mainstream in enterprises (and as it becomes more critical to their operations) - DevOps practices are 'hardened' to take into account more 'Ops' requirements for releases: mainly around manageability, governance, security and compliance.

Talking about "enterprise-control" is no longer a bad thing or something that may be viewed as hindering DevOps adoption. DevOps is about enabling speed while ensuring stability. Similar to children maturing, now that we've grown and learned to walk (faster), it's time to learn to be more responsible.

As with the software your organization is developing, it's time to "harden" your DevOps practices to scale adoption throughout your end-to-end process across the organization. ‘Hardening' doesn't mean sacrificing speed or experimentation; it means your DevOps is getting ready for Prime Time!

"Hardening" Your DevOps Implementation:
You want to design the underlying tools and processes along your Pathway to Production in a way that can scale across the enterprise. This requires balancing team ownership and collaboration with the organization's needs for checks and balances, standardization, and system-level visibility and control.

While you would likely still start 'local', and gradually roll out across different groups as you optimize - be sure to always think 'global'. As you analyze and (re)design your Pathway to Production, take a system-wide approach and always consider: how do I scale this across all teams, applications, releases, environments, and so on?

First, take some time to map your end-to-end Pathway to Production. From my experience, organizations often are not even aware of the entire path their code takes from check-in, through build, testing, deployment across environments, etc. Be sure to interview all the different teams and stakeholders, until you have painstakingly detailed documentation of your cross-functional pipeline(s) - including all the tools, technologies, infrastructure and processes involved.

Then, take a look at the bottlenecks - where do your pipelines choke? For example: waiting on VMs, waiting on builds, configuration drifts between environments, failed or flaky tests, bugs making it to Production, failed releases, errors or lags due to manual handoffs between teams or tools, etc.
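One lightweight way to surface these choke points is to pull stage timestamps out of your CI/CD tooling and measure the wait time between stages. The sketch below is a minimal illustration in Python; the stage names and event log are hypothetical - in practice the data would come from your build or release tool's API:

```python
from datetime import datetime

# Hypothetical pipeline event log: (stage, started, finished) timestamps.
# In a real setup these would be pulled from your CI/CD tool's API.
events = [
    ("checkin", "2017-03-01T09:00", "2017-03-01T09:01"),
    ("build",   "2017-03-01T09:20", "2017-03-01T09:50"),
    ("test",    "2017-03-01T10:40", "2017-03-01T11:10"),
    ("deploy",  "2017-03-01T13:10", "2017-03-01T13:25"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M")

# Wait time = gap between one stage finishing and the next starting.
# Large gaps flag bottlenecks: manual handoffs, waiting on VMs or builds, etc.
for (name, start, end), (next_name, next_start, _) in zip(events, events[1:]):
    active = (parse(end) - parse(start)).total_seconds() / 60
    wait = (parse(next_start) - parse(end)).total_seconds() / 60
    print(f"{name:8s} active {active:4.0f} min | wait before {next_name}: {wait:4.0f} min")
```

Even this crude value-stream view usually shows that most of the elapsed time is spent waiting between stages, not executing them - which tells you where to automate first.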

As you redesign your pipelines to eliminate friction points, here are some things to consider on your journey to ‘harden' your DevOps practices to support stability and scaling across the organization:

  1. How do I ensure security access controls and approval gates at critical points along the pipeline?
  2. How do I guarantee visibility and auditability, so we have real-time reporting of the state of each task along the pipeline, and a record of exactly who did what, where and when?
  3. What security and compliance tests (or other tests) must all processes pass in order to move through the pipeline and into Production?
  4. How do I standardize as much as possible on toolchain, technology and processes to normalize my pipeline to allow reusability across teams/applications and save on cost?
  5. How do I still enable extensibility and flexibility to support different needs from various teams or variants of the application?
  6. Can my chosen DevOps solution orchestrate and automate the entire end-to-end pipeline?
  7. Can my implementation support Bi-modal IT - enabling traditional release practices and support for legacy apps, as well as more modern container/microservices architectures and CD pipelines?
  8. Can I support both simpler, linear, release pipelines, as well as complex releases that require coordination of many inter-dependent applications and components into many environments?
  9. Is my solution 'future-ready' and flexible enough to plug in any new technology stack, tool or process as needs arise?
  10. As I scale, can my implementation support the velocity and throughput I'm expecting across the organization - which can include thousands of developers, thousands of releases, and millions of builds and test cases?
  11. Setting up one pipeline for one team/release is easy enough, but how do I onboard thousands of applications?
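To make the first two questions concrete, here is a minimal, hypothetical sketch (plain Python, not any particular ARA product's API) of a stage runner that enforces an approval gate before Production and keeps an audit trail of who did what, where and when:

```python
from datetime import datetime, timezone

# Hypothetical sketch: an in-memory audit trail plus a stage runner that
# blocks gated stages (e.g. deploys to Prod) unless explicitly approved.
audit_log = []

def run_stage(name, environment, user, requires_approval=False, approved_by=None):
    timestamp = datetime.now(timezone.utc).isoformat()
    if requires_approval and approved_by is None:
        # Record the blocked attempt so the audit trail stays complete.
        audit_log.append((timestamp, name, environment, user, "blocked: approval required"))
        return False
    actor = user if approved_by is None else f"{user} (approved by {approved_by})"
    audit_log.append((timestamp, name, environment, actor, "ok"))
    return True

# A simple linear release pipeline: QA and Staging run freely,
# while Production is gated behind an explicit approval.
run_stage("deploy", "QA", user="alice")
run_stage("deploy", "Staging", user="alice")
run_stage("deploy", "Prod", user="alice", requires_approval=True)
run_stage("deploy", "Prod", user="alice", requires_approval=True,
          approved_by="release-mgr")

for entry in audit_log:
    print(entry)
```

In a real implementation the audit trail would be persisted and approvals would come from your release-management or access-control system; the point is that gates and auditability are designed into the pipeline, not bolted on afterwards.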

While optimizing your tools and technology to scale DevOps adoption is important, it is only half the battle. Above all, DevOps is a mindset, and cultural shifts take time. Remember that change doesn't happen in a day, and that you're in it for the long haul.

As a community, we started with asking why we should even bother doing DevOps. After establishing momentum and proving the ROI of DevOps, the discussion is gradually evolving to how we get DevOps right in large enterprises: what are some of the patterns for success, and how can we effectively scale so that the entire organization can reap the benefits.

This article was originally published on InfoWorld.

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software and Verity (VRTY), where he played critical technical leadership roles in delivering award winning technologies such as Macromedia’s Director 7 and various Shockwave products.
