Open Source as Part of Your Software Delivery Toolchain in the Enterprise

Perspectives for CIOs

A myriad of point-tools is involved in every organization's software production. Some of our enterprise customers report using over 50 tools along their pipeline - from code development all the way to releasing into Production. For the majority of development organizations today, these tools are a mix of commercial and open source (OSS) technologies.

Open source tools can be found throughout your software Dev and Ops teams - in programming languages, infrastructure and technology stacks, development and test tools, project management and bug tracking, source control management, CI, configuration management, and more. OSS is everywhere.

The proliferation of OSS technologies, libraries and frameworks in recent years has greatly contributed to the advancement of software development, to developer productivity, and to the flexibility and customizability of the tools landscape in supporting different use cases and developers' preferences.

To increase productivity and encourage a culture of autonomy and shared ownership, you want to enable teams to use their tool(s) of choice. That said, since the advent of Agile development, we have seen large enterprises wrestle with striking a balance: allowing this choice while also retaining a level of management visibility and governance over all the technologies used in the software delivery lifecycle. And this problem gets harder over time, because with every passing day new tools are being created and adopted to solve increasingly fine-grained problems in unique and valuable ways.

Enterprises operating mission-critical applications need this level of control, not only to lower costs (with improved utilization of tools, infrastructure, etc.) or speed cycle times (with streamlined or standardized processes), but - more importantly - as a way to ensure operability, compliance and adherence to SLAs.

The tools you're using can be free, and your process can be faster. But, at the end of the day, no savings on the development side would justify the risks if you're having trouble managing your applications in Production, or if you're exposed from a security or regulatory standpoint.

I'd like to examine two of the key challenges software executives face with regard to the use of OSS as part of the software development and release process, and how you can address them while mitigating possible risks.

Enabling Developers while Ensuring System-Level Management
The realities of software production in large enterprises involve a complex matrix of hundreds or thousands of interconnected projects, applications, teams and infrastructure nodes, all of them using different OSS tools and work processes - creating management, visibility, scalability and interoperability challenges.

The multitude of point-tools involved also creates a problem of silos of automation. In this situation, each part of the work along the pipeline is carried out by a different tool, and the output of this work has to be exported, analyzed and handed off to a different team and tool(s) for the next stage in the pipeline. These manual, error-prone handoffs are one of the biggest impediments to enterprise DevOps productivity - they not only slow down your process, but they also introduce risk and increase management overhead.
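
To make the contrast concrete, here is a minimal, hypothetical sketch in Python of what replacing those manual hand-offs with an automated pipeline can look like. The stage functions and artifact fields are placeholders of my own invention, standing in for calls to whatever real tools your teams use:

```python
# A minimal sketch of replacing manual hand-offs with an automated pipeline.
# The stages and their tools are hypothetical placeholders; in practice each
# function would invoke a real tool (CI server, test runner, deploy tool).
from typing import Callable

Artifact = dict  # e.g. {"commit": ..., "binary_path": ..., "test_report": ...}

def build(artifact: Artifact) -> Artifact:
    # Placeholder for a call to the build tool, recording what it produced.
    return {**artifact, "binary_path": f"dist/app-{artifact['commit'][:7]}.tar.gz"}

def test(artifact: Artifact) -> Artifact:
    # Placeholder: the build output is handed straight to the test tool.
    return {**artifact, "test_report": "all 412 tests passed"}

def deploy(artifact: Artifact) -> Artifact:
    # Placeholder: the tested artifact is promoted to the target environment.
    return {**artifact, "deployed_to": "staging"}

def run_pipeline(commit: str, stages: list[Callable[[Artifact], Artifact]]) -> Artifact:
    """Each stage receives the previous stage's output directly -- no manual
    export, analysis, or re-entry between tools."""
    artifact: Artifact = {"commit": commit}
    for stage in stages:
        artifact = stage(artifact)
        print(f"{stage.__name__}: {artifact}")
    return artifact

if __name__ == "__main__":
    run_pipeline("4f9c2d1a0b", [build, test, deploy])
```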

That your process involves a lot of "best for the task" tools is pretty much a fact of life by now - and with (mostly) good reason. But these silos of automation do not have to be.

Enterprise DevOps initiatives require a unifying approach that coordinates, automates, and manages a disparate set of dozens of tools and processes across the organization. While you want to allow your developers to use the tools they're used to, you also want to be able to manage the entire end-to-end process of software delivery, maintain the flexibility to include new tools as they are needed, and optimize the whole process across many teams and projects throughout the organization.

This is why enterprises today are opting to integrate their toolchains into an end-to-end DevOps Release Automation platform. To accelerate your pipeline and support better manageability of the entire process, you want a platform that can serve as a layer above (or below) any infrastructure or specific tools/technology and enable centralized management and orchestration of all your tools, environments and apps. This allows for the flexibility to manage the unique tool set each team has today (or adopts tomorrow), while also tying all the tools together to eliminate silos of automation and provide cross-organization visibility, compliance and control.
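
One common way to build such a layer - shown here only as an illustrative sketch, not any particular product's API - is an adapter interface that gives the orchestrator one uniform surface over whatever tool each team prefers. The adapter classes and their methods below are assumptions for the sake of the example:

```python
# A minimal sketch of the "layer above the tools" idea: a common adapter
# interface the orchestration platform calls, so each team keeps its
# preferred tool while the platform manages the end-to-end flow.
# The tool names and methods are illustrative, not a real product API.
from abc import ABC, abstractmethod

class ToolAdapter(ABC):
    """Uniform surface the orchestrator sees, whatever the underlying tool."""

    @abstractmethod
    def run(self, task: str) -> str: ...

class JenkinsAdapter(ToolAdapter):
    def run(self, task: str) -> str:
        # Placeholder for a call to the Jenkins REST API.
        return f"jenkins built {task}"

class GitLabCIAdapter(ToolAdapter):
    def run(self, task: str) -> str:
        # Placeholder for a call to the GitLab pipelines API.
        return f"gitlab-ci built {task}"

def orchestrate(adapters: dict[str, ToolAdapter], task: str) -> None:
    # The orchestrator neither knows nor cares which tool each team chose;
    # it gains one place for visibility, auditing, and control.
    for team, adapter in adapters.items():
        print(f"[{team}] {adapter.run(task)}")

orchestrate({"payments": JenkinsAdapter(), "mobile": GitLabCIAdapter()}, "release-1.4.2")
```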

Security Risks and Open Source
Open source is not only prevalent in your toolchain, it's also in your code and in your infrastructure. Many applications today incorporate OSS components and libraries, or rely on OSS technology stacks. Some estimate that more than a third of software code uses open source components, with some applications relying on as much as 70 percent open source code. As OSS use increases, so do the potential security vulnerabilities and breaches (think Heartbleed, Shellshock and POODLE).

Commercial software is just as likely to include security bugs as OSS code. To mitigate these risks, you need the infrastructure in place to react quickly and resolve or patch any vulnerability that comes up.
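
A precondition for reacting quickly is knowing which applications include the vulnerable component in the first place. The sketch below illustrates the idea with an invented, simplified component inventory and advisory list; a real setup would rely on a proper software bill of materials plus a vulnerability feed such as the NVD or OSV:

```python
# A minimal sketch of knowing, ahead of time, which OSS components you ship,
# so a disclosure like Heartbleed can be mapped to affected applications in
# minutes. The inventory and advisory data below are invented examples.

# Component inventory per application (name -> version), assumed data.
sbom = {
    "billing-service": {"openssl": "1.0.1f", "requests": "2.31.0"},
    "web-frontend": {"openssl": "3.0.13", "lodash": "4.17.21"},
}

# Advisory: component and the set of versions known to be vulnerable.
advisories = {
    "openssl": {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"},
}

def affected_apps(sbom: dict, advisories: dict) -> dict[str, list[str]]:
    """Return, per application, the vulnerable components it includes."""
    hits: dict[str, list[str]] = {}
    for app, components in sbom.items():
        bad = [f"{name} {ver}" for name, ver in components.items()
               if ver in advisories.get(name, set())]
        if bad:
            hits[app] = bad
    return hits

print(affected_apps(sbom, advisories))  # {'billing-service': ['openssl 1.0.1f']}
```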

By orchestrating all the tools and automating your end-to-end processes across Dev and Ops, a DevOps Release Automation platform also shortens your lead time in these cases - so that you can develop, test, and deploy your update more quickly.

In addition, the historical tracking and easy visibility provided by some of these solutions into the state of all your applications, environments, and pipeline stages greatly simplifies your response. When you can easily identify which version of the application is deployed on which environment, and where the compromised bits are located, you can roll out your update in a faster, more consistent, and repeatable deployment process.
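
As an illustration, the deployment inventory that makes this lookup trivial can be as simple as the following sketch (the records here are invented sample data, not any platform's schema):

```python
# A minimal sketch of the deployment-tracking idea: record which version of
# which application is on which environment, so a compromised build can be
# located instantly. The records are invented sample data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    app: str
    version: str
    environment: str

inventory = [
    Deployment("billing-service", "2.3.1", "prod-us"),
    Deployment("billing-service", "2.4.0", "staging"),
    Deployment("web-frontend", "5.1.0", "prod-us"),
]

def where_is(app: str, version: str) -> list[str]:
    """All environments currently running a given (possibly compromised) build."""
    return [d.environment for d in inventory if d.app == app and d.version == version]

print(where_is("billing-service", "2.3.1"))  # ['prod-us']
```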

In conclusion
When managing IT organizations and steering digital transformation in the enterprise, technology leaders need to support proper use of both OSS and commercial technologies as part of their toolchain, while putting the right systems in place to enable enterprise scale, governance and security.

How do you know where OSS technologies are being used in your process, and whether there are any inherent risks or major inefficiencies that need to be addressed as a result? Before you can start optimizing, you have to know exactly what your application lifecycle looks like. This holistic process is sometimes hard to encapsulate in large and complex organizations. I often see different stakeholders understanding only a fraction of the overall process, but lacking knowledge of the entire cross-organizational "pathway to production." CIOs need to work with their teams to capture the end-to-end pipeline and toolchain, from code-commit all the way to production. This mapping is critical to finding the bottlenecks, breakages and inefficiencies that need to be addressed.
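
Once that pathway is mapped, finding the bottleneck is largely arithmetic. The sketch below, with invented stage names and durations, shows how quickly the dominant delay falls out of even rough data:

```python
# A minimal sketch of the "map the pathway to production" exercise: once each
# stage's typical duration is captured, the bottleneck falls out of the data.
# The stage names and hour figures below are illustrative assumptions.
stage_hours = {
    "code review": 6,
    "build": 0.5,
    "integration tests": 12,
    "manual QA sign-off": 48,   # hand-off wait time, not work time
    "release approval": 24,
    "deploy": 1,
}

total = sum(stage_hours.values())
bottleneck = max(stage_hours, key=stage_hours.get)

for stage, hours in sorted(stage_hours.items(), key=lambda kv: -kv[1]):
    print(f"{stage:20s} {hours:6.1f} h  ({hours / total:5.1%} of lead time)")
print(f"\nBiggest bottleneck: {bottleneck}")
```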

Then, work with your teams to pick the tools (whether OSS or not) that work best for the problem you are trying to solve. Consider how you can orchestrate all these tools as part of a centralized platform. By being able to manage, track and provide visibility into all the tools, tasks, environments and data flow across your delivery pipeline, end-to-end DevOps automation supports extensibility and flexibility for different teams, while enabling a system-level view and cross-organizational management for complex enterprise pipelines.

Along with cultural change, breaking the "silos of automation" goes a long way towards effectively breaking the silos between Dev and Ops, and unifying your processes towards one - shared - business goal: the faster delivery of valuable software to your end users.

This article first appeared on CIO Review magazine.

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software and Verity (VRTY), where he played critical technical leadership roles in delivering award winning technologies such as Macromedia’s Director 7 and various Shockwave products.
