
Infrastructure as (Someone Else’s) Code By @DMacVittie | @DevOpsSummit #DevOps

The rush to integrate has created a consumption level of new & previously unheard-of modules that is astounding

We are rapidly approaching a world where the bulk of datacenter day-to-day operations are automated. The major application provisioning tools are integrating with infrastructure vendor APIs to give operations the power to control and monitor the datacenter – including things like SAN and networking gear – through their systems. To my mind this is a very cool development, but before we rush headlong into this world, let’s have a frank discussion about the nature of infrastructure, the nature of these integrations, and the nature of hackers. Because it’s never all sunshine and unicorns, and automation is no exception.


The rush to integrate has created a consumption level of new and previously unheard-of modules that is astounding. If a module meets a pent-up need, thousands of organizations are using it in production practically overnight. This makes sense as more and more enterprises move toward a more complete automation infrastructure, but it is not without its risks, and you really should consider those risks before the 2 a.m. phone call comes. Which, of course, we all hope never will.

As mentioned, the major application provisioning providers are working closely with infrastructure vendors to integrate infrastructure into the realm of what they can manage. SaltStack, Puppet, and Ansible, for example, are integrated with products from infrastructure vendors like Cisco, EMC, and F5. The nature of these integrations is often that the vendor does the development, which is cool, because who knows the product better than the vendor?
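Under the hood, these vendor providers generally implement the same declarative pattern: compare the device's current state to the desired state and compute only the changes needed to converge. The sketch below illustrates that pattern for a hypothetical load-balancer pool; the function and member names are mine for illustration, not any vendor's actual API.

```python
# Sketch of the declarative "ensure" pattern that provisioning-tool
# providers (SaltStack/Puppet/Ansible style) typically implement for a
# device. All names here are hypothetical -- this mirrors the pattern,
# not any vendor's real module interface.

def ensure_pool(current, desired):
    """Compare the device's current pool members to the desired set and
    return the API actions needed to converge, without applying them."""
    current_set, desired_set = set(current), set(desired)
    actions = []
    for member in sorted(desired_set - current_set):
        actions.append(("add_member", member))
    for member in sorted(current_set - desired_set):
        actions.append(("remove_member", member))
    return actions  # an empty list means the device is already compliant

# Example: the device has one stale member and is missing one new one.
print(ensure_pool(["10.0.0.1:80", "10.0.0.9:80"],
                  ["10.0.0.1:80", "10.0.0.2:80"]))
# -> [('add_member', '10.0.0.2:80'), ('remove_member', '10.0.0.9:80')]
```

The point of the diff-then-apply split is idempotence: running the module twice produces no second round of changes, which is exactly the property that makes these tools safe to run on a schedule.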

But that brings one planning point into the equation: what do you do if the vendor drops support for your chosen provisioning platform? While this could become an issue at any point in the relationship, it is most likely to come into play when a vendor EOLs a product. These solutions are almost all open source, but it is the nature of open source in the enterprise that this is not a major differentiator. Except in cases of extreme need, most organizations never work through the source code of providers – particularly complex, multi-layered providers – to make certain they can maintain it themselves. It is not that there is no interest; in the enterprise, that kind of free time is a rare commodity, so the work is only done at need.

So I suggest you have a plan. Know what steps you will take if a vendor ends support in a middleware DevOps tool and you need that support continued. The plan doesn't have to be complex; just think it through in advance so you're not making it up on the spot when the situation arises.

While you're thinking about it, make certain the vendor-provided plug-ins are indeed open source, because the "what would we do" equation changes considerably if a plug-in can be pulled from the market entirely and you have no access to the source.
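A scripted sweep can catch the obvious cases before anyone digs into licensing questions by hand. This sketch, with an illustrative directory layout and filename list of my own choosing, flags module directories that ship with no recognizable license file:

```python
# Quick audit sketch: flag plug-in directories that ship without a
# recognizable license file. The filename list and layout are
# illustrative; adapt them to wherever your provisioning tool installs
# its modules.
from pathlib import Path
import tempfile

LICENSE_NAMES = {"license", "license.txt", "license.md", "copying"}

def unlicensed_modules(modules_root):
    """Return names of module directories containing no license file."""
    flagged = []
    for module_dir in sorted(Path(modules_root).iterdir()):
        if not module_dir.is_dir():
            continue
        names = {p.name.lower() for p in module_dir.iterdir()}
        if not names & LICENSE_NAMES:
            flagged.append(module_dir.name)
    return flagged

# Demo against a throwaway layout: one licensed module, one without.
root = tempfile.mkdtemp()
(Path(root) / "good_module").mkdir()
(Path(root) / "good_module" / "LICENSE").write_text("Apache-2.0")
(Path(root) / "shady_module").mkdir()
print(unlicensed_modules(root))  # -> ['shady_module']
```

A missing license file doesn't prove a plug-in is closed source, but it is a cheap signal of where to ask the vendor the question directly.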

Just a reminder that infrastructure is the center of your world. If one of these modules causes problems, it could potentially impact a lot more than just one service. You know that, but it implies a much greater need for quality assurance of modules than you would apply to, say, an Apache config/install module. The potential impact is huge. We have seen DevOps tools propagate problems across server farms; it could be so much worse if they do the same across networking gear.

This is even more important when you find a user-contributed module that does exactly what you need. Make certain that it's solid code. Bring it in and do a code review – no, I'm not kidding. This is code that is going to change things on your core infrastructure; due diligence is absolutely recommended. I'd say "required" instead of recommended, but to some extent your organization's tolerance for risk figures into the equation. But if I'm a customer of yours? Consider it required.
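A review still needs human eyes on the code, but an automated first pass can surface the spots worth reading most carefully. A minimal sketch, with a deliberately small and illustrative pattern list rather than a complete one:

```python
# First-pass review aid, not a substitute for reading the code: flag
# lines in a community module that evaluate dynamic code, shell out, or
# reference remote URLs. The pattern list is illustrative, not
# exhaustive -- extend it for your environment.
import re

RISKY = [
    (r"\beval\s*\(", "dynamic evaluation"),
    (r"\bexec\s*\(", "dynamic execution"),
    (r"\bsubprocess\.", "shell/process execution"),
    (r"https?://", "hard-coded remote URL"),
]

def flag_risky_lines(source):
    """Return (line_number, reason, line) for each risky-looking line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RISKY:
            if re.search(pattern, line):
                hits.append((lineno, reason, line.strip()))
    return hits

sample = "import subprocess\nsubprocess.run(['rm', '-rf', path])\n"
print(flag_risky_lines(sample))
# -> [(2, 'shell/process execution', "subprocess.run(['rm', '-rf', path])")]
```

A hit isn't a verdict – plenty of legitimate modules shell out – but every hit is a line the reviewer should be able to explain before the module touches core infrastructure.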

Do you know what a bad actor's dream scenario is? Infrastructure as code. The chance to submit code to such a project is golden: attackers could stop messing with applications and simply plant back doors in the infrastructure itself. That's a scary scenario. And it will happen.

This is another area that is a bigger concern when you grab modules developed by users of a provisioning tool than when you use providers implemented with vendor assistance, though in a world of open source and massive code reuse, there is always a risk of both purposeful and inadvertent tainting of codebases.

Most enterprises today have a security team. They need to go over these modules before they are implemented – in production for sure, but I'd recommend this review before deploying in test too. The usual reason an organization skips this step is the availability of resources relative to delivery timelines. Considering the number of man-hours a module like this can save over the long term, an up-front investment in making certain it's safe is not too much to ask. Stretch timelines or free up resources. I know that's easier said than done; I've been management on high-visibility teams in enterprises. But the possible negative impacts are massive, and definitely worth the effort of a review.
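One concrete control a security review can leave behind is a pin: record the cryptographic digest of each module as it passed review, and refuse to deploy anything that no longer matches. A minimal sketch of that check (the module bytes and record-keeping here are illustrative):

```python
# Pin each approved module to the SHA-256 digest recorded at review
# time, and refuse to deploy bytes that no longer match. The digest is
# computed inline for this demo; in practice it comes from your review
# records, not from the artifact being checked.
import hashlib

def sha256_of(content: bytes) -> str:
    """Hex SHA-256 digest of a module's raw bytes."""
    return hashlib.sha256(content).hexdigest()

def verify_module(content: bytes, approved_digest: str) -> bool:
    """True only if the bytes match the digest recorded at review time."""
    return sha256_of(content) == approved_digest

reviewed = b"module body as it passed security review"
digest = sha256_of(reviewed)
print(verify_module(reviewed, digest))                 # -> True
print(verify_module(reviewed + b"backdoor", digest))   # -> False
```

This doesn't make the reviewed code any safer, but it guarantees that what runs in production is the exact code the security team actually looked at.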

A last word
Others have written more extensively about these concerns. Since there is only so much one can cram into a blog post and expect you to read it, I recommend seeking out some of those other sources and reading them.

The problem we have with security in general is that, as a percentage chance, these risks are pretty slim. Most organizations will not suffer if they ignore this post and others like it. But the ones that do will suffer greatly. I don't wish to overstate the risks; they are relatively small on a per-enterprise basis, though I think this type of problem will inevitably impact some of us. Of course the vendors – both application provisioning and infrastructure – do not want to be the source of problems with automated infrastructure, so they are watching too. But the risk is still there, and it's worth a few extra man-hours to make sure there are no problems in the modules you choose to use. The network you save could be your own.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
