
Rethinking Operations Automation | @DevOpsSummit #DevOps


The world of automated provisioning has come a long way in a short time. We have gone from hand-deploying everything, from temporary VMs to complex clustered systems, to the point where the entire operations stack can be provisioned with the click of a button, provided the infrastructure has been put together to do so.

This has the huge benefit of giving operations more time to work on projects that add value to the organization. That new system marketing needs can now move forward because operations has the man-hours, for example. It also offers the assurance that there isn't some magical individual on staff who holds all the critical information about a system. By using DevOps principles and keeping configuration and automation information in a version control system, the knowledge is preserved in the best possible form: scripts and configurations that actually work and can be examined and explored should the need arise. The corporate stability this offers is quantifiable, but human issues normally keep us from delving too deeply into the cost/benefit scenario. Let's face it, all geeks (this author included) have some amount of ego, and saying "your knowledge will be replaced by a system" can be a touchy topic, without even delving into "…and we won't have to have Don's approval to get this done quickly!" conversations.

The thing is, when a market is evolving as rapidly and constantly as the DevOps/provisioning market, we sometimes have to step back, take a breath, and look at where we are and what's to come. Part of this process involves each organization looking at the different branches in the marketplace and determining which branch will best serve its needs.

Relatively recently, those of us talking about provisioning have divided it into "server" provisioning and "application" provisioning. This is an accurate subdivision: tools like Stacki and Cobbler handle the server part, while tools like Salt and Puppet handle the application side. The divide is not perfectly clean. Both Cobbler and Stacki can install and configure certain applications (agents for application provisioning being the obvious example), Puppet can use Razor to do server provisioning, and Salt, while it doesn't have a dedicated server provisioning tool, is doing some interesting things with server provisioning in the cloud space. Add to that the fact that a modern OS is the core of the system plus a plethora of, yes, applications. There are text editors and network monitors built into modern operating systems that are technically applications being provisioned by OS provisioning tools.
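To make the split concrete, here is a minimal sketch driving both phases from Python: Cobbler for the server side, Salt for the application side. The host name, profile, MAC address, and state name are hypothetical, and the sketch assumes Cobbler and Salt are already installed and wired up.

```python
import subprocess

def provision_server(name: str, profile: str, mac: str) -> None:
    """Server provisioning: register the machine with Cobbler so it
    PXE-boots into an OS install on next boot."""
    subprocess.run(
        ["cobbler", "system", "add",
         f"--name={name}", f"--profile={profile}", f"--mac={mac}"],
        check=True,
    )
    subprocess.run(["cobbler", "sync"], check=True)

def provision_application(minion: str, state: str) -> None:
    """Application provisioning: once the OS is up and its Salt minion
    has checked in, apply the state that installs/configures the app."""
    subprocess.run(["salt", minion, "state.apply", state], check=True)

if __name__ == "__main__":
    provision_server("web01", "centos7-x86_64", "aa:bb:cc:dd:ee:ff")
    # ...wait for the OS install to finish and the minion to register...
    provision_application("web01", "nginx")
```

The point of keeping the two functions separate is exactly the division described above: the first one ends when an OS is on the box, and everything after that belongs to the application provisioning tool.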

Into this slightly muddled mix we are increasingly seeing another aspect of provisioning: hardware. While the majority of server provisioning tools support some level of hardware configuration, the level of support required to run a modern datacenter just is not there. There is good reason for this: while operating system installation can be done generically in a variety of ways, hardware configuration is, by definition, relatively unique to the hardware being configured. A RAID card is not a SAN card, after all, and RAID cards from vendor X are not RAID cards from vendor Y. Indeed, when I was a storage and servers editor, it amazed me how different two vendors' interfaces could be. Standardization of programmability got head-nods from groups like SNIA, but interoperability (from a DevOps or developer perspective) was relatively non-existent.

The market is discovering that quite often, even with tools that do support hardware configuration (not all server provisioning tools do), the support covers only a few disparate vendors and pieces of hardware. That makes mass deployment in a heterogeneous environment difficult, and it normally means operations must intervene to perform manual steps on the way to server provisioning. Automating most of the system is useful; automating all of it frees up resources to work on amazing new things. If operations staff has to sit there to configure the RAID card, then they have to sit there. If systems can do that configuration as part of deployment, then operations staff can be working on the next great project.
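A sketch of why heterogeneous hardware resists generic automation: every vendor needs its own handler wrapping its own utility, and anything outside the handler table falls back to a human. The vendor and tool names below are hypothetical placeholders, not real products.

```python
import subprocess

# One handler per vendor: there is no generic "configure RAID" call, so
# each entry wraps a different, vendor-specific tool. The commands here
# are hypothetical stand-ins for whatever each vendor actually ships.
def _configure_vendor_x(disks: list[str]) -> None:
    subprocess.run(["vendor_x_raidtool", "create", "--level", "1",
                    "--disks", ",".join(disks)], check=True)

def _configure_vendor_y(disks: list[str]) -> None:
    subprocess.run(["vendor_y_cfg", "mkarray", "raid1"] + disks, check=True)

RAID_HANDLERS = {
    "vendor_x": _configure_vendor_x,
    "vendor_y": _configure_vendor_y,
}

def configure_raid(vendor: str, disks: list[str]) -> None:
    """Dispatch to the vendor-specific handler. Unknown hardware means a
    human has to step in, which is exactly the manual gap in question."""
    handler = RAID_HANDLERS.get(vendor)
    if handler is None:
        raise RuntimeError(f"No automation for {vendor}; manual config needed")
    handler(disks)
```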

So the advanced provisioning stack looks something like this (for hardware and, to a limited extent, VMs; cloud and some VM functionality is handled differently because of hardware abstraction and the mode of OS deployment):

1. Hardware provisioning: RAID, BIOS, and firmware configuration
2. Server (OS) provisioning: tools like Stacki and Cobbler
3. Application provisioning: tools like Salt and Puppet

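Tying the layers together: a minimal sketch of what click-a-button provisioning could look like if each layer exposes an automatable entry point. It reuses the hypothetical helpers from the two earlier sketches, and the host fields are illustrative.

```python
def provision_stack(host: dict) -> None:
    """One-button provisioning: each layer runs only after the layer
    below it succeeds, from bare metal up to running applications."""
    # Layer 1: hardware - RAID/BIOS/firmware (vendor-dispatch sketch above).
    configure_raid(host["raid_vendor"], host["disks"])

    # Layer 2: server/OS - register and install (Cobbler-style sketch above).
    provision_server(host["name"], host["os_profile"], host["mac"])

    # Layer 3: applications - config management (Salt-style sketch above).
    provision_application(host["name"], host["app_state"])
```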
The thing is, it is not just initial deployment that operations needs to worry about. It is also upgrades and maintenance. I had a disk array lose a drive last year, and part of the fix was upgrading the firmware on the controller. This was not an optional step; it was required to get my NAS back into healthy condition. The same is true of OS security patches. While you could ignore them, that is certainly not IT best practice (not to mention infosec best practice). But changing these things can have implications up the stack. The ability to quickly and efficiently upgrade and repair existing installations is important, but required upgrades can cascade up the stack in an increasingly complex environment, causing problems and consuming a lot of time.
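As a sketch of what lifecycle automation looks like in practice, the usual pattern is to check the running version against the desired one and act only on drift, so the same run is safe at initial install and years later. The fwtool utility, its arguments, and the version string below are all hypothetical.

```python
import subprocess

DESIRED_FIRMWARE = "4.12.0"  # illustrative target version

def current_firmware(device: str) -> str:
    """Ask the controller for its firmware level. 'fwtool' is a
    hypothetical stand-in for a vendor's query utility."""
    out = subprocess.run(["fwtool", "version", device],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def ensure_firmware(device: str) -> None:
    """Idempotent upgrade: do nothing when already at the desired level,
    flash only on drift, so re-running the automation is always safe."""
    if current_firmware(device) == DESIRED_FIRMWARE:
        return
    subprocess.run(["fwtool", "flash", device,
                    f"controller-{DESIRED_FIRMWARE}.bin"], check=True)
```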

So what we need is a toolset that can handle all of the above (and, as my next blog will explore, even more), not just at initial install time, but throughout the lifecycle of the servers, OSes, and apps.

In the current marketplace, that means looking at the hardware support for server provisioning tools and seeing if it meets your needs. It is likely that as the market moves forward, this proposition will change, but for now that’s where we are.

What do I think? I think all of this will go through consolidation, and eventually you’ll have a DevOps provisioning tool that handles all of the above and more. But that’s a while off yet.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
