Does Your DevOps Department Need More Attention?
By Josh Symonds

There are some big red flags that signify your DevOps department needs an overhaul.

Your deployment process seems to take forever. It only works from a few developers’ computers. It’s different for each server you deploy to.

Sound familiar?

Luckily the warning signs of a DevOps department in need of help are pretty easy to recognize. Read on to learn how to identify if and when your infrastructure team needs more attention—plus a few suggestions to implement those changes as smoothly and seamlessly as possible.

All your servers are slightly different
Automation is the byword of DevOps. With automation, you remove manual intervention (and possible human error), which means you can deploy new services and recover from critical events faster. If a server were to go down, a new one should be automatically created.

That’s a good ideal, but don’t worry if you’re not there yet. Just having a process to create servers is an enormous step in the right direction. Investigate tools such as Chef, Puppet, Ansible, or Salt to standardize your provisioning process. You should be able to take a server from a bare image to a full-fledged member of your cluster in one command. If you can’t, you’re in danger of losing important infrastructure knowledge when a server inevitably dies. And recreating server configuration after it’s been destroyed is not a fun experience.
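
To make this concrete, here is a minimal sketch of what a one-command provisioning run might look like with Ansible; the package, template, and inventory group names are illustrative, not taken from the original post.

# site.yml -- a hypothetical provisioning playbook; names are illustrative
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes
    - name: Render the application's web server config
      template:
        src: app.conf.j2            # hypothetical template kept in the repo
        dest: /etc/nginx/conf.d/app.conf
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

Running ansible-playbook -i inventory site.yml against a bare image should leave you with a working cluster member; if it can’t, that gap is exactly the kind of server knowledge waiting to be lost.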

A huge bonus of a standardized stack is liberation from chasing strange, difficult-to-trace server problems. Combing through header files and C source to track down an error, only to discover that a disk suffered a freakish one-time mishap, becomes a thing of the past. The next time your OS acts up in an unexpected way, just destroy the entire server and let your provisioning system bring it back, fresh and new.

You may be surprised how, through no fault of your application or your own, entropy can infest a system and gradually introduce errors, bloat, and bugs. Fighting server divergence is one of the hardest tasks in operations, but configuration management tools and a standardized server creation process are the most important steps to ensure conformity among all members of your cluster. The surest way to improve your DevOps game is to establish a streamlined, automated provisioning process you know works on all your servers, and don’t be afraid to use it!

Change is hard
Another sign you need to reinvest in your DevOps stack is that you spend a lot of time manually changing parts of your infrastructure. If a simple version upgrade takes weeks of manual work by your systems administrators, something is definitely wrong. No piece of software should be manually installed on a server (except perhaps to test how it functions). Administrators should largely write and correct software in repositories, not patch it directly on servers.

On the provider side, if creating new load balancers, databases, or other provider-mediated resources takes a while and requires you to use your provider’s management console, consider a tool like Terraform or CloudFormation to automate and manage your infrastructure backend. Changes you make to any part of your infrastructure should be tracked, managed, and understood through your version control system. This includes both the software running on servers and the commands used to provision those servers and all associated resources.
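
As a rough illustration, a provider-mediated resource such as a load balancer can live in version control as a CloudFormation template; the resource name, subnet, and security group identifiers below are placeholders.

# infra.yml -- a hypothetical CloudFormation template; all IDs are placeholders
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: app-lb
      Subnets:
        - subnet-aaaa1111
        - subnet-bbbb2222
      SecurityGroups:
        - sg-cccc3333

A change to this file goes through code review and is applied with one command (for example, aws cloudformation deploy --template-file infra.yml --stack-name app-infra), so the resource’s history lives in version control instead of in someone’s memory of console clicks.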

Similarly, changes to the infrastructure should be quick and transparent. A new version of your application should be delivered via a continuous deployment process that runs automatically after a merge or a new release is tagged. Needing a developer or administrator to manually perform deployments is a serious problem; waiting for deployments is an artificial bottleneck that wastes time and saps focus. And unless the manual process is incredibly well-documented, you can be sure someone will eventually forget how it works and the process will break down.

And if you’re documenting it that well, why not just write code that performs the documented steps for you?
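
As a sketch of that idea, here is a hypothetical continuous deployment workflow; the GitHub Actions syntax and the deploy script path are assumptions, not details from the original post.

# .github/workflows/deploy.yml -- a hypothetical pipeline; the script path is a placeholder
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the deploy steps that used to live in the wiki
        run: ./scripts/deploy.sh production

Every merge to the main branch then performs the documented steps automatically, so nobody has to remember them or wait for a person to run them.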

Developer environments are inconsistent
When a new developer joins your company (or an existing engineer buys a new computer), hours of time must be devoted to installing proper tooling, ensuring versions of local software are correct, and debugging any application-specific problems that crop up. This may seem like a small issue, but it can rear its ugly head at unexpected times. Even six months after an engineer joins, the code he or she developed locally may work differently once deployed. Figuring out the problem can turn into a days-long slog that craters productivity.

A developer should be able to work in an environment identical to your production stack. Tools such as Vagrant and Docker let you bring the same provisioning and containerization processes your servers use to your developers’ workstations, which helps make versioning problems a headache of the past.
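
A minimal sketch of that idea with Docker Compose follows; the service names and image tag are illustrative and assume, for example, that production runs PostgreSQL.

# docker-compose.yml -- a hypothetical development environment; versions are illustrative
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:15        # pin the same major version production runs
    environment:
      POSTGRES_PASSWORD: devonly

Running docker compose up then gives every developer the same database and runtime versions, regardless of what happens to be installed on the host machine.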

But even if you can’t introduce Vagrant and Docker, having an automated install process and a standardized development environment can alleviate a lot of the pain caused by inconsistencies. Your Windows or Linux developers may chafe when required to use Macs, but if you can ensure Macs always install the correct version of your software tools, it may be worth asking them to make that sacrifice.

Of course, developing with virtual machines means a developer could use whatever platform they’re most comfortable with and still be guaranteed to receive the same software. But getting there takes a lot more work than having an automated install script.
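
Even the simpler route can be declarative rather than a wiki page. Here is a hypothetical localhost playbook for macOS workstations, assuming Ansible with the community.general collection; the package list is illustrative.

# workstation.yml -- a hypothetical workstation setup playbook; packages are illustrative
- hosts: localhost
  connection: local
  tasks:
    - name: Install the pinned developer toolchain via Homebrew
      community.general.homebrew:
        name:
          - git
          - node@20
          - postgresql@15
        state: present

New hires run ansible-playbook workstation.yml once instead of following a setup document, and everyone ends up on the same tool versions.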

Conclusion
If your DevOps initiative is suffering from some or all of these issues, your organization is experiencing drag caused by poor tooling or a lack of process. Thankfully, most of these issues are easy to fix. Streamlining your DevOps flow will save your engineers and administrators countless hours of manual management and debugging, and paying a little more attention to it can make formerly intractable, difficult-to-debug problems straightforward to solve through automation and standardization.
