Metrics and KPIs for Test Environment Stability | @DevOpsSummit #DevOps #APM #Monitoring

How often is an environment unavailable due to factors within your project’s control? How often is an environment unavailable due to external factors? Are the software and hardware in an environment up to date with the target Production systems? How often do you have to resort to manual workarounds due to an environment?

Metric: Availability and Uptime Percentage
QA and Staging environments seldom require the same level of uptime as Production, but tell that to a team of developers working 24/7 on a project that has an aggressive deadline. As a Test Environment Manager, you know that when a QA system is unavailable, you will get immediate calls from developers and managers.

As a Test Environment Manager, you will also want to understand the root cause of every outage. If you follow a problem management process for Production outages, you should follow a similar process with test environment management. Understanding why an outage happened is critical for communicating with a development team. Very often a QA environment will become unavailable due to a factor far outside the control of a Test Environment Manager. If one team pushes bad code that interrupts the QA process for all teams you need to be able to identify this clearly.

How to Measure Availability and Uptime?
Keep track of system availability with a standard monitoring tool such as Zabbix or Nagios. If your systems are visible to the public internet, you can also use hosted platforms like Pingdom to measure system availability.
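
However you collect the checks, the availability number itself reduces to a ratio of successful polls. Here is a minimal sketch; the data shape is illustrative, not an actual Zabbix or Nagios API:

```python
def availability(checks):
    """Percentage of monitoring checks that found the system up.

    `checks` is a list of (timestamp, is_up) tuples, e.g. one entry
    per minute exported from your monitoring tool.
    """
    if not checks:
        return 100.0  # no data: assume up rather than divide by zero
    up = sum(1 for _, is_up in checks if is_up)
    return 100.0 * up / len(checks)

# Example: 1440 one-minute checks with 72 failures is exactly 95% uptime
checks = [(i, i % 20 != 0) for i in range(1440)]
availability(checks)  # 95.0
```

In practice you would feed this from your monitoring tool's history export rather than build the list by hand.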

Example Metric: Goal for Availability
An uptime of 95% is usually sufficient for a QA or Staging environment.
If your development is limited to a few time zones, you can further qualify this by measuring availability only during development hours. While your Production availability commitment is often 99% or 99.5% or higher, you don’t have to treat every QA outage as an emergency. But your developers may have other opinions: 95% uptime still allows for more than eight hours of downtime a week, so you may want to aim higher.
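
The arithmetic behind that warning is worth making explicit. A quick sketch (the function name is ours, not a standard API):

```python
def weekly_downtime_budget(availability_pct, hours_per_week=168):
    """Hours of downtime per week permitted by an availability goal.

    Pass hours_per_week=40 to budget against development hours only.
    """
    return hours_per_week * (1 - availability_pct / 100)

weekly_downtime_budget(95)       # ~8.4 hours across a full 168-hour week
weekly_downtime_budget(95, 40)   # ~2 hours if only dev hours are measured
```

Seen this way, a "95% during development hours" commitment is a far stricter promise than 95% around the clock.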

How Does This Metric Motivate Concrete Action?
When you measure system availability and make these numbers public, you encourage Test Environment Managers to make a commitment to uptime. This results in fewer obstacles for QA and development, allowing them to deliver software faster. There’s nothing more debilitating to an organization than disruptions in QA and testing. Measuring this metric allows you to encourage movement toward always-available QA systems.

Metric: Mean Time Between Outages
If your system has 95% availability, then about 72 minutes of downtime is acceptable every day. But if your system fails for ten minutes every hour of an eight-hour work day due to a build or deployment, you can still satisfy that 95% target on paper while creating a QA or Staging environment that loses developer and QA confidence. To get an accurate picture of system availability, you need to couple an availability percentage metric with your mean time between outages (MTBO).

How to Measure MTBO
If you follow a process that keeps track of outages and strives to understand the root causes of these outages, you’ll develop a database of issues that you can use to derive your MTBO. If you have a monitoring system configured to calculate availability percentages automatically, you can use this same system to record your MTBO.
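
Given a list of outage start times from that database, the calculation itself is straightforward. A sketch, assuming you can export incident timestamps:

```python
from datetime import datetime, timedelta

def mean_time_between_outages(outage_starts):
    """MTBO: mean gap between the starts of consecutive outages.

    `outage_starts` is a sorted list of datetimes, e.g. pulled from an
    incident database or monitoring history.
    """
    if len(outage_starts) < 2:
        return None  # need at least two outages to measure a gap
    gaps = [b - a for a, b in zip(outage_starts, outage_starts[1:])]
    return sum(gaps, timedelta()) / len(gaps)

# Three outages an hour apart: an MTBO of one hour is a red flag
starts = [datetime(2017, 5, 1, 9), datetime(2017, 5, 1, 10),
          datetime(2017, 5, 1, 11)]
mean_time_between_outages(starts)
```

Tracking this alongside the availability percentage shows whether downtime is concentrated in a few long windows or sprinkled across the day.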

Example Metric: Goal for MTBO
This depends on your availability goal. The lower your availability goal, the higher your MTBO should be. For example, if you have a 95% uptime commitment, your outages need to be spaced out over days or weeks. You might take eight hours of downtime each weekend to perform system upgrades, or run a nightly build and deploy process that takes about an hour, but what you can’t have is an MTBO of 45–60 minutes. That would mean QA and Staging systems are unavailable for a few minutes every hour, which will result in dissatisfied internal customers.

How does this Metric Motivate Concrete Action?
If your MTBO is very short, this suggests that build and deploy activity from a continuous integration environment is frequently interrupting both Development and QA. If your MTBO is very high but your availability is low (95% or lower), this means your outages are infrequent but long, potentially lasting hours. When you measure MTBO, you encourage your Release Engineers and Test Environment Managers to work together to create build and deployment scripts that don’t affect availability, and you encourage your staff to approach QA and Staging uptime with care. Without this metric, you run the risk of teams growing complacent with frequent, low-level unavailability as long as they satisfy overall availability metrics.

Metric: Downtime Requirement for a Test Environment Build and Deploy
When software is deployed to any system, some disruption is natural. If new code is being deployed to an application server, that server often requires a restart so the new code can be loaded. If a web server such as Apache or Nginx is being reconfigured, this often requires a quick restart measured in seconds.

Some of these build- and deploy-related disruptions can be avoided through the use of load balancers and clusters of machines. On the largest projects this is essential in Production as well as in Staging and QA. An example is a QA system for a large bank’s transaction processing system: so many teams depend on that system being up 24/7 that any disruption would risk freezing the QA process across the entire company.
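
The load-balancer approach usually takes the form of a rolling deploy: take one node out of the pool, update it, return it, repeat. A minimal sketch; the three callbacks stand in for whatever your load balancer and deployment tooling actually expose:

```python
import time

def rolling_deploy(nodes, remove_from_lb, deploy_to, add_to_lb,
                   settle_seconds=5):
    """Deploy to a cluster one node at a time so the pool as a whole
    stays available. Callbacks are placeholders, not a real LB API.
    """
    for node in nodes:
        remove_from_lb(node)        # stop sending traffic to this node
        time.sleep(settle_seconds)  # let in-flight requests drain
        deploy_to(node)             # restart it with the new build
        add_to_lb(node)             # return it to the pool

# Simulated three-node pool: the pool is never empty during the deploy
pool = {"a", "b", "c"}
deployed = []
rolling_deploy(sorted(pool), remove_from_lb=pool.discard,
               deploy_to=deployed.append, add_to_lb=pool.add,
               settle_seconds=0)
```

At any moment at most one node is out of the pool, so the environment as a whole stays up while each node restarts.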

Other build and deploy downtimes are unavoidable. A frequent example is changes to a database schema. Certain changes to tables and indexes require systems to be stopped and rebooted to reach a state where database activity isn’t competing with DDL statements.

The downtime required for a given build and deploy to a test environment is a central measure, directly related to the availability metrics discussed earlier in this section.

How to Measure Build/Deploy Downtime
It’s simple: run a build and deployment and record any downtime that falls within the window of each build and deploy step. If you have a continuous integration system such as Jenkins or Bamboo, grab the timestamps of the last few builds and check your monitoring metrics on QA and Staging for a system impact.
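
Cross-referencing the two data sources can be as simple as an interval check. A sketch; both data shapes are illustrative, not a real Jenkins or monitoring API:

```python
def downtime_during_builds(build_windows, downtime_minutes):
    """Minutes of observed downtime that fall inside any build window.

    `build_windows` is a list of (start_minute, end_minute) pairs taken
    from CI job timestamps; `downtime_minutes` is the set of minutes
    your monitoring tool marked the environment as down.
    """
    return sum(1 for m in downtime_minutes
               if any(start <= m < end for start, end in build_windows))

builds = [(600, 615), (900, 912)]       # two deploys, in minutes-of-day
down = {601, 602, 603, 905, 906, 1300}  # monitoring said "down" here
downtime_during_builds(builds, down)    # 5 of the 6 down minutes overlap builds
```

A high overlap tells you your deploys, not external factors, are driving the availability number down.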

Example Metric: Goal for Build/Deploy Downtime
Your goal for this metric depends on your level of availability. If you are working on a shared service, your build and deploy downtime requirement should be as close to zero as possible. If you are working on a less critical application, your build and deploy downtime should be measured in minutes or seconds.

How does this Metric Motivate Concrete Action?
This metric encourages your Release Engineers and Test Environment Managers to drive build and deploy downtime to zero. With the tools available to developers and DevOps professionals it is possible to achieve zero-downtime deployments to QA and Staging systems. Doing this will give your internal customers more confidence in the systems you are delivering.

The post Metrics and KPIs for Test Environment Stability appeared first on Plutora.
