Service Virtualization ROI: Is it Worth It?

Consider the ROI opportunities: OpEx/CapEx reduction, risk reduction & incremental top-line revenue

Last week, we explored the business value of service virtualization: how service virtualization's simulated test environments speed innovation, accelerate time to market, and reduce risk.

It's not hard to see the potential benefits of service virtualization. However, many organizations are skeptical about whether service virtualization is really worth the cost and effort.

If you've been wondering whether service virtualization is worth it, consider the opportunities for ROI in terms of OpEx reduction, CapEx reduction, risk reduction, and incremental top-line revenue...

OpEx Reduction from Service Virtualization
The reduction in operating expenditures stems from three major cost savings:

  • The elimination of wait time
  • The reduction of time needed to configure environments
  • The reduction of access fees

Wait Time Reduction from Service Virtualization
QA and performance testing teams are notoriously stalled at many steps within the SDLC, waiting on resources they need to continue a task or complete a step in the process. Service Virtualization delivers a net benefit by reducing wait time as follows:

  • Reduce Wait Time for Staged Test Environment Access – up to 100%
  • Reduce Wait Time for Test Data – up to 100%
  • Reduce Wait Time for APIs – up to 100%
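
To make the "Reduce Wait Time for APIs" idea concrete, here is a minimal sketch of what a virtual service can look like: a stand-in HTTP endpoint that returns canned responses so testers can proceed before the real dependency is available. The endpoint, payload, and port below are hypothetical and illustrative, not taken from any particular service virtualization product.

```python
# Minimal sketch of a "virtual service": a stand-in HTTP endpoint that returns
# canned responses so QA can test against it before the real dependency is ready.
# All names and the response payload are hypothetical, not from any specific tool.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class VirtualAccountService(BaseHTTPRequestHandler):
    """Simulates a dependent 'account lookup' API that testers would otherwise wait for."""

    def do_GET(self):
        # Return the same canned record for any request; a real virtual service
        # would match on URL/headers and vary the response accordingly.
        body = json.dumps({"accountId": "12345", "status": "ACTIVE", "balance": 250.00})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the staged system.
    HTTPServer(("localhost", 8080), VirtualAccountService).serve_forever()
```

Pointing the application under test at this stub instead of the unavailable staged system is what removes the wait.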

Configuration Time Reduction from Service Virtualization
Testing an application involves multiple configuration steps required to set up, tear down, and reset the dependent test environment. Service Virtualization enables an organization to automatically manage the configuration of dependent systems as follows:

  • Reduce Configuration Time for Each Dependent System – up to 100%
  • Reduce Configuration Time for Applying Test Data – up to 100%
  • Reduce Configuration Time for Performance Testing – up to 100%
  • Reduce Configuration Time for Aggregating Dependent System End Points – up to 100%
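
One way to picture the configuration savings is to treat dependent-system setup as code. The sketch below, with a hypothetical VirtualEnvironment class and made-up endpoints and records, shows how setting up, resetting, and tearing down a simulated environment can collapse into one-line operations rather than manual configuration steps.

```python
# Hypothetical sketch: treating dependent-system configuration as code so that
# setup, reset, and teardown of a simulated test environment are one-line operations.
# The VirtualEnvironment class and its data are illustrative, not a product's API.
from dataclasses import dataclass, field


@dataclass
class VirtualEnvironment:
    """Aggregates simulated dependent-system endpoints plus their test data."""
    endpoints: dict = field(default_factory=dict)   # name -> simulated URL
    test_data: dict = field(default_factory=dict)   # name -> canned records

    def configure(self, name: str, url: str, records: list) -> None:
        self.endpoints[name] = url
        self.test_data[name] = list(records)

    def reset(self) -> None:
        # Re-baselining a simulated environment is instantaneous; with a physical
        # staged system this is where configuration hours are typically lost.
        self.test_data = {name: [] for name in self.endpoints}


if __name__ == "__main__":
    env = VirtualEnvironment()
    env.configure("billing", "http://localhost:8080/billing", [{"invoice": "INV-1", "amount": 99.95}])
    env.configure("inventory", "http://localhost:8081/stock", [{"sku": "A-100", "qty": 12}])
    print(env.endpoints)
    env.reset()  # instant teardown/reset before the next test run
```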

Access Fee Reduction from Service Virtualization
In many cases, teams incur access fees for testing against a staged system (such as a mainframe or large ERP system) or for accessing a managed environment. For example, a mainframe may charge back for MIPS usage, or a third party hosting a staged test instance may charge per transaction or per time block. Service Virtualization enables an organization to reduce these access fees as follows:

  • Reduce Access Fees for Mainframes – up to 80%
  • Reduce Access Fees for Staged Instances – up to 80%
  • Reduce Cloud-Based Access Fees – up to 80%
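
A rough, back-of-the-envelope calculation shows how these percentages translate into dollars. All figures below (per-transaction fee, annual test volume, offload ratio) are assumed placeholders; substitute your own chargeback rates.

```python
# Back-of-the-envelope sketch of the access-fee savings described above.
# The rates and volumes are hypothetical placeholders, not benchmark data.

per_transaction_fee = 0.02        # $ charged by a third party per test transaction (assumed)
test_transactions_per_year = 5_000_000
offload_ratio = 0.80              # share of test traffic redirected to virtual services ("up to 80%")

annual_fees_today = per_transaction_fee * test_transactions_per_year
annual_fees_after = annual_fees_today * (1 - offload_ratio)

print(f"Current access fees:  ${annual_fees_today:,.0f}/year")
print(f"After virtualization: ${annual_fees_after:,.0f}/year")
print(f"Estimated savings:    ${annual_fees_today - annual_fees_after:,.0f}/year")
```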

CapEx Reduction from Service Virtualization
Service Virtualization yields a significant reduction in capital expenditures as well as operating expenditures. Without Service Virtualization, the complexity of an organization’s test environment (test lab) can be accommodated only by a physical staged environment. In these cases, organizations seeking to support additional capacity need to acquire, maintain, and configure machines and licenses in order to expand the staged environment. With Service Virtualization, an organization can suspend purchasing additional machines and licenses in favor of leveraging simulated test environments. As the organization shifts its focus to the simulated test environments established by Service Virtualization, the overall demand on the current staged test environment is significantly diminished as follows:

  • Reduce Need for Hardware – up to 100%
  • Reduce Need for Software Licenses – up to 95%
  • Reduce Need for Lab Infrastructure – up to 100%
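
Putting the deferred hardware and license purchases together with the OpEx savings above gives a simple first-year ROI roll-up. Every figure in this sketch is an assumed placeholder meant only to show the shape of the calculation, not a claimed result.

```python
# Illustrative ROI roll-up under stated assumptions; every figure below is a placeholder.

deferred_hardware = 120_000       # staged-environment servers not purchased (assumed)
deferred_licenses = 80_000        # dependent-system licenses not purchased (assumed)
opex_savings = 150_000            # wait-time, configuration, and access-fee reductions (assumed)
sv_investment = 100_000           # service virtualization tooling + rollout cost (assumed)

annual_benefit = deferred_hardware + deferred_licenses + opex_savings
roi = (annual_benefit - sv_investment) / sv_investment

print(f"Annual benefit: ${annual_benefit:,}")
print(f"First-year ROI: {roi:.0%}")
```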

Risk Reduction from Service Virtualization
In the vast majority of development projects, schedule overruns or last-minute “feature creep” result in software testing being significantly shortchanged or relegated to a handful of verification tasks. Since testing is a downstream process, the cycle time allotted for testing activities is drastically curtailed when the timelines of upstream processes stretch. Because Service Virtualization delivers a simulated test environment, QA and performance testers can simulate missing or evolving system components in order to test applications incrementally, earlier, and more completely. The value of Service Virtualization is compounded when you apply this “early access” concept to an agile development environment, since Service Virtualization helps development and QA keep pace with the speed and cadence of agile methods. Risk reduction results achieved with Service Virtualization include:

  • Increase the Time Allotted for Testing – up to 80%
  • Decrease the Costs Associated with Defect Remediation – 10x-50x reduction
  • Increase the Testing Scope to Business-Driven Test Scenarios
  • Decrease the Defects Passed on to Customers
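
The "10x-50x" remediation figure reflects the familiar rule of thumb that a defect costs more to fix the later it is found. The sketch below walks through that arithmetic with assumed phase multipliers and an assumed base fix cost.

```python
# Sketch of the defect-remediation math behind the "10x-50x" figure above.
# The phase multipliers follow the common rule of thumb that fixing a defect gets
# more expensive the later it is found; the base cost and counts are assumed placeholders.

base_fix_cost = 200  # $ to fix a defect caught during development (assumed)
phase_multiplier = {"development": 1, "system test": 10, "production": 50}

defects_found = 40   # defects that earlier testing against simulated environments catches sooner (assumed)

cost_if_found_late = defects_found * base_fix_cost * phase_multiplier["production"]
cost_if_found_early = defects_found * base_fix_cost * phase_multiplier["development"]

print(f"Remediation cost if caught in production:  ${cost_if_found_late:,}")
print(f"Remediation cost if caught in development: ${cost_if_found_early:,}")
print(f"Avoided cost: ${cost_if_found_late - cost_if_found_early:,}")
```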

Incremental Top Line Revenue from Service Virtualization
Innovation is key to an organization’s success. Innovation speed and time to market can determine whether an organization secures a first-mover advantage and wins the next big enterprise deal. There is no doubt that Service Virtualization speeds innovation by eliminating delays and providing the infrastructure for better testing, and therefore better-quality deliverables.

The incremental revenue benefit associated with Service Virtualization can be difficult to calculate because of the array of other conditions that affect the release and/or deployment of a software product. Nonetheless, the qualitative benefits associated with Service Virtualization’s contribution to releasing a better product earlier cannot be contested:

  • Faster Release Cycles
  • Earlier Time To Market
  • Earlier Start for the Testing Cycle
  • More Opportunity for Earlier Revenue

[Webinar] Service Virtualization: Real Data, Real Results

Want to learn more about the ROI of service virtualization? Join our Accelerating the SDLC with Service Virtualization webinar on Tuesday, October 22 at 2:00 PM EST / 11:00 AM PST. Theresa Lanowitz (founder of voke) and Frank Jennings (Director of Performance Test at Comcast) will share real data and real results from service virtualization.

First, Theresa Lanowitz will discuss the latest research findings on Service Virtualization business value and usage within leading IT organizations. Next, Comcast will share first-hand experiences with service virtualization—including implementation challenges, best practices, and impact on schedules, costs, risk, and innovation. A significant portion of this webinar will be reserved for your questions.

The event will be moderated by David Rubenstein, editor-in-chief of SD Times.


More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC—specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
