How Service Virtualization Helped Comcast Release Tested Software Faster

Service virtualization has allowed us to get great utilization from our testing staff and complete more projects on time


"Service virtualization has allowed us to get great utilization from our testing staff, complete more projects on time, and also save money by lowering the overall total cost of performing those tests for a given release."

Frank Jennings, TQM Performance Director at Comcast, shares his service virtualization experiences with Parasoft in this Q&A.

Why did Comcast explore service virtualization?

There were two primary issues that led Comcast to explore service virtualization. First, we wanted to increase the accuracy and consistency of performance test results. Second, we were constantly working around frequent and lengthy downtimes in the staged test environments.

My team executes performance testing across a number of verticals in the company: from business services, to our enterprise services platform, to customer-facing UIs, to the backend systems that perform the provisioning and activation of devices for subscribers on the Comcast network. While our testing targets (applications under test, or AUTs) typically have staged environments that accurately represent the performance of the production systems, the staging systems for the AUTs' dependencies do not.

Complicating the matter further was the fact that these environments were difficult to access. When we did gain access, we would sometimes impact the lower environments (the QA or integration test environments) because they weren't adequately scaled and could not handle the load. Even when the systems could withstand the load, we received very poor response times from these systems. This meant that our performance test results were not truly predictive of real world performance.
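The mechanics behind this are worth making concrete. A minimal sketch of a virtual asset, using only the Python standard library, can replay a recorded response with production-like latency so that performance results stay predictive even when the real dependency's staging instance is slow or unavailable. The endpoint, payload, and latency below are hypothetical, and this is a conceptual illustration rather than any vendor's implementation:

```python
import http.server
import json
import threading
import time
import urllib.request

# Hypothetical recorded behavior of a downstream dependency:
# a canned response body plus the latency observed in production.
RECORDED = {"body": {"status": "provisioned", "deviceId": "X100"},
            "latency_s": 0.12}

class VirtualAsset(http.server.BaseHTTPRequestHandler):
    """Stands in for a dependency, replaying a canned response
    with production-like timing instead of a staging system's."""
    def do_GET(self):
        time.sleep(RECORDED["latency_s"])  # emulate production latency
        payload = json.dumps(RECORDED["body"]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # suppress request logging
        pass

# Serve the virtual asset on an ephemeral port in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), VirtualAsset)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "performance test" call against the virtual asset: the response
# time now reflects the recorded production latency, not staging.
url = f"http://127.0.0.1:{server.server_port}/provision"
start = time.monotonic()
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
elapsed = time.monotonic() - start
server.shutdown()
```

Because the simulated latency is configurable, the same asset can also model degraded or peak-load conditions that a shared staging system could never reproduce on demand.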

Another issue was that we had to work around frequent and lengthy downtimes in the staging environments. The staging environment was not available during the frequent upgrades or software updates. As a result, we couldn't run our full performance tests. Performance testing teams had to switch off of key projects at critical time periods just to keep busy; they knew they wouldn't be able to work on their primary responsibility because the systems they needed to access simply weren't available.

How did this impact the business?

These challenges were driving up costs, reducing the team's efficiency, and impacting the reliability and predictability of our performance testing. Ultimately, we found that the time and cost of implementing service virtualization was far less than the time and cost associated with implementing all the various systems across all those staging environments, or building up the connectivity between the different staging environments.

Did you consider expanding service virtualization beyond performance testing?

Yes, the functional testing teams sometimes experience the same issues with dependent systems being unavailable and impeding their test efforts. They're starting to use service virtualization so that they can continue testing rather than get stuck waiting for systems to come back up.

We're currently in the process of expanding service virtualization to the functional testing of our most business-critical applications. We're deploying service virtualization not only to capture live traffic for those applications, but also to enable functional testers to quickly select and provision test environments. In addition to providing the team the appropriate technologies and training, we're taking time to reassure them that their test results won't be impacted by using virtual assets rather than live services.

In your opinion, what is the key benefit of service virtualization?

The key benefit of service virtualization is the increased uptime and availability of test environments. Service virtualization has allowed us to get great utilization from our testing staff, complete more projects on time, and also save money by lowering the overall total cost of performing those tests for a given release.

If you could start all over again with service virtualization, what would you do differently?

I think things would have run more smoothly if we had a champion in place across all teams at the beginning to marshal the appropriate resources. The ideal rollout would involve centralizing the management and implementation of the virtual assets, implementing standards right off the bat, and using the lessons learned in each group to make improvements across all teams.

Any other tips for organizations just starting off with service virtualization?

Make sure that your virtual assets can be easily reused across different environments (development, performance, system integration test, etc.). It's really helpful to be able to capture data in one environment and then use it across your other environments. Obtaining data for realistic responses can be challenging, so you don't want to constantly reinvent the wheel.
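One way to picture that reuse: record request/response pairs once, keyed by the request, into a portable store, and replay the same file in any other environment. The sketch below invents a trivial JSON store and helper names purely for illustration; real service virtualization tooling records far richer traffic, but the capture-once/replay-anywhere principle is the same:

```python
import json
import os
import tempfile

# Hypothetical capture/replay store for virtual assets: responses
# recorded in one environment are keyed by "METHOD path" and can be
# replayed in any other environment from the same portable file.

def record(store_path, method, path, response):
    """Append one recorded response to the asset store on disk."""
    store = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            store = json.load(f)
    store[f"{method} {path}"] = response
    with open(store_path, "w") as f:
        json.dump(store, f)

def replay(store_path, method, path):
    """Look up the recorded response for a request, or None."""
    with open(store_path) as f:
        store = json.load(f)
    return store.get(f"{method} {path}")

# Capture once (e.g., in the integration test environment)...
store = os.path.join(tempfile.mkdtemp(), "assets.json")
record(store, "GET", "/account/42", {"tier": "business", "active": True})

# ...then reuse the same asset file in the performance environment.
reply = replay(store, "GET", "/account/42")
```

Keeping the store as a plain file is what makes the asset environment-agnostic: promoting it from integration test to performance test is just copying the file.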

Also, don't underestimate the amount of education that's needed to get the necessary level of buy-in. For each team or project where we introduced service virtualization, we needed to spend a fair amount of time educating the project teams and business owners about what service virtualization is, what business risks are associated with using it for testing, and how the system proactively mitigates those risks. People are understandably nervous when they hear that you're removing live elements from the testing environment, so some education is needed to put everyone at ease.

Service Virtualization: Real Results Webinar
Want to learn more about service virtualization at Comcast, including how it saved them about $500,000 and helped them reduce downtime by 60%?

Watch the on-demand webinar Service Virtualization: Accelerating the SDLC with Simulated Test Environments.

New Research Package from Gartner and Parasoft: Accelerating the SDLC with Service Virtualization

The new Service Virtualization research package from Gartner and Parasoft provides more details about how service virtualization helps organizations accelerate the SDLC. Download it to learn:

  • Why service virtualization is a "must-have" for accelerating the SDLC.

  • How service virtualization helps organizations release thoroughly-tested software faster, and at a lower overall cost.

  • Recommendations for organizations getting started with service virtualization.

  • Strategies for streamlining the release management process beyond service virtualization.

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
