Why Leading Companies Love Service Virtualization

For Valentine's Day: A collection of Service Virtualization love stories

Why leading organizations across financial, retail, communications, utilities, travel, insurance, and other industries fell in love with service virtualization...

There are a lot of things to love about service virtualization. Just consider a few of the many exciting findings from voke's recently released service virtualization research:

  • 36% of respondents achieved a greater-than-41% reduction in production defects
  • 46% achieved a greater-than-41% reduction in total defects
  • 20% achieved more than 2X test coverage
  • 26% achieved a 2X or greater increase in test execution rates
  • 34% achieved a 50% or greater decrease in test cycle time
  • 40% achieved a 40% or greater decrease in release cycle time

These stats are quite impressive. But what's a love story without, well, a story?

Here are a few of those stories.

Why Staples Loves Service Virtualization
Staples is committed to making everything easy for its customers, but ensuring a positive customer experience on its eCommerce site is far from simple. Functional testers must contend with the high number of dependent systems, subsystems, and services that are required to complete almost any eCommerce transaction, yet are rarely available for dev/test purposes.

The Staples eCommerce functional testing team turned to service virtualization in hopes that it would enable them to test complex transactions across highly distributed systems more rapidly and more exhaustively. They found that with service virtualization, they could start testing earlier in each cycle and complete their test plans faster.

This was especially critical on parallel development projects, such as when the Retail, Warehouse, and eCommerce teams were all working on functionality related to online ordering with in-store pickup. This was a complex project with a very aggressive timeline. Using service virtualization to simulate resources that were still being developed, each team's development and testing could move forward without waiting on the others. With the virtual assets, they could start integration testing much earlier than if they had to wait for all the dependent components to be completed. This helped them get everything running smoothly even before they integrated all the completed components. Ultimately, they not only completed the project on budget, but actually deployed it two weeks early.
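To make the idea concrete, here is a minimal sketch of what a "virtual asset" boils down to: a lightweight stand-in that honors a dependency's contract before the real implementation exists, so dependent teams can integrate against it immediately. The service name, endpoint, and canned data below are hypothetical, not Staples' actual systems.

```python
# Sketch of a virtual asset: an HTTP stand-in for a not-yet-built dependency.
# The stub, endpoint, and inventory data are illustrative assumptions.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the unfinished warehouse/inventory service.
CANNED_INVENTORY = {"sku-123": {"in_stock": True, "store": "Boston-04"}}

class InStorePickupStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the last path segment as the SKU being queried.
        sku = self.path.rsplit("/", 1)[-1]
        record = CANNED_INVENTORY.get(sku)
        body = json.dumps(record if record else {"error": "unknown sku"}).encode()
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test runs quiet

def start_stub():
    """Start the stub on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), InStorePickupStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Real service virtualization tools model far richer behavior (stateful conversations, multiple protocols, performance profiles), but the enabling idea is the same: each team codes and tests against a stand-in that behaves like the component its peers haven't finished yet.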

In addition to loving how service virtualization promotes faster release cycles, they are also quite fond of how service virtualization gives them greater control over environment stability and application behavior, as well as how it frees them to test on their own schedule.

Read more about service virtualization at Staples

Why Comcast Loves Service Virtualization
Comcast's Performance Testing team often ran into scheduling conflicts around sharing the test infrastructure. Sometimes downstream systems were not available. Other times, test engineers would try to run tests at the same time, which could affect the test results. This led to variability between tests, which made it challenging to isolate particular problems.

When they started with service virtualization 3 years ago, their initial focus was on the biggest pain points: scheduling conflicts within the performance testing teams, unavailable systems, and systems where their testing would impact other development or test groups. Since then, they've been able to virtualize about 98% of the interfaces involved in their tests, and they've seen a 65% annual reduction in the amount of time it takes them to create and maintain test data (factoring in the time they spend creating and updating virtual assets). They also reduced staging environment downtime by 60%. Their tests are now more predictable, more consistent, and more representative of what would be seen in production. Moreover, they're able to increase the scope of testing in many cases.
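One reason virtualized dependencies make performance results this repeatable is worth spelling out: a simulated system can be given a deterministic latency profile, so two runs of the same test see identical downstream timing instead of whatever the shared environment happens to be doing. A toy illustration follows; the class, timings, and seed are invented for this sketch, not Comcast's configuration.

```python
# Sketch: a virtualized downstream call with a deterministic latency profile,
# so repeated performance runs are directly comparable. All values are
# illustrative assumptions.
import random
import time

class VirtualDependency:
    """Simulates a downstream system with a fixed, seeded latency profile."""
    def __init__(self, base_latency_ms=20.0, jitter_ms=5.0, seed=42):
        self.base = base_latency_ms
        self.jitter = jitter_ms
        self.rng = random.Random(seed)  # seeded: every run sees the same delays

    def call(self, payload):
        delay_ms = self.base + self.rng.uniform(0, self.jitter)
        time.sleep(delay_ms / 1000.0)
        return {"status": "ok", "echo": payload, "latency_ms": round(delay_ms, 2)}

def run_perf_probe(dep, n=5):
    """Drive n calls and collect observed latencies, as a perf test would."""
    return [dep.call({"req": i})["latency_ms"] for i in range(n)]
```

Two probes against freshly constructed `VirtualDependency` instances observe the same latency sequence, which is exactly the kind of run-to-run consistency a shared live environment cannot guarantee.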

In summary, they love service virtualization because it's allowed them to get great utilization from their testing staff, complete more projects on time, and also save money by lowering the overall total cost of performing the testing required for a given release.

Read more about service virtualization at Comcast

Why a Fortune 500 Retailer Loves Service Virtualization
As a leading Fortune 500 retailer advances its omnichannel retail strategy, ensuring a positive user experience on the company's eCommerce site has become increasingly critical. More and more of its customers now use the eCommerce site at some point during the purchase process: for example, to research products before (or after) visiting a brick-and-mortar location, to order products for direct delivery or in-store pickup, or even to initiate a product return. Recognizing that all these additional touch points represent opportunities to reinforce, or undermine, its reputation as a market leader, the company is firmly committed to ensuring that all transactions associated with the eCommerce site meet or exceed customer expectations.

They love the fact that service virtualization provides an efficient and cost-effective way to accelerate the delivery of top-quality functionality. They could rapidly create "virtual assets" for dependencies ranging from mainframes, to SAP, to JDBC, to ESBs, to partner APIs, and countless services, all of which communicate via a variety of message protocols and formats. As a result, all nine regional offices gained anytime, anywhere access to a complete test environment.

Before adopting Service Virtualization, team members would typically wait weeks for access to test data, then try to race through the test plan during highly limited (and inconvenient) test environment access windows. Now, the team can begin testing as soon as a new service is completed, even if dependent systems are not yet complete or are unavailable for testing, and can execute the full range of planned testing. With an unprecedented level of control over the dependencies' behavior, their testing now covers a broader range of "what if" scenarios (e.g., concurrency, failover, performance, and negative test scenarios). This extensive early testing drastically reduces the number of issues that surface when their services are finally integrated into the production system, accelerating the release cycle while reducing business risk.
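That "control over dependency behavior" is the key to the "what if" scenarios: the test, not the environment, decides whether the dependency succeeds, fails, or misbehaves. A simplified sketch of the pattern follows; the scenario names, responses, and client function are all hypothetical, not the retailer's actual services.

```python
# Sketch: selecting a virtualized dependency's behavior per test scenario,
# enabling negative and failure-mode testing on demand. Names and payloads
# are illustrative assumptions.
class ScenarioAsset:
    """A virtual asset whose behavior is chosen by the test, not by chance."""
    SCENARIOS = {
        "happy_path": {"status": 200, "body": {"price": 19.99}},
        "backend_down": {"status": 503, "body": {"error": "service unavailable"}},
        "malformed": {"status": 200, "body": "<<not-json>>"},
    }

    def __init__(self, scenario="happy_path"):
        if scenario not in self.SCENARIOS:
            raise ValueError(f"unknown scenario: {scenario}")
        self.response = self.SCENARIOS[scenario]

    def get_price(self, sku):
        # Play back the configured response, tagged with the requested SKU.
        return dict(self.response, sku=sku)

def client_fetch_price(asset, sku):
    """The code under test: it must survive failures of the dependency."""
    resp = asset.get_price(sku)
    if resp["status"] != 200 or not isinstance(resp["body"], dict):
        return None  # graceful degradation instead of a crash
    return resp["body"]["price"]
```

A test suite can now assert that the client degrades gracefully under `backend_down` and `malformed` responses, scenarios a shared live environment almost never produces on cue.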

Ultimately, Service Virtualization has given the team an efficient way to ensure that new eCommerce services are validated extensively and accurately, then fully optimized before deployment. As a result, the company has been able to reduce costs, accelerate the delivery of innovative new functionality, and achieve its ultimate goal of ensuring a positive, seamless customer experience across the web site, mobile applications, and retail stores. With results like that, what's not to love?

Read more about service virtualization at this leading US retailer

Why Ignis Asset Management Loves Service Virtualization
Ignis recently embarked on a large project aimed at outsourcing the back office, as well as implementing the architecture and applications required to support the outsourcing model. A number of projects had to be developed and delivered in parallel, but Ignis didn't have the resources, budget, or management capacity required to create and maintain multiple test environments internally. This limited test environment access impeded their ability to validate the integration of each application under test (AUT) with third-party architectures. Moreover, their third-party providers also had limited test environment access, which restricted the time and scope of joint integration testing.

With an enterprise-grade API Testing solution deployed in concert with leading service virtualization technologies, they were able to reduce the execution and verification time for their transaction regression test plan from 10 days to a half day. This testing is not only automated, but also quite extensive. For example, to test the Ignis system's integration with one business partner's trading system, Ignis's fully automated regression testing now covers 300 test scenarios in a near UAT-level approach-with 12,600 validation checkpoints per test run.
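As a sanity check on those numbers: 300 scenarios with 42 validation checkpoints apiece works out to 12,600 checkpoints per run. What makes checking at that scale routine is a data-driven harness that iterates scenarios and checkpoints mechanically. Here is a toy sketch; the scenario shape and the stand-in "trading system" are invented for illustration, not Ignis's actual suite.

```python
# Sketch of a data-driven regression harness: every scenario is replayed and
# every checkpoint counted. The scenario format and toy system under test are
# illustrative assumptions.
def run_regression(scenarios, system_under_test):
    """Execute every scenario and tally each checkpoint that passes or fails."""
    passed = failed = 0
    for scenario in scenarios:
        result = system_under_test(scenario["input"])
        for field, expected in scenario["checkpoints"].items():
            if result.get(field) == expected:
                passed += 1
            else:
                failed += 1
    return {"passed": passed, "failed": failed, "total": passed + failed}

def toy_trading_system(order):
    """Stand-in system under test: echoes the order back with a settled flag."""
    return dict(order, settled=True)

def build_scenarios(n_scenarios=300, n_fields=41):
    # n_fields echo checkpoints + 1 "settled" checkpoint = 42 per scenario.
    return [
        {"input": {"id": i, **{f"field_{j}": j for j in range(n_fields)}},
         "checkpoints": {**{f"field_{j}": j for j in range(n_fields)},
                         "settled": True}}
        for i in range(n_scenarios)
    ]
```

Running `run_regression(build_scenarios(), toy_trading_system)` tallies 300 × 42 = 12,600 checkpoints, the same order of magnitude the Ignis suite exercises on every run.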

Beyond addressing the original challenges posed by the project, the service virtualization and API testing solution has also enabled automated testing to occur all the way from the component/unit level to system integration. To achieve this impressive level of automation, testers fostered close relationships with the development team. Now, testers' role within the organization is elevated, and there's much more love and collaboration between development and testers.

Read more about service virtualization and API Testing at Ignis

More Service Virtualization Love Stories
Still warming up to the idea of bringing service virtualization into your own organization? Here are even more reasons why so many leading enterprises are now devoted to service virtualization:

  • Financial: Eliminating Third-Party Test Environment Access Fees: A leading brokerage firm needed to increase the scope and frequency of testing without incurring increased fees for accessing their banking partner's test environment. Service Virtualization enabled them to cut their dependency on the partner's test environment by simulating the behavior of partner services (including those communicating over a specialized FIX protocol).
  • Travel: Parallel Development of Highly-Interdependent Components: A global resort group needed to roll out a new heterogeneous, distributed system that involved numerous contractors developing interdependent components in parallel. Service Virtualization allowed the organization to eliminate development deadlocks that stemmed from this extreme interdependency. By virtualizing the expected behavior of "not yet implemented" components across multiple protocols and technologies (JSON, MQ, JMS, REST, SOAP, etc.), the organization enabled each contractor to start developing and testing their assigned components without waiting for dependencies.
  • Insurance: Reducing Infrastructure Costs for Staged Test Environments: An insurance company needed to establish seven distinct test environments for a new application. Each environment had to leverage data from over 20 back-end systems. This was not only complicated, but also costly: licensing the MQ broker that drove communications between the application under test and the back-end systems cost approximately 100,000 Euros per environment. Using Service Virtualization to emulate the interface to the back-end systems, they were able to cut the dependency on those systems and significantly reduce the costs associated with establishing the expected test environments.
  • Utilities: Facilitating Partner Integration & Validation: To establish the infrastructure needed to transact efficiently in a recently-deregulated energy market, this leading energy organization created a new message format and API to streamline communications related to energy delivery and administration. The project was on a strict deadline, so partners had to develop and test their integration with the new API while it was still being developed; to make this possible, the organization simulated the anticipated API behavior using Service Virtualization. They also automated the validation process, lending objectivity and traceability to the partner certification process.
  • Financial: Removing Test Data Management Bottlenecks for Agile: A financial services provider was migrating to Agile to support a continuous delivery model. They soon discovered that completing 2-week scrums on time would be impossible unless they reduced the lengthy wait time for accessing a test environment configured with the appropriate test data. Service Virtualization enabled them to provide immediate access to the necessary test environments; virtual environments with the appropriate data could be set up in hours rather than weeks. Since functional and performance testers can now easily access the same level of sophisticated data much earlier in each cycle, they have been able to expand test coverage and begin continuous regression testing.
  • Non-Profit: Cloud-Based Solution for Continuous Access to a Highly-Restricted Government System: An education portal application developed by a European non-profit organization links students to the higher education institutions where they wish to study, as well as to the government agency that helps them finance their education. When educational institutions want to develop and test transactions involving this portal, they need access to the behavior of the interconnected government agency's system-however, this system is not readily available for testing. Service Virtualization provides these institutions continuous, secure access to the government system behavior that is critical for completing thorough end-to-end tests against the portal application.
  • Telecom: Enabling Faster, Earlier, and More Complete Testing: To accelerate application release cycles, KPN needed to address a critical bottleneck in the testing process. Their end-to-end test scenarios interacted with dependencies controlled by other divisions and external entities, and gaining access to the required dependencies was a slow and frustrating process. Due to these test environment access constraints, testing efforts were regularly delayed and cut short. Service Virtualization enabled them to test faster, earlier, and more completely, enabling fully automated continuous testing.
  • Government: Service Virtualization in a VMware Environment: At one of NZ's largest government agencies (the Inland Revenue Department), software development procedures had exceeded their architectural limits; projects were taking longer than necessary and costing more due to the serial nature of the software development lifecycle. The result was production outages caused by issues that should have been resolved during earlier testing phases. Service Virtualization, working hand-in-hand with VMware, enables testing to commence at a much earlier stage and allows multiple projects to be tested in parallel. Not only does this reduce the time taken to complete projects, it also significantly improves their quality and reliability.
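Different as these stories are, most of them rest on the same core mechanism: match an incoming request against modeled patterns and play back a canned response, whatever the wire protocol (FIX, MQ, JMS, REST, SOAP, and so on). The sketch below shows that protocol-agnostic matching core; the patterns and messages are simplified illustrations, not any vendor's actual rule format.

```python
# Sketch of the request-matching core behind a virtual service: recorded
# patterns are checked in order and the first match's canned response is
# played back. Patterns and messages are illustrative assumptions.
import re

class PlaybackEngine:
    def __init__(self):
        self.rules = []  # (compiled pattern, canned response); first match wins

    def record(self, pattern, response):
        self.rules.append((re.compile(pattern), response))

    def respond(self, message):
        for pattern, response in self.rules:
            if pattern.search(message):
                return response
        return {"error": "no recorded behavior for this message"}

engine = PlaybackEngine()
# e.g. a simplified FIX-style new-order message (35=D) mapped to an ack
engine.record(r"35=D.*55=ACME", {"ack": True, "symbol": "ACME"})
engine.record(r"35=D", {"ack": False, "reason": "unknown symbol"})
```

Everything else a service virtualization platform adds, such as protocol adapters, stateful conversations, data-driven responses, and latency modeling, layers on top of this match-and-playback loop.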

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
