The Next Generation of Test Environment Management

Application-Behavior Virtualization

Traditional hardware and OS virtualization technology reduces software development/testing infrastructure costs and increases access to constrained systems. Yet it's not always feasible to leverage hardware or OS virtualization for many large systems such as mainframes and ERPs. More pointedly, configuring and maintaining the environment and data needed to support development and test efforts still requires considerable time and resources. As a result, keeping a complex staged environment in sync with today's constantly evolving Agile projects is a time-consuming, never-ending task.

Complementing traditional virtualization, Application-Behavior Virtualization (ABV) provides a new way for developers and testers to exercise their applications in incomplete, constantly evolving, and/or difficult-to-access environments. Rather than virtualizing entire applications and/or databases, Application-Behavior Virtualization focuses on virtualizing only the specific behavior that is exercised as developers and testers execute their core use cases. Beyond "service virtualization," it extends across all aspects of composite applications - services, mainframes, web and mobile device UIs, ERPs, ESB/JMS, legacy systems, and more.

This new breed of virtualization radically reduces the configuration time, hardware overhead, and data management efforts involved in standing up and managing a realistic and sustainable dev/test environment.

The Complexity of Quality
Today's complex, interdependent systems wreak havoc on parallel development and functional/performance testing efforts - significantly impacting productivity, quality, and project timelines. As systems become more complex and interdependent, development and quality efforts are further complicated by constraints that limit developer and tester access to realistic test environments. These constraints often include:

  • Missing/unstable components
  • Evolving development environments
  • Inaccessible third-party/partner systems and services
  • Systems that are too complex for test labs (mainframes or large ERPs)
  • Internal and external resources with multiple "owners"

The scope of what needs to be tested is increasing exponentially. With multiple new interfaces and ways for people to access core technology, systems and architectures have grown broader, larger, and more distributed - with multiple endpoints and access points. For example, you might have a thick client, a web browser, a device, and a mobile application all accessing the same critical component. Not surprisingly, testing in this environment has become very difficult and time consuming.

Furthermore, the number and range of people involved with software quality is rising. Advancements in development methodologies such as Agile are drawing more and more people into quality matters throughout the SDLC. For instance, business analysts are increasingly involved with user acceptance testing, QA has become responsible for a broader and more iterative quality cycle, and the development team is playing a more prominent role in software quality and validation. Today's large, distributed teams only amplify this trend.

Also increasing are the permutations of moving parts - not only hardware and operating systems, but also client/server system upgrades, patches, and dependent third-party applications. As service orientation broke apart many monolithic applications, it also multiplied and distributed the connections and integration points involved in executing a business process.

Hardware and OS Virtualization Lowers Cost & Increases Access - Yet Significant Gaps Remain
In an attempt to provide all of the necessary team members ubiquitous access to realistic dev/test environments in light of these complexities, many organizations have turned to hardware and OS virtualization. Virtualizing the core test foundations - specific operating systems, configurations, platforms, etc. - has been a tremendous step forward for dev/test environment management. This virtualization provides considerable freedom from the live system, simultaneously reducing infrastructure costs and increasing access to certain types of systems. Moreover, leveraging the cloud in concert with virtualization provides nearly unlimited capacity for scaling dependent systems.

Nevertheless, in terms of development or test environments, some significant gaps remain. First, some assets cannot be easily virtualized. For example, it's often unfeasible to leverage hardware or OS virtualization technology for large mainframe applications, third-party applications, or large ERPs.

Moreover, even when virtualization can be completed, you still need to configure and manage each of those applications on top of the virtualized stack. Managing and maintaining the appropriate configuration and data integrity for all the dependent systems remains an onerous and time-consuming task. It is also a task that requires outside help - you will inevitably be relying on other groups, such as operations or DevOps, to assist with at least certain aspects of environment configuration and management.

Application-Behavior Virtualization reduces this configuration and data management overhead by enabling the developer or tester to rapidly isolate and virtualize just the behavior of the specific dependent components that they need to exercise in order to complete their end-to-end transactions. Rather than virtualizing entire systems, you virtualize only specific slices of dependent behavior critical to the execution of development and testing tasks.

It is completely feasible to use the cloud for scalability with Application-Behavior Virtualization. Nevertheless, since you're virtualizing only the specific behavior involved in dev/test transactions (not entire systems), the scope of what's being virtualized is diminished... and so is the need for significant incremental scalability.

What Is Application-Behavior Virtualization?
Application-Behavior Virtualization is a more focused and efficient strategy for eliminating the system and environment constraints that impede the team's ability to test their heterogeneous component-based applications. Instead of trying to virtualize the complete dependent component - the entire database, the entire third-party application, and so forth - you virtualize only the specific behavior that developers and testers actually need to exercise as they work on their particular applications, components, or scenarios.

For instance, instead of virtualizing an entire database (and performing all associated test data management as well as setting up the database for each test session), you monitor how the application interacts with the database, then you virtualize the related database behavior (the SQL queries that are passed to the database, the corresponding result sets that are returned, and so forth). This can then be accessed and adjusted as needed for different development and test scenarios.
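
As a rough sketch of this idea (not a description of any particular product's implementation), a captured database interaction can be reduced to a mapping from observed queries to recorded result sets. The class, query, and values below are hypothetical:

```python
# Minimal sketch of database-behavior virtualization: instead of standing up
# the real database, replay the result sets that were observed for the
# queries the application actually issues. All names here are hypothetical.

class VirtualDatabase:
    """Replays recorded query -> result-set pairs captured from a live run."""

    def __init__(self):
        self._recordings = {}   # normalized SQL -> list of result rows

    def record(self, sql, rows):
        self._recordings[self._normalize(sql)] = rows

    def execute(self, sql):
        key = self._normalize(sql)
        if key not in self._recordings:
            raise LookupError(f"No recorded behavior for query: {sql}")
        return self._recordings[key]

    @staticmethod
    def _normalize(sql):
        # Collapse whitespace and case so trivially different queries match.
        return " ".join(sql.strip().lower().split())


# Capture phase: behavior observed while the real database was available.
vdb = VirtualDatabase()
vdb.record("SELECT id, balance FROM accounts WHERE id = 42",
           [(42, 1500.00)])

# Test phase: the application code is pointed at the virtual database.
rows = vdb.execute("select id, balance from accounts where id = 42")
assert rows == [(42, 1500.00)]
```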

To start, you designate which components you want to virtualize, then - as the application is exercised - the behavior of the associated transactions, messages, services, etc., is captured in what we call a "virtual asset." You can then configure this virtual asset by parameterizing its conditional behavior, performance criteria, and test data. This virtual asset can then emulate the actual behavior of the dependent system from that point forward, even if the live system is no longer accessible for development and testing.

Test data can be associated with these virtual assets, reducing the need for a dependent database as well as the effort of configuring and managing one that, when shared, is easily corrupted.

By applying Application-Behavior Virtualization in this manner, you can remove the dependency on the actual live system/architecture while maintaining access to the dependent behavior. This ultra-focused approach significantly reduces the time and cost involved in managing multiple environments as well as complex test data management.

What Does Application-Behavior Virtualization Involve?
Application-Behavior Virtualization is achieved via the following phases:

  • Capture or model the real behavior of dependent systems
  • Configure the virtualized asset to meet demands of the test scenarios
  • Provision the virtualized asset for the appropriate team members or partners to access and test on their schedule

Phase 1: Capture
Real system behavior is captured by using monitors to record live transaction details against the system under test, by analyzing transaction logs, or by modeling behavior from a simple interface.

The intent here is to capture the behavior and performance of the dependent application for the system under test and leverage that behavior for development and testing efforts. This capturing can be done in three ways:

  1. If you have access to the live system, you can capture behavior by monitoring live system traffic. A proxy observes the messages exchanged with the dependent system, and the observed behavior is then represented in a virtualized asset. This capturing can cover simple or composite behavior (e.g., a call to transfer funds on one endpoint can trigger an account balance update on another). A minimal sketch of this recording-proxy approach appears after this list.
  2. If you want to emulate the behavior represented in transaction logs, virtual assets can be created by analyzing those logs. This is a more passive (and less politically volatile) approach to capturing the system behavior.
  3. If you're working in an environment that is evolving to include new functionality, you might want to model the behavior of the "not yet implemented" functionality within the Application-Behavior Virtualization interface. Leveraging the broad scope of protocol support available to facilitate modeling, you can rapidly build a virtual asset that emulates practically any anticipated behavior. For instance, you can visually model various message formats such as XML, JSON, and various legacy, financial, healthcare, and other domain-specific formats.
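
As a toy illustration of the first capture option, the sketch below stands up a recording proxy in front of a plain HTTP dependency and saves each observed request/response pair to a "virtual asset" file. The upstream URL, port, and file name are assumptions made for the example; real ABV tooling supports far more protocols and message formats than simple HTTP GETs.

```python
# Toy recording proxy: sits between the application and a dependent HTTP
# service, forwards each request, and saves the observed request/response
# pairs so they can later be replayed as a "virtual asset".
# UPSTREAM, the port, and the file name are hypothetical.

import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:9000"      # the real dependent service
RECORDING_FILE = "virtual_asset.json"
recordings = {}                         # request path -> (status, body)


class RecordingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the live dependency.
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            status = upstream.status
            body = upstream.read()

        # Capture the observed behavior (assumed UTF-8 text) for later replay.
        recordings[self.path] = (status, body.decode("utf-8"))
        with open(RECORDING_FILE, "w") as f:
            json.dump(recordings, f, indent=2)

        # Pass the real response back to the caller unchanged.
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RecordingProxy).serve_forever()
```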

Phase 2: Configure
The virtualized asset's behavior can be fine-tuned, including performance, data source usage, and conditional response criteria.

After you use any of the three methods above to create a virtual asset, you can configure that asset to fine-tune or extend the behavior it emulates. For instance, you can apply Quality of Service metrics to alter how the asset behaves from a performance perspective (timing, latency, and delay). You can also apply and modify test data for each particular asset to reproduce specific conditions critical for completing dev/test tasks. For example, you can configure various error and failure conditions that are difficult to reproduce with real systems. By adding data sources and providing conditional response criteria, you can tune the virtualized asset to perform as expected - or as unexpected (for negative testing).
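
Conceptually, that configuration might look like the sketch below, where a recorded response is wrapped with latency and failure settings so the same capture can serve both happy-path and negative tests. The class, parameters, and payloads are hypothetical, not an actual ABV API.

```python
# Hypothetical sketch of the "configure" phase: a recorded response is wrapped
# with performance criteria (artificial latency) and conditional failure
# behavior so the virtual asset can reproduce hard-to-stage conditions.

import random
import time


class VirtualAsset:
    def __init__(self, recorded_response, latency_seconds=0.0, failure_rate=0.0):
        self.recorded_response = recorded_response
        self.latency_seconds = latency_seconds   # emulated response time
        self.failure_rate = failure_rate         # fraction of calls that error

    def respond(self):
        time.sleep(self.latency_seconds)         # apply the QoS/latency setting
        if random.random() < self.failure_rate:  # inject a failure condition
            raise ConnectionError("Simulated dependency failure")
        return self.recorded_response


# A happy-path asset and a negative-testing variant built from the same capture.
fast_asset = VirtualAsset({"status": "OK"}, latency_seconds=0.05)
flaky_asset = VirtualAsset({"status": "OK"}, latency_seconds=2.0, failure_rate=0.25)
```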

Phase 3: Provision and Test
The environment is then provisioned for secure access across teams and business partners. The virtualized asset can then be leveraged for testing.

Once a virtualized asset is created, it can be provisioned for simplified, uniform access across teams and business partners - either locally or globally (on a globally accessible server, or in the cloud). It can then be used in unit, functional, and performance tests. Since virtual assets leverage a wide array of native protocols, they can be accessed for manual or automated testing by any test suite or framework, including Parasoft Test, the HP Quality Center suite, the IBM Rational Quality Management suite, and Oracle ATS. It is also easy to scale virtualized assets to support large-scale, high-throughput load and performance tests.
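
The sketch below shows the general pattern using plain Python unittest rather than any of the suites named above; the virtual-asset URL, endpoint, and payload are hypothetical. The test exercises application code that has been pointed at the provisioned virtual asset instead of the live dependency.

```python
# Sketch of "provision and test": the test runs against a provisioned virtual
# asset's URL instead of the live dependency. URL and payload are hypothetical.

import json
import unittest
import urllib.request

VIRTUAL_ASSET_URL = "http://test-env.example.com/virtual/payment-service"


def check_payment_status(order_id, base_url=VIRTUAL_ASSET_URL):
    """Application code under test; the dependency URL is injectable."""
    with urllib.request.urlopen(f"{base_url}/orders/{order_id}") as resp:
        return json.loads(resp.read())["status"]


class PaymentStatusTest(unittest.TestCase):
    def test_completed_order(self):
        # Runs against the virtual asset, so no scheduling of the real
        # payment service (and no per-transaction fees) is required.
        self.assertEqual(check_payment_status(42), "COMPLETED")


if __name__ == "__main__":
    unittest.main()
```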

Even after the initial provisioning, these virtual assets are still easily modifiable and reusable to assist you in various dev/test scenarios. For instance, one of your test scenarios might access a particular virtual asset that applies a certain set of conditional responses. You can instantly construct an additional virtual asset that inherits those original conditions and then you can adjust them as needed to meet the needs of a similar test scenario.

How Application-Behavior Virtualization Speeds Testing and Cuts Costs: Three Common Use Cases
To conclude, let's look at how organizations have successfully applied Application-Behavior Virtualization to address dev/test environment management challenges in three common contexts:

  1. Performance/capacity-constrained environment
  2. Complex, difficult-to-access systems (mainframes, large ERPs, 3rd party systems)
  3. Parallel development (Agile or other iterative processes)

Performance/Capacity-Constrained Environments
Staged environments frequently lack the infrastructure bandwidth required to deliver realistic performance. Placing multiple virtualized applications on a single piece of hardware can increase access to a constrained resource, but the cost of this increased access is often degraded performance. Although the increased access could technically enable the execution of performance and load tests, the results typically would not reflect real-world behavior, significantly undermining the value of such testing efforts.

Application-Behavior Virtualization allows you to replicate realistic performance data independent of the live system. Once you create a virtual asset that captures the current performance, you can adjust the parameters to simulate more realistic performance. Performance tests can then run against the virtual asset (with realistic performance per the Quality of Service agreement) rather than the staged asset (with degraded performance).

Controlling the virtual asset's performance criteria is simply a matter of adjusting controls for timing, latency, and delay. In addition to simulating realistic behavior, this can also be used to instantly reproduce performance conditions that would otherwise be difficult to set up and control. For instance, you can simulate various levels of slow performance in a dependent component, then zero in on how your application component responds to such bottlenecks.

Even when it is possible to test against systems that are performing realistically, it is often not feasible to hit various components with the volume typical of effective load/stress tests. For example, you might need to validate how your application responds to extreme traffic volumes simulating peak conditions, but how do you proceed if your end-to-end transactions pass through a third-party service that charges per-transaction access fees?

If your performance tests pass through a component that you cannot (or do not want to) access under extreme load testing conditions, Application-Behavior Virtualization enables you to capture its behavior under a low-volume test (e.g., a single user transaction), adjust the captured performance criteria as desired, then perform all subsequent load testing against that virtualized component instead of the actual asset. In the event that the constrained component is not available for capture, you can create a virtual asset from scratch - using Application-Behavior Virtualization visual modeling interfaces to define its expected behavior and performance.
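
As a rough sketch under those assumptions (the URL, concurrency, and request count below are hypothetical), a simple load driver can then hammer the virtualized component instead of the fee-charging live service:

```python
# Rough sketch of load testing against a virtualized component: once a single
# low-volume transaction has been captured and its latency parameterized,
# high-volume load is driven against the virtual asset rather than the
# fee-charging third-party service.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

VIRTUAL_ASSET_URL = "http://test-env.example.com/virtual/credit-check"


def one_transaction(_):
    # Time a single request against the virtual asset.
    start = time.perf_counter()
    with urllib.request.urlopen(VIRTUAL_ASSET_URL) as resp:
        resp.read()
    return time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=50) as pool:
        timings = list(pool.map(one_transaction, range(1000)))
    print(f"avg response time: {sum(timings) / len(timings):.3f}s")
```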

Complex, Difficult-to-Access Systems (Mainframes, Large ERPs, Third-Party Systems)
With large complex systems (mainframes, large ERPs, third-party systems), multiple development and test teams are commonly vying for limited system access for testing. Most of these systems are too complex for a test lab or a staged environment. To exercise end-to-end transactions involving these components, teams usually need to schedule (and pay for) access to a shared resource. This approach commonly causes test efforts to be delayed and/or prevents the team from performing the level and breadth of testing that they would like. For iterative development processes (e.g., Agile), the demand for frequent and immediate testing increases the severity of these delays and fees exponentially.

Even if organizations manage to use virtualization for these complex systems, proper configuration for the team's distinct testing needs would require a tremendous amount of work. And once that obstacle is overcome, another is right on its heels: developing and managing the necessary set of test data can also be overwhelming.

When teams use Application-Behavior Virtualization in such contexts, they only need to access the dependent resources long enough to capture the specific functionality related to the components and transactions they are working on. With this behavior captured in virtual assets, developers and testers can then access it continuously, allowing them to exercise end-to-end transactions at whatever time they want (without scheduling) and as frequently as they want (without incurring exorbitant transaction/access fees).

Parallel Development (Agile or Other Iterative Processes)
Even for simple applications, providing continued access to a realistic test environment can be challenging for teams engaged in parallel development (Agile or other iterative processes). A wide range of team members - developers, testers, and sometimes business analysts - all need easy access to a dev/test environment that evolves in sync with their application. If the team took the traditional virtualization route here, they would not only face all the initial setup overhead, but also be mired in constant work to ensure that the virtualized systems remain in step with the changes introduced in the latest iteration. When the team ends up waiting for access to dependent functionality, agility is stifled.

Application-Behavior Virtualization reduces these constraints and associated delays by giving developers and testers the ability to rapidly emulate the needed behavior rather than having to wait for others to upgrade, configure, and manage the dependent systems. Even if anticipated functionality or components are not yet implemented, their behavior can be modeled rapidly, then deployed so team members can execute the necessary end-to-end transactions without delay. And if the dependent functionality recently changed, previously captured behavior can be easily modified, either by re-capturing key transactions or by adjusting behavior settings in a graphical interface (without scripting or coding).

For example, many organizations are developing mobile applications, and this development is typically performed by a separate mobile development team. Since mobile applications commonly depend on core application components developed and maintained by other teams, the mobile team is often delayed as they wait for the other teams to complete work on the core components that their own mobile apps need to interact with. Application-Behavior Virtualization can eliminate these delays by allowing the mobile development team to emulate the behavior of the dependent components even if the actual components are incomplete, evolving, or otherwise difficult to access during the parallel development process.
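
As a hypothetical sketch of that scenario, the stub below hand-models the behavior of a core profile service that the mobile team depends on but that another team has not yet finished. The endpoint path, port, and response fields are invented for illustration; the point is that the expected behavior is defined up front so mobile development and testing can proceed in parallel.

```python
# Hand-modeled stub of a not-yet-implemented core component. The mobile team
# runs its app and tests against this stub until the real service exists.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class ModeledCoreComponent(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/v2/profile/"):
            user_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"userId": user_id,
                               "displayName": "Test User",
                               "loyaltyTier": "GOLD"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 8081), ModeledCoreComponent).serve_forever()
```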

Key Takeaways
Leveraging Application-Behavior Virtualization, teams reduce the complexity and the costs of managing multiple environments while providing ubiquitous access for development and test. Application-Behavior Virtualization helps you:

  • Reduce infrastructure costs
  • Improve provisioning/maintenance of test environments
  • Increase test coverage
  • Reduce defects
  • Improve predictability/control of software cycle times
  • Increase development productivity
  • Reduce third-party access fees

More Stories By Wayne Ariola

Wayne Ariola is Vice President of Strategy and Corporate Development at Parasoft, a leading provider of integrated software development management, quality lifecycle management, and dev/test environment management solutions. He leverages customer input and fosters partnerships with industry leaders to ensure that Parasoft solutions continuously evolve to support the ever-changing complexities of real-world business processes and systems. Ariola has more than 15 years of strategic consulting experience within the technology and software development industries. He holds a BA from the University of California at Santa Barbara and an MBA from Indiana University.
