

Containers Expo Blog: Article

Be Thankful for Service Virtualization & Simulated Test Environments

Test earlier, faster, and more completely

To reduce the risk of business interruption in today's interconnected systems, organizations need to test across a complex set of applications, such as SAP, mainframes, and third-party services. However, such systems are extraordinarily difficult to access for the purpose of testing. Service virtualization provides simulated test environments that eliminate these constraints, enabling organizations to test earlier, faster, and more completely. Here are 10 specific reasons to be thankful for service virtualization...

Thankful For Service Virtualization

10. Dev/QA control over the test environment, inclusive of dependencies
Development and QA often need to jump through hoops to get access to the test environments required to complete their development and testing tasks. Even worse, when the test environment is finally available, it typically lacks applications that lie beyond the organization's control. Service virtualization, with its test environment simulation technology, gives development and QA access to all the relevant application dependencies, including third-party applications, so they can create complete test environments on demand.
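At its simplest, a simulated dependency can be a lightweight stub service that returns canned responses in place of a system you cannot access. The sketch below uses only Python's standard library to stand in for a hypothetical third-party credit-check API; the endpoint path and payload are invented for illustration, not part of any real tool.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response standing in for a third-party credit-check API
# (the path and payload are hypothetical).
CANNED = {"/credit-check": {"status": "approved", "score": 720}}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {"error": "unknown"})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def start(port=8081):
    """Run the stub in a background thread so tests can call it on demand."""
    server = HTTPServer(("127.0.0.1", port), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = start()
    with urllib.request.urlopen("http://127.0.0.1:8081/credit-check") as r:
        print(json.loads(r.read()))  # the app under test sees a stable dependency
    srv.shutdown()
```

A real service virtualization product records and replays far richer behavior, but even a stub this small lets the application under test run without access to the live dependency.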

9. Scenario-based testing from the outside in
With today's highly-distributed systems, developers and testers need to invest a significant amount of effort to properly manipulate the environment that the application under test interacts with. As crunch time hits, the amount of work required often becomes prohibitive, resulting in incomplete testing. With service virtualization, it's fast and easy to immediately alter dependent system behavior so that tests can address a broad array of scenarios.
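One way to picture "altering dependent system behavior" is a simulated dependency whose responses can be switched per test scenario. The sketch below is illustrative rather than any particular tool's API; the scenario names and the shape of the inventory service are invented.

```python
# Behaviors a dependent inventory service might exhibit; names are illustrative.
SCENARIOS = {
    "happy_path": {"status": 200, "body": {"in_stock": 5}},
    "out_of_stock": {"status": 200, "body": {"in_stock": 0}},
    "timeout": {"status": 504, "body": {"error": "gateway timeout"}},
    "malformed": {"status": 200, "body": "not-json"},
}

class SimulatedDependency:
    """Stands in for the real inventory service during tests."""
    def __init__(self, scenario="happy_path"):
        self.set_scenario(scenario)

    def set_scenario(self, name):
        if name not in SCENARIOS:
            raise ValueError(f"unknown scenario: {name}")
        self._scenario = SCENARIOS[name]

    def get_stock(self, sku):
        # The real implementation would call the remote service; here we replay
        # the configured behavior so each scenario is exercised deterministically.
        return self._scenario["status"], self._scenario["body"]

dep = SimulatedDependency()
assert dep.get_stock("SKU-1") == (200, {"in_stock": 5})
dep.set_scenario("timeout")
assert dep.get_stock("SKU-1")[0] == 504
```

Because switching scenarios is a one-line change rather than an environment rebuild, edge cases such as timeouts and malformed payloads get tested instead of skipped at crunch time.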

8. Reduce the risks of project failure
It's well known that delaying quality efforts until the end of the project places the entire project at risk, not only for missed deadlines and go-to-market dates, but also for significant business risk. Using simulated test environments allows for continuous testing much earlier in the SDLC, which significantly reduces the organization's exposure to risk.

7. Relief from large, complex data management scenarios
Managing and resetting data from the database perspective requires considerable setup and teardown time. Service virtualization gives you granular control of test data at the component level. This allows the team to start testing earlier, and frees up resources previously required for test data management.
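A rough sketch of component-level test data control: instead of populating and resetting a shared database, each test selects a named data fixture that the simulated component serves directly. The fixture names, fields, and values below are made up for illustration.

```python
# Per-test data fixtures served by a simulated account component
# (the account states and fields are hypothetical).
FIXTURES = {
    "empty_account": {"balance": 0, "transactions": []},
    "overdrawn": {"balance": -120, "transactions": [{"amount": -120}]},
}

class SimulatedAccountService:
    """Stands in for the real account service, serving a chosen fixture."""
    def __init__(self, fixture):
        self._data = FIXTURES[fixture]

    def balance(self):
        return self._data["balance"]

    def transactions(self):
        return list(self._data["transactions"])

# Each test selects exactly the data state it needs; there is no shared
# database to populate or reset between runs.
assert SimulatedAccountService("overdrawn").balance() == -120
```

The setup and teardown that would normally bracket a database-backed test collapses into choosing a fixture, which is what frees the team to start testing earlier.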

6. Performance testing under variable load from dependent systems
There's no doubt that server virtualization technology has enabled broader access for performance testing. However, the instability of this environment does not allow for consistent testing. Moreover, server virtualization is not applicable for applications that lie beyond the organization's control. Service virtualization's simulated test environments not only allow for discrete independent control over each endpoint, but also enable any permutation of endpoints to be orchestrated in the various ways needed to mimic realistic variable load from dependent systems.

5. Freedom to test early, getting the big showstoppers out of the way
When the team has early dev/test access to a simulated test environment, critical security, performance, and reliability issues surface earlier, when they are exponentially faster, easier, and cheaper to fix. This early identification and resolution of defects allows for more complete testing later in the lifecycle and increases the prospects of meeting schedule and budget targets.

4. Simulate the performance of mobile applications
The biggest concern around mobile applications is variable performance across different provider networks. Service virtualization can simulate network conditions (e.g., latency, error conditions, sporadic connectivity), allowing teams to test under a realistic spectrum of real-world conditions.
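Network condition simulation can be sketched as a wrapper that injects latency, jitter, and sporadic connection drops around calls to a backend. The profile values below are illustrative, not measurements of any real provider network.

```python
import random
import time

class NetworkProfile:
    """Simulated mobile-network conditions (parameter values are illustrative)."""
    def __init__(self, latency_s=0.05, jitter_s=0.02, drop_rate=0.1):
        self.latency_s = latency_s    # fixed one-way delay, seconds
        self.jitter_s = jitter_s      # extra random delay, seconds
        self.drop_rate = drop_rate    # probability a call fails outright

    def call(self, fn, *args):
        # Inject sporadic failures and latency around the real call.
        if random.random() < self.drop_rate:
            raise ConnectionError("simulated dropped connection")
        time.sleep(self.latency_s + random.uniform(0, self.jitter_s))
        return fn(*args)

# The same backend call can be exercised under a "congested" profile
# without changing the application code or leaving the test environment.
congested = NetworkProfile(latency_s=0.2, jitter_s=0.1, drop_rate=0.3)
```

Sweeping the profile parameters across a grid of values is how a test suite covers the spectrum of conditions a mobile app will actually meet in the field.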

3. Make Agile teams truly agile
It's widely accepted that testing has become a casualty of iterative development processes. Incomplete and evolving systems limit the depth and breadth of tests that dev and QA are able to execute. Additionally, the challenge of accessing a realistic test environment typically delays testing until late in each iteration. Service virtualization's test environment simulation eliminates these barriers by providing a realistic, complete test environment on demand, allowing Agile (or Agile-ish) teams to get to "done."

2. Test from the perspective of an environment, not just the app
The migration to cloud/SaaS applications, as well as SOA/composite applications, has distributed dependencies to a previously unfathomable extent. Service virtualization technologies give developers and testers visibility into, and control over, these "dependencies gone wild." They 1) paint a complete picture of the many dependencies associated with a test environment; 2) provide flexible access to a complete test environment (including the behavior of dependencies such as APIs and third-party applications); and 3) help the team identify evolving environment conditions that impact their test and service virtualization assets, and automatically refactor those assets for fast, intelligent updating.

1. Significantly reduce CapEx and OpEx associated with test infrastructure
Although server virtualization can help reduce the CapEx associated with test environments, it applies only to applications that are under your organization's control. Extending staged environments is extraordinarily costly, and the OpEx associated with them is a significant deterrent given the total cost of ownership. Service virtualization and its test environment simulation technologies put control in the hands of the end users (dev/QA) and eliminate the need for superfluous hardware.

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
