Ensuring Application Availability in a Virtual Environment

A successful shift to a virtualized data center requires meeting goals of cost-savings and high availability

Virtualization is quickly gaining traction in IT departments around the world. According to Symantec's recent Virtualization and Evolution to the Cloud survey, 76 percent of enterprises are at least discussing virtualization. In the wake of the recent recession, the benefits are too valuable for businesses to ignore: virtualization can reduce capital and operational expenditures, allow for faster deployment of computing resources, and simplify the management of business processes. Nor is it limited to servers; virtualization is becoming common across a variety of IT applications, including storage and desktops. Despite the inroads the technology has made in the data center, however, organizations have remained reluctant to virtualize business-critical applications.

According to the same survey, more than 50 percent of organizations implementing server virtualization plan to extend it to Web and database applications over the next 12 months. When it comes to business-critical applications, however, 40 percent of CEOs and 42 percent of CFOs are reluctant to make the leap. The most common concern keeping enterprises from making the transition is reliability, cited by 78 percent of survey respondents.

Virtualization Challenges
A successful shift to a virtualized data center requires meeting goals of both cost savings and high availability. Tight budgets mean that reduced staff and limited resources are the norm in IT departments today, and cost savings can only be realized if application uptime remains as high as possible. For the typical enterprise, the recovery time objective (RTO), the tolerable amount of downtime for a business-critical application, is less than one hour. In some cases no more than a few minutes can be tolerated, because downtime can mean millions of dollars in lost revenue or worker productivity. In light of these needs, IT staff should be aware of the following challenges inherent in a virtualized environment.
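
To put such targets in perspective, it helps to translate availability percentages into minutes of downtime per year. The arithmetic below is a simple illustration in Python; the thresholds are the commonly quoted "nines," not figures from the survey.

    # Back-of-the-envelope math: what an availability target means in yearly
    # downtime. Pure arithmetic; no assumptions beyond a 365-day year.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    def allowed_downtime_minutes(availability_pct: float) -> float:
        """Minutes of downtime per year permitted at a given availability level."""
        return MINUTES_PER_YEAR * (1 - availability_pct / 100)

    for pct in (99.0, 99.9, 99.99):
        print(f"{pct}% availability allows "
              f"{allowed_downtime_minutes(pct):,.0f} minutes of downtime/year")

    # 99.0%  -> 5,256 minutes (~3.7 days)
    # 99.9%  ->   526 minutes (~8.8 hours)
    # 99.99% ->    53 minutes -- roughly the sub-one-hour RTO noted above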

Consolidated Points of Failure
Virtualization can increase availability risks by consolidating points of failure onto fewer servers. Further complexity is introduced when high availability must be ensured for business-critical applications such as Exchange Server, SQL, SAP, and Oracle that are deployed on a combination of physical and virtual server nodes. For example, an ERP application may have middleware components running on a virtual server while the underlying database runs on a physical server. This combination of increased complexity and "putting all your eggs in one basket" can create single points of failure that, if not addressed, could disrupt business operations.
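
As a rough, purely illustrative sketch of the "eggs in one basket" effect (the failure probability below is invented, not drawn from the survey or any study), compare the chance that every application goes down at once when ten applications run on ten separate hosts versus on one consolidated host:

    # Hypothetical numbers, for illustration only.
    p_host_failure = 0.02  # assumed chance a given host fails in a year
    apps = 10

    # Spread across ten hosts, losing every app at once requires ten
    # independent failures; consolidated, one failure takes down all ten.
    p_all_down_spread = p_host_failure ** apps
    p_all_down_consolidated = p_host_failure

    print(f"P(all {apps} apps down at once), spread out:   {p_all_down_spread:.2e}")
    print(f"P(all {apps} apps down at once), consolidated: {p_all_down_consolidated:.0%}")

The expected number of individual outages is similar either way; what changes is the blast radius of a single failure, and that blast radius is precisely the single point of failure that must be designed around.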

Visibility Limitations
An additional risk posed by virtualization is the loss of visibility into virtualized applications for troubleshooting purposes. When application components such as the OS and drivers are encapsulated to ease portability, visibility into the state of those components is reduced.

Human Error
Because multiple tools from different vendors increase management complexity, human error is more likely in a virtualized environment. The risk is compounded by the fact that many high-availability tools are unable to monitor application health adequately. According to the Uptime Institute, a New York-based research and consulting organization that focuses on data-center performance, human error causes roughly 70 percent of the problems that plague data centers.

Take a Proactive Approach
To mitigate these risks and ensure high application availability, IT staff need to carefully consider their organization's approach to virtualization. Managing applications proactively, rather than simply reacting to problems as they occur, significantly improves uptime by heading off issues before they affect users.

Finding the right management software is the simplest way to ensure high availability, but most solutions from virtualization vendors fail to cover the full scope of an organization's needs. As you consider implementing a comprehensive high-availability solution, look for the following characteristics.

Extensive Support
Look for a solution that supports not just your hardware but also the different operating systems you run, including UNIX, Windows, and Linux, along with virtual platforms and a wide range of heterogeneous hardware configurations. Implementing one solution across all platforms reduces complexity and increases reliability, with the added benefit of minimizing training and administration costs.

Automated Failover
An effective solution will detect faults in an application and all its dependent components, including the associated database, operating system, network, and storage resources. In the event of an outage, the solution must be able to restart the application, connect it to the appropriate resources, and resume normal operations.
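
In outline, automated failover is a monitoring loop: probe the application and each of its dependencies, and on a failed probe, restart the application on a healthy node and re-attach its resources. The Python sketch below illustrates the pattern only; every function and resource name is a placeholder, not any vendor's actual API.

    import time

    CHECK_INTERVAL = 10  # seconds between health probes (placeholder value)

    def healthy(resource: str) -> bool:
        """Placeholder probe; a real agent checks a process, port, or mount."""
        return True  # stubbed so the sketch runs

    def fail_over(app: str, standby_nodes: list[str]) -> None:
        """Placeholder: stop the app, detach its resources, restart elsewhere."""
        print(f"Failing {app} over to one of {standby_nodes}")

    def monitor(app: str, dependencies: list[str], standby_nodes: list[str]) -> None:
        while True:
            # The application counts as up only if it and every dependency pass.
            if not all(healthy(r) for r in [app, *dependencies]):
                fail_over(app, standby_nodes)
            time.sleep(CHECK_INTERVAL)

    # Hypothetical usage, mirroring the ERP example earlier:
    # monitor("erp-middleware",
    #         dependencies=["oracle-db", "storage-mount", "prod-vlan"],
    #         standby_nodes=["node-b", "node-c"])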

Automated Disaster Recovery Testing
With servers and applications constantly changing, regular testing of the disaster recovery strategy is critical to guarantee a successful recovery in the event of a system or site-wide outage. Testing must be non-disruptive so that productivity is maintained while potential issues are identified.
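
A common pattern for non-disruptive testing is to bring up a copy of the application from replicated data on an isolated network, run validation checks, and tear everything down without touching production. The sketch below is schematic; every function is a placeholder for site- and vendor-specific steps.

    def snapshot_replica() -> str:
        """Placeholder: snapshot the replicated data at the recovery site."""
        return "snap-001"  # hypothetical snapshot id

    def start_isolated(snapshot_id: str) -> str:
        """Placeholder: boot the app from the snapshot on a fenced-off network."""
        return "test-instance-001"

    def validate(instance: str) -> bool:
        """Placeholder smoke tests: service responds, data is consistent."""
        return True

    def teardown(instance: str, snapshot_id: str) -> None:
        """Placeholder: discard the test instance and snapshot."""

    def dr_fire_drill() -> bool:
        snap = snapshot_replica()
        instance = start_isolated(snap)
        try:
            return validate(instance)  # production is never touched
        finally:
            teardown(instance, snap)

    # Run on a schedule and alert on failure:
    if not dr_fire_drill():
        raise RuntimeError("DR test failed -- investigate before a real outage")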

Multi-cluster Management and Reporting
Visibility is one of the most important goals in virtualization, but it remains difficult to achieve. Administrators need to be able to monitor, manage, and report on multiple clusters on different platforms, ideally from a single location. The proper reporting tools also make it easier to resolve problems and streamline the operations of your virtualized systems.
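
At its simplest, single-pane-of-glass reporting means polling each cluster's management interface, normalizing the results, and rolling them into one view. Below is a minimal sketch; the cluster names and status fields are invented stand-ins for real platform APIs.

    # Invented data standing in for per-platform management API responses.
    clusters = {
        "linux-web":   {"nodes_up": 4, "nodes_total": 4, "apps_faulted": 0},
        "windows-sql": {"nodes_up": 3, "nodes_total": 4, "apps_faulted": 1},
        "esx-erp":     {"nodes_up": 2, "nodes_total": 2, "apps_faulted": 0},
    }

    def report(clusters: dict) -> None:
        """Print one status line per cluster, flagging anything degraded."""
        print(f"{'CLUSTER':<12} {'NODES':>6} {'FAULTED':>8}")
        for name, status in sorted(clusters.items()):
            nodes = f"{status['nodes_up']}/{status['nodes_total']}"
            degraded = (status["apps_faulted"] > 0
                        or status["nodes_up"] < status["nodes_total"])
            flag = "  <-- attention" if degraded else ""
            print(f"{name:<12} {nodes:>6} {status['apps_faulted']:>8}{flag}")

    report(clusters)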

Conclusion
The risk of downtime to business-critical applications keeps many enterprises from realizing the full benefits of virtualization. While there are increased risks due to the consolidation of resources and lack of visibility, these can be managed by implementing a virtualization solution with robust features. By automating as much of the process as possible, and improving visibility into virtualized applications, businesses can avoid the pitfalls and enjoy increased productivity and efficiency in the data center.

About the Author

Dan Lamorena is Director of Product Marketing for Symantec's Storage and Availability Management Group, responsible for its Storage Management and High Availability products. He has spent the last five years at Symantec working with customers to optimize storage and improve application availability, and he is a frequent contributor to industry trade publications covering storage and disaster recovery.

Prior to joining Symantec, Dan held product marketing, business development, and strategy management roles with Cisco Systems, Electronic Arts, Ernst & Young, and mobileID.
