Balancing the Virtualization Equation

Get the most from your virtualized environment

Enterprises committed to a virtualization strategy need to ensure that management and automation of mission-critical IT systems and applications are included in their planning. Enterprises also need to establish procedures that allow them to maximize the benefits of consolidating to a virtualized platform and mitigate potential business risk across a landscape that has become abstract. Failure to do so will impact the success of projects and dilute the value of a virtualization strategy.

Spiraling energy costs, the pressure to squeeze extra IT power out of fixed data center real estate footprints, and environmental concerns have shifted virtualization from a commodity tool to a center-stage role in the IT strategy of many organizations.

The history of virtualization can be traced back to the 1970s, when mainframe computers could be virtually partitioned to host multiple guest machines. It proved an ideal environment in which to install and configure new operating platforms, upgrade existing systems, and give software developers a sandbox for isolation testing. In its 21st century incarnation, history has repeated itself, with virtualization usually starting life deep within the data center of most enterprises. IT operations and application development teams rapidly recognized the extra flexibility they could get from not needing to procure extra hardware to service ad hoc processing demands or for software testing.

With the shift from commodity to a center-stage role for virtualization, there is a corresponding shift in planning required to ensure that all IT layers in an enterprise are fully aligned to perform in a new virtualized landscape. In addition to ensuring that the underlying IT infrastructure components are in place each time a new virtual machine is provisioned, it's imperative that the business applications as well as the operational processes and procedures are fully established to provide the comprehensive set of services that end users rely on to do their jobs.

Factor
From an end-user or functional user perspective, whether an environment is virtualized or not is largely irrelevant. Such users simply expect their applications and programs to work - virtualization for them is a back-office, and therefore mostly unseen, technology. Planning for virtualization should strive to minimize apparent adverse impact on users' day-to-day activities.

Virtualization transforms a data center into a dynamic IT environment that can provide the flexibility and scalability capable of responding to the varying demands driven by a dynamic 24x7 global marketplace. However, while the ability to add and subtract processing capacity without needing to power up extra hardware offers enterprises greater agility, there are accompanying challenges that require addressing.

Factor
An organization's current system monitoring tools are probably very good at monitoring server statistics (like CPU utilization, I/O, etc.) and raising alarms if certain thresholds are exceeded. In a virtualized environment, such alarms should be expected to initiate action that can start, stop, or move virtual machines within the environment to help alleviate the detected resource exception. Planning should consider how system monitors can take actions that modify the virtual environment.
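The threshold-to-action pattern described above can be sketched in a few lines. This is a minimal illustration, not a real monitoring integration: the `VirtualEnvironment` class, its `start_vm` method, and the CPU threshold are all hypothetical stand-ins for whatever hypervisor management API an organization actually uses.

```python
# Minimal sketch: a system monitor whose alarms trigger virtualization
# actions instead of only paging an operator. VirtualEnvironment and
# start_vm are hypothetical stand-ins for a real hypervisor API.

CPU_THRESHOLD = 85.0  # percent utilization that raises an alarm

class VirtualEnvironment:
    """Hypothetical facade over a hypervisor management API."""
    def __init__(self):
        self.vms = []

    def start_vm(self, name):
        self.vms.append(name)
        return name

def on_alarm(metric, value, env):
    """Turn a monitoring alarm into a corrective action."""
    if metric == "cpu" and value > CPU_THRESHOLD:
        # Bring an extra virtual machine online to absorb load.
        return env.start_vm(f"worker-{len(env.vms) + 1}")
    return None  # alarm noted, no virtualization action needed

env = VirtualEnvironment()
started = on_alarm("cpu", 92.5, env)   # exceeds threshold: new VM
ignored = on_alarm("cpu", 40.0, env)   # below threshold: no action
```

The point of the sketch is the shape of the hook, not the policy: the alarm handler is where monitoring data crosses over into control of the virtual environment.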

As each new virtual machine is spawned, the IT operations team faces the challenge of recognizing that an extra machine is available and requires managing and monitoring. This same team also assumes responsibility for manually routing workload to this additional resource, continually checking systems performance, and being ready to respond to messages and resolve problems as and when they occur.

Factor
A long-running, complex business process is known to contain a large processing "spike" at a certain point. In a virtualized environment, additional virtual machines can be started just prior to the spike (and stopped just after) to provide additional processing horsepower. The orchestrator (personnel or product) of the business process should be expected to be sufficiently aware of the virtualized environment to note the additional virtual machine(s) and take advantage of them. Without that awareness, even with the flexibility to dynamically add horsepower, an important potential benefit of the virtualized environment is lost. Planning should look at how business process orchestrators can take actions that affect the virtual environment.
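The spike-aware orchestration above can be sketched as a planning step that wraps the known spike with scale-up and scale-down actions. The step names and the two-VM figure are illustrative assumptions, not part of any real product.

```python
# Minimal sketch of spike-aware orchestration: extra VMs are scheduled
# just before a known processing spike and released just after it.
# Step names and the extra-VM count are illustrative assumptions.

def plan_capacity(steps, spike_step, extra_vms=2):
    """Expand a linear business process into a plan that surrounds
    the spike step with scale-up and scale-down actions."""
    plan = []
    for step in steps:
        if step == spike_step:
            plan.append(("scale_up", extra_vms))    # just before the spike
            plan.append(("run", step))
            plan.append(("scale_down", extra_vms))  # just after the spike
        else:
            plan.append(("run", step))
    return plan

process = ["extract", "aggregate", "allocate", "report"]
plan = plan_capacity(process, spike_step="allocate")
```

An orchestrator without this awareness would simply run the four steps in sequence and leave the extra horsepower untapped.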

This increase in workload combined with the perennial lack of qualified, skilled personnel puts tremendous pressure on IT operations. Instead of continually trying to find, train, and retain staff, organizations need to incorporate the tribal operations management knowledge that has accumulated over many years into the fabric of their virtualized environments. Adopting an automated approach would not only reduce operational pressures; it would also mitigate business risk by reducing the exposure of critical systems and applications to unaccountable manual intervention.

Factor
Drilling down into the previous example - if personnel are responsible for orchestrating the business process, one can envision a very detailed and carefully written manual process document for them to follow to manage the spike, taking advantage of the established virtualized environment. The burden (what higher-value activity could a person be doing?) and risk (what if a person makes a mistake?) of such a manual procedure could be eliminated by using an automated orchestrator - but only so far as the orchestrator is aware of and can interact with and control the virtualized environment. Again, without the awareness, an important potential benefit of the virtualized environment is lost. Planning should work to convert or translate manual processes (to the greatest extent possible) into automated processes.

Bringing extra virtual machines online to cater for peak processing demands, optimizing the distribution of batch jobs to complete ahead of critical deadlines, and automatically responding to errors with corrective actions are just a few examples of workload management challenges arising in a virtualized world that can be simplified using automation. Beyond the infrastructure layer there's an equivalent set of tasks and procedures that have to be done to drive application processing, tasks that have traditionally relied on manual interaction, either by data center or end-user personnel. The virtualization of applications generates a similar set of challenges and requires equal attention if enterprises are going to realize benefits throughout their IT landscape.

In virtualized environments, the fixed relationships between hardware, systems, and applications no longer exist. Hardwired, prescribed associations, ranging from a command sequence in an operations handbook to fixed parameters embedded in a piece of application code, can result in different interpretations when presented in a virtualized world. Virtualization introduces an extra layer of abstraction between physical hardware devices and the software systems that an enterprise runs to support its business.

Factor
It's easy for a developer to write a program that runs well on a single server. However, without due consideration of the virtualized environment, it's all too likely that the same program won't run successfully across a landscape of virtual machines or hypervisors. Support for virtualized environments must be built into custom-developed code.
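One concrete form this awareness can take is sketched below: instead of hardcoding a single server name, the program asks its environment for the currently available workers and distributes work across however many exist. The inventory source here, a `VM_POOL` environment variable, is a hypothetical convention; a real deployment might query the hypervisor or a service registry instead.

```python
# Minimal sketch of "virtualization-aware" application code: the
# program discovers its workers at run time rather than assuming one
# fixed server. VM_POOL is a hypothetical variable maintained by the
# provisioning layer as VMs come and go.

import os

def available_workers():
    """Read the current worker list from the environment."""
    pool = os.environ.get("VM_POOL", "localhost")
    return [h for h in pool.split(",") if h]

def assign(tasks):
    """Round-robin tasks over however many workers exist right now."""
    workers = available_workers()
    return {t: workers[i % len(workers)] for i, t in enumerate(tasks)}

os.environ["VM_POOL"] = "vm-a,vm-b,vm-c"
placement = assign(["t1", "t2", "t3", "t4"])
```

The same program keeps working, with no code change, whether the pool holds one VM or twenty, which is exactly the property a single-server program lacks.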

At the IT infrastructure management layer, there are IT housekeeping and administrative tasks that need to be executed: backups, snapshots, database clean-ups, file-transfer handling, and starting and stopping VMs. At the business application layer, there are functional processes and procedures that need to be undertaken: sales data uploads, order processing, invoicing, logistics, production, analytics and forecasting, finance and accounting, HR and customer care. Bringing together the execution of these activities ensures that everything around business and IT processes are properly managed and maintained. The scope of activities required will usually go well beyond the capability of an individual business application or systems management solution. Enterprises need to manage the suite of all interfaces around their virtual environments. They also need to be able to integrate the real and virtual environments in such a way that they can fully leverage the breadth and the depth of functionality that can be derived from their core applications and operating platforms.

Factor
IT housekeeping and administrative applications certainly must be "virtualization-aware" - indeed, some of the IT housekeeping tasks listed above are included in various hypervisors (e.g., snapshots). Business applications such as ERP, CRM, BI and DW must also be aware - it would make no sense to bring another virtual machine online for a particular application if the application itself had no awareness of its virtualized environment. There's some opportunity for application consolidation in terms of the applications used for managing IT housekeeping, administration, and business applications. The distinctions have blurred between certain classes of applications (e.g., job schedulers, system managers, business process managers) to such a degree that one new application may be able to replace the functionality of two or more older applications (see the references to an "orchestrator" in other parts of this article). Planning must include the business applications and each one's unique requirements.

Forming logical associations and utilizing logical views when managing virtualized systems and applications will allow IT departments to achieve greater flexibility and agility. When seeking to automate IT housekeeping procedures through to business processes, such as financial period-end close, creating a centralized single set of policy definitions that have embedded parameter variables not only ensures consistency and transparency across all virtualized machines and hypervisors - it will also reduce maintenance and administration overheads.
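The idea of a single policy definition with embedded parameter variables can be sketched as follows. The policy string, the variable names, and the defaults are all illustrative; the point is that one centrally maintained definition is resolved per VM at run time instead of copying a slightly different script onto every machine.

```python
# Minimal sketch of a centralized policy definition with embedded
# parameter variables, resolved per VM at run time. The policy text
# and variable names are illustrative assumptions.

BACKUP_POLICY = "backup --target ${VM_NAME} --window ${WINDOW} --keep ${RETENTION}"

DEFAULTS = {"WINDOW": "02:00-04:00", "RETENTION": "14"}

def resolve(policy, **overrides):
    """Substitute parameter variables, letting per-VM values override
    the centrally maintained defaults."""
    params = {**DEFAULTS, **overrides}
    out = policy
    for key, value in params.items():
        out = out.replace("${" + key + "}", value)
    return out

cmd = resolve(BACKUP_POLICY, VM_NAME="vm-finance-01")
```

Changing the retention period then means editing one definition, not hunting down every copy of a backup script across the estate.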

Factor
Establishing a single metadata repository for such items as policy definitions, processing rules, and business processes is a positive step in any virtualized environment. If such a repository also holds data about the current state of play of the policies in force, which rules are in control, and processing status, then that data can be used in a predictive manner to proactively determine what virtual resources might be needed in the near term, and to take action to make those resources available. Effort should be spent planning how metadata can be used to allow proactive management of the virtual environment.
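Predictive use of such a repository can be sketched very simply: if the repository knows both the process definition and the current processing state, it can look one step ahead and report the provisioning gap. The repository shape, the step names, and the one-step-lookahead rule are all assumptions made for illustration.

```python
# Minimal sketch: a metadata repository holding both definitions and
# current state, used predictively to size near-term VM demand. The
# repository layout and lookahead rule are illustrative assumptions.

REPOSITORY = {
    "process": ["load", "validate", "post"],             # process steps
    "vm_demand": {"load": 1, "validate": 1, "post": 4},  # VMs per step
    "state": {"current_step": "validate", "vms_online": 1},
}

def predict_needed_vms(repo):
    """Look one step ahead in the process and report how many extra
    VMs should be provisioned before that step begins."""
    steps = repo["process"]
    idx = steps.index(repo["state"]["current_step"])
    if idx + 1 >= len(steps):
        return 0
    next_step = steps[idx + 1]
    needed = repo["vm_demand"][next_step] - repo["state"]["vms_online"]
    return max(needed, 0)

extra = predict_needed_vms(REPOSITORY)  # provision ahead of "post"
```

The calculation is trivial; what makes it possible is that definitions and live state sit in the same repository.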

Establishing the availability of virtual resources, determining current systems performance, and analysis of other metrics can be used at runtime to optimize the routing and dispatching of workloads. Process definitions can be dynamically configured using parameter overrides to run on the hypervisor server best suited to ensure end-user SLAs are satisfied.
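Metric-driven dispatching of this kind can be sketched as a scoring function over current host metrics. The host names, the metrics, and the scoring rule (free CPU headroom per queued job) are illustrative assumptions, not a recommendation of any particular heuristic.

```python
# Minimal sketch of runtime dispatching: current performance metrics
# for each hypervisor host are examined and the workload is routed to
# the host best placed to meet its SLA. Hosts and the scoring rule
# are illustrative assumptions.

def choose_host(metrics):
    """Pick the host with the most free CPU headroom per queued job."""
    def score(entry):
        host, m = entry
        return (100 - m["cpu"]) / (m["queue"] + 1)
    return max(metrics.items(), key=score)[0]

metrics = {
    "hyp-a": {"cpu": 80, "queue": 2},   # busy and backed up
    "hyp-b": {"cpu": 35, "queue": 0},   # plenty of headroom
    "hyp-c": {"cpu": 55, "queue": 1},
}
target = choose_host(metrics)
```

In a real workload automation product the parameter override would then direct the process definition to run on the chosen host, rather than on whichever server its author originally named.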

Factor
In the absence of an orchestrator to automate processing, system monitors can detect system events and raise alarms in a reactive fashion. Proactive and reactive attempts to modify the virtual environment are certainly valid. However, doing neither wastes some of the potential advantages of virtualization. Both proactive and reactive adjustments of the virtual environment should be planned for.

Securing and administering all process definitions in a centralized repository will support change control management. There's no need to manually check that script updates, necessary because a new version of a backup utility is being rolled out, have been propagated to all virtual machines. Critical activities that need to be run on virtual machines are protected against unauthorized updates and illegal use. Being able to maintain a record and report on all changes made to process definitions, as well as details of who executed what, where, when, and the outcome, supports enterprises in ensuring that their use of virtualization doesn't introduce additional operational risk and is compliant with IT governance strategy.
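The who-what-when-outcome record described above can be sketched as follows. The in-memory list stands in for durable, tamper-evident storage, and the field names are illustrative; the point is that every repository change produces an entry, so compliance reporting becomes a query rather than a hunt.

```python
# Minimal sketch of an audit trail for a centralized repository of
# process definitions: every change records who, what, when, and the
# outcome. The in-memory list stands in for durable storage.

from datetime import datetime, timezone

AUDIT_LOG = []

def record(user, action, definition, outcome):
    """Append an audit entry for a repository change."""
    AUDIT_LOG.append({
        "who": user,
        "what": f"{action}:{definition}",
        "when": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
    })

def changes_by(user):
    """Compliance query: everything a given user changed or ran."""
    return [e for e in AUDIT_LOG if e["who"] == user]

record("ops1", "update", "backup-script", "success")
record("ops2", "execute", "period-end-close", "success")
trail = changes_by("ops1")
```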

Factor
As highlighted earlier, automation provides a highly effective alternative to manual processes. If changes to the virtualized environment are automated (e.g., through predictive use of state data, automated response to alarms, and planned changes in a business process), then one expectation should be the existence of a good, solid audit trail of actions taken by the automation orchestrator. Planning for compliance is a must.

Conclusion
Instead of dusting down an old IT operations run book and updating it to support a virtualization strategy, enterprises need to realize that embedding knowledge and experience into automated procedures not only simplifies management and control of a virtualized world; it can also ensure smart decisions are taken at the right time in the right context. An automated approach translates into improved throughput, greater accuracy, fewer errors, and less risk. Putting technology to work to analyze resource utilization and respond instantaneously by provisioning extra resources in a virtualized environment enhances productivity and throughput.

More Stories By Alex Givens

Alex Givens is a Senior Solutions Architect for UC4 Software, Inc., makers of UC4 Workload Automation Suite. For 13 years, Alex has helped organizations improve the efficiency and effectiveness of their business processing. Alex has spoken on business process automation at many international, national and regional conferences.


