
Virtualization 101

With virtualization becoming intertwined with cloud computing, it is worth taking a step back and looking once again at what virtualization is, and is not. Virtualization and emulation are often compared, but there are important differences between them. Emulation provides the functionality of a target processor entirely in software. Its main advantage is that you can emulate one type of processor on any other type of processor; unfortunately, it tends to be slow. Virtualization, by contrast, takes a physical processor and partitions it into multiple contexts, all of which take turns running directly on the processor itself. Because of this, virtualization is faster than emulation.

Virtualization introduces an abstraction layer on top of resources, so that physical characteristics are hidden from the user. This abstraction layer takes care of resource allocation in order to meet the needs of the applications being run. In essence, virtualization enables you to create one or more virtual machines that run simultaneously alongside the host operating system. In its early days virtualization was more specialized and was used in vendor-controlled ways, such as IBM's LPAR approach. Virtualization vendors claim consolidation ratios of 4:1, with the potential to make up to 75 percent of new infrastructure available in a data center.

Chipset manufacturers are now optimising their processors to support virtualisation. Both Intel and AMD have extended the instruction sets of their newer processors to give increased support for virtualisation: AMD labels its technology 'AMD-V', while Intel's is called 'VT'. Expect even further advances. For example, the Intel Xeon 7400 'Dunnington' processors include something called FlexMigration, which allows virtual machines to be moved easily around a server pool. You will need to understand in detail the processors that any virtualised environment runs on, as they offer a key mechanism for optimisation.
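On Linux, one way to see whether a processor exposes these extensions is to look for the `vmx` (Intel VT) or `svm` (AMD-V) flags in `/proc/cpuinfo`. A minimal sketch in Python (the sample flags line below is illustrative, not from a real machine):

```python
def virtualization_support(cpuinfo_text):
    """Report which hardware virtualization extension, if any,
    the flags in a /proc/cpuinfo dump advertise."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

# On a real Linux host you would read the file directly:
# with open("/proc/cpuinfo") as f:
#     print(virtualization_support(f.read()))

sample = "flags\t\t: fpu vme de pse tsc msr vmx sse2"
print(virtualization_support(sample))  # Intel VT-x
```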

Key to the virtualisation architecture is the hypervisor, the virtual machine manager. A hypervisor is a program that allows multiple operating systems to share a single hardware host. Although each operating system appears to have the host’s processor, memory, and other resources all to itself, the hypervisor is actually controlling the host processor and resources. It allocates what is needed to each operating system in turn, and these allocations can be managed and tuned.
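The hypervisor's turn-taking can be pictured as proportional-share scheduling: each guest receives CPU in proportion to a configured weight, and those weights are what an administrator tunes. A toy model of that allocation (the VM names and weights are invented for illustration):

```python
def allocate_cpu(total_mhz, guests):
    """Split a physical CPU budget across guest VMs in proportion
    to their configured weights (a simplified share model)."""
    total_weight = sum(guests.values())
    return {name: total_mhz * weight / total_weight
            for name, weight in guests.items()}

# A 4 GHz budget shared by three guests with 2:1:1 weights
shares = allocate_cpu(4000, {"web-vm": 2, "db-vm": 1, "batch-vm": 1})
print(shares)  # {'web-vm': 2000.0, 'db-vm': 1000.0, 'batch-vm': 1000.0}
```

Real hypervisors add reservations, limits and work-conserving redistribution of idle shares on top of this basic proportional split.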

There are two types of hypervisor:

Type 1: Referred to as a bare-metal or native hypervisor. This type of hypervisor runs directly on the host hardware and hosts guest operating systems on top of it. Xen, VMware ESX, Parallels Server and Hyper-V are all examples of this type of hypervisor.

Type 2: This type of hypervisor runs within an ordinary host operating system. VMware Server (GSX), VirtualBox, and Parallels Workstation and Desktop are examples of this type of hypervisor. The Type 2 hypervisor is typically what people are referring to when they think of virtualisation.

There is a third element: paravirtualisation. This is when the operating system has been modified to be aware of the hypervisor it is running on, which makes the interaction and integration between the two much smoother and, in theory, less prone to errors. 'Enlightenment' in Windows Server 2008 is an example of this, as it enables the OS to interact directly with the hypervisor.

With computing resources at a premium in terms of space, power, location, and cost, the use of virtualised infrastructure is a very compelling proposition for existing servers and hardware that are under-utilised or have spare capacity. Virtualisation can be thought of as addressing one of the deficiencies of building a large infrastructure: resource utilisation. It also smooths over differences in OS infrastructure, software stacks and so on. With virtualisation, on-demand deployment of pre-configured virtual machines containing all the software required by a job becomes possible. Flexibility is also added to resource management and application execution; for example, running virtual machines can be controlled by freezing them (similar to check-pointing) or by migrating them live while keeping the virtualised infrastructure running.
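In practice, on-demand deployment of a pre-configured virtual machine often amounts to cloning a "golden" template image and booting the clone. A sketch using the libvirt command-line tools `virt-clone` and `virsh` (the template name, VM name and disk path are hypothetical; the commands are built as a dry run rather than executed):

```python
def deploy_commands(template, new_vm, disk_path):
    """Build the shell commands that would clone a template VM and
    boot the clone via libvirt (dry run: nothing is executed)."""
    return [
        ["virt-clone", "--original", template,
         "--name", new_vm, "--file", disk_path],
        ["virsh", "start", new_vm],
    ]

# Deploy a worker VM for a hypothetical job
for cmd in deploy_commands("golden-template", "job-42",
                           "/var/lib/libvirt/images/job-42.qcow2"):
    print(" ".join(cmd))
```

On a real host you would pass each command list to `subprocess.run`; returning the commands instead keeps the sketch inspectable.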

Indeed, this type of proposition is beginning to be thought of as a 'private cloud', in which virtualisation is used to deliver services across an organisation using the best practices of 'public clouds'. These include Infrastructure-as-a-Service and Platform-as-a-Service concepts, with virtualisation and PaaS providers releasing products and tools to enable the deployment and management of such private clouds. One example is GigaSpaces, which recently announced tighter integration with VMware, enabling GigaSpaces to dynamically manage and scale VMware instances and have them participate in the scaling of GigaSpaces-hosted applications. A PaaS cloud provider such as GigaSpaces is able to do this thanks to VMware's launch of vSphere, which opens up its VM product to allow management of both internal and external clouds. VMware is pitching vSphere as the first cloud OS, able to break separate hardware platforms up into the resources they offer.

In terms of virtualisation, there are also drawbacks to watch out for. When you communicate to and from a virtualised node, the packets need to pass through the virtualised communications layer. This is an overhead, and you should estimate a 10-20 percent performance hit for it. Furthermore, the number of virtual machines is not an indication of the speed or performance of your grid: running four virtual machines on a 4-core 4 GHz chip is not the same as having four dedicated 1 GHz chips, one per VM. Also, when one of your virtual machines is idle, any co-hosted VMs will get the majority of the processing power.
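The communications overhead is easy to budget for with back-of-envelope arithmetic. A small helper applying the 10-20 percent range above to a nominal native throughput (the 1000 requests/s figure is purely illustrative):

```python
def virtualized_throughput(native, overhead_low=0.10, overhead_high=0.20):
    """Expected throughput range once the virtualised communications
    layer taxes a native figure by 10-20 percent."""
    return native * (1 - overhead_high), native * (1 - overhead_low)

# e.g. a node that handles 1000 requests/s on bare metal
lo, hi = virtualized_throughput(1000.0)
print(lo, hi)  # roughly 800 and 900 requests/s
```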

As the machines are virtual, and using resource cycles that are not in use, you may find that certain nodes are not available when you need or expect them. To this end you should ensure you have the ability to burst when required and have virtualised management infrastructure in place to handle this.

If you intend to embrace virtualisation then you will need to review machine specifications, paying special attention to processors and RAM, and review storage and network infrastructure.

The positives, however, far outweigh the drawbacks: over time virtualisation will save money and, with all the innovation currently occurring around it, will make server administration easier.

Content adapted from my book "The Savvy Guide To HPC, Grid, DataGrid, Virtualisation and Cloud Computing", available on Amazon.

More Stories By Jim Liddle

Jim is CEO of Storage Made Easy. He has been a regular blogger at SYS-CON.com since 2004, covering mobile, grid, and cloud computing topics.
