The Benefits of Virtualization

What does the latest Sandy Bridge mean for virtualization in the central office?

Those familiar with deploying virtual machines (VMs) know that ensuring performance means tying each VM to dedicated physical resources. As the demand for data-intensive virtualized and cloud solutions continues to increase, more powerful server platforms will be required to deliver this performance without significantly multiplying hardware infrastructure for every VM.

The new Intel Sandy Bridge series (Intel's Xeon E5-2600 processor family) is ideally suited to enabling more powerful and efficient virtualized solutions for high-throughput, processing-intensive communications applications. This latest dual-processor architecture features a higher core count and improved I/O and memory performance, allowing more virtual machines to run on a single physical platform. Virtualization can be extremely memory-intensive, as more VMs typically require more total system memory, and to keep performance predictable and VMs easy to manage, each one usually needs at least one physical processor core. The Sandy Bridge E5-2600 architecture enables individual physical servers to support greater numbers of virtualized appliances, thereby consolidating hardware for lower operational costs, guarding against VM sprawl, and simplifying the transition to the cloud with room to scale up across multiple cores.
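To make the core-per-VM point concrete, here is a minimal sketch of pinning each virtual CPU of a guest to its own physical core. It assumes the compute node runs a libvirt-managed hypervisor such as KVM (the article does not specify one); the guest name and the one-core-per-vCPU mapping are illustrative assumptions.

```python
# Sketch: give each vCPU of a guest a dedicated physical core via the
# libvirt Python bindings. The guest name "vprobe-01" is hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")        # local hypervisor on the compute node
dom = conn.lookupByName("vprobe-01")         # hypothetical running guest

host_cpus = conn.getInfo()[2]                # physical CPUs on this node
for vcpu in range(dom.maxVcpus()):
    # Boolean mask with exactly one core enabled, so each vCPU gets its own
    # core (assumes the guest has no more vCPUs than the host has cores).
    cpumap = tuple(i == vcpu for i in range(host_cpus))
    dom.pinVcpu(vcpu, cpumap)

conn.close()
```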

The Benefits of Virtualization
Modern carrier-grade platforms offer unprecedented amounts of processing, memory, and network I/O resources. For developers, though, these resources come with a mandate to make the most effective use of modern platforms through scaling and other techniques. Through the intelligent use of carrier-class virtualization, developers can create highly scalable platforms and often eliminate unnecessary over-provisioning of resources for peak usage.

Current advances in multicore processors, cryptography accelerators, and high-throughput Ethernet silicon make it possible to consolidate what previously required multiple specialized server platforms into a single private cloud. 4G wireless deployments, HD-quality video to all devices, the continuing transition to VoIP technologies, increased security concerns, and power efficiency requirements are all driving the need for more flexible solutions.

Deploying a private cloud with virtual machine infrastructure turns the hardware into a pool of resources that can be provisioned as needed. The control plane, data plane, and networking can all share the same pool of common hardware.
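As a rough sketch of what provisioning from the pool can look like, the snippet below registers and boots a new instance on one compute node, again assuming libvirt-managed nodes. The node URI and the prebuilt domain XML file are hypothetical placeholders.

```python
# Sketch: provision a new instance on a compute node drawn from the pool.
import libvirt

NODE_URI = "qemu+ssh://compute-03/system"    # hypothetical node in the managed pool

with open("media-gw.xml") as f:              # hypothetical prebuilt libvirt domain XML
    domain_xml = f.read()

conn = libvirt.open(NODE_URI)
dom = conn.defineXML(domain_xml)             # register the instance on the node
dom.create()                                 # boot it
print(dom.name(), "running on", NODE_URI)
conn.close()
```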

Deployments can be upgraded simply by adding physical resources to the managed pool. Additionally, migrating VM instances from one compute node to another, as Figure 1 shows, can be non-disruptive.
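A minimal sketch of such a non-disruptive move, assuming libvirt-managed compute nodes with access to shared storage (discussed later); the host and guest names are hypothetical:

```python
# Sketch: live-migrate a running instance between two compute nodes.
import libvirt

src = libvirt.open("qemu+ssh://compute-01/system")
dst = libvirt.open("qemu+ssh://compute-02/system")

dom = src.lookupByName("session-border-ctrl")    # hypothetical guest
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST

# With shared storage, only the guest's memory state moves between nodes,
# so the service interruption is negligible.
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```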

Many telecom solutions require multiple hardware platforms simply because they are made up of applications that run on different operating systems. In a private cloud deployment, multiple operating systems can run on the same physical hardware, eliminating this requirement.

A private cloud enables running instances (VMs assigned to a specific function) to be tailored to different workload environments. For example, a dedicated service level can be assigned to each instance, and as demand rises or falls, additional instances can be spawned or decommissioned as necessary. Because each process workload can be tailored to moment-in-time demand (see Figure 2), the practice of over-provisioning all resources for a "peak workload" can go by the wayside. As resources are no longer needed, they are simply returned to the pool for use by other instances that may need to be spawned.
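The scaling decision itself can be quite simple. Below is an illustrative sketch of the spawn/decommission logic; the thresholds and the spawn_instance/decommission_instance hooks are hypothetical stand-ins for whatever software manages the resource pool.

```python
# Illustrative demand-driven scaling decision (thresholds are assumptions).
SCALE_UP_AT = 0.70      # average per-instance utilization that triggers growth
SCALE_DOWN_AT = 0.30    # utilization below which capacity is handed back

def rebalance(instances, per_instance_load, spawn_instance, decommission_instance):
    """Grow or shrink one service's instance set based on current demand."""
    if per_instance_load > SCALE_UP_AT:
        instances.append(spawn_instance())         # claim resources from the pool
    elif per_instance_load < SCALE_DOWN_AT and len(instances) > 1:
        decommission_instance(instances.pop())     # return resources to the pool
    return instances
```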

Virtual machines allow for the more efficient use of hardware resources by allowing multiple instances to share the same physical hardware, maximizing the use of those resources and increasing the work per watt of power consumed when compared to traditional infrastructure.

VMs also allow for 1+1 and N+1 redundancy through the use of multiple virtual instances running on fewer independent hardware nodes, such as AdvancedTCA SBCs. Because fewer physical nodes are needed to achieve the same level of redundancy and uptime, less power is consumed overall (see Figure 3).
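The node-count savings are easy to see with some back-of-the-envelope arithmetic; the figures below are assumptions for illustration only, not numbers from the article.

```python
import math

services = 6             # assumed number of independent applications
instances_per_node = 4   # assumed VM density per AdvancedTCA SBC

# Traditional 1+1: every service gets its own active node plus a standby.
dedicated_nodes = services * 2

# Virtualized N+1: pack active instances onto shared nodes, add one spare.
virtualized_nodes = math.ceil(services / instances_per_node) + 1

print(dedicated_nodes, "dedicated nodes vs", virtualized_nodes, "virtualized nodes")
# -> 12 vs 3, with correspondingly less power consumed overall.
```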

AdvancedTCA and Virtualization
For private clouds running VM infrastructure, choosing AdvancedTCA chassis with SBCs for the compute node (the most common core element in any private cloud) makes sense because of their commonality, variety, manageability, and ease of deployment.

Network switches with Layer 3 functionality are the glue that holds the private cloud together. The selection of AdvancedTCA switches will depend largely on the internal and external bandwidth required for each compute node. Video streaming or deep packet inspection, for example, typically requires far more bandwidth (and thus higher-bandwidth switches) than SMSC messaging.

The last necessity is also one of the most critical: shared storage. For an instance to be launched on or migrated to any physical node, all nodes must have access to the same storage. In private cloud infrastructure, a high-performance SAN and a cluster file system often supply this access. Connectivity options typically include Fibre Channel, SAS, and iSCSI. iSCSI, with link speeds of up to 10 Gbps, is the least intrusive to implement on each node, since the SAN can be connected to the AdvancedTCA fabric switches to provide storage connectivity to every node.
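As one concrete (and hypothetical) example of the iSCSI approach, a libvirt-managed compute node can attach a SAN target as a storage pool so the same volumes are reachable from every node; the portal address and IQN below are placeholders.

```python
# Sketch: attach an iSCSI target on the SAN as a libvirt storage pool.
import libvirt

pool_xml = """
<pool type='iscsi'>
  <name>san-pool</name>
  <source>
    <host name='192.0.2.10'/>
    <device path='iqn.2012-01.com.example:atca-lun'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(pool_xml, 0)   # register the pool on this node
pool.create(0)                                  # log in to the target and activate it
pool.setAutostart(1)                            # reattach automatically after reboot
conn.close()
```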

To avoid consuming excessive fabric bandwidth for storage in high-throughput environments, directly attached SAS or Fibre Channel connections brought out to each node via RTMs are a viable option. With multiple manufacturers now making AdvancedTCA blade-based SANs as well as NEBS-certified external SANs, many options are available to meet the SAN requirements of a carrier-grade private cloud.

How Sandy Bridge Processors Optimize AdvancedTCA Platforms
The new Intel Xeon processor E5 family, based on the Sandy Bridge microarchitecture, changes how well software applications run on AdvancedTCA platforms. It supports innovative networking through 40-gigabit Ethernet, and its features allow for advanced virtualization and cloud computing techniques.

The Intel Xeon E5-2600 series CPUs feature up to eight cores, each running up to 55 percent faster than their Xeon 5600 predecessors, and can therefore deliver much higher server performance to the enterprise market. Furthermore, new enterprise servers can support 32 GB dual in-line memory modules (DIMMs), so memory capacity can increase from 288 GB to 768 GB using 24 slots. E5-based AdvancedTCA compute blades, with more limited board real estate, are expected to support up to 256 GB in 16 VLP RDIMM slots at launch, a 40 percent increase over prior blades.

Greater power efficiency is another key benefit. The E5 family provides up to a 70 percent performance gain per watt over previous generation CPUs. Communications OEMs can develop power-efficient dual processor blades for service providers that fully meet or beat AdvancedTCA power specifications.

But the real game-changer is the E5-2600's integrated I/O, which lets designers reduce latency significantly and increase bandwidth. AdvancedTCA's 40G fabric has been backplane-ready since 2010 in anticipation of an updated PICMG specification release, and since then solution providers have sought ways to eliminate bottlenecks and utilize as much of the fabric as possible. Now that Intel has integrated PCI Express 3.0, with 40 lanes aboard each Xeon processor and QuickPath Interconnect (QPI) links between the CPUs, I/O bottlenecks are reduced, throughput is increased, and I/O latency is cut by up to 30 percent. A standard dual Xeon E5-2600 configuration offers up to 80 lanes of PCIe Gen3, providing 200 percent more throughput than previous-generation architectures.

The overall result is much higher I/O throughput. New AdvancedTCA blades will now be able to deliver more than 10 Gb/s per node. This is a critical milestone for wireless video applications that service providers are so hungry to launch. Greater overall performance and higher performance per watt are significant by themselves, but having enough I/O capacity to match the processor capabilities makes for even greater advances in application throughput.

More Stories By Austin Hipes

Austin Hipes currently serves as the director of field engineering for NEI. In this role, he manages field applications engineers (FAEs) supporting sales design activities and educating customers on hardware and the latest technology trends. Over the last eight years, Austin has been focused on designing systems for network equipment providers requiring carrier grade solutions. He was previously director of technology at Alliance Systems and a field applications engineer for Arrow Electronics. He received his Bachelor’s degree from the University of Texas at Dallas.
