
Virtualization for High Performance Computing

Aggregation, the next logical step forward

To understand where High Performance Computing (HPC) is headed, it is useful to understand its history. High performance in computing comes from parallelism and from faster, denser circuitry. Seymour Cray pioneered the field, introducing the first production supercomputers in the 1960s (CDC 6600) and 1970s (Cray-1). Cray Research established the modern supercomputer architecture with the vector processor and the multiprocessor X-MP, an architecture other computer manufacturers adopted in the early 1980s.

With the advent of the modern microprocessor, it became evident that clusters of microprocessors would challenge the dominance of vector supercomputers. In the second half of the 1980s, Encore and Sequent built shared-memory systems in which a shared bus let any microprocessor access all of the memory in the system. By 2001, microprocessor-based clusters and shared-memory systems constituted 90% of the Top 500 machines, compared to 10% for vector-based machines.

The Beowulf project pioneered the idea of building high-performance computers from cheap off-the-shelf hardware and software configured as a cluster of machines. By the early 2000s the concept had become very successful in the industry, unifying public-domain parallel tools (the MPI and PVM programming models, parallel file systems, and tools to configure and manage parallel applications) with commercial applications for the scientific community. Cluster computing standardized on commodity microprocessors (typically Intel) and the Linux operating system.

Today more than 70% of newly installed HPC systems are commodity x86 clusters, with the remainder using shared-memory systems. Shared-memory systems have been losing out to clusters in HPC for a number of years, a trend driven by two factors: the low initial acquisition cost of cluster hardware and the absence of vendor lock-in. Clusters are significantly cheaper than, and often outperform, the large SMP systems that typically run on proprietary Unix platforms, and most commercial HPC applications today are designed to run on cluster infrastructures.

An interesting question is why x86-based shared-memory SMP systems haven't proliferated to replace Unix-based SMP systems. Two factors explain this. The first is economic: given the commoditization of x86 systems, innovation has suffered at the system level for lack of differentiation and profit margin. The second is that system-level companies have no control over the chip vendors, and there is a significant mismatch between chip-level and system-level development lifecycles: the x86 architecture evolves according to Moore's Law, spawning a new generation roughly every 18 months, while designing a state-of-the-art x86 SMP takes about three years. This makes it very difficult for system designers to plan or predict what type of chip will be available in three years' time.

Cluster computing has its downsides, however: the complexity of installing and managing the infrastructure, and the restrictions its programming model places on end users.

Installation & Ongoing Management Costs
Cluster solutions are significantly more expensive to deploy and manage than large server systems, requiring:

  • OS per server: Each node runs its own operating system, raising deployment cost and complexity (network boot or other centralized OS-deployment techniques) and demanding higher IT skill sets
  • Solution for shared I/O: Giving the application access to common storage requires a cluster file system plus SAN or NAS deployments; achieving high-performance I/O with such solutions is still a work in progress in the marketplace today
  • Application provisioning: Load-balancing and distributed resource management solutions are needed for proper scheduling and resource management
  • Cluster interconnect: A dedicated network for intra-cluster communication is required to give application-level communication high bandwidth and low latency. This network is usually separate from the network the cluster uses to communicate with the outside world (such as users)

Programming Model
Besides complexity, cluster deployment poses two challenges at the application level:

  • Programming model: A specific programming model is needed to accommodate the distributed nature of the computing resource, usually MPI (see the sketch after this list). In-house or legacy code has to be modified to run on such systems.
  • Lack of a large memory footprint: Each processor can access only its cluster node's local memory, which is usually kept small to minimize the physical size (leveraging 1U systems) and cost of the cluster. This poses a significant challenge to applications that use large memory in some processing phases: an additional system with a large amount of local memory, usually referred to as the "cluster head node," is required, along with extra programming effort or application-provisioning techniques to run different application phases on different computing resources.
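
To make the porting burden concrete, here is a minimal MPI sketch (an illustration of the programming model in general, not code from the article): each rank contributes a value and rank 0 collects the sum. Even this trivial pattern must be structured around explicit message passing, because no rank can read another node's memory directly.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank + 1;   /* each rank's local contribution */
    int total = 0;

    /* Explicit message passing: partial values are reduced to rank 0.
     * On a cluster there is no shared memory to read directly. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Built and launched with the usual MPI tooling (e.g., `mpicc sum.c && mpirun -np 4 ./a.out`), this runs one process per cluster slot; equivalent shared-memory code needs none of this scaffolding.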

Aggregation: The New Virtualization Paradigm
Computing virtualization is a technique for hiding the physical characteristics of a compute resource from the operating system, applications, or end users interacting with that compute resource.

There are two computing virtualization paradigms in the market today:

  1. Server virtualization: A single physical server appears to function as multiple logical (virtual) servers. This can also be described as partitioning.
  2. Desktop virtualization: The physical location of the PC desktop is separated from the user accessing it. The remotely accessed PC can be located at home, the office, or the data center while the user is elsewhere. This can also be described as remoting.

There is an emerging third kind of computing virtualization: high-end virtualization, in which multiple physical systems appear to function as a single logical system. This paradigm is known as aggregation, and it is essentially the opposite of partitioning. Its building blocks are the same x86 industry-standard servers used in the scale-out (clustering) approach, preserving their low cost. In addition, because they run a single logical system, customers manage a single operating system and take advantage of a large contiguous memory and a unified I/O architecture.

Benefits of Aggregation
Large Memory System
For workloads that require a large contiguous memory, customers have traditionally used the scale-up approach. Aggregation provides a cost-effective alternative to buying expensive, large proprietary shared-memory systems for such workloads. It enables an application requiring large amounts of memory to leverage the memory of multiple systems, reducing the need to use a hard drive for swap or scratch space. Application runtime can be dramatically reduced by running simulations with in-core solvers, or by using memory instead of swap for models with large memory footprints.

Aggregation thus provides a cost-effective virtual x86 platform with a large shared memory that minimizes the physical infrastructure requirements and can run both distributed applications, as well as applications requiring a large memory footprint at optimal performance on the same physical infrastructure.
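
As a rough sketch of the in-core-versus-out-of-core decision (my illustration; the helper, threshold, and model size are assumptions, not from the article), a solver might probe how much physical memory the OS reports before choosing a code path. Under a single aggregated OS image, that probe sees the combined memory of all boards:

```c
#include <stdio.h>
#include <unistd.h>

/* Total physical memory visible to the OS, in bytes (sysconf keys
 * available on Linux). Under aggregation, the single OS image reports
 * the combined memory of every board in the virtual machine. */
static unsigned long long phys_mem_bytes(void) {
    long pages = sysconf(_SC_PHYS_PAGES);
    long psize = sysconf(_SC_PAGE_SIZE);
    return (unsigned long long)pages * (unsigned long long)psize;
}

int main(void) {
    unsigned long long model = 512ULL << 30;  /* hypothetical 512 GB model */

    if (model < phys_mem_bytes() / 2) {
        /* Everything fits: solve entirely in RAM, no disk scratch space. */
        puts("running in-core solver");
    } else {
        /* Too big: stream blocks of the model through a smaller buffer. */
        puts("falling back to out-of-core solver (disk scratch space)");
    }
    return 0;
}
```

On a 1U cluster node this check would fail for any sizable model; on an aggregated system backed by the memory of many boards, the in-core path becomes viable without changing the application logic.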

Compute-Intensive, Shared-Memory Applications
For workloads that require a high core count coupled with shared memory, customers have traditionally used proprietary shared-memory systems. Aggregation provides a cost-effective x86 alternative to these expensive and proprietary RISC systems.

Aggregation technology combines memory bandwidth across boards, in contrast to traditional SMP or NUMA architectures, where memory bandwidth decreases as the machine scales. This lets solutions based on aggregation show close-to-linear memory-bandwidth scaling, delivering excellent performance for threaded applications.
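
One common way to measure this effect is a STREAM-style triad kernel. The sketch below (my example, assuming OpenMP and arrays sized well beyond cache) reports the aggregate bandwidth achieved across all available threads; close-to-linear scaling means this figure keeps growing as boards are added:

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 26)   /* ~64M doubles per array, far larger than any cache */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    /* Triad: a[i] = b[i] + s*c[i]. The kernel is bandwidth-bound, so its
     * throughput tracks the total memory bandwidth the threads can reach. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];
    double t1 = omp_get_wtime();

    /* Bytes moved: two reads and one write of N doubles each. */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("triad: %.2f GB/s across %d threads\n",
           gbytes / (t1 - t0), omp_get_max_threads());

    free(a); free(b); free(c);
    return 0;
}
```

Compiled with `gcc -O2 -fopenmp triad.c`, the same binary runs unmodified on a single node or on an aggregated system; only the thread count and the measured bandwidth change.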

Ease of Use
For workloads that otherwise require a scale-out approach, the primary value of aggregation technology is ease of use: a single system to manage instead of the complexities of managing a cluster. A single system removes the need for cluster file systems, a cluster interconnect, application provisioning, and the installation and updating of multiple operating systems and applications. Using one operating system instead of one per node results in significant savings of time and money during installation, as well as in ongoing management costs.

Simplified I/O Architecture
I/O requirements for a scale-out model can be complex and costly, involving networked storage with the accompanying costs of additional HBAs and FC switch infrastructure. Aggregation technology consolidates the network and storage interfaces of the individual servers. This consolidation reduces the number of drivers, HBAs, NICs, cables, and switch ports, along with all the associated maintenance overhead: users have fewer I/O devices to purchase, manage, and service, with increased availability, resiliency, and runtime scalability of I/O resources.

Improved Utilization
Even in large data-center cluster deployments, it makes sense to deploy aggregation, since fewer, larger nodes mean less cluster complexity and better utilization of the infrastructure through reduced resource fragmentation. An example can be found in the financial services industry, where organizations need to run hundreds or thousands of simulations at once. A common deployment model involves hundreds of servers, each executing a few simulations; suppose each cluster node runs its application at 80% utilization. By using aggregation to create fewer, larger nodes, every four aggregated systems can pool their idle capacity to run another copy of the application, driving an additional 25% of utilization from the same hardware.
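
Spelling out the arithmetic behind that 25% figure (my reading of the article's numbers, not additional data):

\[
4 \times 20\%\ \text{idle} = 80\%\ \text{of one node's capacity, enough headroom for a fifth copy;}\qquad
\tfrac{5}{4} = 1.25 \;\Rightarrow\; +25\%\ \text{throughput.}
\]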

Summary
The future of High Performance Computing is here, and aggregation represents the next logical step toward better performance, lower cost, and lower complexity. It addresses the fundamental limitation of clusters, which perform poorly on applications that require large shared memory. It also addresses the barrier many technical computing customers face in adopting clusters: the lack of the IT skills needed to install and manage them. And it addresses the traditional SMP system's limitations of high cost and vendor lock-in.

Aggregation works well for compute-intensive applications (numerical and engineering simulations) and memory-intensive applications (very large modeling and business intelligence).

The benefits of this approach are cluster consolidation and infrastructure optimization (fewer managed entities), improved utilization (less data-center fragmentation), and physical infrastructure cost reduction (versus traditional SMP systems, and through unified I/O), as well as greener computing. The result is fewer systems to manage and a large shared-memory system at industry-standard cluster pricing.

About the Author

As founder and CEO of ScaleMP, Shai Fultheim designed and architected the core technology behind the company, and is now responsible for its strategy and direction. He has more than 15 years of experience in technology and business roles, including several years on the IT end-user side. Before founding ScaleMP, Shai was CTO of BRM Capital, a first-tier Israeli venture capital firm. Prior to BRM, he was co-founder, CTO, and VP of R&D at several technology startups. He also served in the Israel Defense Forces' central intelligence unit, where he led a large IT organization. He holds a bachelor of technology and applied science from the Jerusalem College of Technology. Shai has been an active member of several open source initiatives, including Apache, Jakarta Tomcat, Amanda, and the Linux kernel.
