By Shai Fultheim
February 11, 2009 12:44 PM EST
To understand where the High Performance Computing (HPC) paradigm is headed, it is useful to understand its history. High performance in computing comes from parallelism and from faster, denser circuitry. Seymour Cray was a pioneer in this field and introduced the first production supercomputers in the 1960s (the CDC 6600) and 1970s (the Cray-1). Cray Research established the modern-day supercomputer architecture through the multiprocessor (X-MP) architecture and the vector processor. Other computer manufacturers adopted this architecture in the early 1980s.
With the advent of the modern microprocessor, it became evident that clusters of microprocessors would challenge the dominance of vector supercomputers. In the second half of the 1980s, Encore and Sequent built shared-memory systems around a shared bus, so that any of the microprocessors could access all of the memory in the system. By 2001, clusters and shared-memory systems based on microprocessors constituted 90% of the Top 500 machines, compared to 10% for vector-based machines.
The Beowulf project pioneered the idea of building high-performance computers from cheap off-the-shelf hardware and software configured as a cluster of machines. By the early 2000s, this concept had become very successful in the industry, as freely available parallel tools (the MPI and PVM programming models, parallel file systems, and tools to configure and manage parallel applications) were unified with commercial applications for the scientific community. Cluster computing adopted commodity microprocessors (Intel) and the Linux operating system.
Today more than 70% of newly installed HPC systems use commodity x86 clusters, with the remainder using shared-memory systems. Shared-memory systems have been losing ground to clusters in HPC for a number of years, a trend driven by two advantages of clusters: the low initial acquisition cost of the hardware and the absence of vendor lock-in. Clusters are significantly cheaper, and offer better performance, than the large SMP systems that typically run on proprietary Unix platforms. Most commercial HPC applications today are designed to run on cluster infrastructures.
An interesting question is why there hasn't been a proliferation of x86-based shared-memory SMP systems to replace Unix-based SMP systems. Two factors are at work. The first is economic: given the commoditization of x86 systems, innovation at the system level has suffered from a lack of differentiation and low profit margins. The second is that system-level companies have no control over the chip vendors, and there is a significant mismatch between chip-level and system-level product and development lifecycles. The x86 architecture evolves according to Moore's Law, spawning a new generation every 18 months, while it takes about three years to design a state-of-the-art x86 SMP. This makes it very difficult for system designers to plan for, or predict, the chips that will be available in three years' time.
Cluster computing has a downside, however: the complexity of installing and managing the infrastructure, and the restrictions the programming model places on end users.
Installation & Ongoing Management Costs
These cluster solutions are significantly more expensive to deploy and manage than large server systems, requiring:
- OS per server: Each node runs its own operating system, raising deployment cost and complexity (network boot or other centralized OS deployment techniques) and requiring higher IT skill sets
- Solution for shared I/O: Giving the application access to common storage requires a cluster file system and a SAN or NAS deployment. Achieving high-performance I/O with such solutions is still a work in progress in the marketplace today
- Application provisioning: Load-balancing and distributed resource management solutions are needed to accommodate proper scheduling and resource management
- Cluster interconnect: A dedicated network for intra-cluster communication is required to provide high bandwidth and low latency for application-level communication. This network is usually separate from the network the cluster uses to communicate with the outside world (such as users)
Besides complexity, cluster deployment poses two challenges at the application level:
- Programming model: A specific programming model is needed to accommodate the distributed nature of the computing resource, usually MPI. In-house or legacy code has to be modified to run on such systems; a minimal sketch of this style appears after this list.
- Limited memory footprint: Each processor can access only its own cluster node's local memory, which is usually kept small to minimize the physical size (leveraging 1U systems) and cost of the cluster. This poses a significant challenge for applications that use large amounts of memory in some processing phases: an additional system with a large amount of local memory, usually referred to as the cluster "head node," must be provided, along with additional programming effort or application provisioning techniques to run different application phases on different computing resources.
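To make the programming-model point concrete, here is a minimal sketch, not from the original article, of the message-passing style that clusters impose. Because no processor can address another node's memory, even a simple global sum must be written as explicit communication. It assumes an installed MPI implementation (compiled with mpicc, launched with mpirun):

```c
/* Minimal MPI sketch: a global sum expressed as explicit message
 * passing. Illustrative only; assumes an MPI implementation
 * (build: mpicc sum.c, run: mpirun -np 4 ./a.out). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count  */

    /* Each rank holds only its local value; MPI_Reduce moves data
     * across the interconnect to combine the values on rank 0. */
    int partial = rank + 1;
    int total = 0;
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```

On a shared-memory system the same sum could be a plain loop over one address space; on a cluster the communication structure has to be designed into the code.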
Aggregation: The New Virtualization Paradigm
Computing virtualization is a technique for hiding the physical characteristics of a compute resource from the operating system, applications, or end users interacting with that compute resource.
There are two types of computing virtualization paradigms in the market today:
- Server virtualization: A single physical server appears to function as multiple logical (virtual) servers. It could also be defined as partitioning.
- Desktop virtualization: The physical location of the PC desktop is separated from the user accessing the PC. The remotely accessed PC can be located at home, the office or the data center, while the user is located elsewhere. It could also be defined as remoting.
There is an emerging third kind of computing virtualization, high-end virtualization, in which multiple physical systems appear to function as a single logical system. This virtualization paradigm is known as aggregation, and it is essentially the opposite of partitioning. The building blocks of this approach are the same x86 industry-standard servers used in the scale-out (clustering) approach, preserving their low cost. In addition, by running a single logical system, customers manage a single operating system and take advantage of large contiguous memory and a unified I/O architecture.
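As an illustrative sketch (the code is an assumption of this edit, not taken from the article), the practical meaning of aggregation is that an ordinary, cluster-unaware program running on the single operating system simply sees the combined resources of all the underlying boards:

```c
/* Sketch: query the resources visible to the single OS image, using
 * Linux/glibc sysconf extensions. On an aggregated system these
 * figures reflect the combined boards, with no cluster-aware code. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cpus  = sysconf(_SC_NPROCESSORS_ONLN);  /* all online cores   */
    long pages = sysconf(_SC_PHYS_PAGES);        /* all physical pages */
    long psize = sysconf(_SC_PAGE_SIZE);

    printf("cores: %ld, memory: %.1f GiB\n",
           cpus, (double)pages * psize / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```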
Benefits of Aggregation
Large Memory System
For workloads that require large contiguous memory, customers have traditionally used the scale-up approach. Aggregation provides a cost-effective alternative to buying expensive, large proprietary shared-memory systems for such workloads. It enables an application requiring large amounts of memory to leverage the memory of multiple systems, reducing the need to use a hard drive for swap or scratch space. Application runtime can be dramatically reduced by running simulations with in-core solvers, or by using memory instead of swap for models with large memory footprints.
Aggregation thus provides a cost-effective virtual x86 platform with a large shared memory that minimizes the physical infrastructure requirements and can run both distributed applications and applications requiring a large memory footprint, at optimal performance, on the same physical infrastructure.
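As a hedged illustration (the buffer size and code are assumptions for this sketch), the in-core approach is nothing more than an ordinary allocation; the point of aggregation is that the same unmodified allocation can succeed at sizes exceeding any single board's DRAM, instead of spilling to swap or scratch files:

```c
/* Sketch: allocate and touch a buffer larger than a typical 1U
 * node's DRAM. On a conventional cluster node this would fail or
 * thrash in swap; on a large shared-memory (aggregated) system it
 * stays in RAM. The 256 GiB size is an illustrative assumption. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t bytes = (size_t)256 << 30;   /* 256 GiB, illustrative */
    double *buf = malloc(bytes);
    if (!buf) {
        fprintf(stderr, "allocation failed: not enough memory\n");
        return 1;
    }
    memset(buf, 0, bytes);   /* touch the pages so RAM must back them */
    printf("in-core buffer of %zu GiB ready\n", bytes >> 30);
    free(buf);
    return 0;
}
```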
Compute-Intensive, Shared-Memory Applications
For workloads that require a high core count coupled with shared memory, customers have traditionally used proprietary shared-memory systems. Aggregation provides a cost-effective x86 alternative to these expensive and proprietary RISC systems.
Aggregation technology combines memory bandwidth across boards, in contrast to traditional SMP and NUMA architectures, where memory bandwidth per processor decreases as the machine scales. This enables solutions based on aggregation technology to show close-to-linear memory bandwidth scaling, delivering excellent performance for threaded applications.
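A hedged sketch of the kind of threaded, bandwidth-bound kernel this claim concerns, modeled loosely on the well-known STREAM triad (the array size and code are assumptions, and OpenMP is assumed, e.g., gcc -fopenmp -O2):

```c
/* Sketch of a bandwidth-bound threaded kernel (STREAM-style triad).
 * Each thread streams through its own slice of the arrays, so on a
 * machine whose memory bandwidth grows with board count, the
 * throughput of this loop scales with the thread count. */
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 26)   /* ~64M doubles per array; illustrative size */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Triad: one multiply-add per element; performance is limited by
     * memory bandwidth, not by arithmetic. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];

    printf("a[0] = %f\n", a[0]);   /* keep the result live */
    free(a); free(b); free(c);
    return 0;
}
```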
Ease of Use
For workloads that otherwise require a scale-out approach, the primary value of aggregation technology is ease of use: a single system to manage, compared to the complexities involved in managing a cluster. A single system removes the need for cluster file systems, cluster interconnects, application provisioning, and the installation and updating of multiple operating systems and applications. Using one operating system, instead of one per node, yields significant savings in time and money during installation, as well as in ongoing management costs.
Simplified I/O Architecture
I/O requirements for a scale-out model can be very complex and costly, involving networked storage with accompanying costs for additional HBAs and Fibre Channel switch infrastructure. Aggregation technology consolidates each individual server's network and storage interfaces. I/O resource consolidation reduces the number of drivers, HBAs, NICs, cables, and switch ports, along with all the associated maintenance overhead. The user has fewer I/O devices to purchase, manage, and service, with increased availability, resiliency, and runtime scalability of I/O resources.
Even in large data-center cluster deployments, it makes sense to deploy aggregation, since fewer, larger nodes mean less cluster complexity and better utilization of the infrastructure through reduced fragmentation of resources. An example can be found in the financial services industry, where organizations need to run hundreds or thousands of simulations at once. A common deployment model involves hundreds of servers, each executing a few simulations. In this example, each cluster node runs a single application at 80% utilization, leaving 20% of each node idle and stranded. By using aggregation to create fewer, larger nodes, every four aggregated systems pool 80% of a node's worth of idle capacity, enough to run another copy of the application: five copies on four nodes' hardware, an additional 25% utilization.
The future of High Performance Computing is here, and aggregation represents the next logical step on this journey toward better performance, lower cost, and lower complexity. It addresses the fundamental limitation of clusters: they perform poorly on applications that require large shared memory. It also addresses a fundamental barrier many technical computing customers face in adopting clusters, the lack of the IT skills needed to install and manage them. And it addresses the traditional SMP system's limitations of high cost and vendor lock-in.
Aggregation works well for compute-intensive applications (numerical and engineering simulations) and memory-intensive applications (very large modeling and business intelligence).
The benefits of this approach are cluster consolidation and infrastructure optimization (reducing the number of managed entities), improved utilization (reducing data center fragmentation), and physical infrastructure cost reduction (traditional SMP systems, unified I/O) as well as greener computing. The result is fewer systems to manage and a large shared-memory system at industry-standard cluster pricing.