By Shai Fultheim
February 11, 2009 12:44 PM EST
To understand where the High Performance Computing (HPC) paradigm is headed, it is useful to understand its history. High performance in computing comes from parallelism and from faster, denser circuitry. Seymour Cray was a pioneer in this field and introduced the first production supercomputers in the 1960s (the CDC 6600) and 1970s (the Cray-1). Cray Research established the modern-day supercomputer architecture through the multiprocessor (X-MP) architecture and the vector processor. Other computer manufacturers adopted this architecture in the early 1980s.
With the advent of the modern microprocessor, it became evident that clusters of microprocessors would challenge the dominance of vector supercomputers. In the second half of the 1980s, Encore and Sequent built shared-memory systems around a shared bus, so that any of the microprocessors could access all of the memory in the system. By 2001, clusters and shared-memory systems based on microprocessors constituted 90% of the Top 500 machines, compared to 10% for vector-based machines.
The Beowulf project pioneered the idea of building high-performance computers from cheap off-the-shelf hardware and software configured as a cluster of machines. By the early 2000s, this concept had become very successful in the industry, unifying public-domain parallel tools (the MPI and PVM programming models, parallel file systems, and tools to configure and manage parallel applications) with commercial applications for the scientific community. Cluster computing adopted commodity microprocessors (Intel) and the Linux operating system.
Today more than 70% of newly installed HPC systems are commodity x86 clusters, with the remainder using shared-memory systems. Shared-memory systems have been losing out to clusters in HPC for a number of years, a trend driven by two factors: the low initial acquisition cost of cluster hardware and the absence of vendor lock-in. Clusters are significantly cheaper and offer better performance than the large SMP systems that typically run on proprietary Unix platforms, and most commercial HPC applications today are designed to run on cluster infrastructures.
An interesting question is why there hasn't been a proliferation of x86-based shared-memory SMP systems to replace Unix-based SMP systems. Two factors explain this. The first is economic: given the commoditization of x86 systems, innovation at the system level has suffered from a lack of differentiation and low profit margins. The second is that system-level companies have no control over the chip vendors, and there is a significant mismatch between chip-level and system-level product and development lifecycles. The x86 architecture evolves according to Moore's Law, spawning a new generation every 18 months, while it takes about three years to design a state-of-the-art x86 SMP. This makes it very difficult for system designers to plan for, or predict, the chips that will be available in three years' time.
Cluster computing has a downside, however: the complexity of installing and managing the infrastructure, and the restrictions the programming model places on end users.
Installation & Ongoing Management Costs
Cluster solutions are significantly more expensive to deploy and manage than large server systems, requiring:
- OS per server: Deploying an operating system on every node raises cost and complexity, requiring network boot or other centralized OS-deployment techniques and, in turn, higher IT skill sets
- Solution for shared I/O: Providing the application with access to common storage requires a cluster file system and SAN or NAS deployments. Achieving high-performance I/O with such solutions is still a work in progress in the marketplace today
- Application provisioning: Load-balancing and distributed resource management solutions are needed to accommodate proper scheduling and resource management
- Cluster interconnect: A dedicated network for intra-cluster communication is required to provide high bandwidth and low latency for application-level communication. This network is usually separate from the one the cluster uses to communicate with the outside world, such as users. (A sketch of a typical per-node launch configuration follows this list.)
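To make the per-node provisioning burden concrete, here is a minimal sketch of launching a job on such a cluster, assuming Open MPI's hostfile conventions; the node names and core counts are illustrative only:

    # hosts -- one line per cluster node, each of which must be
    # individually provisioned, patched, and kept on the interconnect
    node01 slots=8
    node02 slots=8
    node03 slots=8
    node04 slots=8

    # launch 32 ranks across the four nodes
    mpirun --hostfile hosts -np 32 ./solver

Every node added to the cluster grows this inventory, along with the OS images, drivers, and monitoring behind it.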
Besides complexity, cluster deployment poses two challenges at the application level:
- Programming model: A specific programming model is needed to accommodate the distributed nature of the computing resource, usually MPI programming. In-house or legacy code has to be modified to run on such systems (see the sketch after this list).
- Lack of large memory footprint: Each processor can access only its cluster node's local memory, which is usually kept small to minimize the physical size (leveraging 1U systems) and the cost of the cluster. This poses a significant challenge for applications that need large memory in some processing phases, which then require an additional system with a large amount of local memory, usually referred to as the "cluster head node." Running different application phases on different computing resources in this way demands additional programming effort or application-provisioning techniques.
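As an illustration of the first challenge, here is a minimal sketch of the explicit partitioning MPI imposes; the work distribution and problem size are illustrative only. Data lives in per-rank slices, and any exchange between nodes must be spelled out as messages:

    /* sum.c -- build: mpicc sum.c   run: mpirun -np 32 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns only its node-local share of the problem. */
        double local = 0.0, global = 0.0;
        for (long i = rank; i < 1000000; i += size)
            local += (double)i;

        /* Cross-node data movement is an explicit message operation. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", global);
        MPI_Finalize();
        return 0;
    }

A serial or threaded in-house code gains nothing from the cluster until it is restructured around this model.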
Aggregation: The New Virtualization Paradigm
Computing virtualization is a technique for hiding the physical characteristics of a compute resource from the operating system, applications, or end users interacting with that compute resource.
There are two types of computing virtualization paradigms in the market today:
- Server virtualization: A single physical server appears to function as multiple logical (virtual) servers. It could also be defined as partitioning.
- Desktop virtualization: The physical location of the PC desktop is separated from the user accessing the PC. The remotely accessed PC can be located at home, the office or the data center, while the user is located elsewhere. It could also be defined as remoting.
There is an emerging third kind of computing virtualization: high-end virtualization, in which multiple physical systems appear to function as a single logical system. This virtualization paradigm is known as aggregation, and it is essentially the opposite of partitioning. The building blocks of this approach are the same x86 industry-standard servers used in the scale-out (clustering) approach, preserving their low cost. In addition, because they run a single logical system, customers manage a single operating system and take advantage of a large contiguous memory and a unified I/O architecture.
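From the software's point of view, the aggregated machine simply looks like one large SMP. As a minimal sketch (assuming a POSIX system with glibc's common sysconf extensions), an unmodified program would see the pooled processors and memory of all the underlying boards:

    /* view.c -- what a single logical system reports to applications */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long cpus  = sysconf(_SC_NPROCESSORS_ONLN);  /* all aggregated cores  */
        long pages = sysconf(_SC_PHYS_PAGES);        /* all aggregated memory */
        long psize = sysconf(_SC_PAGESIZE);

        printf("logical CPUs : %ld\n", cpus);
        printf("physical RAM : %.1f GiB\n",
               (double)pages * (double)psize / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }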
Benefits of Aggregation
Large Memory System
For workloads that require a large contiguous memory, customers have traditionally used the scale-up approach. Aggregation provides a cost-effective alternative to buying expensive, large proprietary shared-memory systems for such workloads. It enables an application requiring large amounts of memory to leverage the memory of multiple systems, reducing the need to use a hard drive for swap or scratch space. Application runtime can be dramatically reduced by running simulations with in-core solvers, or by using memory instead of swap for models with large memory footprints.
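As a minimal sketch of the in-core pattern (the 512 GiB working set is an assumed, illustrative figure), on a large shared-memory system an allocation bigger than any single node's RAM is ordinary application code rather than an out-of-core staging exercise:

    /* incore.c -- in-core working set on a large shared-memory system */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t bytes = 512ULL << 30;      /* 512 GiB working set (assumed) */
        double *model = malloc(bytes);
        if (!model) {
            fprintf(stderr, "not enough memory: fall back to out-of-core solver\n");
            return 1;
        }
        memset(model, 0, bytes);          /* touch pages; solve runs in-core */
        /* ... in-core solve over 'model' ... */
        free(model);
        return 0;
    }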
Aggregation thus provides a cost-effective virtual x86 platform with a large shared memory that minimizes the physical infrastructure requirements and can run both distributed applications and applications requiring a large memory footprint, at optimal performance, on the same physical infrastructure.
Compute-Intensive, Shared-Memory Applications
For workloads that require a high core count coupled with shared memory, customers have traditionally used proprietary shared-memory systems. Aggregation provides a cost-effective x86 alternative to these expensive and proprietary RISC systems.
Aggregation technology combines memory bandwidth across boards, in contrast to traditional SMP or NUMA architectures, where memory bandwidth per processor decreases as the machine scales. This enables solutions based on aggregation technology to show close-to-linear memory-bandwidth scaling, delivering excellent performance for threaded applications.
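As a minimal sketch of the kind of threaded, bandwidth-bound kernel this benefits (in the spirit of the STREAM triad; the array size is illustrative), each thread streams through its own slice of the arrays, so aggregate bandwidth can grow with the number of boards contributing memory channels:

    /* triad.c -- build with: cc -fopenmp triad.c */
    #include <stdlib.h>

    #define N (1L << 27)              /* 128M doubles, ~1 GiB per array */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        /* Bandwidth-bound: two streamed reads and one write per element. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];

        free(a); free(b); free(c);
        return 0;
    }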
Ease of Use
For workloads that otherwise require a scale-out approach, the primary value provided by aggregation technology is ease of use, driven by having a single system to manage compared to the complexities involved in managing a cluster. A single system removes the need for cluster file systems, cluster interconnects, application provisioning, and the installation and updating of multiple operating systems and applications. Using one operating system instead of one per node results in significant savings of time and money during installation, as well as in ongoing management costs.
Simplified I/O Architecture
I/O requirements for a scale-out model can be very complex and costly, involving networked storage with accompanying costs for additional HBAs and FC switch infrastructure. Aggregation technology consolidates each individual server's network and storage interfaces. I/O resource consolidation reduces the number of drivers, HBAs, NICs, cables, and switch ports, along with all the associated maintenance overhead. The user has fewer I/O devices to purchase, manage, and service, with increased availability, resiliency, and runtime scalability of I/O resources.
Even in large cluster deployments in data centers, it makes sense to deploy aggregation, since fewer, larger nodes mean less cluster complexity and better utilization of the infrastructure due to reduced fragmentation of resources. An example can be found in the financial services industry, where organizations need to run hundreds or thousands of simulations at once. A common deployment model involves hundreds of servers, each executing a few simulations. Suppose each cluster node runs a single application at 80% utilization, leaving 20% idle. Aggregating every four such systems pools 80% of a node's worth of idle capacity, enough to run one more copy of the application: five copies on the resources of four nodes, an additional 25% utilization.
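The arithmetic behind that example, as a small self-contained sketch (the 80% per-node figure comes from the example above):

    /* util.c -- worked arithmetic for the aggregation utilization example */
    #include <stdio.h>

    int main(void)
    {
        int util_pct = 80;                       /* per-node utilization (%)  */
        int nodes    = 4;                        /* nodes per aggregated system */

        int idle_pct     = nodes * (100 - util_pct);  /* 80: pooled idle capacity */
        int extra_copies = idle_pct / util_pct;       /* 1 additional copy fits   */
        int gain_pct     = extra_copies * 100 / nodes;

        printf("%d extra copy, %d jobs on %d nodes: +%d%% utilization\n",
               extra_copies, nodes + extra_copies, nodes, gain_pct);
        return 0;
    }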
The future of High Performance Computing is here, and aggregation represents the next logical step on this journey toward better performance and lower cost and complexity. It addresses the fundamental limitation of clusters: they perform poorly on applications that require large shared memory. It also addresses a fundamental barrier many technical computing customers face when adopting clusters, the lack of the IT skills needed to install and manage them. And it addresses the traditional SMP systems' limitations of high cost and vendor lock-in.
Aggregation works well for compute-intensive applications (numerical and engineering simulations) and memory-intensive applications (very large modeling and business intelligence).
The benefits of this approach are cluster consolidation and infrastructure optimization (reducing the number of managed entities), improved utilization (reducing data center fragmentation), and physical infrastructure cost reduction (traditional SMP systems, unified I/O) as well as greener computing. The result is fewer systems to manage and a large shared-memory system at industry-standard cluster pricing.