Memory: The Real Data Center Bottleneck

Memory virtualization removes the key barrier to increasing the efficiency of existing network resources

CIOs and IT managers agree that memory is emerging as a critical resource constraint in the data center for both economic and operational reasons. Regardless of density, memory is not a shareable resource across the data center. In fact, new servers are often purchased to increase memory capacity, rather than to add compute power. While storage capacity and CPU performance have advanced geometrically over time, memory density and storage performance have not kept pace. Data center architects refresh servers every few years, over-provision memory and storage, and are forced to bear the costs of the associated space, power and management overhead. The result of this inefficiency has been high data center costs with marginal performance improvement.

Memory: Where Are We?
Business-critical applications demand high performance from all network resources to derive value out of the ever-increasing volume of data. Memory is one of the three key computing resources, along with CPU and storage, that determine overall data center performance. However, memory has lagged far behind the advances of both processors and storage in capacity, price, and performance. While processor vendors assert that data centers are processor-bound and storage vendors imply that data centers are storage-bound, in many cases the true performance barrier is memory. Recognizing this, both a major network vendor and a dominant server vendor have recently announced dramatic increases in the memory footprint of their servers to better support data center virtualization.

The major network vendor built its first-ever blade server with custom-developed hardware to support a larger memory footprint (up to 384 GB) on a single dual-processor blade - significantly more than the 144 GB maximum typical of high-end systems. The dominant server vendor enables individual VMs to use more of the local system memory.

Industry Challenges
Memory constraints continue to limit application performance across a number of industries. For example, data sets for seismic processing in oil and gas, flight and reservation systems, or business analytics quickly add up to terabytes - far too large to fit in even large-scale (and expensive) local RAM. These growing data sets create huge performance slowdowns in applications where latency and throughput matter: multi-core processors sit underutilized, waiting for data they can't get fast enough. And currently available solutions are inefficient and don't entirely solve the problem.

Latency, or the delay in delivering the initial piece of data, is critical to application performance in areas such as manufacturing, pharmaceuticals, energy, and capital markets. As an example, algorithmic traders can execute hundreds of thousands of trades per day, and twenty-five percent of securities trades are now algorithmic - trades initiated by computers in response to market conditions, or trading in patterns and sequences that generate profits. These strategies depend on trade execution speed to make money, and it's a race to performance: the fastest trading desks profit most.

Alongside peak performance, trading systems need certified messaging: trading data streams must be reliably stored for record keeping and rollback. Current solutions to this problem are difficult to integrate, expensive, and cannot meet the performance requirements of the algorithmic trading desk.

A leading vendor's message bus solution has transaction latencies in the millisecond range and reaches maximum throughput at close to 5,000 transactions per second. That latency hampers algorithmic trading, and the throughput is not enough to absorb peak trading volumes at the opening bell, the closing bell, or during market-moving events.
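As a rough, back-of-the-envelope illustration - the peak arrival rate and burst length below are hypothetical assumptions, not vendor figures - a message bus capped near 5,000 transactions per second falls behind quickly once order flow exceeds that ceiling:

```python
# Back-of-the-envelope queueing estimate for a throughput-capped message bus.
# The peak arrival rate and burst duration are illustrative assumptions.

MAX_THROUGHPUT_TPS = 5_000   # stated ceiling of the message bus
PEAK_ARRIVAL_TPS = 20_000    # hypothetical order flow at the opening bell
BURST_SECONDS = 60           # hypothetical duration of the opening burst

backlog = (PEAK_ARRIVAL_TPS - MAX_THROUGHPUT_TPS) * BURST_SECONDS
drain_seconds = backlog / MAX_THROUGHPUT_TPS

print(f"Backlog after the burst: {backlog:,} messages")
print(f"Time to drain the backlog at full throughput: {drain_seconds:.0f} s")
# Every queued message also adds to per-trade latency, which is why a
# millisecond-class, 5,000 TPS bus hampers algorithmic trading at the open.
```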

Memory Virtualization - Breaking the Memory Barrier
The introduction of memory virtualization shatters a long-standing and tolerated assumption in data processing - that servers are restricted to the memory that is physically installed. Until now, the data center has been primarily focused on server virtualization and storage virtualization.

Memory virtualization is the key to overcoming physical memory limitations, a common bottleneck in information technology performance. The technology allows servers in the data center to share a common, aggregated pool of memory that sits between the application and the operating system. That pool is logically decoupled from any individual physical machine and made available to every connected server as a global network resource.
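The sketch below illustrates the pooling concept only - the node names, hash placement, and get/put interface are assumptions for illustration, not any vendor's actual design: each server contributes a slice of RAM, the contributions aggregate into one logical pool, and a key maps to whichever node holds it.

```python
# Hypothetical sketch of an aggregated memory pool: each node contributes RAM,
# and keys are hashed to the contributing node that stores them.
# This models the concept only; it is not any vendor's actual protocol.
from dataclasses import dataclass, field
from zlib import crc32


@dataclass
class Node:
    name: str
    contributed_gb: int
    store: dict = field(default_factory=dict)   # stands in for contributed RAM


class MemoryPool:
    def __init__(self, nodes):
        self.nodes = nodes

    def total_capacity_gb(self):
        return sum(n.contributed_gb for n in self.nodes)

    def _node_for(self, key):
        # Simple hash placement; a real system would also handle rebalancing,
        # replication, and membership changes.
        return self.nodes[crc32(key.encode()) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key).store[key] = value

    def get(self, key):
        return self._node_for(key).store.get(key)


pool = MemoryPool([Node("server-1", 2), Node("server-2", 2), Node("server-3", 2)])
pool.put("ticker:XYZ", b"...serialized market data...")
print(pool.total_capacity_gb(), "GB pooled;", pool.get("ticker:XYZ"))
```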

This technology dramatically changes the price and performance model of the data center by bringing the performance benefits of resource virtualization, while reducing infrastructure costs.

In addition, it eliminates the need for changes to applications in order to take advantage of the pool. This creates a very large memory resource that is much faster than local or networked storage.

Memory virtualization scales across commodity hardware, takes advantage of existing data center equipment, and is implemented without application changes to deliver unmatched transactional throughput. High-performance computing now exists in the enterprise data center on commodity equipment, reducing capital and operational costs.

Memory Virtualization in Action - Large Working Data Set Applications
Memory virtualization reduces hundreds to thousands of reads from storage or databases to one by making frequently read data available in a cache of virtualized memory with microsecond access speeds. This decreases reliance on expensive load balancers and allows servers to perform well even with simple, inexpensive round-robin load balancing. Applications link into the cache through common file system calls or an application-level API. Any server can contribute RAM to the cache through a command-line interface or a web-based configuration and management dashboard that sets up and controls the virtualized memory pool. Memory virtualization then uses native high-speed fabric integration to move data rapidly between servers.
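A minimal read-through sketch captures how many repeated storage reads collapse to one. The function names and the in-process dictionary below are stand-ins chosen for illustration; a real deployment would hook file system calls or a vendor API rather than a local dict:

```python
# Generic read-through cache sketch: the first request for a key pays the
# storage/database cost; subsequent requests are served from pooled memory.
# slow_fetch() and the in-process dict are stand-ins for real back ends.
import time

storage_reads = 0


def slow_fetch(key):
    """Stand-in for a storage or database read with millisecond latency."""
    global storage_reads
    storage_reads += 1
    time.sleep(0.005)                # simulate ~5 ms of storage latency
    return f"value-for-{key}"


cache = {}                           # stands in for the virtualized memory pool


def read_through(key):
    if key not in cache:             # miss: one trip to storage...
        cache[key] = slow_fetch(key)
    return cache[key]                # ...every later read stays in memory


for _ in range(1_000):
    read_through("hot-record")

print("storage reads:", storage_reads)   # 1, not 1,000
```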

For applications whose working data sets are larger than will fit in physical memory - such as those found in high-volume Internet services, predictive analytics, HPC, and oil and gas - memory virtualization brings faster results and improves the end-user experience. In capital markets, memory virtualization delivers the lowest trade execution latencies, includes certified messaging, and integrates simply, as this competitive market demands.

The associated performance gains relative to traditional storage are huge. NWChem is a computational chemistry application typically deployed in an HPC environment. On a 4-node cluster with 4 GB of RAM per node running NWChem, memory virtualization cut the test run time from 17 minutes to 6 minutes 15 seconds - a speedup of roughly 2.7x - with no additional hardware, simply by creating an 8 GB cache with 2 GB contributed from each node.
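The arithmetic behind that result is straightforward and simply restates the numbers above:

```python
# Restating the NWChem example: 4 nodes each contribute 2 GB to the cache,
# and the run time drops from 17:00 to 6:15.
nodes, contribution_gb = 4, 2
cache_gb = nodes * contribution_gb        # 8 GB pooled cache

baseline_s = 17 * 60                      # 17 minutes
cached_s = 6 * 60 + 15                    # 6 minutes 15 seconds
print(f"cache size: {cache_gb} GB, speedup: {baseline_s / cached_s:.1f}x")  # ~2.7x
```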

Alternatives Fall Short
Attempts to address these challenges include scaling out (adding servers), over-provisioning (adding more storage or memory than is needed), scaling up (adding memory to existing or larger servers), or even designing software around the current constraints.

Larger data centers draw more power and require more IT staff and maintenance. For example, a 16-server data center with 32 GB of RAM per server costs $190,000 in capital and operational expense over two years. Scaling out that data center to 32 servers would nearly double the cost to $375,000 (see Figure 1). Scaling up the servers to 64 GB of RAM each would raise the cost to $279,000 (data center costs based on two years of capital and operational expense for scaling a 16-node cluster from 32 GB to 64 GB per server, or scaling it out from 16 to 32 nodes).
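A simple two-year cost model makes the comparison concrete. The per-server capital and operating figures below are illustrative placeholders, not the inputs behind the totals quoted above:

```python
# Illustrative two-year cost model for scale-out vs. scale-up decisions.
# All unit costs are hypothetical placeholders, not the article's source data.

def two_year_cost(servers, capex_per_server, opex_per_server_per_year, years=2):
    return servers * (capex_per_server + opex_per_server_per_year * years)


baseline = two_year_cost(16, capex_per_server=6_000, opex_per_server_per_year=3_000)
scale_out = two_year_cost(32, capex_per_server=6_000, opex_per_server_per_year=3_000)
scale_up = two_year_cost(16, capex_per_server=9_000, opex_per_server_per_year=3_500)

print(f"16 x 32 GB : ${baseline:,}")
print(f"32 x 32 GB : ${scale_out:,}  (scaling out roughly doubles the bill)")
print(f"16 x 64 GB : ${scale_up:,}  (scaling up costs less, but memory is still not shared)")
```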

What does this investment buy you? You get more servers to work on the problem - but performance has not improved significantly because they aren't working together; each server is still working only with its own local memory. By trying to divide and conquer your data set, you've fragmented it. Like fragmented drives, fragmented data sets restrict the flow of data and force data to be replicated across the network. The overhead of drawing data into each server consumes resources that should be focused on one thing - application performance.

By sharing memory, data centers require less memory per server because they have access to a much larger pool of virtualized memory. Memory virtualization also enables fewer servers to accomplish the same level of application performance, meaning less rack space, power consumption, and management staff (see Figure 2).

Additional cache capacity can be added dynamically with no downtime, and application servers can easily connect to virtualized network memory to share and consume data at any time without re-provisioning.

High Availability features eliminate data loss when servers or networks go down by keeping multiple copies of data in the cache and employing persistent writes to comply with certified messaging standards.
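Conceptually - and purely as an illustrative sketch, since the replica count, log format, and function names here are assumptions rather than the product's design - a highly available write keeps more than one in-memory copy and lands the record on persistent media before acknowledging it:

```python
# Conceptual sketch of a highly available cache write: replicate to multiple
# in-memory copies and persist to a write-ahead log before acknowledging.
# Names and the two-copy policy are illustrative assumptions.
import json
import os

replicas = [{}, {}]                      # two in-memory copies on different nodes
LOG_PATH = "certified-messages.log"      # stand-in for durable, certified storage


def certified_write(key, value):
    record = json.dumps({"key": key, "value": value})
    with open(LOG_PATH, "a") as log:     # persist first, for record keeping/rollback
        log.write(record + "\n")
        log.flush()
        os.fsync(log.fileno())
    for replica in replicas:             # then populate every in-memory copy
        replica[key] = value
    return "ack"                         # acknowledge only after copies + log exist


print(certified_write("trade-42", {"symbol": "XYZ", "qty": 100, "px": 21.37}))
```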

In the storage area, SAN and NAS have decoupled storage from computing, but storage is not the place for the active working data set. Storage acceleration can only marginally improve application performance because it connects too far down the stack and is not application-aware - it has no knowledge of application state. The millisecond latencies of storage requests are unacceptable bottlenecks for business- and mission-critical applications.

In some cases, data center architects have turned to data grids in the search for performance. Data grids impose high management overhead and a performance load, and they tend to replicate the working data set rather than truly share it. These solutions are difficult to integrate, debug, and optimize, and they remain tightly coupled to the application, reducing flexibility. Architects who have implemented them complain of the "black box" software to which they have tied their applications' performance, and of disappointing acceleration results.

Conclusion
Memory virtualization removes the key barrier to increasing the efficiency of existing network resources and so improves the performance of business-critical applications. It decouples memory from its physical environment, making it a shared resource across the data center or cluster. Addressing today's IT performance challenges, virtualized memory enables new business computing scenarios by eliminating application bottlenecks associated with memory and data sharing. Available today, memory virtualization is delivering optimized data center utilization, performance, and reliability with minimal risk and immediate business results.

More Stories By Clive Cook

Clive Cook is CEO of RNA Networks, a leading provider of memory virtualization software that transforms server memory into a shared network resource. He has a track record of success building and leading technology businesses including VeriLAN and Elematics, and holds an MBA from the Ivey School of Business, University of Western Ontario.
