Memory: The Real Data Center Bottleneck

Memory virtualization removes the key barrier to increasing the efficiency of existing network resources

CIOs and IT managers agree that memory is emerging as a critical resource constraint in the data center for both economic and operational reasons. Regardless of density, memory is not a shareable resource across the data center. In fact, new servers are often purchased to increase memory capacity, rather than to add compute power. While storage capacity and CPU performance have advanced geometrically over time, memory density and storage performance have not kept pace. Data center architects refresh servers every few years, over-provision memory and storage, and are forced to bear the costs of the associated space, power and management overhead. The result of this inefficiency has been high data center costs with marginal performance improvement.

Memory: Where Are We?
Business-critical applications demand high performance from all network resources to derive value from the ever-increasing volume of data. Memory is one of the three key computing resources, along with CPU and storage, that determine overall data center performance. However, memory has lagged far behind both processors and storage in capacity, price, and performance. While processor vendors assert that data centers are processor-bound and storage vendors imply that they are storage-bound, in many cases the true performance barrier is memory. Underscoring this, both a major network vendor and a dominant server vendor have recently announced dramatic increases in the memory footprint of their servers to better support data center virtualization.

The major network vendor built its first-ever blade server with custom-developed hardware to support a larger memory footprint (up to 384 GB) on a single dual-processor blade, significantly more than the 144 GB maximum typical of high-end systems. The dominant server vendor now enables individual VMs to use more of the local system memory.

Industry Challenges
Memory constraints continue to hamper application performance across a number of industries. For example, data for seismic processing in oil and gas exploration, flight and reservation systems, or business analytics quickly adds up to terabytes, far too large to fit in even large-scale (and expensive) local RAM. These growing data sets cause severe slowdowns in applications where latency and throughput matter: multi-core processors sit underutilized, waiting for data they cannot get fast enough. Currently available solutions are inefficient and do not entirely solve the problem.

Latency, the delay in delivering the first piece of data, is critical to application performance in areas such as manufacturing, pharmaceuticals, energy, and capital markets. As an example, algorithmic traders can execute hundreds of thousands of trades per day. Twenty-five percent of securities trades are now algorithmic - trades initiated by computers in response to market conditions or trading in patterns and sequences that generate profits. These strategies turn execution speed into profit, and it is a race to performance: the fastest trading desks profit most.

Alongside peak performance comes the need for certified messaging: trading data streams must be reliably stored for record keeping and rollback. Current solutions to this problem are difficult to integrate, expensive, and cannot meet the performance requirements of the algorithmic trading desk.

A leading vendor's message bus solution has transaction latencies in the millisecond range and reaches maximum throughput at close to 5,000 transactions per second. That performance hampers algorithmic trading, and the throughput cannot keep up with peak trading volumes at the opening bell, the closing bell, or during market-moving events.
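To put that ceiling in perspective, a simple queueing calculation (with an assumed, purely illustrative burst size) shows how quickly a 5,000-transaction-per-second limit backs up during a peak:

# Illustrative queueing arithmetic. The burst size is an assumed example,
# not a figure from this article: at a ~5,000 tps ceiling, an opening-bell
# burst backs up in a queue and drains slowly, inflating end-to-end latency.
max_throughput = 5_000          # transactions per second (from the text)
burst = 100_000                 # assumed opening-bell burst, for illustration
drain_time = burst / max_throughput
print(f"A {burst:,}-message burst takes {drain_time:.0f} s to drain")   # 20 s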

Memory Virtualization - Breaking the Memory Barrier
The introduction of memory virtualization shatters a long-standing and tolerated assumption in data processing - that servers are restricted to the memory that is physically installed. Until now, the data center has been primarily focused on server virtualization and storage virtualization.

Memory virtualization is the key to overcoming physical memory limitations, a common bottleneck in information technology performance. It allows servers in the data center to share a common, aggregated pool of memory through a virtualization layer that sits between the application and the operating system. The pooled memory is logically decoupled from individual physical machines and made available to any connected server as a global network resource.

This technology dramatically changes the price and performance model of the data center by bringing the performance benefits of resource virtualization, while reducing infrastructure costs.

In addition, it eliminates the need for changes to applications in order to take advantage of the pool. This creates a very large memory resource that is much faster than local or networked storage.

Memory virtualization scales across commodity hardware, takes advantage of existing data center equipment, and is implemented without application changes to deliver unmatched transactional throughput. High-performance computing now exists in the enterprise data center on commodity equipment, reducing capital and operational costs.

Memory Virtualization in Action - Large Working Data Set Applications
Memory virtualization reduces hundreds or thousands of reads from storage or databases to a single read by keeping frequently read data in a cache of virtualized memory with microsecond access times. Applications link into the cache through common file system calls or an application-level API, which decreases reliance on expensive load balancers and lets servers perform well even with simple, inexpensive round-robin load balancing. Any server can contribute RAM to the cache through a command-line interface or a web-based configuration and management dashboard that sets up and controls the virtualized memory pool. The virtualization layer then uses native high-speed fabric integration to move data rapidly between servers.
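The integration pattern is essentially a read-through cache in front of slow storage. The sketch below is purely conceptual and assumes nothing about any vendor's API; an in-process dictionary stands in for the network-wide pool, which in a real deployment would be RAM contributed by many servers and reached over a high-speed fabric.

# Conceptual sketch (not a vendor API): a read-through cache in front of
# slow storage. A dict stands in for the shared, network-wide memory pool.
import time

class SharedMemoryPool:
    """Stand-in for a pool of virtualized memory shared across servers."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)          # microsecond-scale in practice
    def put(self, key, value):
        self._data[key] = value

def read_from_storage(key):
    """Stand-in for a disk or database read (millisecond-scale)."""
    time.sleep(0.005)
    return f"record-for-{key}".encode()

pool = SharedMemoryPool()

def read_record(key):
    """Hit slow storage only on a cache miss; later readers hit RAM."""
    value = pool.get(key)
    if value is None:
        value = read_from_storage(key)
        pool.put(key, value)
    return value

if __name__ == "__main__":
    read_record("order-42")   # miss: goes to storage, fills the cache
    read_record("order-42")   # hit: served from the shared memory pool

The point of the pattern is that only the first reader pays the storage penalty; every subsequent read, from any server attached to the pool, is served from shared RAM.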

For applications whose working data sets are too large to fit in physical memory, such as those found in high-volume Internet services, predictive analytics, HPC, and oil and gas, memory virtualization brings faster results and a better end-user experience. In capital markets, memory virtualization delivers the lowest trade execution latencies, includes certified messaging, and integrates simply, as this competitive market demands.

The associated performance gains relative to traditional storage are huge. NWChem is a computational chemistry application typically deployed in an HPC environment. In a four-node cluster with 4 GB of RAM per node running NWChem, memory virtualization cut the test run time from 17 minutes to 6 minutes 15 seconds with no additional hardware, simply by creating an 8 GB cache with 2 GB contributed by each node.
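The arithmetic behind that result, using only the numbers quoted above:

# Speedup and cache size implied by the NWChem test figures above.
baseline_s   = 17 * 60          # 17 minutes
with_cache_s = 6 * 60 + 15      # 6 minutes 15 seconds
print(f"{baseline_s / with_cache_s:.2f}x faster")   # ~2.72x on the same hardware
print(f"cache size: {4 * 2} GB")                     # 4 nodes x 2 GB contributed each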

Alternatives Fall Short
Attempts to address these challenges include scaling out (adding servers), over-provisioning (adding more storage or memory than is needed), scaling up (adding memory to existing or larger servers), or even designing software around the current constraints.

Larger data centers draw more power and require more IT staff and maintenance. For example, a 16-server data center with 32 GB of RAM per server costs $190,000 in capital and operational expense over two years. Scaling that data center out to 32 servers would nearly double the cost to $375,000 (see Figure 1), while scaling the servers up to 64 GB of RAM each would raise the cost to $279,000. (These figures compare scaling a 16-node cluster up from 32 GB to 64 GB per server against scaling it out to 32 nodes, including two years of operational expense.)
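Restating those quoted figures side by side makes the comparison explicit:

# Two-year capital + operational cost figures as quoted above.
scenarios = {
    "16 servers, 32 GB each (baseline)":  190_000,
    "32 servers, 32 GB each (scale out)": 375_000,
    "16 servers, 64 GB each (scale up)":  279_000,
}
baseline = scenarios["16 servers, 32 GB each (baseline)"]
for name, cost in scenarios.items():
    print(f"{name}: ${cost:,} ({cost / baseline:.2f}x baseline)")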

What does this investment buy you? You get more servers to work on the problem - but performance has not improved significantly because they aren't working together; each server is still working only with its own local memory. By trying to divide and conquer your data set, you've fragmented it. Like fragmented drives, fragmented data sets restrict the flow of data and force data to be replicated across the network. The overhead of drawing data into each server consumes resources that should be focused on one thing - application performance.

By sharing memory, data centers require less memory per server because they have access to a much larger pool of virtualized memory. Memory virtualization also enables fewer servers to accomplish the same level of application performance, meaning less rack space, power consumption, and management staff (see Figure 2).

Additional cache capacity can be added dynamically with no downtime, and application servers can easily connect to virtualized network memory to share and consume data at any time without re-provisioning.

High Availability features eliminate data loss when servers or networks go down by keeping multiple copies of data in the cache and employing persistent writes to comply with certified messaging standards.
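A simplified sketch of that idea follows; it is not any vendor's implementation. A write is acknowledged only after it has been copied to several replicas and persisted, so the loss of a single server or link does not lose data. Replica nodes are simulated here with in-process objects; a real system would replicate over the network fabric.

# Conceptual sketch of high-availability writes: acknowledge a record only
# after enough replicas have stored it in RAM and logged it persistently.
import json, os, tempfile

class ReplicaNode:
    def __init__(self, name):
        self.memory = {}
        self.log_path = os.path.join(tempfile.gettempdir(), f"{name}.log")
    def store(self, key, value):
        self.memory[key] = value                    # copy held in RAM
        with open(self.log_path, "a") as log:       # persistent write
            log.write(json.dumps({key: value}) + "\n")
        return True

def certified_write(key, value, replicas, required_copies=2):
    """Acknowledge only when enough replicas have stored the record."""
    acks = sum(1 for node in replicas if node.store(key, value))
    return acks >= required_copies

nodes = [ReplicaNode("node-a"), ReplicaNode("node-b"), ReplicaNode("node-c")]
assert certified_write("trade-1001", {"qty": 500, "px": 101.25}, nodes)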

In the storage arena, SAN and NAS have decoupled storage from computing, but storage is not the place for the active working data set. Storage acceleration can only marginally improve application performance because it connects too far down the stack and is not application-aware (it has no knowledge of application state). The millisecond latencies of storage requests are unacceptable bottlenecks for business- and mission-critical applications.
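The order-of-magnitude gap is easy to see with rough, assumed numbers (milliseconds for a storage round trip versus microseconds for pooled memory):

# Rough latency comparison; both figures are illustrative assumptions.
storage_request_us = 5_000      # ~5 ms per storage round trip
memory_pool_us     = 5          # microsecond-scale pooled-memory access
print(f"~{storage_request_us // memory_pool_us}x latency difference")   # ~1000x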

In some cases, data center architects have turned to data grids in the search for performance. Data grids impose high management overhead and a performance load, and they tend to replicate the working data set rather than truly share it. These solutions are difficult to integrate, debug, and optimize, and they remain tightly coupled to the application, reducing flexibility. Architects who have implemented them complain of the "black box" software to which they have tied their applications' performance, and of disappointing acceleration results.

Conclusion
Memory virtualization removes the key barrier to increasing the efficiency of existing network resources, improving the performance of business-critical applications. It decouples memory from its physical environment, making it a shared resource across the data center or cluster. Addressing today's IT performance challenges, virtualized memory enables new business computing scenarios by eliminating application bottlenecks associated with memory and data sharing. Available today, memory virtualization is delivering optimized data center utilization, performance, and reliability with minimum risk and immediate business results.

More Stories By Clive Cook

Clive Cook is CEO of RNA Networks, a leading provider of memory virtualization software that transforms server memory into a shared network resource. He has a track record of success building and leading technology businesses including VeriLAN and Elematics, and holds an MBA from the Ivey School of Business, University of Western Ontario.
