Memory: The Real Data Center Bottleneck

Memory virtualization solves the key barrier to increasing the efficiency of existing network resources

CIOs and IT managers agree that memory is emerging as a critical resource constraint in the data center for both economic and operational reasons. Regardless of density, memory is not a shareable resource across the data center. In fact, new servers are often purchased to increase memory capacity, rather than to add compute power. While storage capacity and CPU performance have advanced geometrically over time, memory density and storage performance have not kept pace. Data center architects refresh servers every few years, over-provision memory and storage, and are forced to bear the costs of the associated space, power and management overhead. The result of this inefficiency has been high data center costs with marginal performance improvement.

Memory: Where Are We?
Business-critical applications demand high performance from all network resources to derive value from the ever-increasing volume of data. Memory is one of the three key computing resources, along with CPU and storage, that determine overall data center performance. However, memory has lagged far behind the advances of both processors and storage in capacity, price, and performance ratios. While processor vendors assert that data centers are processor-bound and storage vendors imply that data centers are storage-bound, in many cases the true performance barrier is memory. To that end, both a major network vendor and a dominant server vendor have recently announced dramatic increases in the memory footprint of servers to better support data center virtualization.

The major network vendor built its first-ever blade server with custom-developed hardware to support a larger memory footprint - up to 384 GB on a single dual-processor blade, significantly more than the 144 GB maximum typical of high-end systems. The dominant server vendor now enables individual VMs to use more of the local system memory.

Industry Challenges
Memory constraints continue to impact application performance across a number of industries. For example, the data sets behind seismic processing for oil and gas extraction, flight and reservation systems, or business analytics quickly add up to terabytes - much too large to fit in even large-scale (and expensive) local RAM. These growing data sets create severe performance slowdowns in applications where latency and throughput matter: multi-core processors sit underutilized, waiting for data they cannot get fast enough. And currently available solutions are inefficient and don't entirely solve the problem.

Latency, or the delay in delivering the initial piece of data, is critical to application performance in areas such as manufacturing, pharmaceuticals, energy, and capital markets. As an example, algorithmic traders can execute hundreds of thousands of trades per day. Twenty-five percent of securities trades are now algorithmic trades - trades initiated by computers in response to market conditions or trading in patterns and sequences that generate profits. These trades leverage trade execution speed to make money, and it's a race to performance. The fastest trading desks will profit most.

Alongside the demand for peak performance is the need for certified messaging. Trading data streams must be certified - reliably stored for record keeping and rollback. Current solutions to the trading-message problem are difficult to integrate, expensive, and unable to meet the performance requirements of the algorithmic trading desk.

A leading vendor's message bus solution has transaction latencies in the millisecond range and tops out at close to 5,000 transactions per second. That latency hampers algorithmic trading, and the throughput is not enough to absorb peak trading volumes at the opening bell, the closing bell, or during market-moving events.

Memory Virtualization - Breaking the Memory Barrier
The introduction of memory virtualization shatters a long-standing and tolerated assumption in data processing - that servers are restricted to the memory that is physically installed. Until now, the data center has been primarily focused on server virtualization and storage virtualization.

Memory virtualization is the key to overcoming physical memory limitations, a common bottleneck in information technology performance. The technology allows servers in the data center to share a common, aggregated pool of memory that sits between the application and the operating system. That pooled memory is logically decoupled from individual physical machines and made available to any connected server as a global network resource.

This technology dramatically changes the price and performance model of the data center by bringing the performance benefits of resource virtualization, while reducing infrastructure costs.

In addition, it eliminates the need for changes to applications in order to take advantage of the pool. This creates a very large memory resource that is much faster than local or networked storage.

Memory virtualization scales across commodity hardware, takes advantage of existing data center equipment, and is implemented without application changes to deliver unmatched transactional throughput. High-performance computing now exists in the enterprise data center on commodity equipment, reducing capital and operational costs.

Memory Virtualization in Action - Large Working Data Set Applications
Memory virtualization reduces hundreds to thousands of reads from storage or databases to one by making frequently read data available in a cache of virtualized memory with microsecond access speeds. Applications tap the cache by linking into common file system calls or through application-level API integration, which decreases reliance on expensive load balancers and lets servers perform well even with simple, inexpensive round-robin load balancing. Any server can contribute RAM to the cache through a command-line interface or a web-based configuration and management dashboard that sets up and controls the virtualized memory pool. Memory virtualization then uses native high-speed fabric integration to move data rapidly between servers.
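
To make the integration pattern concrete, here is a minimal read-through cache sketch in Python. PooledCache is a stand-in written for illustration only - in a real memory virtualization deployment its backing store would be RAM pooled across servers rather than a local dictionary - and the function and parameter names are assumptions, not any vendor's API.

    # Sketch of the read-through cache pattern described above.
    # PooledCache is an illustrative stand-in: in a real deployment its
    # backing store would be RAM pooled across servers, not a local dict.
    class PooledCache:
        def __init__(self):
            self._store = {}          # placeholder for the shared memory pool

        def get(self, key):
            return self._store.get(key)

        def put(self, key, value):
            self._store[key] = value  # in the pooled case, visible to every node

    def fetch_record(key, cache, storage_read):
        """Serve a record from pooled memory, touching slow storage only on a miss."""
        value = cache.get(key)        # microsecond-scale lookup in pooled RAM
        if value is None:
            value = storage_read(key) # millisecond-scale storage or database read
            cache.put(key, value)     # later reads, from any server, hit the cache
        return value

With this shape, hundreds to thousands of storage reads for a frequently requested record collapse into a single read followed by cache hits.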

For applications with working data sets larger than will fit in physical memory - such as those found in high-volume Internet services, predictive analytics, HPC, and oil and gas - memory virtualization brings faster results and improves end-user experiences. In capital markets, memory virtualization delivers the lowest trade execution latencies, includes certified messaging, and integrates simply, as this competitive market demands.

The associated performance gains relative to traditional storage are large. NWChem is a computational chemistry application typically deployed in an HPC environment. On a four-node cluster with 4 GB of RAM per node running NWChem, memory virtualization cut the test run time from 17 minutes to 6 minutes 15 seconds with no additional hardware, simply by creating an 8 GB cache from 2 GB contributed by each node.
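
As a quick check on those figures, the arithmetic below recomputes the cache size and the resulting speedup using only the node count, per-node contribution, and timings quoted above:

    # Cache sizing and speedup for the NWChem example above.
    nodes = 4
    contribution_gb = 2                 # GB of RAM each node donates to the cache
    cache_gb = nodes * contribution_gb  # 8 GB pooled cache

    baseline_s = 17 * 60                # 17 minutes without memory virtualization
    pooled_s = 6 * 60 + 15              # 6 minutes 15 seconds with the 8 GB cache
    speedup = baseline_s / pooled_s     # roughly 2.7x on unchanged hardware

    print(f"cache = {cache_gb} GB, speedup = {speedup:.1f}x")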

Alternatives Fall Short
Attempts to address these challenges include scaling out (adding servers), over-provisioning (adding more storage or memory than is needed), scaling up (adding memory to existing or larger servers), or even designing software around the current constraints.

Larger data centers draw more power and require more IT staff and maintenance. For example, a 16-server data center with 32 GB RAM per server costs $190,000 in capital and operational expense over two years. Scaling out that data center to 32 servers nearly doubles the cost, to $375,000 (see Figure 1). Scaling up the existing servers to 64 GB RAM per server raises the cost to $279,000. (Figures are based on two years of capital and operational expense for scaling a 16-node cluster up from 32 GB to 64 GB per server, and for scaling the same cluster out to 32 nodes.)
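
That comparison can be sketched as a simple two-year cost model. The per-server and per-upgrade dollar figures below are back-of-envelope assumptions chosen only so the totals land near the numbers quoted above; they are not published pricing, so substitute your own hardware, power, space, and staffing quotes.

    # Rough two-year cost model behind the scale-out vs. scale-up comparison.
    SERVER_2YR = 11_875       # assumed capital + 2-year opex per 32 GB server
    RAM_UPGRADE_32GB = 5_560  # assumed cost to take one server from 32 GB to 64 GB

    baseline = 16 * SERVER_2YR                       # ~$190,000 for 16 servers
    scale_out = 32 * SERVER_2YR                      # ~$380,000 for 32 servers
    scale_up = 16 * (SERVER_2YR + RAM_UPGRADE_32GB)  # ~$279,000 at 64 GB per server

    print(baseline, scale_out, scale_up)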

What does this investment buy you? You get more servers to work on the problem - but performance has not improved significantly because they aren't working together; each server is still working only with its own local memory. By trying to divide and conquer your data set, you've fragmented it. Like fragmented drives, fragmented data sets restrict the flow of data and force data to be replicated across the network. The overhead of drawing data into each server consumes resources that should be focused on one thing - application performance.

By sharing memory, data centers require less memory per server because they have access to a much larger pool of virtualized memory. Memory virtualization also enables fewer servers to accomplish the same level of application performance, meaning less rack space, power consumption, and management staff (see Figure 2).

Additional cache capacity can be added dynamically with no downtime, and application servers can easily connect to virtualized network memory to share and consume data at any time without re-provisioning.

High-availability features prevent data loss when servers or networks go down by keeping multiple copies of data in the cache and employing persistent writes to comply with certified messaging standards.
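
A minimal sketch of that idea follows, assuming a small replication factor and an append-only log file as stand-ins for whatever a given product actually uses for redundancy and persistence:

    # Sketch of replicated, persisted writes for high availability.
    # The node list, replication factor, and log file are illustrative
    # assumptions, not a description of any specific product.
    import json

    class HACache:
        def __init__(self, nodes, replicas=2, log_path="writes.log"):
            self.nodes = nodes        # in-memory dicts standing in for cluster nodes
            self.replicas = replicas  # number of nodes that hold each key
            self.log_path = log_path  # persistent log for certified-messaging rollback

        def put(self, key, value):
            # Persist first, so the write survives a node or network failure.
            with open(self.log_path, "a") as log:
                log.write(json.dumps({"key": key, "value": value}) + "\n")
            # Then place copies on several nodes, so no single failure loses data.
            for i in range(self.replicas):
                node = self.nodes[(hash(key) + i) % len(self.nodes)]
                node[key] = value

        def get(self, key):
            for node in self.nodes:
                if key in node:
                    return node[key]
            return None

    # Three "nodes"; each key is written to two of them plus the log.
    cache = HACache(nodes=[{}, {}, {}], replicas=2)
    cache.put("trade-42", {"symbol": "XYZ", "qty": 100})
    print(cache.get("trade-42"))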

In the storage arena, SAN and NAS have decoupled storage from computing, but storage is not the place for the active working data set. Storage acceleration can only marginally improve application performance because it connects too far down the stack and is not application-aware - it has no knowledge of application state. The millisecond latencies of storage requests are unacceptable bottlenecks for business- and mission-critical applications.

In some cases, data center architects have turned to data grids in the search for performance. Data grids impose high management overhead and a performance load of their own, and they tend to replicate the working data set rather than truly share it. These solutions are difficult to integrate, debug, and optimize, and they remain tightly coupled to the application, reducing flexibility. Architects who have implemented them complain about the "black box" software to which they have tied their applications' performance, and about disappointing acceleration results.

Conclusion
Memory virtualization removes the key barrier to increasing the efficiency of existing network resources, improving the performance of business-critical applications. The technology decouples memory from its physical environment, making it a shared resource across the data center or cluster. By addressing today's IT performance challenges, virtualized memory enables new business computing scenarios and eliminates application bottlenecks associated with memory and data sharing. Available today, memory virtualization delivers optimized data center utilization, performance, and reliability with minimal risk and immediate business results.

More Stories By Clive Cook

Clive Cook is CEO of RNA Networks, a leading provider of memory virtualization software that transforms server memory into a shared network resource. He has a track record of success building and leading technology businesses including VeriLAN and Elematics, and holds an MBA from the Ivey School of Business, University of Western Ontario.
