Memory: The Real Data Center Bottleneck

Memory virtualization removes the key barrier to increasing the efficiency of existing network resources

CIOs and IT managers agree that memory is emerging as a critical resource constraint in the data center for both economic and operational reasons. Regardless of density, memory is not a shareable resource across the data center. In fact, new servers are often purchased to increase memory capacity, rather than to add compute power. While storage capacity and CPU performance have advanced geometrically over time, memory density and storage performance have not kept pace. Data center architects refresh servers every few years, over-provision memory and storage, and are forced to bear the costs of the associated space, power and management overhead. The result of this inefficiency has been high data center costs with marginal performance improvement.

Memory: Where Are We?
Business-critical applications demand high performance from all network resources to derive value from the ever-increasing volume of data. Memory is one of the three key computing resources, along with CPU and storage, that together determine overall data center performance. However, memory has lagged far behind processors and storage in capacity, price, and performance. While processor vendors assert that data centers are processor-bound and storage vendors imply that they are storage-bound, in many cases the true performance barrier is memory. To that end, both a major network vendor and a dominant server vendor have recently announced dramatic increases in the memory footprint of their servers to better support data center virtualization.

The major network vendor built its first-ever blade server with custom-developed hardware to support a larger memory footprint - up to 384 GB on a single dual-processor blade, significantly more than the 144 GB maximum typical of high-end systems. The dominant server vendor, for its part, now enables individual VMs to use more of the local system memory.

Industry Challenges
Memory constraints continue to impact application performance across a number of industries. For example, seismic data for oil and gas exploration, flight and reservation information, and business analytics data quickly add up to terabytes - much too large to fit in even large-scale (and expensive) local RAM. These growing data sets cause huge performance slowdowns in applications where latency and throughput matter: multi-core processors sit underutilized, waiting for data they can't get fast enough. And currently available solutions are inefficient and don't entirely solve the problem.

Latency - the delay in delivering the first piece of data - is critical to application performance in areas such as manufacturing, pharmaceuticals, energy, and capital markets. As an example, algorithmic traders can execute hundreds of thousands of trades per day. Twenty-five percent of securities trades are now algorithmic - trades initiated by computers in response to market conditions, or trading in patterns and sequences that generate profits. These trades turn execution speed into profit, and it's a race to performance: the fastest trading desks will profit most.

Alongside the need for peak performance is the need for certified messaging: trading data streams must be reliably stored for record keeping and rollback. Current solutions to the trading message problem are difficult to integrate, expensive, and unable to meet the performance requirements of the algorithmic trading desk.

A leading vendor's message bus solution has transaction latencies in the millisecond range and reaches maximum throughput at close to 5,000 transactions per second. That latency hampers algorithmic trading, and that throughput cannot absorb peak trading volumes at the opening bell, at the closing bell, or during market-moving events.
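
Figures like these are straightforward to sanity-check. Below is a minimal micro-benchmark sketch; publish is a hypothetical stand-in for any message bus client call, not a real vendor API, and the 1 ms sleep merely simulates millisecond-class transaction latency to show how such latency caps throughput near 1,000 messages per second.

```python
import time

def publish(message):
    """Hypothetical stand-in for a message bus client call."""
    time.sleep(0.001)  # simulate a ~1 ms transaction latency

def benchmark(n_messages=1000):
    latencies = []
    start = time.perf_counter()
    for i in range(n_messages):
        t0 = time.perf_counter()
        publish(b"trade-%d" % i)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    print("throughput: %.0f msgs/sec" % (n_messages / elapsed))
    print("median latency: %.3f ms" % (latencies[n_messages // 2] * 1e3))
    print("99th percentile: %.3f ms" % (latencies[int(n_messages * 0.99)] * 1e3))

if __name__ == "__main__":
    benchmark()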

Memory Virtualization - Breaking the Memory Barrier
The introduction of memory virtualization shatters a long-standing and tolerated assumption in data processing - that servers are restricted to the memory physically installed in them. Until now, virtualization in the data center has focused primarily on servers and storage.

Memory virtualization is the key to overcoming physical memory limitations, a common bottleneck in information technology performance. The technology allows servers in the data center to share a common, aggregated pool of memory, managed by a layer that sits between the application and the operating system. That pooled memory is logically decoupled from any single physical machine and made available to every connected server as a global network resource.
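
Conceptually, the pool behaves as if keys were hashed across the RAM contributed by many servers. The sketch below is a toy model of that idea only - in-process dictionaries stand in for each server's contributed RAM, and the class name and hashing scheme are illustrative assumptions, not the product's design; a real implementation moves data between servers over a high-speed fabric.

```python
import hashlib

class MemoryPool:
    """Toy model of an aggregated memory pool: each node contributes RAM,
    and each key is hashed to the node that holds it."""

    def __init__(self, node_names):
        # Each dict stands in for RAM contributed by one server.
        self.nodes = {name: {} for name in node_names}
        self.ring = sorted(node_names)

    def _owner(self, key):
        # Simple modulo hashing, purely for illustration.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.ring[int(digest, 16) % len(self.ring)]

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

pool = MemoryPool(["node-a", "node-b", "node-c", "node-d"])
pool.put("seismic:block:42", b"...trace data...")
assert pool.get("seismic:block:42") == b"...trace data..."
```

Any one key lives on a single contributing node, but every server sees the same global namespace - which is what makes the pool a shared resource rather than islands of local RAM.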

This technology dramatically changes the price and performance model of the data center, delivering the performance benefits of resource virtualization while reducing infrastructure costs.

In addition, applications need no changes to take advantage of the pool. The result is a very large memory resource that is much faster than local or networked storage.

Memory virtualization scales across commodity hardware, takes advantage of existing data center equipment, and is implemented without application changes to deliver unmatched transactional throughput. High-performance computing now exists in the enterprise data center on commodity equipment, reducing capital and operational costs.

Memory Virtualization in Action - Large Working Data Set Applications
Memory virtualization reduces hundreds to thousands of reads from storage or databases to a single read by keeping frequently read data in a cache of virtualized memory with microsecond access times. Applications link in through common file system calls or an application-level API, which decreases reliance on expensive load balancers and lets servers perform optimally even with simple, inexpensive round-robin load balancing. Any server may contribute RAM to the cache through a command-line interface or through a web-based configuration and management dashboard that sets up and controls the virtualized memory pool. Memory virtualization then uses native high-speed fabric integration to move data rapidly between servers.
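
The read path described above amounts to a read-through cache: the first reader pays the storage cost once, and every later reader is served from pooled memory at memory speed. A minimal sketch, with a hypothetical read_from_storage standing in for a millisecond-class disk or database read and a plain dict standing in for the pooled memory tier:

```python
import time

CACHE = {}  # stand-in for the pooled memory tier

def read_from_storage(key):
    """Hypothetical stand-in for a slow disk or database read."""
    time.sleep(0.005)  # simulate a ~5 ms storage round trip
    return ("payload for %s" % key).encode()

def read_through(key):
    """Serve from pooled memory; fall back to storage exactly once per key."""
    value = CACHE.get(key)
    if value is None:
        value = read_from_storage(key)  # only the first reader pays this cost
        CACHE[key] = value              # every subsequent reader hits memory
    return value

t0 = time.perf_counter()
read_through("ticker:AAPL")             # cold read: goes to storage
cold = time.perf_counter() - t0
t0 = time.perf_counter()
read_through("ticker:AAPL")             # warm read: served from memory
warm = time.perf_counter() - t0
print("cold read: %.2f ms, warm read: %.4f ms" % (cold * 1e3, warm * 1e3))
```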

For applications with working data sets larger than will fit in physical memory - such as those found in high-volume Internet services, predictive analytics, HPC, and oil and gas - memory virtualization brings faster results and improves end-user experiences. In capital markets, memory virtualization delivers the lowest trade-execution latencies, includes certified messaging, and integrates simply, as this competitive market demands.

The associated performance gains relative to traditional storage are huge. NWChem is a computational chemistry application typically deployed in an HPC environment. In a four-node cluster running NWChem with 4 GB of RAM per node, memory virtualization cut the test run time from 17 minutes to 6 minutes 15 seconds with no additional hardware, simply by creating an 8 GB cache from 2 GB contributed by each node.
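
As a quick check, the quoted figures work out to roughly a 2.7x speedup:

```python
# Speedup implied by the NWChem figures quoted above.
baseline = 17 * 60            # 17 minutes, in seconds
cached = 6 * 60 + 15          # 6 minutes 15 seconds, in seconds
print("speedup: %.2fx" % (baseline / cached))   # ~2.72x

cache_gb = 4 * 2              # four nodes contributing 2 GB each
print("cache size: %d GB" % cache_gb)           # 8 GB
```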

Alternatives Fall Short
Attempts to address these challenges include scaling out (adding servers), over-provisioning (adding more storage or memory than is needed), scaling up (adding memory to existing or larger servers), or even designing software around the current constraints.

Larger data centers draw more power and require more IT staff and maintenance. For example, a 16-server data center with 32 GB RAM per server costs $190,000 in capital and operational expense over two years. Scaling out that data center to 32 servers would nearly double the cost, to $375,000, while scaling up the existing servers to 64 GB RAM per server would raise it to $279,000 (see Figure 1; both figures cover two years of capital and operational expense).
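
Working through those figures: both options add the same 512 GB of raw memory (16 new 32 GB servers versus 32 GB more in each of 16 existing servers), yet at very different incremental cost per gigabyte:

```python
# Incremental cost per added GB, from the figures quoted above.
base_cost = 190_000           # 16 servers x 32 GB, two-year cost
scale_out = 375_000           # 32 servers x 32 GB
scale_up = 279_000            # 16 servers x 64 GB
added_gb = 16 * 32            # both options add 512 GB of raw memory

print("scale-out: $%.0f per added GB" % ((scale_out - base_cost) / added_gb))  # ~$361
print("scale-up:  $%.0f per added GB" % ((scale_up - base_cost) / added_gb))   # ~$174
```

And as the next paragraph argues, neither option turns that added memory into a shared resource.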

What does this investment buy you? More servers working on the problem - but performance has not improved significantly, because the servers aren't working together; each is still working only with its own local memory. By trying to divide and conquer your data set, you've fragmented it. Like fragmented drives, fragmented data sets restrict the flow of data and force it to be replicated across the network. The overhead of drawing data into each server consumes resources that should be focused on one thing: application performance.

By sharing memory, data centers require less memory per server because they have access to a much larger pool of virtualized memory. Memory virtualization also enables fewer servers to accomplish the same level of application performance, meaning less rack space, power consumption, and management staff (see Figure 2).

Additional cache capacity can be added dynamically with no downtime, and application servers can easily connect to virtualized network memory to share and consume data at any time without re-provisioning.
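
One common technique for growing a cache with no downtime - a general approach, not necessarily this product's implementation - is consistent hashing: when a node joins the pool, only the keys that hash near it move, and everything else stays where it is. A minimal sketch:

```python
import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring: adding a node remaps only nearby keys."""

    def __init__(self, nodes=()):
        self._points = []                 # sorted (hash, node) pairs
        for node in nodes:
            self.add(node)

    def add(self, node):
        bisect.insort(self._points, (_hash(node), node))

    def owner(self, key):
        h = _hash(key)
        idx = bisect.bisect(self._points, (h, "")) % len(self._points)
        return self._points[idx][1]

ring = Ring(["node-a", "node-b", "node-c"])
before = {("key-%d" % i): ring.owner("key-%d" % i) for i in range(1000)}
ring.add("node-d")                        # grow the pool online
moved = sum(1 for k, v in before.items() if ring.owner(k) != v)
print("keys remapped after adding one node: %d of 1000" % moved)  # ~250
```

Naive modulo hashing would remap most keys when a node joins; a ring like this one touches only about 1/N of them, which is what makes online growth practical.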

High-availability features prevent data loss when servers or networks go down by keeping multiple copies of data in the cache and by employing persistent writes that comply with certified messaging standards.
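
A minimal sketch of such a write path, under the general assumption (common to certified messaging, not a description of any particular product) that a write is acknowledged only after it is replicated in memory and persisted durably:

```python
import os

class HAWriter:
    """Toy write path: replicate to several in-memory copies and fsync a
    persistent log before acknowledging, so one node failure loses nothing."""

    def __init__(self, log_path, n_replicas=2):
        self.replicas = [{} for _ in range(n_replicas)]   # stand-ins for nodes
        self.log = open(log_path, "ab")

    def write(self, key, value):
        for replica in self.replicas:                     # keep multiple copies
            replica[key] = value
        record = b"%s=%s\n" % (key.encode(), value)
        self.log.write(record)                            # persistent write...
        self.log.flush()
        os.fsync(self.log.fileno())                       # ...durable before ack
        return True                                       # acknowledge

writer = HAWriter("/tmp/trades.log")                      # illustrative path
assert writer.write("order-1001", b"BUY 100 XYZ @ 42.17")
```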

On the storage side, SAN and NAS have decoupled storage from computing, but storage is not the place for the active working data set. Storage acceleration can only marginally improve application performance because it connects too far down the stack and is not application-aware - it has no understanding of application state. The millisecond latencies of storage requests are unacceptable bottlenecks for business- and mission-critical applications.

In some cases, data center architects have turned to data grids in the search for performance. Data grids impose high management overhead and performance load, and they tend to replicate the working data set rather than truly share it. These solutions are difficult to integrate, debug, and optimize, and they remain tightly coupled to the application, reducing flexibility. Architects who have implemented them complain of the "black box" software to which they have tied their applications' performance, and of disappointing acceleration results.

Conclusion
Memory virtualization removes the key barrier to increasing the efficiency of existing network resources, and in doing so improves the performance of business-critical applications. It decouples memory from its physical environment, making it a shared resource across the data center or cluster. Beyond addressing today's IT performance challenges, virtualized memory enables new business computing scenarios by eliminating the application bottlenecks associated with memory and data sharing. Available today, memory virtualization is delivering optimized data center utilization, performance, and reliability with minimal risk and immediate business results.

More Stories By Clive Cook

Clive Cook is CEO of RNA Networks, a leading provider of memory virtualization software that transforms server memory into a shared network resource. He has a track record of success building and leading technology businesses including VeriLAN and Elematics, and holds an MBA from the Ivey School of Business, University of Western Ontario.


