By Clive Cook
December 10, 2009 04:00 PM EST
CIOs and IT managers agree that memory is emerging as a critical resource constraint in the data center for both economic and operational reasons. Regardless of density, memory is not a shareable resource across the data center. In fact, new servers are often purchased to increase memory capacity, rather than to add compute power. While storage capacity and CPU performance have advanced geometrically over time, memory density and storage performance have not kept pace. Data center architects refresh servers every few years, over-provision memory and storage, and are forced to bear the costs of the associated space, power and management overhead. The result of this inefficiency has been high data center costs with marginal performance improvement.
Memory: Where Are We?
Business-critical applications demand high performance from all network resources to derive value from the ever-increasing volume of data. Memory is one of the three key computing resources, along with CPU and storage, that determine overall data center performance. However, memory has lagged far behind processors and storage in capacity, price, and performance. While processor vendors assert that data centers are processor-bound and storage vendors imply that they are storage-bound, in many cases the true performance barrier is memory. Recognizing this, both a major network vendor and a dominant server vendor have recently announced dramatic increases in the memory footprint of their servers to better support data center virtualization.
The major network vendor built its first-ever blade server with custom-developed hardware to support a larger memory footprint (up to 384 GB) on a single dual-processor blade, significantly larger than the 144 GB maximum typical of high-end systems. The dominant server vendor enables individual VMs to use more of the local system memory.
Memory constraints continue to hurt application performance across a number of industries. For example, seismic-processing data for oil and gas exploration, flight and reservation information, and business analytics data sets quickly add up to terabytes, much too large to fit in even large-scale (and expensive) local RAM. These growing data sets create huge performance slowdowns in applications where latency and throughput matter. Multi-core processors sit underutilized, waiting for data they can't get fast enough. And currently available solutions are inefficient and don't entirely solve the problem.
Latency, the delay in delivering the initial piece of data, is critical to application performance in areas such as manufacturing, pharmaceuticals, energy, and capital markets. As an example, algorithmic traders can execute hundreds of thousands of trades per day. Twenty-five percent of securities trades are now algorithmic - trades initiated by computers in response to market conditions, or executed in patterns and sequences that generate profits. These strategies profit from execution speed, and performance is a race: the fastest trading desks will profit most.
Alongside peak performance, trading systems need certified messaging: trading data streams must be reliably stored for record keeping and rollback. Current solutions to this problem are difficult to integrate, expensive, and cannot meet the performance requirements of the algorithmic trading desk.
A leading vendor's message bus solution has transaction latencies in the millisecond range and reaches maximum throughput at close to 5,000 transactions per second. That latency hampers algorithmic trading, and the throughput is not enough to meet peak trading volumes at the opening bell, the closing bell, or during market-moving events.
Memory Virtualization - Breaking the Memory Barrier
The introduction of memory virtualization shatters a long-standing and tolerated assumption in data processing - that servers are restricted to the memory that is physically installed. Until now, the data center has been primarily focused on server virtualization and storage virtualization.
Memory virtualization is the key to overcoming physical memory limitations, a common bottleneck in information technology performance. This technology allows servers in the data center to share a common, aggregated pool of memory that sits between the application and the operating system. The pooled memory is logically decoupled from any local physical machine and made available to any connected computer as a global network resource.
This technology dramatically changes the price and performance model of the data center by bringing the performance benefits of resource virtualization, while reducing infrastructure costs.
In addition, it eliminates the need for changes to applications in order to take advantage of the pool. This creates a very large memory resource that is much faster than local or networked storage.
Memory virtualization scales across commodity hardware, takes advantage of existing data center equipment, and is implemented without application changes to deliver unmatched transactional throughput. High-performance computing now exists in the enterprise data center on commodity equipment, reducing capital and operational costs.
Memory Virtualization in Action - Large Working Data Set Applications
Memory virtualization reduces hundreds or thousands of reads from storage or databases to one by making frequently read data available in a cache of virtualized memory with microsecond access speeds. Applications link into the cache through common file system calls or an application-level API. This decreases reliance on expensive load balancers and lets servers perform well even with simple, inexpensive round-robin load balancing. Any server may contribute RAM to the cache using a command-line interface or a web-based configuration and management dashboard that sets up and controls the virtualized memory pool. Memory virtualization then uses native high-speed fabric integration to move data rapidly between servers.
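The read-through pattern described above can be sketched in a few lines of Python. This is a minimal illustration only; the `MemoryPool` class and its `get`/`put` API are hypothetical stand-ins for RAM contributed by peer servers, not any specific product's interface.

```python
import time

class MemoryPool:
    """Hypothetical stand-in for RAM pooled from peer servers over a fast fabric."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def put(self, key, value):
        self._store[key] = value

def slow_storage_read(key):
    # Simulates a millisecond-scale read from disk or a database.
    time.sleep(0.001)
    return f"record-for-{key}"

class ReadThroughCache:
    def __init__(self, pool):
        self.pool = pool
    def read(self, key):
        value = self.pool.get(key)           # microsecond-scale pool lookup
        if value is None:
            value = slow_storage_read(key)   # only the first reader pays the storage cost
            self.pool.put(key, value)        # later readers on any node hit pooled RAM
        return value

cache = ReadThroughCache(MemoryPool())
first = cache.read("order:42")   # miss: goes to storage, populates the pool
second = cache.read("order:42")  # hit: served from pooled memory
```

The point of the sketch is the ratio: every key is read from storage once, and all subsequent reads, from any server sharing the pool, are memory-speed hits.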
For applications with large working data sets, larger than will fit in physical memory, such as those found in high-volume Internet, predictive analytics, HPC and oil and gas, memory virtualization brings faster results and improves end-user experiences. In capital markets, memory virtualization delivers the lowest trade execution latencies, includes certified messaging, and integrates simply as demanded in this competitive market.
The associated performance gains relative to traditional storage are huge. NWChem, a computational chemistry application typically deployed in an HPC environment, illustrates the point: on a four-node cluster with 4 GB of RAM per node, memory virtualization cut the test run time from 17 minutes to 6 minutes 15 seconds with no additional hardware, simply by creating an 8 GB cache with 2 GB contributed from each node.
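The arithmetic behind that example is straightforward and worth making explicit:

```python
# Numbers from the NWChem example above.
nodes = 4
contribution_gb = 2              # RAM each node donates to the shared cache
cache_gb = nodes * contribution_gb

baseline_s = 17 * 60             # 17-minute run without the cache
cached_s = 6 * 60 + 15           # 6 minutes 15 seconds with the cache
speedup = baseline_s / cached_s

print(cache_gb)                  # 8 GB pooled cache
print(round(speedup, 2))         # 2.72x faster, with no new hardware
```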
Alternatives Fall Short
Attempts to address these challenges include scaling out (adding servers), over-provisioning (adding more storage or memory than is needed), scaling up (adding memory to existing or larger servers), or even designing software around the current constraints.
Larger data centers draw more power and require more IT staff and maintenance. For example, a 16-server data center with 32 GB of RAM per server costs $190,000 in capital and operational expense over two years. Scaling out that data center to 32 servers would nearly double the cost, to $375,000, while scaling up the existing servers to 64 GB of RAM each would raise the cost to $279,000 (see Figure 1; all figures reflect two years of capital and operational expense).
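Using the figures cited above, the two options can be compared directly. Both double the total RAM in the data center, but at very different incremental cost:

```python
# Two-year capital plus operational expense, from the figures above.
baseline  = 190_000   # 16 servers, 32 GB each (512 GB total)
scale_out = 375_000   # 32 servers, 32 GB each (1,024 GB total)
scale_up  = 279_000   # 16 servers, 64 GB each (1,024 GB total)

print(scale_out - baseline)   # $185,000 extra to scale out
print(scale_up - baseline)    # $89,000 extra to scale up
```

Scaling up costs roughly half as much as scaling out for the same total memory, and as the next paragraph argues, the scale-out option does not even buy proportionate performance.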
What does this investment buy you? You get more servers to work on the problem - but performance has not improved significantly because they aren't working together; each server is still working only with its own local memory. By trying to divide and conquer your data set, you've fragmented it. Like fragmented drives, fragmented data sets restrict the flow of data and force data to be replicated across the network. The overhead of drawing data into each server consumes resources that should be focused on one thing - application performance.
By sharing memory, data centers require less memory per server because they have access to a much larger pool of virtualized memory. Memory virtualization also enables fewer servers to accomplish the same level of application performance, meaning less rack space, power consumption, and management staff (see Figure 2).
Additional cache capacity can be added dynamically with no downtime, and application servers can easily connect to virtualized network memory to share and consume data at any time without re-provisioning.
High-availability features guard against data loss when servers or networks go down by keeping multiple copies of data in the cache and employing persistent writes to comply with certified messaging standards.
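The replication idea can be illustrated with a short sketch: each entry is written to more than one node's memory before the write completes, so losing any single node leaves a surviving copy. The placement scheme and class names here are hypothetical, for illustration only, and do not describe any specific product.

```python
class ReplicatedCache:
    """Toy model: each key is stored on `copies` distinct nodes."""
    def __init__(self, node_ids, copies=2):
        self.nodes = {n: {} for n in node_ids}
        self.copies = copies

    def _placement(self, key):
        # Deterministically pick `copies` distinct nodes for this key.
        ids = sorted(self.nodes)
        start = hash(key) % len(ids)
        return [ids[(start + i) % len(ids)] for i in range(self.copies)]

    def put(self, key, value):
        for n in self._placement(key):
            self.nodes[n][key] = value    # write every replica before returning

    def get(self, key):
        for n in self._placement(key):
            if key in self.nodes[n]:      # any surviving replica can answer
                return self.nodes[n][key]
        return None

    def fail_node(self, node_id):
        self.nodes[node_id] = {}          # simulate a crash losing local RAM

cache = ReplicatedCache(["a", "b", "c"], copies=2)
cache.put("trade:1", "BUY 100 XYZ")
cache.fail_node(cache._placement("trade:1")[0])  # lose one replica
recovered = cache.get("trade:1")  # still readable from the second copy
```

A real implementation would additionally persist writes to durable media, as certified messaging requires; this sketch shows only the in-memory replication half of the story.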
In the storage area, SAN and NAS have decoupled storage from computing, but storage is not the place for the active working data set. Storage acceleration can only marginally improve application performance because it connects too far down the stack and is not application-aware: it has no knowledge of application state. The millisecond latencies of storage requests are unacceptable bottlenecks for business- and mission-critical applications.
In some cases, data center architects have turned to data grids in the search for performance. Data grids impose high management overhead and a performance load, and they tend to replicate the working data set rather than truly share it. These solutions are difficult to integrate, debug, and optimize, and they remain tightly coupled to your application, reducing flexibility. Architects who have implemented them complain of the "black box" software to which they have tied their applications' performance, and of disappointing acceleration results.
Memory virtualization removes a key barrier to increasing the efficiency of existing network resources and thereby improving the performance of business-critical applications. It decouples memory from its physical environment, making it a shared resource across the data center or cluster. Addressing today's IT performance challenges, virtualized memory enables new business computing scenarios by eliminating application bottlenecks associated with memory and data sharing. Available today, memory virtualization is delivering optimized data center utilization, performance, and reliability with minimal risk and immediate business results.