Performance: The Key to Data Efficiency By @Permabit | @CloudExpo [#Cloud]

Data efficiency encompasses a variety of different technologies that enable the most effective use of space on a storage device

Data efficiency - the combination of technologies including data deduplication, compression, zero elimination and thin provisioning - transformed the backup storage appliance market in well under a decade. Why has it taken so long for the same changes to occur in the primary storage appliance market? The answer can be found by looking back at the early evolution of the backup appliance market, and understanding why EMC's Data Domain continues to hold a commanding lead in that market today.

Data Efficiency Technologies
The term "data efficiency" encompasses a variety of different technologies that enable the most effective use of space on a storage device by both reducing wasted space and eliminating redundant information. These technologies include thin provisioning, which is now commonplace in primary storage, as well as less extensively deployed features such as compression and deduplication.

Compression is the use of an algorithm to identify data redundancies within a small distance, for example, finding repeated words within a 64 KB window. Compression algorithms often take other steps to increase the entropy (or information density) of a set of data such as more compactly representing parts of bytes that change rarely, like the high bits of a piece of ASCII text. These sorts of algorithms always operate "locally", within a data object like a file, or more frequently on only a small portion of that data object at a time. As such, compression is well suited to provide savings on textual content, databases (particularly NoSQL databases), and mail or other content servers. Compression algorithms typically achieve a savings of 2x to 4x on such data types.
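
To make the "local window" idea concrete, here is a minimal Python sketch (the chunk size, the sample data, and the use of the standard zlib library are illustrative assumptions, not a description of any particular array's implementation) that compresses data in independent 64 KB chunks and reports the overall savings:

```python
import zlib

# Illustrative only: compress data chunk by chunk, the way a storage
# system might compress 64 KB windows independently of one another.
CHUNK = 64 * 1024

def chunked_compression_ratio(data: bytes) -> float:
    """Return original size / compressed size for per-chunk compression."""
    compressed = 0
    for i in range(0, len(data), CHUNK):
        compressed += len(zlib.compress(data[i:i + CHUNK]))
    return len(data) / compressed if compressed else 1.0

# Highly repetitive text compresses extremely well; already-compressed
# or encrypted data barely compresses at all.
sample = b"customer_id,order_id,status=SHIPPED\n" * 20000
print(f"ratio: {chunked_compression_ratio(sample):.1f}x")
```

Because each chunk is compressed on its own, the savings depend entirely on redundancy within that window; repeats that occur only across distant chunks are invisible to it, which is exactly the gap deduplication fills.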

Deduplication, on the other hand, identifies redundancies across a much larger set of data, for example, finding larger 4 KB repeats across an entire storage system. This requires both more memory and much more sophisticated data structures and algorithms, so deduplication is a relative newcomer to the efficiency game compared to compression. Because deduplication has a much greater scope, it has the opportunity to deliver much greater savings - as much as 25x on some data types. Deduplication is particularly effective on virtual machine images as used for server virtualization and VDI, as well as development file shares. It also shows very high space savings in database environments as used for DevOps, where multiple similar copies may exist for development, test and deployment purposes.
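
The following sketch shows the core idea in a few lines of Python, assuming fixed 4 KB blocks and SHA-256 fingerprints (a common but by no means universal design choice); a real system replaces the in-memory set with a carefully engineered index:

```python
import hashlib

BLOCK = 4 * 1024  # fixed 4 KB blocks, as in the example above

def dedupe_ratio(data: bytes) -> float:
    """Logical size / unique size after fixed-block deduplication."""
    seen = set()
    unique_bytes = 0
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).digest()  # block fingerprint
        if digest not in seen:
            seen.add(digest)
            unique_bytes += len(block)
    return len(data) / unique_bytes if unique_bytes else 1.0

# Ten "VM clones" of the same synthetic golden image dedupe almost perfectly.
golden_image = b"".join(bytes([i]) * BLOCK for i in range(256))  # 256 distinct blocks
clones = golden_image * 10
print(f"ratio: {dedupe_ratio(clones):.1f}x")  # ~10x
```

Running it on ten copies of the same synthetic "golden image" reports roughly a 10x saving, which is why cloned VM and VDI images deduplicate so well.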

The Evolution of Data Efficiency
In less than ten years, data deduplication and compression shifted billions of dollars of customer investment from tape-based backup solutions to purpose-built disk-based backup appliances. The simple but incomplete reason for this is that these technologies made disk cheaper to use for backup. While this particular aspect enabled the switch to disk, it wasn't the driver for the change.

The reason customers switched from tape to disk was that backup to disk, and particularly restore from disk, is much, much faster. Enterprise environments were facing increasing challenges in meeting their backup windows, recovery point objectives, and (especially) recovery time objectives with tape-based backup systems. Customers were already using disk-based backup in critical environments, and they were slowly expanding the use of disk as its gradual price decline allowed.

Deduplication enabled the media transition for backup by dramatically changing the price structure of disk-based backup relative to tape. Disk-based backup is still more expensive, but deduplication closed the gap enough that disk's speed made it the better product overall.

It's also worth noting that Data Domain, the market leader early on, still commands a majority share of the market. This can be partially explained by history, reputation and the EMC sales machine, but other early market entrants including Quantum, Sepaton and IBM have struggled to gain share, so this doesn't fully explain Data Domain's prolonged dominance.

The rest of the explanation is that deduplication technology is extremely difficult to build well, and Data Domain's product is a solid solution for disk-based backup. In particular, it is extremely fast for sequential write workloads like backup, and thus doesn't compromise performance of streaming to disk. Remember, customers aren't buying these systems for "cheap disk-based backup;" they're buying them for "affordable, fast backup and restore." Performance is the most important feature. Many of the competitors are still delivering the former - cost savings - without delivering the golden egg, which is actually performance.

Lessons for Primary Data Efficiency
What does the history of deduplication in the backup storage market teach us about the future of data efficiency in the primary storage market? First, we should note that data efficiency is catalyzing the same media transition in primary storage as it did in backup, on the same timeframe - this time from disk to flash, instead of tape to disk.

As was the case in backup, cheaper products aren't the major driver for customers in primary storage. Primary storage solutions still need to perform as well as (or better than) systems without data efficiency, under the same workloads. Storage consumers want more performance, not less, and technologies like deduplication enable them to get that performance from flash at a price they can afford. A flash-based system with deduplication doesn't have to be cheaper than the disk-based system it replaces, but it does have to be better overall!

This also explains the slow adoption of efficiency technologies by primary storage vendors. Building compression and deduplication for fully random access storage is an extremely difficult and complex thing to do right. Doing this while maintaining performance - a strict requirement, as we learn from the history of backup - requires years of engineering effort. Most of the solutions currently shipping with data efficiency are relatively disappointing and many other vendors have simply failed at their efforts, leaving only a handful of successful products on the market today.

It's not that vendors don't want to deliver data efficiency on their primary storage, it's that they simply haven't been able to develop it so far and have underestimated the difficulty of this task.

Hits and Misses (and Mostly Misses)
If we take a look at primary storage systems shipping with some form of data efficiency today, we see that the offerings are largely lackluster. The reason that offerings with efficiency features haven't taken the market by storm is that they deliver the same thing as less successful disk backup products - cheaper storage, not better storage. Almost universally, they deliver space savings at a steep cost in performance, a tradeoff no customer wants to make. If customers simply wanted to spend less, they would buy bulk SATA disk rather than fast SAS spindles or flash.

Take NetApp, for example. One of the very first to market with deduplication, they proved that customers wanted efficiency - but customers were also quickly turned off by the limitations of the ONTAP implementation. Take a look at NetApp's Deduplication Deployment and Implementation Guide (TR-3505). Some choice quotes include, "if 1TB of new data has been added [...], this deduplication operation takes about 10 to 12 hours to complete," and "With eight deduplication processes running, there may be as much as a 15% to 50% performance penalty on other applications running on the system." Their "50% Virtualization Guarantee* Program" has 15 pages of terms and exceptions behind that little asterisk. It's no surprise that most NetApp users choose not to turn on deduplication.

VNX is another case in point. The "EMC VNX Deduplication and Compression" white paper is similarly frightening. Compression is offered, but it's available only as a capacity tier: "compression is not suggested to be used on active datasets." Deduplication is available as a post-process operation, but "for applications requiring consistent and predictable performance [...] Block Deduplication should not be used."

Finally, I'd like to address Pure Storage, which has set the standard for offering "cheap flash" without delivering the full performance of the medium. Pure represents the most successful of the all-flash array offerings on the market today and has deeply integrated data efficiency features, but its arrays struggle to sustain 150,000 IOPS. They deliver a solid win on price over flash arrays without optimization, but that level of performance is not going to tip the balance for primary storage the way Data Domain did for backup.

To be fair to the products above, plenty of other vendors have tried to build their own deduplication and simply failed to deliver something that met their business standards. IBM, EMC VMAX, Violin Memory and others have all pursued their own efficiency features, and have even announced delivery promises over the years, but none have shipped to date.

Fortunately, there are already some leaders in the primary efficiency game! Hitachi is delivering "Deduplication without Compromise" on their HNAS and HUS platforms, providing deduplication (based on Permabit's Albireo™ technology) that doesn't impact the fantastic performance of the platform. This solution delivers savings and performance for file storage, although the block side of HUS still lacks efficiency features.

EMC XtremIO is another winner in the all-flash array sector of the primary storage market. XtremIO has been able to deliver outstanding performance with fully inline data deduplication capabilities. The platform isn't yet scalable or dense in capacity, but it does deliver the required savings and performance necessary to make a change in the market.

Requirements for Change
The history of the backup appliance market makes the requirement for change in the primary storage market clear. Data efficiency simply cannot compromise performance, which is the reason why a customer is buying a particular storage platform in the first place. We're seeing the seeds of this change in products like HUS and XtremIO, but it's not yet clear who will be the Data Domain of the primary array storage deduplication market. The game is still young.

The good news is that data efficiency can do more than just reduce cost; it can also increase performance - making a better product overall, as we saw in the backup market. Inline deduplication can eliminate writes before they ever reach disk or flash, and deduplication can inherently sequentialize writes in a way that vastly improves random write performance in critical environments like OLTP databases. These are some of the requirements for a tipping point in the primary storage market.

Data efficiency in primary storage must deliver uncompromising performance in order to be successful. At a technical level, this means that any implementation must deliver predictable inline performance, a deduplication window that spans the entire capacity of the existing storage platform, and performance scalability to meet the application environment. The current winning solutions provide some of these features today, but it remains to be seen which product will capture them all first.

Inline Efficiency
Inline deduplication and compression - eliminating duplicates as they are written, rather than with a separate process that examines data hours (or days) later - is an absolute requirement for performance in the primary storage market, just as we've previously seen in the backup market. By operating in an inline manner, efficiency operations provide immediate savings, deliver greater and more predictable performance, and allow for greatly accelerated data protection.

With inline deduplication and compression, the customer sees immediate savings because duplicate data never consumes additional space. This is critical in high data change rate scenarios, such as VDI and database environments, because non-inline implementations can run out of space and prevent normal operation. In a post-process implementation, or one using garbage collection, duplicate copies of data can pile up on the media waiting for the optimization process to catch up. If a database, VM, or desktop is cloned many times in succession, the storage rapidly fills and becomes unusable. Inline operation prevents this bottleneck - one called out explicitly in the NetApp documentation quoted above, which implies that at most about 2 TB of new data can be deduplicated per day. In a post-process implementation, a heavily utilized system may never catch up with the new data being written!
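
To make the contrast concrete, here is a deliberately toy Python sketch of an inline write path in which a duplicate block is recognized before it is ever written, so it costs an index lookup and a reference-count update rather than a media write (the index, refcount table, and in-memory "media" list are illustrative stand-ins, not any vendor's actual design):

```python
import hashlib

class InlineDedupeStore:
    """Toy inline-deduplicating block store: duplicates never hit the media."""

    def __init__(self):
        self.index = {}      # fingerprint -> physical block id
        self.refcounts = {}  # physical block id -> reference count
        self.media = []      # stand-in for disk or flash

    def write_block(self, block: bytes) -> int:
        fp = hashlib.sha256(block).digest()
        if fp in self.index:              # duplicate: no media write at all
            pbid = self.index[fp]
            self.refcounts[pbid] += 1
            return pbid
        self.media.append(block)          # new data: one sequential append
        pbid = len(self.media) - 1
        self.index[fp] = pbid
        self.refcounts[pbid] = 1
        return pbid

store = InlineDedupeStore()
for _ in range(100):                      # 100 identical "clone" writes...
    store.write_block(b"A" * 4096)
print(len(store.media), "physical block(s) written")  # ...one physical write
```

A post-process design would instead write all 100 copies to the media and reclaim 99 of them later, which is exactly how a rapidly cloned environment can fill up before the optimizer catches up.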

Inline operation also provides for the predictable, consistent performance required by many primary storage applications. In this case, deduplication and compression occur at the time of data write and are balanced with the available system resources by design. This means that performance will not fluctuate wildly as with post-process operation, where a 50% impact (or more) can be seen on I/O performance, as optimization occurs long after the data is written. Additionally, optimization at the time of data write means that the effective size of DRAM or flash caches can be greatly increased, meaning that more workloads can fit in these caching layers and accelerate application performance.

A less obvious advantage of inline efficiency is the ability for a primary storage system to deliver faster data protection. Because data is reduced immediately, it can be replicated immediately in its reduced form for disaster recovery. This greatly shrinks recovery point objectives (RPOs) as well as bandwidth costs. In comparison, a post-process operation requires either waiting for deduplication to catch up with new data (which could take days to weeks), or replicating data in its full form (which could also take days to weeks of additional time).
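
A back-of-the-envelope calculation (with entirely hypothetical numbers) shows why replicating reduced data matters so much for recovery point objectives:

```python
# Hypothetical figures for illustration only.
changed_tb = 10                      # new data written per day, in TB
link_gbps = 1                        # WAN replication bandwidth
reduction = 10                       # combined dedupe + compression ratio

def hours_to_replicate(tb: float, gbps: float) -> float:
    bits = tb * 10**12 * 8           # TB -> bits (decimal TB)
    return bits / (gbps * 10**9) / 3600

print(f"full copy:    {hours_to_replicate(changed_tb, link_gbps):.1f} h")
print(f"reduced copy: {hours_to_replicate(changed_tb / reduction, link_gbps):.1f} h")
```

With a 10:1 combined reduction, the same daily change set that would take most of a day to replicate in full form ships in a couple of hours, and the bandwidth bill shrinks accordingly.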

Capacity and Scalability
Capacity and scalability might seem to be obvious requirements for a data efficiency solution, but they're not evident in the products on the market today. As we've seen, a storage system incorporating deduplication and compression must be a better product, not just a cheaper product. This means that it must support the same storage capacity and performance scalability as the primary storage platforms that customers are deploying today.

Deduplication is a relative newcomer to the data efficiency portfolio, largely because the CPU and memory resources it requires are much greater than those of older technologies like compression. The amount of CPU and DRAM in modern platforms means that even relatively simple deduplication algorithms can now be implemented without substantial hardware cost, but such implementations are still quite limited in the amount of storage they can address and the data rate they can accommodate.

For example, even the largest systems from all-flash array vendors like Pure and XtremIO support well under 100 TB of storage capacity, far smaller than the primary storage arrays being broadly deployed today. NetApp, while they support large arrays, only identify duplicates within a very small window of history - perhaps 2 TB or smaller. To deliver effective savings, duplicates must be identified across the entire storage array, and the storage array must support the capacities that are being delivered and used in the real world. Smaller systems may be able to peel off individual applications like VDI, but they'll be lost in the noise of the primary storage data efficiency tipping point to come.
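
A rough sizing exercise shows why a full-capacity deduplication window is so demanding (the 32 bytes of index overhead per block is an assumed, illustrative figure; real implementations vary widely in how they trade DRAM against disk or flash for the index):

```python
# Rough, illustrative sizing of a full-capacity deduplication index.
capacity_tb = 100                    # array capacity to be covered
block_kb = 4                         # deduplication block size
bytes_per_entry = 32                 # fingerprint + location, assumed

blocks = capacity_tb * 10**12 // (block_kb * 1024)
index_gb = blocks * bytes_per_entry / 10**9
print(f"{blocks:,} blocks -> ~{index_gb:,.0f} GB of index")
```

Keeping a structure of that size fast enough to consult on every single write, without simply demanding hundreds of gigabytes of DRAM, is precisely the multi-year engineering problem described above.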

Shifting the Primary Storage Market to Greater Efficiency
A lower cost product is not sufficient to substantially change customers' buying habits, as we saw from the example of the backup market. Rather, a superior product is required to drive rapid, revolutionary change. Just as the backup appliance market is unrecognizable from a decade ago, the primary storage market is on the cusp of a similar transformation. A small number of storage platforms are now delivering limited data efficiency capabilities with some of the features required for success: space savings, high performance, inline deduplication and compression, and capacity and throughput scalability. No clear winner has yet emerged. As the remaining vendors implement data efficiency, we will see who will play the role of Data Domain in the primary storage efficiency transformation.

More Stories By Jered Floyd

Jered Floyd, Chief Technology Officer and Founder of Permabit Technology Corporation, is responsible for exploring strategic future directions for Permabit's products and providing thought leadership to guide the company's data optimization initiatives. He previously established Permabit's software development methodologies and was responsible for developing the core protocol and the initial server and system architectures of Permabit's products.

Prior to Permabit, Floyd was a Research Scientist on the Microbial Engineering project at the MIT Artificial Intelligence Laboratory, working to bridge the gap between biological and computational systems. Earlier at Turbine, he developed a robust integration language for managing active objects in a massively distributed online virtual environment. Floyd holds Bachelor’s and Master’s degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.
