Trends in Hyper-Scale Storage for Growing Data Center Workloads

Companies are faced with the challenge of how to store their ever-growing data efficiently

Storage has finally become an interesting field, full of innovation and change, addressing growing new requirements for storage flexibility, density, and performance. The falling price of flash, the introduction of various flavors of storage-class memory, and an increasing appetite for commoditization of data center infrastructure have helped fuel innovation in how data is stored and accessed.

Companies are faced with the challenge of storing their ever-growing data efficiently, at a cost point palatable to CTOs and CFOs, while maintaining the performance levels and SLAs needed to provide storage services to end users and applications. At the same time, internal IT organizations face the challenge of competing with the flexibility and price points offered externally by public clouds.

Two new trends have emerged in architecting solutions for next-generation "hyper-scale" data centers and storage infrastructures that must grow on demand to meet compute, memory, and storage requirements. These requirements include on-demand provisioning, instant capacity management, and the flexibility to scale each component independently, which drives cost efficiency and directly impacts TCO.
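One way to see why independent scaling matters for TCO is to compare a converged model, where adding capacity means adding whole servers, with a disaggregated model, where storage is added on its own. The Python sketch below makes that comparison with entirely hypothetical unit sizes and prices; it is a rough illustration of the reasoning, not a sizing tool.

```python
# Compare hardware spend for a storage-only capacity increase under two scaling models.
# All unit sizes and prices are hypothetical assumptions used only for illustration.

def converged_cost(extra_tb, node_tb=100, node_cost_usd=15_000):
    """Capacity can only grow by adding full compute+storage nodes."""
    nodes = -(-extra_tb // node_tb)          # ceiling division
    return nodes * node_cost_usd

def disaggregated_cost(extra_tb, shelf_tb=100, shelf_cost_usd=6_000):
    """Storage scales independently by adding capacity shelves only."""
    shelves = -(-extra_tb // shelf_tb)
    return shelves * shelf_cost_usd

if __name__ == "__main__":
    need_tb = 500   # storage-only growth; compute demand is unchanged
    print(converged_cost(need_tb))       # 75000: pay for CPUs/RAM you don't need
    print(disaggregated_cost(need_tb))   # 30000: pay only for the capacity added
```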

First, as outlined in the table below, there are legacy SAN environments running transactional OLTP workloads, primarily based on Fibre Channel and NFS, with high-performance SLA targets (greater than ~500K IOPS and less than ~5 ms response time to the application). This environment is built on storage appliances and SAN installations with complete HA capabilities that provide data protection and service resiliency to applications. The growth rate of the traditional SAN environment is relatively low compared to other storage infrastructures, and its impact on revenue is high enough to justify paying the premium for brand-name storage technologies that come with full HA and data protection capabilities. Understandably, many companies are unwilling to try groundbreaking technologies within an OLTP infrastructure, since stability, security, and availability are the primary goals for this environment.
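As a rough illustration of what such an SLA target means in practice, the hedged Python sketch below computes delivered IOPS and a p99 latency from a sample of per-I/O completion times and compares them against the ~500K IOPS / ~5 ms figures quoted above. The function name, sample data, and thresholds are illustrative assumptions, not part of any vendor's tooling.

```python
# Illustrative only: check a measured workload sample against OLTP-style SLA targets
# (~500K IOPS, ~5 ms response time). Sample data and names are hypothetical.

def meets_oltp_sla(latencies_ms, window_seconds, min_iops=500_000, max_p99_ms=5.0):
    """Return True if the sample satisfies both the IOPS and p99 latency targets.

    latencies_ms   -- per-I/O completion times (milliseconds) observed in the window
    window_seconds -- length of the measurement window in seconds
    """
    if not latencies_ms or window_seconds <= 0:
        return False

    iops = len(latencies_ms) / window_seconds

    # p99 latency: the value below which 99% of requests completed.
    ordered = sorted(latencies_ms)
    p99_index = min(len(ordered) - 1, int(0.99 * len(ordered)))
    p99_ms = ordered[p99_index]

    return iops >= min_iops and p99_ms <= max_p99_ms


if __name__ == "__main__":
    # 600,000 I/Os completed in a 1-second window, mostly around 1.2 ms.
    sample = [1.2] * 594_000 + [4.0] * 6_000
    print(meets_oltp_sla(sample, window_seconds=1))  # 600K IOPS, p99 = 4.0 ms -> True
```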

The second architecture, described in the three columns to the right of the table and arguably the fastest-growing segment of every data center, is the scale-out environment running NoSQL, cloud, and big data workloads. From the storage perspective, these environments usually run a direct-attached storage (DAS) or disaggregated storage model based on protocols such as iSCSI, PCIe, or NVMe. The scale of the storage infrastructure, especially for big data analytics, can reach hundreds of petabytes, which makes these environments extremely TCO-driven.
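Because these environments are judged almost entirely on TCO, a common back-of-the-envelope exercise is to translate usable capacity, replication overhead, and per-component pricing into a cost per usable terabyte. The Python sketch below is a minimal illustration of that arithmetic; all prices, drive sizes, and the replication factor are hypothetical assumptions, not figures from the article.

```python
# Back-of-the-envelope TCO arithmetic for a replication-based scale-out tier.
# All inputs are hypothetical assumptions used only to show the calculation.

def cost_per_usable_tb(usable_pb, replication_factor, drive_tb, drive_cost_usd,
                       server_cost_usd, drives_per_server):
    """Estimate hardware cost per usable TB for a replicated DAS cluster."""
    usable_tb = usable_pb * 1000                  # decimal PB -> TB
    raw_tb = usable_tb * replication_factor       # raw capacity including replicas
    drives = -(-raw_tb // drive_tb)               # ceiling division
    servers = -(-drives // drives_per_server)
    total_cost = drives * drive_cost_usd + servers * server_cost_usd
    return total_cost / usable_tb


if __name__ == "__main__":
    # 100 PB usable, 3-way replication, 16 TB drives at $300, 24 drives per $6,000 server.
    print(round(cost_per_usable_tb(100, 3, 16, 300, 6000, 24), 2))  # ~103.17 USD per usable TB
```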

Many applications running in these environments have built-in resiliency, anticipate hardware failures, and can self-heal at the application layer. Document and key-value stores, as well as analytics applications, feature server- and rack-aware, replication-based data resiliency to guard data against hardware failures. When data protection and self-healing are handled at the application layer, the need to build HA features into the storage layer is eliminated, which opens the door to using consumer-grade, commodity hardware that can fail without impacting service availability.
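To make "server- and rack-aware replication" concrete, the hedged Python sketch below places replicas of an object on distinct servers while spreading them across as many racks as possible, which is the basic policy such stores follow so that a single server or rack failure cannot take out every copy. The topology and function are illustrative assumptions, not any particular product's placement algorithm.

```python
# Minimal rack-aware replica placement: pick replica targets so that no two replicas
# share a server and, as far as the topology allows, no two share a rack.
# The cluster layout below is a made-up example.

def place_replicas(topology, replicas=3):
    """topology: dict mapping rack name -> list of server names.

    Returns `replicas` (rack, server) pairs, cycling across racks so that
    replicas land in different racks before any rack is reused.
    """
    # Round-robin across racks: take the 1st server of each rack, then the 2nd, ...
    candidates = []
    max_depth = max((len(s) for s in topology.values()), default=0)
    for depth in range(max_depth):
        for rack, servers in topology.items():
            if depth < len(servers):
                candidates.append((rack, servers[depth]))

    if len(candidates) < replicas:
        raise ValueError("not enough servers to satisfy the replication factor")
    return candidates[:replicas]


if __name__ == "__main__":
    cluster = {
        "rack-a": ["srv-1", "srv-2"],
        "rack-b": ["srv-3", "srv-4"],
        "rack-c": ["srv-5"],
    }
    print(place_replicas(cluster, replicas=3))
    # [('rack-a', 'srv-1'), ('rack-b', 'srv-3'), ('rack-c', 'srv-5')]
```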

More Stories By Farid Yavari

Farid Yavari is the Vice President of Technology at FalconStor Software. Farid's decades of experience in the high-tech industry include technology leadership in hyper-scale storage solutions and developing strategy and vision for enterprise-class data center deployments at scale. Prior to FalconStor, Farid was a senior member of eBay's data center infrastructure team, working closely with the storage industry to drive innovation and shape the future of storage technology. Over the years, Farid has actively shared his views and experience with his peers and the high-tech industry through speaking engagements at universities and industry forums.
