Optimizing VMware Environments for Peak SQL Server Performance

What if it were possible to have both high availability and high performance without the high cost and complexity?


VMware configurations designed to provide high availability often make it difficult to deliver the performance that mission-critical SQL Server applications require. But what if it were possible to have both high availability and high performance without the high cost and complexity normally involved?

This article explores two requirements for getting both for SQL Server applications while reducing capital and operational expenditures. The first is to implement a storage architecture within the VMware environment that is designed for both high availability (HA) and high performance (HP); the second is to tune that HA/HP architecture for peak performance.

Building the Foundation with an HA/HP Architecture
SQL Server administrators have many options for implementing HA in a VMware environment. VMware offers vSphere HA, Microsoft offers Windows Server Failover Clustering as a general-purpose HA solution, and SQL Server has its own HA capabilities with AlwaysOn Failover Clusters and AlwaysOn Availability Groups. Then there are the many third-party vendors that offer solutions purpose-built for HA and disaster recovery.

The problem is that many of these HA solutions lack full application availability protection, reduce operational flexibility or hurt performance. The performance overhead arises because the layers of abstraction in virtualized servers complicate the way virtual machines (VMs) interface with physical devices, including in a Storage Area Network (SAN) where the storage is also virtualized. Both VMware HA and AlwaysOn Availability Groups fall short in protecting the entire application stack and all application data during failover. And while Windows Server Failover Clustering is the ideal solution for fully addressing these issues, VMware imposes certain restrictions that reduce IT flexibility, rule out the highest-performing configurations and limit the mobility of VMs configured in the cluster. Let's look at the issues.

To enable compatibility with certain SAN and other shared-storage features, such as I/O fencing and SCSI reservations, vSphere uses a technology called Raw Device Mapping (RDM) to create a direct link through the hypervisor between the VM and the external storage system. This requirement to use RDM with shared storage applies when layering any HA clustering technology on a VMware environment, including a SQL Server failover cluster built on Windows Server Failover Clustering (WSFC).

RDM makes the storage appear to the guest operating system as if it were a virtual disk file in a VMware Virtual Machine File System (VMFS) volume, while the mapping maintains 100 percent compatibility with SAN commands, making virtualized storage access seamless to both the operating system and applications.

RDM can be made to work effectively, but achieving the desired result is not always easy, and may not even be possible. For example, RDM does not support disk partitions, so it is necessary to use "raw" or whole LUNs (logical unit numbers), and mapping is not available for direct-attached block storage and certain RAID devices. And because RDM interferes with VMware features that employ virtual machine disk (VMDK) files, SQL Server administrators may be unable to fully utilize desirable features like snapshots, VMware Consolidated Backup (VCB), templates and vMotion.
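
To make the mechanics concrete, here is a minimal sketch of what attaching a physical-mode RDM LUN to a VM looks like using VMware's pyVmomi Python SDK. The vCenter address, credentials, VM name, LUN device path, capacity and controller key are all hypothetical placeholders, and the exact spec should be verified against your vSphere version and the pyVmomi samples.

# Sketch: attach a whole LUN to a VM as a physical-mode RDM with pyVmomi.
# All connection details and device identifiers below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",          # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="secret")                         # certificate handling omitted
vm = si.content.searchIndex.FindByDnsName(None, "sql-node-1", True)

# RDM backing: a raw mapping to the whole LUN (RDM cannot use disk partitions).
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.600508b1001c0000"  # placeholder LUN path
backing.compatibilityMode = "physicalMode"   # passes SCSI commands through, as WSFC requires
backing.diskMode = "independent_persistent"

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1000                    # existing SCSI controller on the VM (placeholder)
disk.unitNumber = 1
disk.capacityInKB = 512 * 1024 * 1024        # placeholder; should match the LUN size

spec = vim.vm.device.VirtualDeviceSpec()
spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
spec.fileOperation = "create"
spec.device = disk

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
Disconnect(si)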

But the real problem for transaction-intensive applications like SQL Server is the inability to use the performance-enhancing Flash Read Cache when RDM is configured. The best way to get both HA and HP for SQL Server applications in a VMware environment is a SANless configuration that eliminates the need for shared SAN storage. In SANless configurations both the compute and storage resources are fully redundant (with no single points of failure and automatic failover), and there is the additional flexibility to achieve disaster protection by geographically dispersing the redundant resources.

SANless HA/HP architectures make it possible to create a shared-nothing, hardware-agnostic, single-site or multi-site cluster. Some solutions also make it possible to implement LAN/WAN-optimized, real-time block-level replication in either a synchronous or asynchronous manner. In effect, these solutions are capable of creating a RAID 1 mirror across the network, automatically changing the direction of the data replication (source and target) as needed after failover and failback.
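
As a conceptual illustration only (not any vendor's actual implementation), the behavior just described can be sketched in Python: writes to a mirrored volume are either acknowledged after the remote copy lands (synchronous) or queued for background shipment (asynchronous), and the source and target roles swap on failover:

import queue
import threading

class MirroredVolume:
    """Toy model of block-level replication across a network, RAID 1 style."""

    def __init__(self, local_store, remote_store, synchronous=True):
        self.local = local_store              # dict: block number -> bytes
        self.remote = remote_store
        self.synchronous = synchronous
        self._pending = queue.Queue()         # blocks awaiting asynchronous shipment
        threading.Thread(target=self._ship_async, daemon=True).start()

    def write(self, block, data):
        self.local[block] = data
        if self.synchronous:
            self.remote[block] = data         # acknowledge only after the remote copy is durable
        else:
            self._pending.put((block, data))  # WAN-friendly: ship in the background

    def _ship_async(self):
        while True:
            block, data = self._pending.get()
            self.remote[block] = data

    def failover(self):
        """After failover (or failback) the replication direction reverses."""
        self.local, self.remote = self.remote, self.local

In a real product the remote write travels over a LAN/WAN-optimized transport and the role change is coordinated by the cluster software, but the source/target swap captures the essential idea.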

Just as importantly, a SANless cluster is often easier to implement and operate with both physical and virtual servers. For example, with solutions that are integrated with WSFC, administrators can configure high-availability clusters using a familiar feature while avoiding the use of shared storage as a potential single point of failure. Once configured, most solutions automatically synchronize the local storage in two or more servers (in one or more data centers), making it appear to WSFC as a local or shared storage device.

A well-designed SANless HA/HP solution can actually be less expensive than traditional HA configurations owing to savings in two areas. The first is avoiding the high cost of creating a fully redundant SAN across the LAN and WAN; simply put, HA configurations using local storage with hard disk drives (HDDs) and/or solid state drives (SSDs) deliver superior performance at a lower cost. The second is licensing: because these solutions are designed to deliver carrier-class HA for AlwaysOn Failover Clusters in SQL Server Standard Edition, there is no need to use AlwaysOn Availability Groups in the more expensive Enterprise Edition.

The performance advantage of a SANless HA/HP solution is shown in the benchmark results below. Benchmark testing reveals the 60-70 percent performance penalty associated with using SQL Server AlwaysOn Availability Groups to replicate data in a SAN environment. The same results show that an HA configuration using local storage performs nearly as well as an unprotected application. To provide an accurate comparison, each alternative utilized identically performing HDDs; the use of SSDs can deliver an even more significant performance advantage over the SAN-based AlwaysOn Availability Group configuration.

Benchmark tests comparing SQL Server's AlwaysOn Availability Groups with SANless clusters show the throughput advantage possible with replication techniques designed for HA/HP.
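
To make the comparison concrete with purely hypothetical numbers (the raw benchmark figures are not published here), the penalty can be expressed as the throughput lost relative to an unprotected baseline:

# Hypothetical transactions-per-second figures, for illustration only.
unprotected_tps = 10_000
alwayson_ag_tps = 3_500      # roughly a 65% penalty, in line with the 60-70% range cited
sanless_tps = 9_300          # "nearly as well as an unprotected application"

def penalty_pct(protected_tps, baseline_tps=unprotected_tps):
    return (1.0 - protected_tps / baseline_tps) * 100.0

print(f"AlwaysOn AG penalty:     {penalty_pct(alwayson_ag_tps):.0f}%")
print(f"SANless cluster penalty: {penalty_pct(sanless_tps):.0f}%")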

The SANless cluster tested is able to deliver this performance with complete application and data transparency because its architecture implements a low-level, high-efficiency driver that sits immediately below NTFS. As writes occur on the primary server, the driver writes one copy of the block to the local VMDK and simultaneously sends another copy across the network to the VMDK on the remote secondary server.
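
A highly simplified sketch of that write path follows; the block layout and network transport here are stand-ins for whatever the real driver does below NTFS, and the secondary server's address is a placeholder:

import socket

class DualWriter:
    """Toy model of a filter below the filesystem: every block write is committed
    locally and the same block is shipped to the secondary server at once."""

    def __init__(self, local_path, secondary_addr, block_size=4096):
        self.local = open(local_path, "r+b")
        self.block_size = block_size
        self.remote = socket.create_connection(secondary_addr)  # e.g. ("10.0.0.2", 9000)

    def write_block(self, block_no, data):
        # Copy 1: the local VMDK-backed volume.
        self.local.seek(block_no * self.block_size)
        self.local.write(data)
        self.local.flush()
        # Copy 2: the same block, sent across the network toward the remote VMDK.
        self.remote.sendall(block_no.to_bytes(8, "big") + data)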

Beyond performance, SANless clusters have many other advantages. For example, those that use block-level replication technology fully integrated with WSFC are able to protect the entire SQL Server application instance, including all databases, logons and agent jobs, in an integrated fashion. Contrast this approach with AlwaysOn Availability Groups, which protect only the SQL Server databases and not other disk-resident data that may be application-specific.

Tuning the Configuration for Peak Performance
Just as virtualization's layers of abstraction make accessing storage more complex, so too do they obscure how the physical resources are performing. This can make optimizing resources for peak performance a never-ending exercise in trial-and-error.

The trial-and-error process is nearly impossible to avoid with traditional application performance management tools that rely on thresholds of discrete events to isolate performance issues. Individual thresholds are unable to account for the interrelated nature of resources in virtualized environments, where a change to one often has a significant impact on another. So even when these tools alert IT to a performance issue, they are incapable of providing meaningful insight into the issue or guidance for resolution.

Advanced machine learning analytics (MLA) software overcomes these and other limitations by automatically and continuously learning the many complex behaviors and interactions among all interrelated resources. This self-learning and automatic adaptation is what enables MLA-based solutions to identify the root cause(s) of performance issues more accurately and to provide actionable recommendations for resolving them.

Most machine learning analytics systems work by aggregating, normalizing, and then correlating and analyzing hundreds of thousands of data points from numerous resources across network, storage, compute and application layers. While gathering and analyzing this wealth of data, the MLA system learns what constitutes normal behavior patterns, thereby establishing a baseline for detecting anomalies and finding root causes. Some MLA systems also enable human supervision to accelerate the learning process and improve results.
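
A minimal sketch of the difference between a fixed threshold and a learned baseline, using a rolling mean and standard deviation as a crude stand-in for the much richer models a real MLA product employs (all values hypothetical):

from collections import deque
import statistics

STATIC_THRESHOLD_MS = 20.0                   # fixed alert level on storage latency

class LearnedBaseline:
    """Flags a sample as anomalous when it deviates sharply from recent behavior."""

    def __init__(self, window=288, warmup=5, sigmas=3.0):
        self.history = deque(maxlen=window)  # e.g. one day of 5-minute samples
        self.warmup = warmup                 # unrealistically short, for demonstration
        self.sigmas = sigmas

    def is_anomaly(self, value):
        if len(self.history) < self.warmup:  # still learning what "normal" looks like
            self.history.append(value)
            return False
        mean = statistics.fmean(self.history)
        stdev = statistics.pstdev(self.history) or 1e-9
        self.history.append(value)
        return abs(value - mean) > self.sigmas * stdev

baseline = LearnedBaseline()
for latency_ms in (4.1, 4.3, 3.9, 4.0, 4.2, 18.0):
    static_alert = latency_ms > STATIC_THRESHOLD_MS     # never fires on these samples
    learned_alert = baseline.is_anomaly(latency_ms)     # fires on the unusual 18.0 ms sample
    print(latency_ms, static_alert, learned_alert)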

In addition to identifying root causes, some MLA systems are able to simulate and predict the impact of changes to resources and configurations. This is key to anticipating and avoiding performance and reliability issues rather than reacting to problems after they occur. Traditional monitoring tools, in contrast, are reactive by design, built primarily to alert on current events within the infrastructure. They are also manually intensive: IT administrators must run multiple reports and then compare the results by hand to find and fix under- and over-provisioning of vCPU and vMemory resources, a time-consuming and error-prone approach.
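
As a simplified illustration of the over/under-provisioning comparison those manual reports boil down to (the VM names, utilization samples and thresholds are all hypothetical):

# Hypothetical per-VM vCPU utilization samples (percent), e.g. at 5-minute intervals.
vm_cpu_samples = {
    "sql-prod-01": [78, 85, 92, 88, 95, 90],
    "sql-dev-03":  [3, 5, 4, 2, 6, 4],
    "sql-rpt-02":  [40, 45, 38, 50, 42, 47],
}

def rightsizing_hint(samples, low=10, high=85):
    peak = max(samples)
    if peak < low:
        return "over-provisioned: consider reclaiming vCPU/vMemory"
    if peak > high:
        return "under-provisioned: consider adding vCPU or rebalancing the host"
    return "sized appropriately"

for vm, samples in vm_cpu_samples.items():
    print(f"{vm}: {rightsizing_hint(samples)}")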

MLA systems can identify a wide range of performance issues involving compute or storage contention and incorrectly configured VMs, as well as problems arising from migrated VMs, newly provisioned VMs, "noisy neighbors," misconfigured applications or hardware degradation. Most MLA systems also help improve resource efficiency by identifying idle VMs or wasted storage.

SQL administrators often employ host-based caching (HBC), all-flash arrays and/or hybrid storage to improve performance. In SAN environments, HBC normally delivers the greatest improvements in throughput performance by maximizing I/O operations per second (IOPS) for some, but not all applications. And therein lies the challenge.

The performance improvement is greatest when the cache can hold enough "hot" data to produce a meaningful increase in IOPS. But testing every application that might fit such criteria with different HBC configurations in an attempt to quantify the improvement is an arduous endeavor in organizations running hundreds or thousands of applications.

Because machine learning is able to evaluate the many variables involved, MLA systems make it possible to identify those applications that would benefit the most from host-based caching. Most systems are able to recommend a cost-effective HBC configuration, and some are even able to estimate the likely increase in IOPS, enabling SQL administrators to prioritize the implementation effort.
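
A back-of-the-envelope version of that prioritization: estimate each workload's cache hit ratio from how much of its I/O lands in a cache-sized "hot" set, then estimate the effective IOPS if those hits are served from flash. The workloads and figures below are hypothetical.

def effective_iops(hit_ratio, cache_iops, backend_iops):
    """Blended read IOPS when a fraction of I/O is served from host-based cache
    (latency-weighted, i.e., a harmonic combination of the two tiers)."""
    return 1.0 / (hit_ratio / cache_iops + (1.0 - hit_ratio) / backend_iops)

BACKEND_IOPS = 5_000    # hypothetical shared SAN, HDD-backed
CACHE_IOPS = 80_000     # hypothetical local flash read cache

# (application, fraction of I/O falling within a cache-sized hot set)
workloads = [("sql-oltp", 0.90), ("sql-reporting", 0.35), ("sql-archive", 0.05)]

ranked = sorted(
    ((name, effective_iops(hot, CACHE_IOPS, BACKEND_IOPS)) for name, hot in workloads),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, iops in ranked:
    print(f"{name}: ~{iops:,.0f} effective read IOPS with HBC")

In practice an MLA system derives the hot-set fraction from observed I/O traces rather than assuming it, which is what makes the prioritization trustworthy at scale.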

Conclusion
Peak performance is impossible to achieve on a shaky foundation, so it is critically important to make certain the infrastructure's architecture is designed for both high availability and high performance. But as with most things, the SQL Server performance devil is in the details of the many physical resource configurations throughout the HA/HP infrastructure. By taking the guesswork out of performance tuning, machine learning analytics makes it easier than ever to achieve peak performance.

Is your VMware infrastructure delivering satisfactory performance for all of your SQL Server applications? You're in good company if the answer is no. The recommendations made here are easy to implement in a development or pilot environment, so there is little to lose and much to gain by giving them a try. And because most vendors today offer free trials of their performance-tuning tools, there is also zero financial risk in trying.

More Stories By Tony Tomarchio

Tony Tomarchio is the Director of Field Engineering for SIOS Technology. He is responsible for defining and delivering technical pre-sales services, support and best practices to SIOS customers, prospects and partners. He has more than a decade of experience providing systems management and high availability solutions to enterprise customers. Prior to joining SIOS, he served as the Global Sales Engineering lead for the Oracle systems management practice. Tony joined Oracle through the acquisitions of Sun Microsystems and Aduva, Inc., where he served as the lead Sales Engineer / Technical Account Manager and played a critical role in product adoption and evolution.
