
Site Reliability Engineering: DevOps 2.0
By Saba Anees

Has there ever been a better time to be in DevOps? TV shows like “Person of Interest” and “Mr. Robot” are getting better at showing what developers actually do, using chunks of working code. Michael Mann’s “Blackhat” (2015) won praise from Google’s security team for the technical accuracy of several scenes. Look around and you’ll discover elements of DevOps culture filtering out into wider society, such as people in all walks of life discussing their uptime or fast-approaching code lock.

On the other hand, perhaps the biggest thorn in the side of DevOps is that developers and operations teams don’t normally get along well. Developers want to rush ahead and compile some groundbreaking code under extremely tight schedules, while operations teams try to slow everyone down to identify systemic risks from accidents or malicious actors. Both teams want to end up with a better user experience, but getting there becomes a power struggle over what that experience truly means.

The dream that brought DevOps together is for someone who can be half dev and half ops. That split desire is exactly the point of the SRE (site reliability engineer).

Defining the SRE
In introducing the term SRE, Google’s VP of Engineering, Ben Treynor, stated:

“It’s what happens when you ask a software engineer to design an operations function…. The SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.”

Way back in 2010, Facebook SRE Mark Schonbach explained what he did this way:

“I’m part of a small team of Site Reliability Engineers (SRE) that works day and night to ensure that you and the other 400+ million users around the world are able to access Facebook, that the site loads quickly, and all of the features are working…. We regularly hack tools on the fly that help us manage and perform complex maintenance procedures on one of the largest, if not the largest memcached footprints in the world. We develop automated tools to provision new servers, reallocate existing ones, and detect and repair applications or servers that are misbehaving.”
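The automation Schonbach describes follows a simple pattern: probe a service, and repair or restart it when the probe fails. Here is a minimal, hypothetical sketch of that detect-and-repair loop; the health-check URL and service name are placeholders for illustration, not details from the article.

```python
# Hypothetical detect-and-repair sketch: probe a service's health endpoint,
# and restart the service via the init system if the check fails.
import subprocess
import urllib.request
import urllib.error


def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def remediate(service: str) -> None:
    """Restart a misbehaving service (assumes a systemd host)."""
    subprocess.run(["systemctl", "restart", service], check=True)


# Example (URL and service name are placeholders):
# if not is_healthy("http://localhost:8080/health"):
#     remediate("example-app")
```

Real tooling at Facebook's scale layers rate limits, alerting, and safeguards on top of a loop like this, but the core idea, substituting automation for a human pager response, is the same.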

Where Did SREs Come From?
Reliability engineering is a concept that grew out of the operations world and has been around for more than 100 years. It became more closely connected with electronic systems after World War II, when the IEEE created the Reliability Society. In the past 10 years, five nines (99.999% availability) became the gold standard for application performance management. That standard led to the creation of a class of operations experts who knew enough code to recover the site and put the last stable release back into production as fast as possible.

Treynor explained the impetus for creating this new category at Google with his typical deadpan humor: “One of the things you normally see in operations roles as opposed to engineering roles is that there’s a chasm not only with respect to duty, but also of background and of vocabulary, and eventually, of respect. This, to me, is a pathology.”

Which Toolsets Do SREs Use?
For SREs, stability and uptime are top priorities. However, they should be able to take responsibility and code their own way out of hazards, instead of adding to the development team’s to-do list. At Google, SREs are often software engineers with a layer of network training on top. Typically, Google software engineers must demonstrate proficiency in:

  1. Go (Google’s own language) and object-oriented languages such as C++, Python or Java

  2. A secondary language like JavaScript, CSS and HTML, PHP, Ruby, Scheme or Perl

  3. Advanced fields like AI research, cryptography, compilers or UX design

  4. Getting along with other coders

On top of those proficiencies, Google’s SREs must have experience in network engineering, Unix system administration, or more general networking/ops skills such as LDAP and DNS.

The Critical Role of SRE
Downtime costs businesses around $300,000 per hour, according to a report from Emerson Network Power. The most obvious impact comes when traffic spikes bring down e-commerce sites, which was covered in a recent AppDynamics white paper. However, Treynor also pointed out how standard dev-vs.-ops friction can be costly to businesses in other ways. The classic conflict starts with the support checklist that ops presents to dev before feature updates are released. Developers win when users like newly developed features, the sooner the better. Meanwhile, operations wins when there are maximum nines in their uptime reports. All change brings instability; how do you align their interests?

Treynor’s answer is a relief for those with compensation tied to user satisfaction metrics, but not so much for those with heart conditions. He said,

“100% is the wrong reliability target for basically everything. Perhaps a pacemaker is a good exception! But, in general, for any software service or system you can think of, 100% is not the right reliability target because no user can tell the difference between a system being 100% available and, let’s say, 99.999% available. Because typically there are so many other things that sit in between the user and the software service that you’re running that the marginal difference is lost in the noise of everything else that can go wrong.”

This response shifts the focus from specific uptime metrics, which may not act as accurate proxies for user expectations, to a reliability index based on market realities. Treynor explained,

“If 100% is the wrong reliability target for a system, what, then, is the right reliability target for the system? I propose that’s a product question. It’s not a technical question at all. It’s a question of what will the users be happy with, given how much they’re paying, whether it’s direct or indirect, and what their alternatives are.”
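The arithmetic behind Treynor’s argument is worth making concrete: each extra nine shrinks the annual downtime allowance by a factor of ten, while users stop noticing long before 100%. A quick sketch of that error-budget math:

```python
# Error-budget arithmetic: minutes of downtime per year allowed at a
# given availability target. Each extra nine cuts the budget tenfold.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600


def allowed_downtime_minutes(availability: float) -> float:
    """Annual downtime budget, in minutes, for an availability target."""
    return MINUTES_PER_YEAR * (1 - availability)


for target in (0.999, 0.9999, 0.99999):
    print(f"{target:.3%} -> {allowed_downtime_minutes(target):.1f} min/year")
```

Three nines leaves roughly 525 minutes of downtime a year; five nines leaves barely five. The product question Treynor poses is which of those budgets users will actually pay for.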

Who Is Hiring SREs?
The simple answer is “everyone”: from software/hardware giants like Apple to financial portals like Morningstar to non-profit institutions like the Lawrence Berkeley National Laboratory. Berkeley Lab is a great example of an organization that is at the cutting edge of energy research yet also maintains some very old legacy systems. Assuring reliability across several generations of technologies can be an enormous challenge. Here’s a look at what SREs at Berkeley Lab are responsible for:

  • Apply Linux system administration skills to monitor and manage the reliability of the systems under the responsibility of the Control Room Bridge.

  • Develop and maintain monitoring tools used to support the HPC community within NERSC, using programming languages like C, C++, Python, Java or Perl.

  • Provide input on the design of software, workflows and processes that improve the group’s monitoring capability and ensure the high availability of the HPC services provided by NERSC and ESnet.

  • Support the testing and implementation of new monitoring tools, workflows and capabilities for keeping production systems highly available.

  • Assist in direct hardware support of data clusters by managing component upgrades and replacements (DIMMs, hard drives, cards, cables, etc.) to ensure the efficient return of nodes to production service.

  • Help investigate and evaluate new technologies and solutions that push the group’s capabilities forward, getting ahead of users’ needs and keeping staff incentivized to transform, innovate and continually improve.

Contrast that skill profile with an online company like Wikipedia, where an SRE assignment tends to be less technical and more diplomatic:

  • Improve automation, tooling and processes to support development and deployment

  • Form deep partnership with engineering teams to work on improving user site experience

  • Participate in sprint planning meetings, and support intra-department coordination

  • Troubleshoot site outages and performance issues, including on-call response

  • Help with the provisioning of systems and services, including configuration management

  • Support capacity planning, profiling of site performance, and other analysis

  • Help with general ops issues, including tickets and other ongoing maintenance tasks

Within the past year, there has been a marked shift to a more strategic level of decision-making, reflecting the increasing automation of customer requests and failover procedures. Even at traditional companies like IBM, SREs work with some of the newest platforms available, thanks to advancing IoT agendas. For example, one opening for an SRE at IBM in Ireland requires experience in OpenStack Heat, UrbanCode Deploy, Chef, Jenkins, ELK, Splunk, collectd and Graphite.
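Two of the tools in that IBM listing, collectd and Graphite, illustrate how simple the monitoring plumbing underneath an SRE toolchain can be. Graphite’s plaintext protocol is just one “path value timestamp” line per data point, sent to its Carbon listener (TCP port 2003 by default). A minimal sketch, with the hostname and metric path as placeholders:

```python
# Minimal sketch of Graphite's plaintext protocol: one
# "metric.path value timestamp" line per data point, over TCP.
import socket
import time


def format_metric_line(metric: str, value: float, timestamp: int) -> str:
    """Render one data point in Graphite's plaintext line format."""
    return f"{metric} {value} {timestamp}\n"


def send_metric(host: str, metric: str, value: float, port: int = 2003) -> None:
    """Push a single metric to a Graphite/Carbon plaintext listener."""
    line = format_metric_line(metric, value, int(time.time()))
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(line.encode("ascii"))


# Example (host and metric path are placeholders):
# send_metric("graphite.example.com", "site.checkout.latency_ms", 87.5)
```

Agents like collectd batch and buffer these writes in production, but the wire format they ultimately emit is this simple.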

How SREs Are Changing
The online world is quite different now than when SREs entered the scene nearly a decade ago. Since then, mobile has redefined development cycles, and easy access to cloud-based data centers has brought microservices into mainstream IT infrastructure. Startups regularly come out of the gate using REST and JSON as the preferred protocols for their mobile apps. In accordance with Lean Startup principles, DevOps teams are often smaller and more focused, functioning as collective SREs.

You’ll find there’s a great deal more collaboration and less conflict between development and operations, simply because the continuous delivery model has collapsed the responsibilities of both into a single cycle. The term DevOps is likely to disappear as the two distinct divisions merge in a new world where UX is everything and updates may be pushed out weekly. Regardless of how many nines appear in any given SRE’s job description, this career path appears to offer maximum reliability along with job security.

The post Site Reliability Engineering: DevOps 2.0 appeared first on Application Performance Monitoring Blog | AppDynamics.

