AppDynamics Pumps up the Jam in San Francisco

It’s been a week since we hosted AppJam Americas, our first North American user conference, in San Francisco. With me as master of ceremonies, and a minor wardrobe malfunction at the start (see video at the end of this post), the entire day was a huge success for us and our customers. One thing that stuck in my mind was that applications today have become way more complex to manage, and strategic monitoring has become key to mastering that complexity. Simply put, SOA + Virtualization + Big Data + Cloud + Agile != Easy.

The day started with Jyoti Bansal, our CEO and Founder, outlining his vision for AppDynamics to be the world’s #1 solution for managing modern web applications. The simple facts are that applications have become more dynamic, distributed and virtual. All of these factors have increased their operational complexity, and log files and legacy monitoring solutions are ill-suited to the task.

Jyoti then outlined our core design principles around business transaction monitoring, self-learning intelligence, and the need to keep application management simple. He then suggested what the audience could expect from AppJam: “AppJam is about sharing knowledge, learning best practices, guiding our direction and Jamming.” (We’re pretty sure by “jamming” he meant “partying.”)

With the intro from Jyoti done, it was time for me to take a nosedive onto the stage and introduce our first customer speaker – Ariel Tsetlin from Netflix.

How Netflix Operates & Monitors in the Cloud

With 27 million customers around the world, Netflix’s growth over the past three years has been meteoric. In fact, they found that they couldn’t build data centers fast enough. Hence, they moved to the public cloud on AWS for better agility.

In his session, Ariel talked about Netflix’s architecture in the cloud and how they built their own PaaS, in terms of apps and clusters, on top of Amazon’s IaaS. One unique thing Netflix does is bake their OS, middleware, apps and monitoring agents into a single image, rather than using a tool like Chef or Puppet to manage application configuration and deployment separately from the underlying OS, middleware and tools. Everything is automated and managed at the instance level, with developers given the freedom and responsibility to deploy whenever they want to. That’s pretty cool stuff when you consider that developers now manage their own capacity and auto-scaling within the Cloud.
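
Netflix didn’t walk through their exact tooling on stage, but the “baked image” idea is simple to picture: configure an instance once with the OS, middleware, app and monitoring agent already installed, then snapshot it into an image that every new instance boots from. Here’s a minimal sketch of that bake step using boto3 – the instance ID, region and naming scheme are hypothetical examples, not Netflix’s pipeline:

```python
# Minimal sketch of "baking" a fully configured instance into a reusable image.
# Assumes the instance already has the OS, middleware, app and monitoring agent
# installed. The instance ID, region and naming scheme are hypothetical.
import boto3

def bake_image(instance_id, app_name, version):
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.create_image(
        InstanceId=instance_id,
        Name=f"{app_name}-{version}-baked",
        Description="OS + middleware + app + monitoring agent in one image",
        NoReboot=False,  # reboot the instance for a consistent snapshot
    )
    return response["ImageId"]

if __name__ == "__main__":
    print("New baked image:", bake_image("i-0123456789abcdef0", "frontend", "1.42.0"))
```

New capacity then launches straight from the baked image, which is what makes instance-level automation and developer-owned auto-scaling practical.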

Ariel then talked about the assumption that failure is inevitable in the Cloud, with the need to plan and design around the fact that every part of the application can and will fail at some point. Testing for failure through “monkey theory” and Netflix’s “Simian Army” allows them to simulate failure at every level of the application, from randomly killing instances to taking out entire availability zones in AWS.
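
Netflix’s Simian Army is far more sophisticated than this, but the core “monkey” idea fits in a few lines: pick a random running instance from a cluster and terminate it, then check that the service stays healthy. A hedged illustration with boto3 follows – the tag key, value and region are hypothetical, and it should only ever be pointed at disposable test capacity:

```python
# Chaos-monkey-style sketch: terminate one random instance in a tagged cluster.
# An illustration of the idea only, not Netflix's Simian Army. The tag key,
# value and region are hypothetical; point this only at disposable capacity.
import random
import boto3

def kill_random_instance(cluster_tag, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:cluster", "Values": [cluster_tag]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"] for res in reservations for inst in res["Instances"]
    ]
    if not instance_ids:
        return None
    victim = random.choice(instance_ids)
    ec2.terminate_instances(InstanceIds=[victim])
    return victim

if __name__ == "__main__":
    print("Terminated:", kill_random_instance("api-test"))
```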

From a monitoring perspective, Netflix uses internally developed tools and AppDynamics, which are also baked into their AWS images. Doing so allows developers to live and die by monitoring in production through automated alerts and problem discovery. What’s perhaps different is that Netflix focuses their monitoring at the service level (e.g., app cluster), rather than at the infrastructure level – so they’re really not interested in CPU or memory unless it’s impacting their end users or business transactions.

Finally, Ariel spoke about AppDynamics at Netflix, touching on the fact that they monitor over 1 million metrics per minute across 400+ business transactions and 300+ application services, giving them proactive alerts with URL drill-down into business transaction latency and errors from self-learned baselines. Overall, it was a great session for anyone looking to migrate and operate their application in the Cloud.
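
AppDynamics’ baselining is considerably more involved than this, but the underlying “self-learned baseline” idea can be sketched simply: learn the normal range of a business transaction metric from recent history, and alert when the current value lands several standard deviations outside it. A minimal, hypothetical illustration:

```python
# Minimal sketch of baseline-deviation alerting on a business transaction metric.
# Not AppDynamics' actual algorithm - just the basic idea of a self-learned
# baseline: flag values far outside the recent mean +/- k standard deviations.
from statistics import mean, stdev

def is_anomalous(history, current_value, k=3.0):
    """Return True if current_value deviates more than k sigma from history."""
    if len(history) < 2:
        return False  # not enough data to learn a baseline yet
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return current_value != baseline
    return abs(current_value - baseline) > k * spread

# Example: recent response times (ms) for a hypothetical "checkout" transaction
recent_latencies = [210, 195, 220, 205, 215, 198, 225, 207]
print(is_anomalous(recent_latencies, 212))  # False: within the normal range
print(is_anomalous(recent_latencies, 980))  # True: latency spike -> alert
```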

When Big Data Meets SOA

Next up was Bob Hartley, development manager from FamilySearch, who gave an excellent talk about managing SOA and Big Data behind the world’s largest genealogy architecture. With almost 3 billion names indexed and 550+ million high-resolution digital images, FamilySearch has over 20 petabytes of data, which needs to be managed by their Java and Node.js distributed architecture spanning 5,000 servers. What’s scary is that this architecture and data is growing at a rapid pace, meaning application performance and scalability are fundamental to the success of FamilySearch.

After a brief intro, Bob started to talk about his Big Data architecture in terms of the technologies they use to manage search queries, images, and people records: clusters of Apache Lucene, Solr, and custom MapReduce jobs, combined with traditional relational databases such as Oracle, MySQL, and PostgreSQL.
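
For a feel of what a search query against that stack looks like, here’s a small, hypothetical example of hitting Solr’s standard select endpoint over HTTP – the host, core name and field are placeholders, not FamilySearch’s actual schema:

```python
# Hypothetical example of querying a Solr core over its standard HTTP API.
# The host, core name ("names") and field ("surname") are placeholders and
# not FamilySearch's actual schema.
import requests

def search_names(surname, rows=10):
    resp = requests.get(
        "http://localhost:8983/solr/names/select",
        params={"q": f"surname:{surname}", "rows": rows, "wt": "json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

if __name__ == "__main__":
    for doc in search_names("Hartley"):
        print(doc)
```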

Bob then talked about his team’s mission – to enable business agility through visibility, responsiveness, standardization, and vendor independence. At the top of that list was providing joy for customers and stakeholders by delivering the features that matter, faster.

Bob also emphasized the need for repeatable, reliable and automated processes, as well as the need to monitor everything so his team could manage the performance of their SOA and Big Data application through continuous agile release cycles. FamilySearch has gone from a 3-month release cycle to a continuous delivery model in which changes can be deployed in just 40 minutes. That’s pretty mind-blowing when you consider the size and complexity of their environment!

What’s interesting is that Release != Deploy at FamilySearch; they incrementally roll out new features to different sets of users using feature flags, allowing them to test and tease features before making them available to everyone. Monitoring is at the heart of their continuous release cycle, with Dev and Ops using baselines and trending to determine the impact of new features on application performance and scalability.
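
Bob didn’t go into FamilySearch’s flag implementation, but the “Release != Deploy” pattern is commonly built on a stable hash of the user ID, so each user gets the same yes/no answer as the rollout percentage is dialed up. A minimal, hypothetical sketch:

```python
# Minimal sketch of percentage-based feature flags for incremental rollout.
# Not FamilySearch's implementation - just the common pattern: hash the user ID
# so each user gets a stable decision as the rollout percentage is increased.
import hashlib

FLAGS = {"new-pedigree-view": 25}  # hypothetical flag at 25% rollout

def is_enabled(flag_name, user_id):
    percentage = FLAGS.get(flag_name, 0)
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percentage

if __name__ == "__main__":
    for user in ["alice", "bob", "carol", "dave"]:
        print(user, is_enabled("new-pedigree-view", user))
```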

In terms of the evaluation process, the company looked at 20 different APM vendors over a 6-month period before finally settling on AppDynamics, thanks to our dynamic discovery, baselining, trending, and alerting of business transactions. As Bob said, “AppDynamics gave us valuable performance data in less than one day. The closest competitors took over 2 weeks just to install their tools.”

Today, a single AppDynamics management server is used in production to monitor over 5,000 servers, 40+ application services, and 10 million business transactions a day. Since deployment, FamilySearch has found dozens of problems they’d had for years, and has managed to scale their application by 10x without increasing server resources. They’ve also seen MTTD drop from days to minutes, and MTTR drop from months to hours and minutes.

Bob finished his talk with his lessons learned for managing SOA, Big Data and Agile applications: “Keep Architecture Simple,” “Speed of delivery is essential,” “Systems will eventually fail,” and “Working with SOA, Big Data and Agile is hard.”

How AppDynamics is accelerating DevOps culture at Edmunds.com

After lunch, John Martin, Senior Director of Production Engineering, spoke about DevOps culture at Edmunds.com and how AppDynamics has become central to driving team collaboration. After a brief architecture overview outlining his SOA environment of 30 application services, John outlined what DevOps meant to him and his team – “DevOps is really about Collaboration – the most challenging issues we faced were communication.” Openly honest and deeply passionate throughout his session, John talked about three key challenges his team faced over the years that were responsible for the move to DevOps:

1. Infrastructure Growth

2. Communication Failure

3. Go Faster & Be Efficient

In 2005 Edmunds.com had just 30 servers; by the end of this year that figure will have risen to 2,500. Through release automation using tools such as BladeLogic and Chef, John and his team are now able to perform a release in minutes, versus the 8 hours it took back in 2005.

John gave an example of a communication failure in which development was preparing for a major release at Edmunds.com on a new CMS platform. This release was performance-tested just two weeks prior to go-live. Unfortunately, the new platform showed massive scalability limitations, forcing Ops to work around the clock to over-provision resources as a tactical fix. Fortunately, the release was delivered on time and the business was happy. However, they suffered as a technology organization from finding architecture flaws so late in the game – “We needed a clear picture of what went wrong and how we were going to prevent such breakdown in future.”

Another mistake with a release in 2010 forced a major rethink between development and operations. It was this occurrence that caused Edmunds.com to get really serious about DevOps. In fact, the technical leads got together and reorganized specialized teams within Dev and Ops to resolve deployment issues and shed preconceptions about who should do what. The result was improved relationships, better tooling, and a clearer perspective on how future projects could work.

John then touched on the tools that were accelerating DevOps culture, specifically Splunk for log files and AppDynamics for application monitoring. “AppDynamics provides a way for Dev and Ops to speak the same language. We’ve saved hundreds of hours in pre-release tests and discovered many new hotspots like the performance of our inventory business transaction which increased by 111%.” In fact, within the first year, AppDynamics generated an ROI of $795,166, with year-2 savings estimated at a further $420k. John laughed, “As you can see, AppDynamics wasn’t a bad investment.”

John ended his session with 5 tips for ensuring that DevOps succeeds in an organization: Be honest, communicate early and often, educate, criticize constructively, and create champions. Overall, a great session on why DevOps is needed in today’s IT teams.

Zero to Production APM in 30 days (while sending half a billion messages per day)

The final customer session of the day came from Kevin Siminski, Director of Infrastructure Operations at ExactTarget, and it was definitely worth waiting for. Kevin kicked off his talk by describing a weekly product tech sync meeting he’d had with his COO. The meeting was full of stakeholders from development and operations who were discussing a problem they were currently experiencing in production.

“I literally got my laptop out, brought up the AppDynamics UI, and in one minute we’d found the root cause of the problem,” Kevin said. Not a bad way to make the point of why Application Performance Management (APM) is so important in 2012.

Kevin then gave a brief intro to ExactTarget and the challenges of powering some of the world’s top brands like Nike, Best Buy and Priceline.com. ExactTarget’s .NET messaging environment is highly virtualized, with over 5,000 machines that generate north of 500 million messages per day across multiple terabytes of databases.

Kevin then touched on the role of his global operations team and how his team’s responsibility had shifted over the last four years. “My team went from just triaging system alerts to taking a more proactive approach on how we managed emails and our business. Today my team actively collaborates with development, infrastructure and support teams.” All these teams are now focused and aligned on innovation, stability, performance and high availability.

Kevin then outlined his 30-day implementation plan for deploying AppDynamics across his entire environment using a single dedicated systems engineer and an AppDynamics SaaS management server for production. Week 1 was spent onboarding the IT security team, reviewing configuration management, and testing agent deployment to validate network and security paths. Week 2 involved deploying agents to a few of the production IIS pools and validating data collection on the AppDynamics management server. Week 3 saw all agents pushed to every IIS pool with the collection mechanism set to disabled; the configuration management team then took over and “owned” the deployment process for go-live. Week 4 saw all services and AppDynamics agents enabled during a production change window, with all metrics closely monitored throughout the week to ensure no impact or unacceptable overhead.
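
The key move in weeks 3 and 4 – deploying agents everywhere with collection disabled and only switching them on during a change window – is worth calling out: deployment and activation are decoupled, so the risky step is small and reversible. A hypothetical sketch of that toggle (the pool names and states are invented):

```python
# Hypothetical sketch of "deploy disabled, enable during the change window":
# agents are rolled out to every IIS pool with collection off, and activation
# is a separate, reversible step. Pool names and states are invented examples.
iis_pools = {
    "email-send":   {"agent_deployed": True, "collection_enabled": False},
    "tracking-api": {"agent_deployed": True, "collection_enabled": False},
    "web-frontend": {"agent_deployed": True, "collection_enabled": False},
}

def enable_collection(pools):
    """Flip collection on only where an agent is already deployed."""
    for name, state in pools.items():
        if not state["agent_deployed"]:
            print(f"skipping {name}: no agent deployed yet")
            continue
        state["collection_enabled"] = True
        print(f"{name}: collection enabled")

# Run inside the production change window, then watch overhead metrics closely.
enable_collection(iis_pools)
```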

AppDynamics’ first mission was to monitor the ExactTarget application as its mission-critical database was upgraded from SQL Server 2003 to 2008. It was a high-risk migration, as Kevin’s team was unable to assess the full risk due to legacy application components, so with all hands on deck they watched AppDynamics as the migration happened in real time. As the switch was made, application calls per minute and response time remained constant, but application errors began to spike. By drilling down on these errors in AppDynamics, the dev team was quickly able to locate where they were coming from and resolve the application exceptions.
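
That pattern – throughput and response time holding steady while the error rate jumps right after a change – is straightforward to check for programmatically. A hypothetical sketch comparing per-minute metrics before and after a cutover window (the numbers are invented, not ExactTarget’s):

```python
# Hypothetical sketch: compare per-minute metrics before and after a cutover
# and flag the case described above - steady throughput and latency, spiking
# errors. The sample numbers are invented, not ExactTarget's real data.
def error_spike(before, after, tolerance=0.10, error_factor=3.0):
    """before/after: dicts of calls_per_min, avg_response_ms, errors_per_min."""
    def steady(key):
        return before[key] > 0 and abs(after[key] - before[key]) / before[key] <= tolerance

    errors_jumped = after["errors_per_min"] > error_factor * max(before["errors_per_min"], 1)
    return steady("calls_per_min") and steady("avg_response_ms") and errors_jumped

pre_cutover = {"calls_per_min": 48_000, "avg_response_ms": 120, "errors_per_min": 12}
post_cutover = {"calls_per_min": 47_500, "avg_response_ms": 118, "errors_per_min": 420}

if error_spike(pre_cutover, post_cutover):
    print("Errors spiked after the cutover -> drill down into the new exceptions")
```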

Today, AppDynamics is used for DevOps collaboration and feedback loops so engineers get to see the true impact of their releases in a production environment, a process that was requested by a product VP outside of Kevin’s global operations team. Overall, Kevin relayed an incredible story of how APM can be deployed rapidly across the enterprise to achieve tangible results in just 30 days.

A nice statistic that I realized later in the evening was that the total number of servers being monitored by AppDynamics across our four customer speakers was well over 20,000 nodes. Having been in the APM market for almost 10 years, I’m struggling to think of another vendor with such successful large-scale production deployments.

Here’s a link to the photo gallery of AppJam 2012 Americas. A big thank you to our customers for attending and we’ll see you all next year!

For those keen to see my stage nosedive, here you go:

Appman.
