
Deploying APM in the Enterprise | Part 4

The Path of the Rockstar

Deploying APM in the Enterprise. In the last installment we covered how you find, test, and justify purchasing an APM solution. This blog will focus on what to do after you’ve made a purchase and started down the path of deploying your coveted APM tool (ahem, ahem, AppDynamics, ahem). Just clearing my throat; let’s jump right in…

Welcome to Part 4 of my series. It’s time for a celebration, time to break out the champagne, time to spike the football and do your end zone dance (easy there, Michael Jackson, don’t hurt yourself). All of the hours you spent turning data into meaningful information, dealing with software vendors, writing requirements, testing solutions, documenting your findings, writing business justifications, and generally bending over backwards to ensure that no objection would stand in your way have culminated in management approving your purchase of APM software. Now the real work begins…

The 7 Ps
A co-worker of mine shared some words of wisdom with me a long time ago which have served me well over the years. It’s a little saying called the 7 Ps and goes something like this… Piss Poor Planning Promotes Piss Poor Performance. Deploying and using APM software is not a time for spontaneity or just winging it. If you want to make mistakes and derive little value from the investment you just put your reputation behind, then by all means just jump in with little or no planning. If you want to be a rockstar you need a solid plan for deploying, configuring, verifying, operationalizing, using, and evangelizing your APM tool (ahem, ahem, AppDynamics, ahem). Just clearing my throat again, I think there’s a bug going around ;-P

This blog post is a great general outline for planning your implementation. Everything covered in this post should be part of your planning process and should be considered the bare minimum for APM deployment planning within your organization.

Best Practices
The planning stage is a perfect time to ask your APM vendor for documentation on best practices related to deploying their software. Your vendor (AppDynamics, wink) has seen their software deployed in many situations across many industry verticals. They will have important advice for you on how to make the deployment and operation of their product as successful as possible. Use your vendor’s depth and breadth of information to your advantage; you’re paying them, so it’s the least they can do.

Controller: The Brain, Narf!

The first major decision will be an easy one. You probably already covered this during the evaluation, vendor selection, and negotiation phases, but we will recap here. You need to decide if you will host your own controller or use the vendor’s SaaS environment. In case you don’t already know, a controller is the server component that collects, stores, and analyzes the monitoring data from the agents. Basically, the controller is the brains behind the operation. There are many factors to consider when deciding between a SaaS and an on-premise model, and we will not cover them in this post. Your vendor of choice (ahem, ahem, AppDynamics, ahem) will help you decide which option is right for your business circumstances.

Easy peasy lemon squeezy! I have just embedded those words in your head for potentially days, weeks or years to come. Sorry about that, but it really describes the SaaS option well. You don’t have to get a server racked, a VM allocated, disk space configured, solve a Rubik’s Cube in 3 minutes or less, or whatever other convoluted deployment process your company may have in order to host your own software. All you really need to do is point your agents at the SaaS controller and you are off and running. Your chosen APM vendor (AppDynamics of course) will handle the server sizing, capacity, maintenance, etc. for you. Nice!


So you’ve decided to host your own controller(s). We have many clients that choose this route for one reason or another, and we make every effort to support you just as well as we do our SaaS customers. In this case we won’t be doing all of the work for you, so you need to get cracking on your server deployment process. I hope it’s super easy and streamlined and you can have a new host set up and ready to load software in an hour or less. In reality it may take you a few weeks or even months, so you need to be familiar with the lead time so that you can appropriately plan the rest of the deployment. You NEED a controller, so there is no point in deploying agents without one. Use this lead time to generate the most awesome plan ever!

Agents: Deploying and Configuring
Agents need applications to do anything meaningful, so it’s a requirement that you figure out what applications you want (or will be allowed) to monitor. You most likely had at least one problematic yet important application in mind when you started your search for an APM tool. Create a list of the applications that need monitoring and prioritize that list. I personally prefer creating a top 10 list (you could also call it a “next 10” list) that is an equal mix of applications I suspect will be difficult to instrument and applications I think will be really easy. I do this because you usually don’t deploy agents to application components in a serial manner. It’s typically a parallel process where you can jump from one deployment to the next while you are waiting for approvals, personnel, or anything else that gets in the way of doing actual work.

Deploying APM agents should be easy. Add a very small amount of software to the server you want to monitor, reference the agent software in your application configuration, and restart your application. It’s basically that easy to deploy an agent. It should also be really easy to configure. In fact, the agent should automatically detect what it needs to monitor and just work. This is how AppDynamics works, but the same does not hold true for most other APM vendors. Hopefully you saw this when you ran each vendor solution through your POC environment. In the interest of full disclosure, I will admit that there are circumstances where NO APM solution can automatically detect your application properly and there is more configuration work to do. This is a problem every APM vendor has to deal with, but thankfully AppDynamics sees this condition in only a very small subset of its customer base. Usually you plug in our agent and we show you what you need to see. It just works!
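To make that concrete, here is a minimal sketch of what agent deployment looks like for a Java application. The hostnames, paths, and application/tier/node names are illustrative, and the exact system property names vary by agent version, so check your vendor’s documentation before copying any of this:

    # Illustrative only: paths, names, and property spellings vary by vendor and version.
    # Unpack the agent on the server, reference it in the JVM startup options, restart.
    java -javaagent:/opt/appagent/javaagent.jar \
         -Dappdynamics.controller.hostName=mycompany.saas.appdynamics.com \
         -Dappdynamics.controller.port=443 \
         -Dappdynamics.controller.ssl.enabled=true \
         -Dappdynamics.agent.applicationName=OrderProcessing \
         -Dappdynamics.agent.tierName=web-tier \
         -Dappdynamics.agent.nodeName=web-01 \
         -jar my-application.jar

The same pattern covers the SaaS-versus-on-premise decision above: pointing agents at a SaaS controller is just a matter of which hostname (plus account credentials, for SaaS) you supply.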

Awesome, now that we just saved you 80% of the configuration time versus deploying “the other guys” what’s next?

After you deploy agents (whether straight to production or advancing through pre-production environments) and you have used the monitored application a bit, you want to look at the user interface to see if the information contained within looks correct. Work through a checklist like this one (a quick REST spot-check is sketched after the list):

  • Look at your application flow map to see if you are missing any application components.
  • Check the business transactions to see if the expected transactions are there and reporting metrics.
  • Do you have end user experience metrics showing up?
  • Do you have transaction snapshots showing your custom code executing in the runtime?
  • Send out test alerts to see if they make it to their destination. (Alerting is important, so we will cover it in another blog post.)
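If you prefer to verify from the command line, most controllers also expose a REST interface you can spot-check against. The sketch below uses AppDynamics-style URLs and credentials, but the exact paths and auth format depend on your version, so treat it as illustrative:

    # List the business transactions the controller has detected for an application.
    # URL, credential format ("user@account"), and query parameters are illustrative.
    curl -s -u "apiuser@customer1:secret" \
      "https://mycompany.saas.appdynamics.com/controller/rest/applications/OrderProcessing/business-transactions?output=JSON"

An empty result after the application has taken real traffic is a strong hint that agents are not reporting or that transaction detection needs attention.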

If things don’t look right you need to figure out why. It might be that your application really is different than you thought (we see this quite often), or it could be a problem with the monitoring. Resolve any issues you see before declaring deployment and configuration victory.

Production Load Cannot Be Simulated Exactly!!!
To realize the most value from your APM purchase you MUST run it in production. No matter how good your Quality Engineering team is, they cannot script all of the crazy things your users will try to do in production. It can also be very difficult to duplicate your production application environment anywhere else. For example, say you have 5,000 JVMs spread across multiple cloud provider data centers. Replicating that environment would be time consuming and really expensive.

Beyond the technology aspects of running in production you also need to consider your existing processes. Your shiny new APM tool will provide incredible insight into application issues as long as you have it integrated into your processes. Here are some points to consider (a sample alert-forwarding sketch follows the list):

  • Are alerts configured so that they are routed to the proper people?
  • Does the operations center know about the new alerts that will be coming from your new APM product?
  • Is there a process that application owners can follow to request monitoring by your new tool?
  • Is there a process to smoothly and efficiently on-board a new application?
  • Is the APM tool integrated with other corporate systems? (LDAP, event aggregation, business intelligence, etc.)
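As one hypothetical example of that last point: many APM tools, AppDynamics included, can run a custom script or call a webhook when a policy fires. The script below forwards an alert into a made-up events-aggregation endpoint; the URL, token, argument order, and payload fields are all assumptions for illustration:

    #!/bin/sh
    # Hypothetical custom-action script: forward an APM alert to an events aggregator.
    # The endpoint, token variable, and the arguments the APM tool passes in are
    # illustrative; map them to whatever your tool and events system actually use.
    APP_NAME="$1"
    SEVERITY="$2"
    SUMMARY="$3"
    curl -s -X POST "https://events.example.com/api/v1/events" \
      -H "Authorization: Bearer $EVENTS_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"source\":\"apm\",\"app\":\"$APP_NAME\",\"severity\":\"$SEVERITY\",\"summary\":\"$SUMMARY\"}"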

What I am trying to say is: give your company every opportunity to use the hell out of your new tool!

Teach Them Well
Educate and evangelize; this will pay dividends tenfold.

Create a short training curriculum for anyone who will need to work with your APM tool. You should have training material for basic usage, advanced concepts (memory leaks, policies, dashboard creation, etc…), and operations (alerts/events) training. You need to make sure the people who will touch the product or consume the data have the information they need to be successful. Their success drives your success.

Tell everyone you can about the success you are having with your new tool. Don’t be annoying to the point where people run the other way when they see you coming, but make sure they know what you are working on and how much of an impact it is having on the business.

For every problem you solve with your new APM tool, take 30 minutes to put together a 3–5 slide presentation. Include the following information in each presentation you create:

  • Problem Description: Describe the application, problem, and impact level.
  • Resolution: Describe the resolution steps and root cause. Use screenshots from your APM tool.
  • Business Impact: Describe how long it took to resolve the issue, how long it normally takes without APM, and quantify the impact to the business of this outage for both scenarios, with and without APM (a worked example follows this list).
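To make the Business Impact slide concrete, here is a worked example with made-up numbers: if an application earns $1,000 per minute, an outage that historically took 90 minutes to resolve costs 90 × $1,000 = $90,000, while the same outage resolved in 15 minutes with APM costs 15 × $1,000 = $15,000. That is $75,000 of avoided loss from a single incident, which is exactly the kind of number that justifies a larger investment.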

These short presentations will equip you with the information you need to defend your decision to purchase APM, justify a larger investment, and propel yourself to rockstar status within your organization.

There is a lot of work that needs to be done to successfully deploy, configure, and use an APM tool in the enterprise, but the potential rewards are staggering. Think about how much lost revenue can be avoided by ensuring your revenue-generating applications don’t go down at peak times. People notice when the decisions you make and the work you do directly impact the bottom line. Put in the effort and get noticed!

Join me next week for the next installment in this series. It will be a blog post dedicated to alerts: yes, they are that important.
