
Deploying APM in the Enterprise | Part 4

The Path of the Rockstar

In the last installment of this series we covered how to find, test, and justify purchasing an APM solution. This post focuses on what to do after you’ve made the purchase and started down the path of deploying your coveted APM tool (ahem, ahem, AppDynamics, ahem). Just clearing my throat; let’s jump right in…

Welcome to Part 4 of my series. It’s time for a celebration: time to break out the champagne, time to spike the football and do your end zone dance (easy there, Michael Jackson, don’t hurt yourself). All of the hours you spent turning data into meaningful information, dealing with software vendors, writing requirements, testing solutions, documenting your findings, writing business justifications, and generally bending over backwards to ensure that no objection would stand in your way have culminated in management approving your purchase of APM software. Now the real work begins…

The 7 Ps
A co-worker of mine shared some words of wisdom with me a long time ago that have served me well over the years. It’s a little saying called the 7 P’s, and it goes something like this: Piss Poor Planning Promotes Piss Poor Performance. Deploying and using APM software is not a time for spontaneity or just winging it. If you want to make mistakes and derive little value from the investment you just put your reputation behind, then by all means jump in with little or no planning. If you want to be a rockstar, you need a solid plan for deploying, configuring, verifying, operationalizing, using, and evangelizing your APM tool (ahem, ahem, AppDynamics, ahem). Just clearing my throat again, I think there’s a bug going around ;-P

This blog post is a great general outline for planning your implementation. Everything covered in this post should be part of your planning process and should be considered the bare minimum for APM deployment planning within your organization.

Best Practices
The planning stage is a perfect time to ask your APM vendor for documentation on best practices related to deploying their software. Your vendor (AppDynamics, wink) has seen their software deployed in many situations across many industry verticals. They will have important advice for you on how to make the deployment and operation of their product as successful as possible. Use your vendor’s depth and breadth of experience to your advantage; you’re paying them, so it’s the least they can do.

Controller: The Brain, Narf!

The first major decision will be an easy one. You probably already covered this during the evaluation, vendor selection, and negotiation phases, but we will recap here. You need to decide whether you will host your own controller or use the vendor’s SaaS environment. In case you don’t already know, the controller is the server component that collects, stores, and analyzes the monitoring data from the agents. Basically, the controller is the brains behind the operation. There are many factors to consider when deciding between a SaaS and an on-premises model, and we will not cover them in this post. Your vendor of choice (ahem, ahem, AppDynamics, ahem) will help you decide which option is right for your business circumstances.

Easy peasy lemon squeezy! I have just embedded those words in your head for potentially days, weeks, or years to come. Sorry about that, but it really does describe the SaaS option well. You don’t have to get a server racked, a VM allocated, disk space configured, solve a Rubik’s Cube in 3 minutes or less, or follow whatever other convoluted deployment process your company may have in order to host your own software. All you really need to do is point your agents at the SaaS controller and you are off and running. Your chosen APM vendor (AppDynamics of course) will handle the server sizing, capacity, maintenance, and so on for you. Nice!


So you’ve decided to host your own controller(s). We have many clients that choose this route for one reason or another, and we make every effort to support them just as well as our SaaS customers. In this case we won’t be doing all of the work for you, so you need to get cracking on your server deployment process. I hope it’s super easy and streamlined and you can have a new host set up and ready to load software in an hour or less. In reality it may take a few weeks or even months, so you need to be familiar with the lead time so that you can appropriately plan the rest of the deployment. You NEED a controller; there is no point in deploying agents without one. Use this lead time to generate the most awesome plan ever!

Agents: Deploying and Configuring
Agents need applications to do anything meaningful, so it’s a requirement that you figure out which applications you want (or will be allowed) to monitor. You most likely had at least one problematic yet important application in mind when you started your search for an APM tool. Create a list of the applications that need monitoring and prioritize that list. I personally prefer creating a top 10 list (you could also call it a “next 10” list) that is an equal mix of applications I suspect will be difficult to instrument and applications I think will be really easy. I do this because you usually don’t deploy agents to application components in a serial manner. It’s typically a parallel process where you can jump from one deployment to the next while you are waiting for approvals, personnel, or anything else that gets in the way of doing actual work.

Deploying APM agents should be easy: add a very small amount of software to the server you want to monitor, reference the agent software in your application configuration, and restart your application. It’s basically that easy to deploy an agent. It should also be really easy to configure. In fact, the agent should automatically detect what it needs to monitor and just work. This is how AppDynamics works, but the same does not hold true for most other APM vendors. Hopefully you saw this when you ran each vendor’s solution through your POC environment. In the interest of full disclosure, I will admit that there are circumstances where NO APM solution can automatically detect your application properly and there is more configuration work to do. This is a problem that every APM vendor has to deal with, but thankfully AppDynamics sees this condition with only a very small subset of its customer base. Usually you plug in our agent and we show you what you need to see. It just works!
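To make that concrete, here is a minimal sketch of what attaching a Java agent at JVM startup typically looks like. The file paths, hostnames, and property values below are illustrative placeholders I’ve invented for this example, not details from the post; check your agent’s documentation for the exact settings your version expects.

```bash
# Hypothetical example: attach an APM Java agent at JVM startup.
# Every path, hostname, and property value here is a placeholder.
java -javaagent:/opt/apm/javaagent.jar \
     -Dappdynamics.controller.hostName=controller.example.com \
     -Dappdynamics.controller.port=8090 \
     -Dappdynamics.agent.applicationName=OrderProcessing \
     -Dappdynamics.agent.tierName=web-tier \
     -Dappdynamics.agent.nodeName=web-node-01 \
     -jar my-application.jar
```

If everything is wired up correctly, the node should register with the controller and start reporting shortly after the restart.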

Awesome. Now that we’ve just saved you 80% of the configuration time versus deploying “the other guys,” what’s next?

After you deploy agents (whether straight to production or advancing through pre-production environments) and you have used the monitored application a bit, you want to look at the user interface to see if the information contained within looks correct.

  • Look at your application flow map to see if you are missing any application components.
  • Check the business transactions to see if the expected transactions are there and reporting metrics.
  • Do you have end user experience metrics showing up?
  • Do you have transaction snapshots showing your custom code executing at runtime?
  • Send out test alerts to see if they make it to their destination. (Alerting is important, so we will cover it in another blog post.)

If things don’t look right you need to figure out why. It might be that your application really is different than you thought (we see this quite often), or it could be a problem with the monitoring. Resolve any issues you see before declaring deployment and configuration victory.
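If you’d rather script part of this sanity check than click through the UI, most controllers (AppDynamics’ included) expose a REST API you can query. Below is a hedged sketch; the credentials, hostnames, and application name are placeholders, and the exact endpoint paths and auth scheme may differ by product and version.

```bash
# Hypothetical spot checks against a controller's REST API.
# Credentials, hostnames, and application names are placeholders.

# Are all of the expected applications reporting?
curl -s -u "user@account:password" \
  "https://controller.example.com/controller/rest/applications?output=JSON"

# Have the expected business transactions been detected for one application?
curl -s -u "user@account:password" \
  "https://controller.example.com/controller/rest/applications/OrderProcessing/business-transactions?output=JSON"
```

Comparing the returned lists against your inventory of components and transactions is a quick way to catch anything the checklist above would have caught by eye.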

Production Load Cannot Be Simulated Exactly!!!
To realize the most value from your APM purchase you MUST run it in production. No matter how good your quality engineering team is, they cannot script all of the crazy things your users will try to do in production. It can also be very difficult to duplicate your production environment anywhere else. For example, if you have 5,000 JVMs spread across multiple cloud providers’ data centers, replicating that environment would be time consuming and really expensive.

Beyond the technology aspects of running in production you also need to consider your existing processes. Your shiny new APM tool will provide incredible insight into application issues as long as you have it integrated into your processes. Here are some points to consider:

  • Are alerts configured so that they are routed to the proper people?
  • Does the operations center know about the new alerts that will be coming from your new APM product?
  • Is there a process that application owners can follow to request monitoring by your new tool?
  • Is there a process to smoothly and efficiently on-board a new application?
  • Is the APM tool integrated with other corporate systems? (LDAP, event aggregation, business intelligence, etc.)

What I am trying to say is: give your company every opportunity to use the hell out of your new tool!
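On the alert-routing questions in particular, one common pattern is to have the APM tool’s alert action forward events to a webhook your operations center already watches. The sketch below is hypothetical glue, not a description of any vendor’s built-in feature; the endpoint URL and JSON fields are invented for illustration.

```bash
# Hypothetical alert action: forward an APM alert to an ops-center
# webhook. The endpoint URL and JSON fields are invented placeholders;
# adapt them to whatever your ops tooling actually expects.
curl -s -X POST "https://ops.example.com/hooks/apm-alerts" \
  -H "Content-Type: application/json" \
  -d '{
        "application": "OrderProcessing",
        "severity": "CRITICAL",
        "summary": "Business transaction /checkout exceeded response time threshold",
        "source": "apm-controller"
      }'
```

Sending test alerts through this same path answers the first two checklist questions before a real incident answers them for you.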

Teach Them Well
Educate and evangelize; this will pay dividends tenfold.

Create a short training curriculum for anyone who will need to work with your APM tool. You should have training material for basic usage, advanced concepts (memory leaks, policies, dashboard creation, etc.), and operations (alerts/events). You need to make sure the people who will touch the product or consume the data have the information they need to be successful. Their success drives your success.

Tell everyone you can about the success you are having with your new tool. Don’t be annoying to the point where people run the other way when they see you coming, but make sure they know what you are working on and how much of an impact it is having on the business.

For every problem you solve with your new APM tool, take 30 minutes to put together a 3–5 slide presentation. Include the following information in each presentation you create:

  • Problem Description: Describe the application, problem, and impact level.
  • Resolution: Describe the resolution steps and root cause. Use screenshots from your APM tool.
  • Business Impact: Describe how long it took to resolve the issue, how long it normally takes without APM, and quantify the impact to the business of this outage for both scenarios (with and without APM).

These short presentations will equip you with the information you need to defend your decision to purchase APM, justify a larger investment, and propel yourself to rockstar status within your organization.

There is a lot of work that needs to be done to successfully deploy, configure, and use an APM tool in the enterprise, but the potential rewards are staggering. Think about how much lost revenue can be avoided by ensuring your revenue-generating applications don’t go down at peak times. People notice when the decisions you make and the work you do directly impact the bottom line. Put in the effort and get noticed!

Join me next week for the next installment in this series: a blog post dedicated to alerts. Yes, they are that important.
