Docker Cloud Monitoring and Logging
By Stefan Thies

Docker Cloud is a hosted service for Docker Container Management, originally based on Tutum Cloud, which was acquired by Docker in October 2015. Sematext supported the deployment of Sematext Docker Agent on Tutum Cloud from the get-go, so naturally we were quick to add support for Docker Cloud as well.

What is Docker Cloud?
Docker Cloud is a container management service that supports multiple cloud providers such as Amazon, DigitalOcean, IBM Softlayer, MS Azure and Packet.net. This makes it much easier to move Docker deployments between cloud providers, or to use a mix of providers, including on-premises nodes, for hybrid cloud applications. The Docker Cloud user interface makes it easy to manage nodes on all supported cloud platforms and can deploy application stacks in containers, defined in a "Stack YAML" file. These stack files are very similar to Docker Compose files, but with additional options, e.g. to define deployment strategies for the containers (see the sketch below). The graphical user interface helps to view and modify container configurations.
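To make the difference from Docker Compose concrete, here is a minimal stackfile sketch. The service name and image are hypothetical, and the deployment options shown (target_num_containers, deployment_strategy, autorestart) are our reading of the Docker Cloud stackfile format; consult the Docker Cloud documentation for the authoritative option list.

```yaml
# Minimal Docker Cloud stackfile sketch; it looks like a Docker Compose file,
# plus Docker Cloud-specific deployment options.
web:
  image: 'myorg/my-web-app:latest'         # hypothetical image
  ports:
    - '80:80'
  target_num_containers: 3                 # scale the service to three containers
  deployment_strategy: high_availability   # spread containers across nodes
  autorestart: always                      # restart containers that stop unexpectedly
```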

Docker Cloud Metrics & Logs
Once containers are deployed you can get a very basic real-time log stream view per container (see below). This is helpful for a quick glance at the most recent logs of a specific container.

Real-time log view in Docker Cloud
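If you prefer a terminal over the web UI, the docker-cloud CLI can fetch the same per-container log stream. A minimal sketch, assuming the CLI is installed (pip install docker-cloud) and authenticated; the container name is a placeholder.

```bash
# List containers managed by Docker Cloud, then stream the logs of one of them.
docker-cloud container ps
docker-cloud container logs web-1   # "web-1" is a hypothetical container name
```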

There are currently no Docker metrics exposed anywhere in Docker Cloud, though that will likely be added over time. Docker Cloud does an excellent job with the "Build, Ship and Run" container paradigm. But if you've ever built a production system you know there is more to it. There is this little wrinkle called Operations. So let's talk about the more realistic scenario: "Build, Ship, Run and Monitor".

Thanks to the Docker API, it is possible to add this functionality to Docker Cloud. Sematext Docker Agent is a small container that collects all Docker metrics, all application and Docker logs, and all Docker events from Docker Cloud. Together with SPM for Performance Monitoring and Logsene for Log Management and Analytics, it provides advanced performance monitoring and log management functionality for stacks deployed in Docker Cloud:

  1. Detailed metrics with a long retention time. Having detailed metrics helps optimize resource usage of applications. Detailed metrics let you set application-specific alerts for any critical resources your applications depend on. Metrics are aggregated for all hosts, images and containers and filterable by host, image, and container. This lets you drill down from a cluster view to a single container while troubleshooting, or simply understand operational details. Long retention times for metrics make it possible to compare resource usage before and after deployments and releases, or to troubleshoot problems that appear only after a service has been running for several days or weeks!
  2. Full-text search, filtering, and analytics across all containers. Logs are collected, parsed and shipped by Sematext Docker Agent. The integrated charting functions in Logsene and the integrations for Kibana and Grafana make it easy to analyze logs collected in Docker Cloud. In short, you can use Logsene as a "super grep" for your Docker and application logs (see the query sketch after this list), but also as a much more affordable alternative to Splunk or other BI tools, or as a managed Elastic Stack (aka ELK).
  3. Long retention time for logs, metrics and events. Comparing metrics and logs across deployments, or watching performance under different workloads, requires storing logs and metrics for a reasonable time. We have seen cases where memory leaks only became serious after a few weeks of stable operation and went undetected at first. In such cases, context information like logs, events and metrics is very valuable for identifying the root cause of the problem.
  4. Tracking of all Docker events. Tracking all Docker events gives you a clear view of your containers' life cycle. For example, by collecting events you gain insight into what happens with your containers during (re)deployments or when containers are re-scheduled to different nodes. Some containers might be configured for automatic restarts, and the events can indicate whether container processes crash frequently. In the case of out-of-memory events, it might be wise to raise the memory limits or check with the developers why the event happened.
  5. Anomaly detection and alerts for all logs and metrics. Who wants to watch metrics and logs all day long? Not me! Let the monitoring system watch for outliers in your metrics and query results in your logs! Anomaly detection can help reduce the noise and alert fatigue often caused by classic threshold-based alerts. Even log alerting is possible with Logsene, e.g. to detect anomalies in the log frequency of a specific query. For example, a search for "error" might normally return a dozen non-critical errors, which can be ignored; a jump in the frequency of error messages indicates that something might be going wrong. Another type of alert is the heartbeat alert for all cluster nodes. Disk space alerts are very useful for Docker nodes, because Docker images can consume a lot of disk space. Docker Cloud runs cleanup agents to remove unused containers and images; nevertheless, the default disk-space alert created by SPM gives you an early warning before the capacity limit is reached.
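To make the "super grep" idea from point 2 concrete: Logsene exposes an Elasticsearch-compatible API, so searching across all container logs can be a single HTTP request. This is a minimal sketch; LOGSENE_TOKEN is a placeholder for your Logsene app token, and the exact field names (e.g. @timestamp) depend on how your logs are parsed.

```bash
# Search the last hour of logs for "error" via Logsene's Elasticsearch-compatible API.
curl -s "https://logsene-receiver.sematext.com/LOGSENE_TOKEN/_search" \
  -d '{
        "query": {
          "bool": {
            "must":   [ { "query_string": { "query": "error" } } ],
            "filter": [ { "range": { "@timestamp": { "gte": "now-1h" } } } ]
          }
        },
        "size": 20
      }'
```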

Here's a short video about log management and monitoring for Docker. It gives a general overview of monitoring and log management in a Docker context, and shows how to use SPM and Logsene as a single pane of glass for your Docker metrics and logs.

Having all these operational insights, and having them in a single pane of glass, makes everyone's work (and that means life, too) simpler. We all want that, no? With that in mind, we've made sure the Sematext Docker Agent setup is super quick and easy:

  1. Get a free account at apps.sematext.com, if you don't have one already
  2. Create an SPM App of type "Docker" to obtain the SPM Application Token and/or
    create a Logsene App to obtain the Logsene Application Token
  3. Click the "Deploy to Cloud" button in Sematext UI and copy the generated token into the Stackfile text field in Docker Cloud

    Create SPM app and deploy to Docker Cloud

  4. As soon as you click "Create and deploy" in Docker Cloud, the Sematext Docker Agent image will be pulled from Docker Hub and started on all nodes managed by Docker Cloud. A few seconds later you should see events, logs and metrics in SPM and Logsene.
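For reference, here is a sketch of what the Sematext Docker Agent stackfile can look like. This is a minimal sketch, not the exact stackfile generated by the Sematext UI: the token values are placeholders, and the Docker Cloud-specific keys (deployment_strategy, roles, autorestart) reflect the stackfile options as we understand them.

```yaml
# Sketch of a Sematext Docker Agent stackfile for Docker Cloud.
# SPM_TOKEN and LOGSENE_TOKEN values are placeholders for your app tokens.
sematext-agent:
  image: 'sematext/sematext-agent-docker:latest'
  environment:
    - SPM_TOKEN=YOUR_SPM_TOKEN            # placeholder
    - LOGSENE_TOKEN=YOUR_LOGSENE_TOKEN    # placeholder
  volumes:
    - '/var/run/docker.sock:/var/run/docker.sock'  # lets the agent read metrics, logs and events
  deployment_strategy: every_node         # run one agent container on each node
  roles:
    - global                              # grants the container access to the Docker Cloud API
  autorestart: always                     # restart the agent if it stops
```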

Docker Cloud Metrics Overview in SPM

Docker Events and Metrics in SPM

Structured Docker Logs in Logsene / Kibana discover view

Everything mentioned above could take you just 10-15 minutes, meaning that in 10-15 minutes you could be looking at charts with all your Docker operations data in one place, accessible by your whole team!

If you have feedback about monitoring and logging on Docker Cloud, get in touch with us via @sematext or email us at [email protected] - we love to talk about monitoring and logging, and we appreciate user feedback that helps us improve our services and make them better serve your needs. If you want to try SPM or Logsene, start here.



IoT & Smart Cities Stories
The deluge of IoT sensor data collected from connected devices and the powerful AI required to make that data actionable are giving rise to a hybrid ecosystem in which cloud, on-prem and edge processes become interweaved. Attendees will learn how emerging composable infrastructure solutions deliver the adaptive architecture needed to manage this new data reality. Machine learning algorithms can better anticipate data storms and automate resources to support surges, including fully scalable GPU-c...
Machine learning has taken residence at our cities' cores and now we can finally have "smart cities." Cities are a collection of buildings made to provide the structure and safety necessary for people to function, create and survive. Buildings are a pool of ever-changing performance data from large automated systems such as heating and cooling to the people that live and work within them. Through machine learning, buildings can optimize performance, reduce costs, and improve occupant comfort by ...
The explosion of new web/cloud/IoT-based applications and the data they generate are transforming our world right before our eyes. In this rush to adopt these new technologies, organizations are often ignoring fundamental questions concerning who owns the data and failing to ask for permission to conduct invasive surveillance of their customers. Organizations that are not transparent about how their systems gather data telemetry without offering shared data ownership risk product rejection, regu...
René Bostic is the Technical VP of the IBM Cloud Unit in North America. Enjoying her career with IBM during the modern millennial technological era, she is an expert in cloud computing, DevOps and emerging cloud technologies such as Blockchain. Her strengths and core competencies include a proven record of accomplishments in consensus building at all levels to assess, plan, and implement enterprise and cloud computing solutions. René is a member of the Society of Women Engineers (SWE) and a m...
Poor data quality and analytics drive down business value. In fact, Gartner estimated that the average financial impact of poor data quality on organizations is $9.7 million per year. But bad data is much more than a cost center. By eroding trust in information, analytics and the business decisions based on these, it is a serious impediment to digital transformation.
Digital Transformation: Preparing Cloud & IoT Security for the Age of Artificial Intelligence. As automation and artificial intelligence (AI) power solution development and delivery, many businesses need to build backend cloud capabilities. Well-poised organizations, marketing smart devices with AI and BlockChain capabilities prepare to refine compliance and regulatory capabilities in 2018. Volumes of health, financial, technical and privacy data, along with tightening compliance requirements by...
Predicting the future has never been more challenging - not because of the lack of data but because of the flood of ungoverned and risk laden information. Microsoft states that 2.5 exabytes of data are created every day. Expectations and reliance on data are being pushed to the limits, as demands around hybrid options continue to grow.
Digital Transformation and Disruption, Amazon Style - What You Can Learn. Chris Kocher is a co-founder of Grey Heron, a management and strategic marketing consulting firm. He has 25+ years in both strategic and hands-on operating experience helping executives and investors build revenues and shareholder value. He has consulted with over 130 companies on innovating with new business models, product strategies and monetization. Chris has held management positions at HP and Symantec in addition to ...
Enterprises have taken advantage of IoT to achieve important revenue and cost advantages. What is less apparent is how incumbent enterprises operating at scale have, following success with IoT, built analytic, operations management and software development capabilities - ranging from autonomous vehicles to manageable robotics installations. They have embraced these capabilities as if they were Silicon Valley startups.
As IoT continues to increase momentum, so does the associated risk. Secure Device Lifecycle Management (DLM) is ranked as one of the most important technology areas of IoT. Driving this trend is the realization that secure support for IoT devices provides companies the ability to deliver high-quality, reliable, secure offerings faster, create new revenue streams, and reduce support costs, all while building a competitive advantage in their markets. In this session, we will use customer use cases...