Monitoring and Analyzing AWS CloudTrail Data

Monitoring and Analyzing AWS CloudTrail data from multiple AWS regions

We recently released AWS CloudTrail integration with Logentries - and, not surprisingly, we've seen a significant uptick in adoption, making it one of our most popular integrations. My job as director of customer success is to make things as simple for our customers as possible. One question that consistently pops up is how to collect AWS CloudTrail logs from multiple AWS regions.

We follow Amazon's best practices when it comes to integrating with, and receiving information from, CloudTrail. In short, this works as follows:

  • When configured, CloudTrail writes events to an S3 bucket.
  • You can configure CloudTrail to send notifications to an Amazon SNS topic whenever new log events are recorded.
  • You can get updates sent to an Amazon Simple Queue Service (Amazon SQS) queue, which enables you to handle these notifications programmatically.
  • To configure Logentries to consume your CloudTrail logs, simply add the URL of the SQS queue to the Logentries/CloudTrail setup page.

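The notification flow above can be sketched in code. When CloudTrail delivers a log file, SNS publishes a message whose 'Message' field is itself a JSON string naming the S3 bucket and the delivered object keys, so a consumer reading from the subscribed SQS queue unwraps two layers of JSON. A minimal sketch (function name and sample values are illustrative):

```python
import json

def extract_log_locations(sqs_message_body: str):
    """Unwrap an SNS-delivered CloudTrail notification from an SQS message body.

    The SNS envelope's 'Message' field is a JSON string containing the
    S3 bucket ('s3Bucket') and the delivered object keys ('s3ObjectKey').
    Returns a list of (bucket, key) tuples.
    """
    envelope = json.loads(sqs_message_body)         # outer SNS envelope
    notification = json.loads(envelope["Message"])  # inner CloudTrail payload
    bucket = notification["s3Bucket"]
    return [(bucket, key) for key in notification["s3ObjectKey"]]

# A message shaped like a CloudTrail delivery notification:
body = json.dumps({
    "Type": "Notification",
    "Message": json.dumps({
        "s3Bucket": "my-cloudtrail-bucket",
        "s3ObjectKey": [
            "AWSLogs/123456789012/CloudTrail/us-east-1/2014/08/19/log.json.gz"
        ],
    }),
})
```

This double-decode is the key detail: reading the SQS body alone gives you the SNS envelope, not the CloudTrail payload.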
Logentries speaks directly to the SQS queue inside of your AWS account, so an obvious question that presents itself is: If I'm running in multiple AWS regions, how do I get Logentries to pull from all of the regions?

The simple answer: you don't. Make AWS do the work for you!

Following the steps outlined below, you'll be able to monitor and analyze CloudTrail logs from any number of AWS regions all within one Logentries account.

Create an S3 Bucket
If you're new to the CloudTrail setup, the first requirement of CloudTrail logging is that the logs must go "somewhere." In AWS, that somewhere is an S3 bucket, which you need to create. Simply navigate to the S3 service and select 'Create Bucket'. By default, the bucket is created with all the permissions required - i.e., no extra permissions or configuration are necessary to use CloudTrail logging with Logentries.
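For reference, CloudTrail writes its log files into the bucket under a predictable key layout that includes your account ID, the source region, and the date, which is handy when browsing the bucket to confirm delivery. A small sketch of the expected prefix, assuming the default layout with no custom key prefix:

```python
def cloudtrail_prefix(account_id: str, region: str,
                      year: int, month: int, day: int) -> str:
    """Build the default S3 key prefix CloudTrail uses for one day's log files."""
    return f"AWSLogs/{account_id}/CloudTrail/{region}/{year:04d}/{month:02d}/{day:02d}/"
```

Because the region is part of the key, logs from every region can safely share one bucket without colliding - which is exactly what the multi-region setup below relies on.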

Create an SQS Queue in a Primary Region
Next up, we need to create an SQS queue to allow Logentries to consume your CloudTrail data. Create a new queue and provide a 'Queue Name'; the default options are fine.

Add permissions to the SQS Queue
Once the queue has been created, the correct permissions must be applied. When adding permissions to the SQS queue, you need to supply your full account identifier (officially called the AWS User ARN).

To get the User ARN, navigate to the IAM service, select the user you want to use, and click 'Summary'. When creating the user in the IAM section, make sure it has at least read-only access, so that it has the permissions needed to read the bucket. The string you need is shown under 'User ARN' in the 'Summary' section and follows this format:

arn:aws:iam::<account_code>:user/
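If you are scripting this step, it can be worth sanity-checking the ARN before pasting it into the queue policy. A small validation sketch (the pattern assumes the standard 12-digit account ID and IAM user-name character set):

```python
import re

# arn:aws:iam::<12-digit account id>:user/<user name>
ARN_PATTERN = re.compile(r"^arn:aws:iam::(\d{12}):user/([\w+=,.@-]+)$")

def parse_user_arn(arn: str):
    """Extract (account_id, user_name) from an IAM user ARN, or None if malformed."""
    m = ARN_PATTERN.match(arn)
    return (m.group(1), m.group(2)) if m else None
```

Catching a truncated or mistyped ARN here is much easier than debugging silent permission failures on the queue later.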

Next, add the 'Receive', 'Send', and 'Delete' actions to the SQS queue (see below):
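Under the hood, the console's 'Add a Permission' dialog generates an SQS queue policy document. A minimal sketch of the equivalent policy, assuming the standard policy-document format (ARNs are placeholders):

```python
import json

def sqs_access_policy(queue_arn: str, user_arn: str) -> str:
    """Build an SQS queue policy granting a user the Receive, Send,
    and Delete actions on the queue."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": user_arn},
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:SendMessage",
                "sqs:DeleteMessage",
            ],
            "Resource": queue_arn,
        }],
    }
    return json.dumps(policy)

example = sqs_access_policy(
    "arn:aws:sqs:us-east-1:123456789012:cloudtrail-queue",
    "arn:aws:iam::123456789012:user/logentries",
)
```

'Receive' and 'Delete' let the consumer read and acknowledge messages; 'Send' allows the SNS subscription to deliver notifications into the queue.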
Enable CloudTrail in any region, and publish to an SNS topic

Once the above three steps are complete, it's time to enable CloudTrail in the relevant regions. Navigate to the CloudTrail service in your AWS Console and turn on CloudTrail. Do not create a new S3 bucket; instead, select the S3 bucket created in step one from the drop-down menu. Once you've done this, click the Advanced link. For the first region you enable CloudTrail in, remember to include Global Services under the Advanced options - this records API calls from global AWS services such as IAM or AWS STS. Make sure that "SNS notification for every log file delivery" is checked and, finally, specify an SNS topic to publish to. Give it a new SNS topic name - the topic will be created by CloudTrail.

Follow the above steps for each region that you want to collect CloudTrail logs from. NOTE: when adding subsequent regions, exclude Global Services to avoid recording duplicate log events for your global services.
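The per-region setup can be summarized as a small config-generation sketch: every region shares the same bucket and topic name, and only the first region records global-service events (field and function names here are illustrative, not an AWS API):

```python
def trail_configs(regions, bucket, topic_name):
    """One CloudTrail configuration per region. Only the first region
    includes global-service events (IAM, STS) so those calls are not
    logged once per region."""
    return [
        {
            "region": region,
            "s3_bucket": bucket,                        # shared bucket from step one
            "sns_topic": topic_name,                    # topic created per region
            "include_global_service_events": i == 0,    # first region only
        }
        for i, region in enumerate(regions)
    ]

configs = trail_configs(["us-east-1", "eu-west-1", "ap-southeast-1"],
                        "my-cloudtrail-bucket", "cloudtrail-log-delivery")
```

The invariant worth checking is that exactly one region carries the global-service flag - that is what prevents duplicate IAM/STS events.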

Subscribe the SQS Queue to the multiple SNS topics
Once each region has been set up, the last step in AWS is to subscribe your SQS queue to each newly created SNS topic. Navigate to the SQS service in your AWS Console and highlight the queue created in step two. Under the 'Queue Actions' menu at the top, select 'Subscribe Queue to SNS Topic'. Use the 'Topic Region' drop-down to select the region and the 'Choose a Topic' drop-down to select the topic created in the previous step. Hit the 'Subscribe' button and wait for confirmation that the queue has been subscribed to that topic.
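This is a fan-in pattern: many regional topics, one queue. If you keep the same topic name in every region, the topic ARNs the queue must subscribe to differ only in their region segment, which a short sketch makes explicit (assuming the standard SNS ARN format):

```python
def topic_arns(account_id: str, regions, topic_name: str):
    """SNS topic ARNs the SQS queue must be subscribed to, one per region."""
    return [
        f"arn:aws:sns:{region}:{account_id}:{topic_name}"
        for region in regions
    ]
```

One subscription per region, and the single queue receives CloudTrail notifications from all of them.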

After subscribing, make sure to copy the SQS URL from the 'Details' section on the page.

Setup Logentries to Pull data from the SQS Queue
Log in to your Logentries account and navigate to the AWS settings area (My Account -> AWS). Select Enable CloudTrail, then supply your IAM access key, secret key, and the SQS URL you copied above. Hit Save! Note: your IAM access key and secret key are made available to you when you create a new IAM user and should be stored safely.
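The SQS URL you paste in encodes the region, account ID, and queue name, so it is easy to sanity-check before saving. A small parsing sketch, assuming the regional endpoint form `https://sqs.<region>.amazonaws.com/<account_id>/<queue_name>`:

```python
from urllib.parse import urlparse

def parse_sqs_url(url: str):
    """Split an SQS queue URL into (region, account_id, queue_name).

    Assumes the regional endpoint form:
    https://sqs.<region>.amazonaws.com/<account_id>/<queue_name>
    """
    parsed = urlparse(url)
    region = parsed.netloc.split(".")[1]           # sqs.<region>.amazonaws.com
    account_id, queue_name = parsed.path.strip("/").split("/")
    return region, account_id, queue_name
```

If the parsed region or account ID does not match what you expect, you have likely copied the wrong queue's URL.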

Log Data from CloudTrail will begin to stream in within approximately 15 minutes.

Sit back and let Logentries do its magic!
Visit our CloudTrail documentation to see some of the other cool things you can do - in particular, we provide out-of-the-box tags and alerts for important CloudTrail events. Have questions or ideas about how we can make our CloudTrail integration better? Reach out to me directly at [email protected].


More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
