Libcloudforensics and Cloud Logs

In an earlier blog post we introduced Libcloudforensics, a multi-cloud forensic library that can be used to make disk copies and set up virtual machines on GCP and AWS for analysis. We have now added functionality that allows you to query logs from AWS and GCP to aid in your investigations. This blog post describes how to do that, using both the library and the accompanying CLI tool.

Cloud logging


Cloud logs are among the most relevant artefacts when investigating cloud incidents. They not only give you a central place to query your virtual machines' logs, but also provide audit logs of every (administrative) action taken in the cloud management environment. 


A full set of cloud management and resource logs can quickly show you a timeline of:

  • an actor's activity in the actual cloud resources, for example, when an actor started a cron job in a virtual machine.

  • the cloud project activities, for example, when an actor creates 15 extra virtual machines or adds an admin-scoped cloud project user.
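To illustrate the idea of such a timeline, here is a minimal sketch that merges entries from both log sources into a single chronological view. The entries and field names below are fabricated stand-ins for what the real GCP Cloud Logging and AWS CloudTrail records look like; only the sort-by-timestamp technique is the point.

```python
from datetime import datetime, timezone

# Hypothetical entries, mimicking the shape of GCP Cloud Logging and
# AWS CloudTrail records (field names are illustrative assumptions).
gcp_entries = [
    {'timestamp': '2020-05-12T11:31:07Z', 'textPayload': 'cron job started'},
]
aws_events = [
    {'EventTime': '2020-05-12T11:30:42Z', 'EventName': 'RunInstances'},
]

def to_dt(value):
    """Parse an ISO-8601 timestamp into an aware UTC datetime."""
    return datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ').replace(
        tzinfo=timezone.utc)

# Normalize both sources into (time, source, summary) tuples and sort
# chronologically to get one unified timeline.
timeline = sorted(
    [(to_dt(e['timestamp']), 'gcp', e['textPayload']) for e in gcp_entries] +
    [(to_dt(e['EventTime']), 'aws', e['EventName']) for e in aws_events])

for when, source, summary in timeline:
    print(when.isoformat(), source, summary)
```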


Cloud providers each have their own log infrastructure, query (filter) logic and APIs. GCP's Cloud Logging is the one-stop shop for all GCP logs. AWS has multiple logging offerings: CloudTrail (for AWS account activity logs) and CloudWatch (for application logs) both contain information that is useful for forensic investigations. 


GCP Cloud Logging and AWS CloudTrail are currently supported by libcloudforensics. Libcloudforensics provides you with an API and a command line interface (CLI) tool to easily integrate, query and export these logs for further analysis. AWS CloudWatch will be supported in the future. 

GCP Cloud Logging

GCP offers a rich query language to filter and fetch all logs in a cloud project. The query language is defined in the Cloud Logging query language documentation and can be used within libcloudforensics.


As a simple example, let’s query what kind of activity we see in our virtual machine logs that match a particular indicator (in this case the 'bad' IP address 1.2.3.4).


from libcloudforensics.providers.gcp.internal import log as gcp_log

logs = gcp_log.GoogleCloudLog(project_id='my-gcp-project')
results = logs.ExecuteQuery(
    qfilter='resource.type="gce_instance" AND "1.2.3.4"')
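Once the query returns, you can triage the matching entries directly in Python. The sketch below assumes each result is a dict shaped like the Cloud Logging API's LogEntry resource; the sample entry is fabricated for illustration.

```python
# A fabricated entry mimicking the Cloud Logging LogEntry structure
# (timestamp, resource labels, text payload).
sample_results = [
    {
        'timestamp': '2020-05-12T11:31:07.123456Z',
        'resource': {'type': 'gce_instance',
                     'labels': {'instance_id': '1234567890'}},
        'textPayload': 'Connection from 1.2.3.4 port 51337',
    },
]

# Reduce each entry to the fields we care about for a quick triage view.
triage = [
    (entry['timestamp'],
     entry['resource']['labels'].get('instance_id', 'unknown'),
     entry.get('textPayload', ''))
    for entry in sample_results
]

for timestamp, instance, payload in triage:
    print(timestamp, instance, payload)
```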



We can also easily do this with the libcloudforensics CLI tool.


$ cloudforensics gcp my-gcp-project querylogs \
    --filter 'resource.type="gce_instance" AND "1.2.3.4"'


AWS CloudTrail

AWS CloudTrail does not contain virtual machine logs, but it does contain the logs that show which project actions have been performed. Examples are the creation or alteration of AWS services, AWS console activity, and API logins and actions. CloudTrail logs are very useful when investigating malicious AWS project activity.


CloudTrail filtering is limited as AWS only supports one key-value pair per filter. See the CloudTrail event documentation for supported key values.
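Because a lookup can only carry one key-value pair, any additional conditions have to be applied client-side after the events come back. Here is a minimal sketch of that pattern; the events and the suspect username are fabricated stand-ins for real LookupEvents output.

```python
# Fabricated CloudTrail events, mimicking the fields a lookup on
# 'EventName,CreateAccessKey' would return.
events = [
    {'EventName': 'CreateAccessKey', 'Username': 'alice',
     'EventTime': '2020-05-12T11:30:00Z'},
    {'EventName': 'CreateAccessKey', 'Username': 'mallory',
     'EventTime': '2020-05-13T20:59:00Z'},
]

# The API filter already narrowed on EventName; narrow further in
# Python, e.g. to a specific (hypothetical) suspect user.
suspect = [e for e in events if e['Username'] == 'mallory']

for event in suspect:
    print(event['EventTime'], event['EventName'], event['Username'])
```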


As an example, let’s see if any access keys were created. Access keys can be used to access the AWS services and depending on their scope can be used to control all resources. 


from libcloudforensics.providers.aws.internal import account
from libcloudforensics.providers.aws.internal import log as aws_log

ct = aws_log.AWSCloudTrail(account.AWSAccount(
    default_availability_zone='eu-central-1a'))
result = ct.LookupEvents('EventName,CreateAccessKey')


We can perform the same query using the libcloudforensics CLI tool, this time including a start and end time in addition to the filter.


$ cloudforensics aws eu-central-1a querylogs --start '2020-05-12 11:30:00' --end '2020-05-13 21:00:00' --filter 'EventName,CreateAccessKey'


Conclusion

We hope this blog post has given you inspiration on how to further automate your cloud incident response. If you have questions, reach out on the Open Source DFIR Slack; you can find us in the libcloudforensics channel.
