Forensic Disk Copies in GCP & AWS

In a previous blog post, we briefly presented how libcloudforensics facilitates digital forensics investigations in the cloud. We recently added support for Amazon Web Services (AWS). In this blog post, we highlight solutions to the main challenges we faced while adding support for this cloud provider.

Architecture and nomenclature

One of the biggest differences between GCP and AWS lies in the way different cloud resources are structured. In GCP, a single account may work with multiple projects, with each project providing its own grouping of resources. These projects may in turn be grouped together within folders that all fall under a single organization. Identity and Access Management (IAM) policies can be applied at each of these levels: organization-wide, folder-wide, or project-wide. IAM policies can also be set individually for resources within a project.
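
As a hedged illustration (not part of libcloudforensics), the sketch below shows what a project-level IAM grant can look like using the Cloud Resource Manager API; the project ID, member, and role are placeholder values.

# Sketch only: granting a project-level IAM binding with the Cloud Resource
# Manager API. The project ID, member and role below are placeholders.
from googleapiclient.discovery import build

crm = build('cloudresourcemanager', 'v1')
project_id = 'blogpost-analysis'

# Read the current policy, append a binding, and write the policy back.
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
policy.setdefault('bindings', []).append({
    'role': 'roles/compute.storageAdmin',
    'members': ['user:analyst@example.com'],
})
crm.projects().setIamPolicy(
    resource=project_id, body={'policy': policy}).execute()

The same pattern applies one level up: folder- and organization-level policies are managed through the corresponding Folders and Organizations APIs.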


In AWS, things are different. There are no “projects”. Instead, AWS recommends setting up multiple accounts (for example, one per team), and then assigning permissions and policies under each account. This affects how resources are shared: in GCP, using resources from one project in another (like creating a disk in project A from a snapshot that is in project B) is straightforward as long as the user has permission to access both projects. In AWS, sharing happens at the account level, and specific permissions must be granted on a resource before it can be shared with another account.
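
For example, the GCP scenario above (a disk in one project created from a snapshot in another) can be expressed directly against the Compute Engine API. This is a hedged sketch, not libcloudforensics code, and the project names, zone, and snapshot name are placeholders:

# Sketch: create a disk in 'project-a' from a snapshot stored in 'project-b'.
# Works as long as the caller has the required permissions in both projects.
from googleapiclient.discovery import build

compute = build('compute', 'v1')
compute.disks().insert(
    project='project-a',   # Destination project for the new disk.
    zone='us-central1-a',
    body={
        'name': 'disk-from-shared-snapshot',
        'sourceSnapshot': 'projects/project-b/global/snapshots/snap-1',
    }).execute()

In AWS, the equivalent operation requires explicitly sharing the snapshot with the destination account first, as detailed in the sections below.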

Snapshotting disks in the cloud

In libcloudforensics we have automated the creation of forensic copies. This significantly simplifies forensic acquisition. Libcloudforensics also provides an API to bootstrap an on-the-cloud forensic environment. The following code snippets give examples of these processes for both GCP and AWS using the libcloudforensics API. Note that these are examples intended to showcase how resources may be shared across projects / accounts in GCP and AWS respectively.

Making copies in GCP

The following snippet shows how libcloudforensics makes a forensic copy of the disk 'disk1' that is attached to the instance 'instance1' in the GCP project 'blogpost-src'. It then attaches that copy to a VM that it creates ('vm-forensics') in a different project, 'blogpost-analysis'.



from libcloudforensics.providers.gcp import forensics

# Create a forensic copy of the disk 'disk1'.
copy = forensics.CreateDiskCopy(
    src_project='blogpost-src', dst_project='blogpost-analysis',
    instance_name='instance1', zone='us-central1-a', disk_name='disk1')

# Start an analysis VM 'vm-forensics' for investigation in the project
# 'blogpost-analysis', and attach the copy created in the previous step.
analysis_vm, _ = forensics.StartAnalysisVm(
    project='blogpost-analysis', vm_name='vm-forensics', zone='us-central1-a',
    boot_disk_size=50, boot_disk_type='pd-ssd', cpu_cores=4, attach_disk=copy)



Libcloudforensics also provides a command line interface (CLI) to accomplish the same tasks. It currently supports general cloud operations (listing instances, disks, and logs) as well as disk copying.



cloudforensics gcp blogpost-src copydisk blogpost-analysis instance1 us-central1-a --disk_name=disk1



Functionality to start an analysis VM will be added soon.

Making copies in AWS

We now detail the process of copying a volume from one AWS account to another. In the following sections, source profile (src_profile) denotes the AWS profile configured for the account that has access to the original volume, while analysis profile (analysis_profile) denotes the AWS profile configured for the target account, to which the volume should be copied.

Non-encrypted Elastic Block Store (EBS) resources

Copying standard, non-encrypted resources between accounts involves the following steps, sketched in code below the list:


  1. Create a snapshot of the volume to be copied.

  2. Change the permissions of the snapshot and add the AWS account ID of the destination account.

  3. Connect to the AWS console of the destination account and create a new volume. In the snapshot section, select the snapshot ID corresponding to the snapshot that was shared with this account in the previous step.
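
For reference, the sketch below walks through those three steps with boto3 (the AWS SDK for Python). It is illustrative only: the volume ID, account ID, profile names, region, and availability zone are placeholders, and error handling is omitted.

# Sketch of the manual steps above using boto3. All IDs, profile names and
# the region/availability zone are placeholders.
import boto3

src_ec2 = boto3.Session(profile_name='src_profile').client(
    'ec2', region_name='us-east-2')
dst_ec2 = boto3.Session(profile_name='analysis_profile').client(
    'ec2', region_name='us-east-2')

# 1. Snapshot the volume in the source account and wait for it to complete.
snapshot_id = src_ec2.create_snapshot(
    VolumeId='vol-0123456789abcdef0')['SnapshotId']
src_ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot_id])

# 2. Share the snapshot with the destination account ID.
src_ec2.modify_snapshot_attribute(
    SnapshotId=snapshot_id,
    Attribute='createVolumePermission',
    OperationType='add',
    UserIds=['123456789012'])

# 3. In the destination account, create a new volume from the shared snapshot.
dst_ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone='us-east-2b')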


With libcloudforensics you can automate the steps described above:



from libcloudforensics.providers.aws import forensics

# Create a forensic copy of the volume 'vol1' from one account to another.
copy = forensics.CreateVolumeCopy(
    default_availability_zone='us-east-2b', volume_id='vol1',
    src_profile='src_profile', dst_profile='analysis_profile')

# Start an analysis VM 'vm-forensics' for investigation in the AWS account
# analysis_profile, and attach the copy created in the previous step.
analysis_vm, _ = forensics.StartAnalysisVm(
    vm_name='vm-forensics', default_availability_zone='us-east-2b',
    boot_volume_size=50, cpu_cores=4, attach_volumes=[(copy, '/dev/sdf')],
    dst_profile='analysis_profile')



The CLI equivalent for copying the volume is:



cloudforensics aws us-east-2b copydisk --volume_id=vol1 --src_profile=src_profile --dst_profile=analysis_profile


Encrypted Elastic Block Store (EBS) resources

Making copies of encrypted resources is a bit more involved. By default, encrypted EBS resources use the aws/ebs key that is associated with the AWS account. Snapshots encrypted with this key cannot be shared directly. Instead, you must first make a copy of the snapshot and re-encrypt it with a Customer Master Key (CMK). Both the CMK and the snapshot copy then need to be shared with the destination account.
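
As a hedged sketch of that manual flow with boto3 (not the libcloudforensics implementation), with placeholder IDs and a deliberately simplified KMS grant:

# Sketch: manually re-encrypting a snapshot with a CMK so it can be shared.
# All IDs, the region and the profile name are placeholders; the grant below
# is simplified and may need additional operations in practice.
import boto3

src = boto3.Session(profile_name='src_profile')
kms = src.client('kms', region_name='us-east-2')
ec2 = src.client('ec2', region_name='us-east-2')

# Create a CMK and allow the destination account to use it.
key_id = kms.create_key(
    Description='Temporary snapshot-sharing key')['KeyMetadata']['KeyId']
kms.create_grant(
    KeyId=key_id,
    GranteePrincipal='arn:aws:iam::123456789012:root',  # Destination account.
    Operations=['Decrypt', 'DescribeKey', 'CreateGrant'])

# Copy the snapshot, re-encrypting it with the CMK. The copy can then be
# shared with modify_snapshot_attribute, as in the non-encrypted case above.
ec2.copy_snapshot(
    SourceRegion='us-east-2',
    SourceSnapshotId='snap-0123456789abcdef0',
    Encrypted=True,
    KmsKeyId=key_id)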


To reduce key management hassle, libcloudforensics allows you to transfer encrypted EBS resources between accounts by generating a one-time-use CMK, which is deleted once the process completes. Sharing encrypted EBS resources with libcloudforensics is therefore transparent, and is done the same way as sharing regular resources.




The snippet of code provided for non-encrypted EBS resources can be applied as-is for encrypted EBS resources.

Summary

In this post we have presented how GCP and AWS structure cloud resources, and in particular how they handle resource isolation and Identity and Access Management (IAM) permissions. We outlined the differences in how AWS handles sharing non-encrypted versus encrypted EBS resources, and showed how libcloudforensics can help reduce the overhead created by these differences.


We hope this blog post has given you some inspiration on how to further automate your cloud incident response. If you have questions, reach out on the Open Source DFIR Slack; you can find us in the libcloudforensics channel.
