
⛅️ AWS Evidence Collection

Created: 12.10.2020

Are there any shadow cloud accounts? If so, they can be the first place to look when investigating.

A ‘without-reboot’ snapshot is equivalent to a live acquisition, and a snapshot with a reboot is more like a traditional powered-off. Sheward, Mike. Hands-on Incident Response and Digital Forensics (p. 175). BCS Learning & Development Limited. Kindle Edition.

EC2 instance metadata


Sensitive information can be stored in IMDS if it is not configured properly (MITRE ATT&CK T1522). This is not the case with service-managed accounts.

curl -s ""

⛔️ If you get curl: (6) Could not resolve host: xn--s-5gn, the -s flag was pasted with a typographic dash instead of a hyphen. Try typing all dashes manually. ✍🏻 See https://stackoverflow.com/questions/43734502/curl-command-could-not-resolve-xn-x-5gn-post-on-ubuntu.

Case #1: Capital One. In 2019, Capital One was breached via SSRF against IMDSv1. See CLOUD SECURITY - ATTACKING THE METADATA SERVICE: https://pumasecurity.io/resources/blog/cloud-security-instance-metadata/. IMDSv2 has several protections in place to make SSRF much harder: the response TTL is set to 1 (so the response cannot cross a network hop), a session token must be obtained with a PUT request (most SSRF primitives and WAFs don't support it), all requests with an X-Forwarded-For header are denied, and the custom headers X-aws-ec2-metadata-token-ttl-seconds and X-aws-ec2-metadata-token are required. One only needs to make sure instances use IMDSv2 instead of version 1.
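For reference, the IMDSv2 flow described above looks like this (a sketch; it only works from inside an EC2 instance):

```shell
# IMDSv2: first obtain a session token with a PUT request
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Then query metadata with the token header
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/"

# IAM role credentials - the data targeted in SSRF attacks like Capital One
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```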

Amazon EBS disk snapshots



  • A separate account for forensic acquisition and analysis has been created (you’d need root access to your AWS organization).
    • Source account: the IAM user or role in the source account needs to be able to call ModifySnapshotAttribute and to perform the DescribeKey and ReEncrypt operations on the key associated with the original snapshot.
    • Target account: the IAM user or role in the target account needs to be able to perform the DescribeKey, CreateGrant, and Decrypt operations on the key associated with the original snapshot. The user or role must also be able to perform the CreateGrant, Encrypt, Decrypt, DescribeKey, and GenerateDataKeyWithoutPlaintext operations on the key associated with the call to CopySnapshot.
  • One has root privileges to perform EBS snapshotting.
  • The recycle bin is enabled in case of accidental deletion.

Acquisition Steps

At a very high level: an investigator creates a snapshot of an EBS volume, shares it with the forensic account, and then makes a copy of the snapshot from the forensic account.

  1. EC2 → Volumes → Check the volume for snapshotting. Click Actions → Create Snapshot from the drop-down menu in the top-right corner.

  2. In the description field put the name that follows this convention: incnum-forensic-copy-YYYY-MM-DD-HH-MM.

  3. Share KMS keys with the forensic account (if these are not shared already).

  4. Go to EC2 → Snapshots → choose the snapshot and click Actions → Modify Permissions from the drop-down menu in the top-right corner.

  5. Enter the forensic account’s number.

  6. In the forensic account go to Snapshots → Private Snapshots.

  7. Locate the snapshot shared, check it and click Actions → Copy snapshot from the drop-down menu in the top-right corner.

    Copy snapshot functionality, AWS

  8. Select an encryption key for the copy of the snapshot and create the copy.
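The console steps above can also be sketched with the AWS CLI (volume/snapshot IDs, the account number, region and key alias are placeholders):

```shell
# Step 1-2: create a snapshot of the suspect volume (run in the source account)
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "incnum-forensic-copy-2020-10-12-14-00"

# Steps 4-5: share the snapshot with the forensic account
aws ec2 modify-snapshot-attribute \
  --snapshot-id snap-0123456789abcdef0 \
  --attribute createVolumePermission \
  --operation-type add \
  --user-ids 111122223333

# Steps 7-8: copy the shared snapshot (run in the forensic account),
# re-encrypting it with the forensic account's key
aws ec2 copy-snapshot \
  --source-region eu-west-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted \
  --kms-key-id alias/forensics-key
```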

Retention Steps

To save space and money while still preserving the evidence, the snapshot can be moved to an archive tier until it is agreed that it is no longer needed.

  1. From the forensic account (where the snapshot was copied to) go to EC2 → Snapshots.

  2. Choose the snapshot to archive.

  3. Check it and click Actions → Archive snapshot from the drop-down menu in the top-right corner.
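The same archive action via the CLI (the snapshot ID is a placeholder; the archive tier must be available in the region):

```shell
# Move the snapshot to the low-cost archive tier
aws ec2 modify-snapshot-tier \
  --snapshot-id snap-0123456789abcdef0 \
  --storage-tier archive
```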

EBS disks streamed to S3

How to stream logs/data to forensic account?

Memory dumps

Memory hibernation

Memory can be captured through hibernation: the RAM contents are written to the root EBS volume, which can then be snapshotted.

CloudTrail logs

CloudTrail stores most of the information about what happened in the cloud (logins, creating instances, etc.). Full logs are only available in JSON format. How to stream logs/data to the forensic account?
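A quick way to query CloudTrail from the CLI (the event names below are examples):

```shell
# Recent console logins
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
  --max-results 20

# Who created instances recently?
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances
```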

AWS Config rule findings

Amazon Route 53


DNS resolver query logs


VPC Flow Logs


AWS Security Hub findings


Elastic Load Balancing access logs


AWS WAF logs


Custom application logs

System logs

Security logs

Any third-party logs

EC2 snapshots


S3 Buckets. Download bucket contents

$ aws s3api list-buckets
# choose the bucket you want and ... 
# ... download bucket contents
$ aws s3 sync s3://juicy-staff /tmp/juicy-staff-on-attackers-pc

Connect to EC2


aws ssm start-session --target [enter instance id]

SSH via Identity

ssh -i "~/.ssh/identity.pem" ec2-user@

SSH via user/password

ssh ec2-user@
> enter the password


Amazon GuardDuty

It’s the main built-in threat hunting tool that we have.

Cloud Hygiene

  • What settings create publicly accessible objects?
  • How does IAM integrate with the storage service to create least-privilege access control lists?
  • Can private objects be shared with temp signed URL?
  • Can versioning be enabled on objects for auditing?
  • Do data retention and object lifecycle policies prevent data loss?
  • What monitoring and audit logging options are available for forensics and IR?

S3 Bucket

  • No public access for the buckets that contain anything sensitive. Check with aws s3control get-public-access-block --profile Dev-SA --account-id 760140307285; if it returns an error, there is no account-wide block and blocking is performed on a per-bucket basis.
  • Monitor Object URL Signatures (monitor calls to presigned API)
  • KMS enabled by default for all buckets
  • Deny access if HTTPS is not being used
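The last item can be enforced with a bucket policy like this (a sketch; the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::example-bucket",
      "arn:aws:s3:::example-bucket/*"
    ],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }]
}
```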

Example of a blocking config:

    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }


# get current settings
aws s3control get-public-access-block --profile ProfileName --account-id $(aws sts get-caller-identity --profile ProfileName | jq -r '.Account')

# Block all public access
aws s3control put-public-access-block --profile ProfileName --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" --account-id $(aws sts get-caller-identity --profile ProfileName | jq -r '.Account')



AWS Lambda

Assessing the environment:

  • OS running the functions
  • User
  • Where is the source code (AWS /var/task)
  • Where are the creds
  • Can you persist data to disk?
  • How long does the environment live for?

Function assessment:

  • Hardcoded creds (passwords, tokens etc)
  • Authenticate requests to publicly accessible functions
  • Use unique service accounts per function
  • Regularly audit function permissions for least privilege
  • Enable function audit and network controls (if available)

Temporary access keys are stored in the containerised environment as environment variables.

❗️ Temporary credentials for a Lambda function live for 12 hours; this is not customisable. Possible vulnerabilities: LFI and command injection could expose these credentials.
❗️ Ingress traffic for Lambda functions is disabled by default.
❗️ Egress traffic is allowed by default, hence the reverse shell risk.
❗️ No logging in the function’s network.
❗️ No private routes, only over the Internet (including requests to Secrets Manager).
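A minimal sketch of what to look for from inside the function (e.g. via command injection); the variable names are the standard Lambda environment variables:

```shell
# The temporary role credentials are plain environment variables
env | grep -E 'AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)'

# Source code location, and the only writable directory
ls /var/task
touch /tmp/probe && ls -l /tmp
```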

Mitigations: ALB or API Gateway (advanced authentication for functions triggered by HTTP requests), role policies with least privilege, Lambda VPC network integration.

Those functions run in a container. The container is not spawned/destroyed on each function execution. There are several concerns here: temp files that contain confidential data, and DoS attacks on disk space. /tmp is the only directory with write permissions. There is no AV on the underlying host. Tested with Serverless Prey: 11 minutes of inactivity destroys the container.


There is also a framework for infrastructure as code, a Terraform analogue just for the serverless stuff: Serverless.


Firebase (Cloud Application Service)


It’s a DB service that has eliminated the need for an application server: changes on the frontend result in changes in the DB and vice versa. The risk here: improper auth settings (that was the intended design when it was first developed). Now it’s a whole platform with different services.


# dump the contents of the DB
curl -s https://dbname.firebaseio.com/.json | jq

# overwrite the contents of the firebase
curl -s https://dbname.firebaseio.com/.json --data '{"foo": "bar"}'

Data Exfiltration

It’s bad if any of these have public access:

  • EC2 (public and cross-account)
    • VM images: aws ec2 modify-image-attribute --image-id id --launch-permission "Add=[{Group=all}]"
    • Disk snapshots: aws ec2 modify-snapshot-attribute --snapshot-id id --attribute createVolumePermission --operation-type add --group-names all. Sharing with the all group.
  • Container image repos
  • DB Backups
  • Big datasets
  • Functions
  • Secrets
  • Signed URLs
  • KMS keys
  • S3

Ways to share:

  1. S3 bucket public access enabled (block public access is off by default)
  2. Sharing API (aws ec2 modify-snapshot-attribute --snapshot-id id --attribute createVolumePermission --operation-type add --group-names all). That means that this particular resource is shared with the group “all”.
  3. Resource policy with Principal set to *. Applies to those resources that support resource policies.
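For illustration, a resource policy with Principal set to * looks like this (a sketch for an S3 bucket; the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}
```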



[1] Threat Hunting in the Cloud, Chris Peiris, Binil Pillai, Abbas Kudrati

[2] GuardDuty article

[3] How to automate the import of third-party threat intelligence feeds into Amazon GuardDuty

[4] Amazon GuardDuty FAQ

[5] IR in AWS

[0] Forensic investigation environment strategies in the AWS Cloud

[1] Cross-account copy of EBS snapshot

[2] Copy EBS snapshot

[3] Create EBS snapshot