Threat Hunting in AWS


Created: 09.09.2020

This article is about threat hunting in AWS. Many of the observations apply cloud-wide, but the focus here is AWS only.

GuardDuty

It’s the main built-in threat hunting tool that we have in AWS.

Cloud Hygiene

  • What settings create publicly accessible objects?
  • How does IAM integrate with the storage service to create least-privilege access control lists?
  • Can private objects be shared with temporary signed URLs?
  • Can versioning be enabled on objects for auditing?
  • Do data retention and object lifecycle policies prevent data loss?
  • What monitoring and audit logging options are available for forensics and IR?
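To the question about temporary signed URLs above: yes, a private S3 object can be shared this way. A quick sketch (bucket and object names are placeholders; the exported credentials are fake, which works because presigning is a local signature computation, not an API call):

```shell
# Fake credentials so the signer runs offline; replace with a real profile in practice.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret"
export AWS_DEFAULT_REGION="us-east-1"

# Share a private object for one hour via a temporary signed URL
aws s3 presign s3://example-bucket/private-object.txt --expires-in 3600
```

Anyone holding the printed URL can fetch the object until it expires, which is exactly why calls that generate such URLs are worth monitoring.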

S3 Bucket

  • No public access for buckets that contain anything sensitive: aws s3control get-public-access-block --profile Dev-SA --account-id 760140307285. If the call returns an error, no account-level block is configured and blocking is performed on a per-bucket basis.
  • Monitor Object URL Signatures (monitor calls to presigned API)
  • KMS enabled by default for all buckets
  • Deny access if HTTPS is not being used

Example of a blocking config:

{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}
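The “deny access if HTTPS is not being used” bullet above can be enforced with a bucket policy conditioned on aws:SecureTransport (a standard AWS pattern; the bucket name is a placeholder):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ],
            "Condition": {
                "Bool": {"aws:SecureTransport": "false"}
            }
        }
    ]
}
```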

📘 BTFM

# get current settings
aws s3control get-public-access-block --profile ProfileName --account-id $(aws sts get-caller-identity --profile ProfileName | jq -r '.Account')

# Block all public access
aws s3control put-public-access-block --profile ProfileName --public-access-block-configuration "BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true" --account-id $(aws sts get-caller-identity --profile ProfileName | jq -r '.Account')
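For the “KMS enabled by default” bullet, default encryption is set per bucket. A sketch (bucket name and profile are placeholders):

```shell
# Turn on default SSE-KMS encryption for a bucket
aws s3api put-bucket-encryption --profile ProfileName --bucket example-bucket \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'

# Verify the setting
aws s3api get-bucket-encryption --profile ProfileName --bucket example-bucket
```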

Serverless

https://github.com/pumasecurity/serverless-prey/blob/main/panther/docs/NOTES.md

Assessing the environment:

  • OS running the functions
  • User the function runs as
  • Where is the source code? (on AWS: /var/task)
  • Where are the creds?
  • Can you persist data to disk?
  • How long does the environment live for?

Function assessment:

  • Hardcoded creds (passwords, tokens etc)
  • Authenticate requests to publicly accessible functions
  • Use unique service accounts per function
  • Regularly audit function permissions for least privilege
  • Enable function audit and network controls (if available)

Temporary access keys are stored in the containerised environment as environment variables.
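A minimal offline illustration (the exported values are fake stand-ins, but the variable names are the ones the Lambda runtime actually sets):

```shell
# Fake credentials standing in for what the Lambda runtime injects;
# any command injection in the function can read them the same way.
export AWS_ACCESS_KEY_ID="AKIAFAKEKEY"
export AWS_SECRET_ACCESS_KEY="fake-secret"
export AWS_SESSION_TOKEN="fake-session-token"

# What an attacker would run inside the function's container:
env | grep -E '^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)='
```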

❗️Temp creds for Lambda live for 12 hours; this is not customisable. Possible vulnerabilities: LFI and command injection could expose these credentials.
❗️Ingress traffic to Lambda functions is disabled by default.
❗️Egress traffic is allowed by default, hence the reverse-shell risk.
❗️No logging of the function’s network traffic.
❗️No private routes, only over the Internet (including requests to Secrets Manager).

Mitigations: an ALB or API Gateway (advanced authentication for functions triggered by HTTP requests), role policies with least privilege, and Lambda VPC network integration.

Those functions run in a container. The container is not spawned/destroyed on each function execution, which raises several concerns: temp files that contain confidential data, and DoS attacks on disk space. /tmp is the only directory with write permissions. There is no AV on the underlying EC2 host. Tested with Serverless Prey: 11 minutes of inactivity destroys the container.
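The /tmp staging pattern an attacker would use can be sketched as below (run outside Lambda this only demonstrates the write; inside Lambda, every other path, including /var/task, is read-only):

```shell
# /tmp is the only writable path in a Lambda container; stage data there.
echo "staged data" > /tmp/lambda_staging_demo.txt
ls -l /tmp/lambda_staging_demo.txt
```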

📘 BTFM

There is also an infrastructure-as-code framework for this space, a Terraform analogue just for serverless stacks: the Serverless Framework.

Beanstalk

Cloud Application Service.

Firebase

It’s a DB service that eliminates the need for an application server: changes on the frontend are reflected in the DB and vice versa. The main risk here is improper auth settings. That was the original design; today Firebase is a whole platform with many different services.

📕 RTFM

# dump the contents of the DB
curl -s https://dbname.firebaseio.com/.json | jq

# overwrite the contents of the firebase (PUT replaces the data at the path; a plain POST would push a new child instead)
curl -s -X PUT https://dbname.firebaseio.com/.json --data '{"foo": "bar"}'

Data Exfiltration

It is bad if any of these has public access:

  • EC2 (public and cross-account)
    • VM images: aws ec2 modify-image-attribute --image-id id --launch-permission "Add=[{Group=all}]". Sharing with the ALL group.
    • Disk snapshots: aws ec2 modify-snapshot-attribute --snapshot-id id --attribute createVolumePermission --operation-type add --group-names all
  • Container image repos
  • DB Backups
  • Big datasets
  • Functions
  • Secrets
  • Signed URLs
  • KMS keys
  • S3

Ways to share:

  1. S3 bucket public access enabled (it is not blocked by default)
  2. Sharing API (aws ec2 modify-snapshot-attribute --snapshot-id id --attribute createVolumePermission --operation-type add --group-names all). That means that this particular resource is shared with the group “All”
  3. Resource policy with Principal set to *. Applies to those resources that support resource policies.
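Both sharing mechanisms above leave hunt-able state. The following checks (real EC2 API filters; no placeholders beyond your own credentials) list resources in your account exposed to everyone:

```shell
# Hunt for snapshots in your account that are restorable by everyone
aws ec2 describe-snapshots --owner-ids self --restorable-by-user-ids all

# Hunt for AMIs in your account that are launchable by everyone
aws ec2 describe-images --owners self --executable-users all
```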
