In this blog post I am exploring AWS services (EC2). I am currently on the Free Tier and learning how to use it. Why? First, knowing AWS seems to be a very useful skill on the market. Second, I used to keep several local VMs that consumed too much disk space and RAM. Now that I'm on an M1 MacBook Pro, keeping so many VMs locally is impossible anyway, since only a handful of Linux and Windows 10 machines are available for the ARM platform. Besides, I've noticed that I don't really use them much, so having 60 GB of space occupied for occasional usage is weird. Everything speaks in favour of AWS.
📆 07/03/2021
⏰ 9:13 PM
This story began a while ago when I was reading The Hacker Playbook 3 by Peter Kim, where the author suggested using cloud services for VMs instead of local ones. Getting an M1 MacBook forced me to reconsider my lab setup, and I've decided to give it a try. Since everything is in the cloud ☁️ now, it's an essential skill anyway. Such a configuration also doesn't require hunting for images or keeping multiple VMs locally, and it lets me run many VMs at once. I am only keeping a Parallels Win10 machine locally for some minor tasks. The prices for AWS EC2 are quite reasonable, given that I don't use VMs for long stretches or every day.
And so I got started with EC2. Amazon provides a 12-month free tier offer, which is a very good thing for someone like me who's just getting started and doesn't require much usage time or performance from her machines.
📆 07/03/2021
⏰ 12:30 PM
Step 1. Create AWS account
Step 2. Get your creds and configure. Click your username in the upper-right corner and select My Security Credentials from the drop-down list. On that page expand Access keys (access key ID and secret access key) and press Create New Access Key (unless one was already created some time ago). You will be given a file to save containing the key ID and the secret key. Keep them safe. Submit them after running aws configure when prompted for AWS Access Key ID and AWS Secret Access Key. For output leave json as the default (unless you want something else), and enter the Default region name nearest to you.
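For reference, aws configure just writes two plain-text files under ~/.aws — the key values and region below are placeholders, not real ones:

```
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

# ~/.aws/config
[default]
region = eu-central-1
output = json
```

Editing these files by hand has the same effect as re-running aws configure.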
Step 3. Create trust policy and role policy files.
Step 4. Create a role in your account. ❓ What's that and what's that for? As far as I can tell, the vmimport role is what the VM Import/Export service assumes so it can act on my behalf.
Step 5. Create a role policy as well. ❓ What's that and what's that for? That's the set of permissions attached to the role: read access to the S3 bucket and the EC2 snapshot/image actions the import needs.
Step 6. Image uploading. Upload ova/vhd/vmdk/raw image into the S3 bucket.
Step 7. Image import. Import the uploaded image to be converted to an AMI (Amazon Machine Image).
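The two JSON files from step 3 can be sketched like this. This follows the shape the AWS VM Import/Export docs use; bucketname is a placeholder, so adjust the ARNs for your actual bucket:

```shell
# trust-policy.json: lets the VM Import/Export service assume the vmimport role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:Externalid": "vmimport" } }
    }
  ]
}
EOF

# role-policy.json: grants the role read access to the bucket
# and the EC2 snapshot/image actions the import needs
cat > role-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucketname", "arn:aws:s3:::bucketname/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:ModifySnapshotAttribute",
        "ec2:CopySnapshot",
        "ec2:RegisterImage",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# sanity-check that both files are valid JSON before feeding them to aws iam
python3 -m json.tool trust-policy.json > /dev/null && echo trust-policy OK
python3 -m json.tool role-policy.json > /dev/null && echo role-policy OK
```

The sts:Externalid condition must literally be vmimport — that's what the service presents when assuming the role.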
AWS commands for uploading images and checking status:
# configure basic stuff like credentials, output format and region
aws configure
# create a role on AWS
aws iam create-role --role-name vmimport --assume-role-policy-document "file:///full/path/to/trust-policy.json"
# create a policy
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document "file:///full/path/to/role-policy.json"
# upload the image into the S3 bucket
aws s3 cp image.ova s3://bucketname/path/to/image/in/the/bucket/image.ova
# import the uploaded image to make an AMI
# for importing an OVA
aws ec2 import-image \
--disk-containers Format=ova,UserBucket="{S3Bucket=bucketname,S3Key=path/to/image/in/the/bucket/image.ova}"
# for importing a VMDK
aws ec2 import-image \
--disk-containers Format=vmdk,UserBucket="{S3Bucket=bucketname,S3Key=path/to/image/in/the/bucket/image.vmdk}"
# note the import-ami-XXXXXXXXXXXXXX task ID in the output of import-image
# check task status
aws ec2 describe-import-image-tasks --import-task-ids import-ami-XXXXXXXXXXXXXX
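While the import runs, describe-import-image-tasks returns JSON roughly shaped like the snippet below. This is an abridged sketch from my recollection of the API, so treat the exact field names as an assumption; checking it with jq or python3 beats re-reading the raw blob:

```shell
# assumed shape of a describe-import-image-tasks response, saved locally for the example
cat > import-task.json <<'EOF'
{
  "ImportImageTasks": [
    {
      "ImportTaskId": "import-ami-XXXXXXXXXXXXXX",
      "Progress": "28",
      "Status": "active",
      "StatusMessage": "converting"
    }
  ]
}
EOF

# pull out just the task id and status using only python3
# prints: import-ami-XXXXXXXXXXXXXX active
python3 - <<'EOF'
import json

with open("import-task.json") as f:
    task = json.load(f)["ImportImageTasks"][0]

print(task["ImportTaskId"], task["Status"])
EOF
```

When the task finishes, the status should move past active and the resulting AMI shows up under your images.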
Failed to get any image that way — I got a kernel version error every time 😢. Oh, kernel, what are you doing with me, babe.
So, I've decided to take another route to get a SIFT workstation: spin up an AWS Ubuntu 16.04 instance and install SIFT manually. Failed for reasons unknown. Then I tried with AWS Ubuntu 18.04. Still waiting.
I then opened the book again and noticed that it wasn't EC2 but AWS Lightsail that was recommended. So, I had to dig up ⛏️ the difference between the two. I still don't know which is better for me at the moment, but I suppose I'll use EC2 for the upcoming 12 months and learn the more complex solution. Moving to Lightsail (should the need arise) should be easier than vice versa.
https://ubuntu.com/tutorials/ubuntu-desktop-aws#1-overview
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-2-install-gui/
To install SIFT manually:
#!/bin/bash
# download the sift-cli binary and its signed checksum file
wget https://github.com/sans-dfir/sift-cli/releases/download/v1.9.2/sift-cli-linux
wget https://github.com/sans-dfir/sift-cli/releases/download/v1.9.2/sift-cli-linux.sha256.asc
# import the signing key, verify the signature, then check the binary's checksum
gpg --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys 22598A94
gpg --verify sift-cli-linux.sha256.asc
shasum -a 256 -c sift-cli-linux.sha256.asc
# install the binary (chmod needs sudo too, since the file is root-owned after the move)
sudo mv sift-cli-linux /usr/local/bin/sift
sudo chmod 755 /usr/local/bin/sift
# full install (with GUI packages)
sudo sift install
# or the server-only install (no GUI)
sudo sift install --mode=server
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/
Managed to import Ubuntu 16.04, but failed to launch it or connect over SSH.
After all this time I managed to find https://github.com/teamdfir/sift/issues/465. Select the us-west-2 region and search for ami-0e18b3270d2ce667d. Eligible for free tier use. So much time wasted, but there is a result finally. One thing to note — log in with the sansforensics username. Also, install the AWS CLI to use the S3 bucket.
Getting data from the bucket (once the CLI is installed and configured):
# copy a single file from the bucket
aws s3 cp s3://my_bucket/my_folder/my_file.ext my_copied_file.ext
# copy a whole folder recursively
aws s3 cp s3://my_bucket/my_folder . --recursive
# or, if the object is publicly readable
wget https://my_bucket.s3.amazonaws.com/path-to-file