
AWS – Shared (Security) Responsibility Model

Ensuring that your AWS infrastructure is secure is a responsibility that’s shared between you and Amazon.

Amazon is mainly responsible for:

  • Securing the physical hardware that your resources (e.g. EC2 instances) run on, e.g. limiting who is allowed to walk into AWS’s data centres (AZs)
  • Ensuring that internal data transfers are secure, e.g. data transfers between S3 buckets and EC2 instances, as well as data transfers between physical hardware

We are responsible for:

  • Ensuring we use AMIs that are secure, i.e. don’t have API keys or SSH keys hardcoded in them.
  • Performing OS software updates and security patches
  • Keeping “data at rest” secure – e.g. persistent data on our EBS volumes. We can select the EBS encrypt option when creating our instances, and also encrypt our filesystems using LUKS (e.g. cryptsetup luksFormat).
  • OS configurations, e.g. firewalld and SELinux
  • Software configurations, e.g. httpd
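The first responsibility above – keeping credentials out of AMIs – is easy to automate with a quick scan before you bake an image. Here is a minimal sketch: the AWS access-key-ID format (`AKIA` followed by 16 characters) and the PEM private-key header are real patterns, but the helper itself and its name are illustrative, not an AWS tool.

```python
import re

# Patterns for secrets that should never be baked into an AMI.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_ssh_key": re.compile(r"-----BEGIN (RSA |OPENSSH )?PRIVATE KEY-----"),
}

def find_hardcoded_secrets(text):
    """Return the names of any secret patterns found in the given text."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

# The key below is AWS's own documentation example key, not a real credential.
print(find_hardcoded_secrets("aws_key = 'AKIAIOSFODNN7EXAMPLE'"))
# -> ['aws_access_key_id']
```

Running a check like this over every file that goes into an image (shell histories and dotfiles included) catches the most common leak before the AMI is ever shared.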

AWS – Natively available AWS features for enhancing security

AWS offers a bunch of native security features that we can use to enhance security:

  • AWS API access security – via API keys
  • Built-in VPC firewalls – private and public subnets. This encourages us to use private subnets whenever possible
  • IAM – only authenticated users and apps are granted access privileges
  • MFA – multi-factor authentication – e.g. an authenticator app on your phone or a hardware token used as part of the login process
  • Encrypted data stores – e.g. the encrypted EBS feature, also S3 encryption
  • AWS Direct Connect – your ISP routes AWS traffic straight to AWS AZs, without going through the rest of the internet
  • Monitoring AWS API usage – i.e. CloudTrail, which keeps track of user activities
  • AWS Config – lets you compare point-in-time snapshots of your infrastructure to see how it has changed over time. This is useful, for example, if you want to see what has changed about your EC2 instances
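The IAM bullet above is worth making concrete. An IAM policy is just a JSON document; the grammar below (Version, Statement, Effect, Action, Resource) is AWS’s own, while the bucket name is a made-up placeholder. This sketch grants read-only access to one bucket’s objects and nothing else:

```python
import json

# Least-privilege IAM policy sketch: allow only s3:GetObject
# on one (placeholder) bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A policy like this can be attached to a user, group, or role; anything not explicitly allowed is denied by default.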

AWS – Minimizing the impact of DDoS attacks

We can limit the impact of DDoS attacks in the following ways:

  • Identify the IP range of the DDoS attack and block it with a deny rule at the network ACL level. Security groups don’t support explicit deny rules, so the network ACL is the natural place for this.
  • Install DDoS-prevention software on our EC2 instances that will monitor for DDoS attacks and filter them out.
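The first option maps onto EC2’s CreateNetworkAclEntry API (exposed in boto3 as `create_network_acl_entry`). The sketch below just builds the parameter set you would pass to that call; the ACL ID is a placeholder and the CIDR is a documentation-only range standing in for the attacker’s IP block.

```python
# Parameters for an inbound deny rule on a network ACL.
# NACL rules are evaluated in ascending RuleNumber order, so a low
# number ensures this deny is checked before the default allow rules.
deny_rule = {
    "NetworkAclId": "acl-0123456789abcdef0",   # placeholder ACL ID
    "RuleNumber": 90,                          # evaluated before higher-numbered allows
    "Protocol": "-1",                          # -1 = all protocols
    "RuleAction": "deny",
    "Egress": False,                           # inbound rule
    "CidrBlock": "203.0.113.0/24",             # attacker's range (placeholder)
}

print(deny_rule["RuleAction"], deny_rule["CidrBlock"])
```

With boto3 you would pass these as keyword arguments: `ec2_client.create_network_acl_entry(**deny_rule)`.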


AWS minimizes impact by:

  • Use of CloudFront, which can absorb most of the impact; the edge locations take the main brunt of the attack
  • If the DDoS attack is against a static website hosted on S3, then S3 will absorb the impact.
  • Port scanning (e.g. using the nmap command) is prohibited by default in AWS (even port scanning between EC2 instances inside the same VPC). If you want to perform port scanning, you need to contact AWS for permission first.

AWS – Encryption features Overview

You can encrypt the content of your resources. This basically means the content can’t be viewed by, for example, an AWS employee. The only way to decrypt the content is to log into the AWS account that created the encrypted data in the first place, using credentials with the appropriate privileges.

Here are the main resource types whose data you can encrypt:

  • S3 buckets
    • Uses AES-256 encryption to encrypt data at rest. Data is decrypted only when S3 receives a valid request from a valid IAM user or EC2 instance.
  • EBS volumes
    • When enabled, this essentially means that the EC2 instance will always encrypt data first before sending it to the EBS volume for storage.
    • EBS snapshots therefore only store encrypted data. Only a user with the right permissions can decrypt it.

AWS – Granular User/Application/Resource Access Controls

If there is a particular file in an S3 bucket that is available to access, then there are several things that may want to download it:

  • An AWS user – this request can be granted via IAM roles
  • An AD (Active Directory) user who doesn’t have an AWS account – this user might want temporary access, which can be granted via a token that allows them to temporarily assume a role.
  • An application running on top of a resource (e.g. an EC2 instance) – this request can be granted via tokens that have expiry dates
  • A resource – e.g. an EC2 instance itself might want to download the file, which can be done by assigning the necessary permissions to the EC2 instance’s IAM role.
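The “token with an expiry date” idea in the middle two bullets can be sketched with a toy model: a signed payload that carries its own expiry time, verified without any server-side session state. This is only an illustration of the concept – real AWS temporary credentials come from STS and work differently – and the secret and user names are placeholders.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # placeholder signing secret

def issue_token(user, ttl_seconds=900):
    """Toy temporary credential: 'user:expiry:signature'."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{user}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def token_is_valid(token):
    """Accept only tokens with a correct signature that haven't expired."""
    payload, _, sig = token.rpartition(":")
    _user, _, expires = payload.partition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

t = issue_token("ad-user")
print(token_is_valid(t))  # True while the token is unexpired
```

The key property is the same one AWS relies on: access expires automatically, so a leaked token is only useful for a short window.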


You can set up IAM roles that specify exactly which actions a given user, application, or resource is allowed to perform.

AWS – Cloudwatch related security features

CloudWatch-related API requests are signed with an HMAC-SHA1 signature computed from the request and the user’s secret key.
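The signing scheme is easy to demonstrate with the standard library. This is a minimal sketch in the style the post describes, not AWS’s exact algorithm (current AWS APIs use Signature Version 4, which canonicalises the request far more strictly); the secret key below is AWS’s documentation example key and the string-to-sign is a made-up request.

```python
import base64
import hashlib
import hmac

# AWS docs example secret key (not a real credential).
secret_key = b"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Simplified stand-in for a canonicalised CloudWatch request.
string_to_sign = "GET\nmonitoring.amazonaws.com\n/\nAction=ListMetrics"

# HMAC-SHA1 over the request using the secret key, base64-encoded
# so it can travel in a query parameter or header.
signature = base64.b64encode(
    hmac.new(secret_key, string_to_sign.encode(), hashlib.sha1).digest()
).decode()

print(signature)
```

Because only AWS and the key’s owner know the secret, a valid signature proves both who sent the request and that it wasn’t tampered with in transit.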

CloudWatch’s (SDK) API is only accessible via HTTPS, not HTTP, i.e. traffic is encrypted with SSL/TLS.

An IAM user can only access CloudWatch if they are given access via IAM.

You can configure CloudTrail to send notifications to SNS, which in turn notifies CloudWatch to take particular actions when something happens, e.g. restricting a certain user’s permissions if they do something unwanted.


You can also stream all of your EC2 instances’ system logs to CloudWatch in real time (e.g. via the CloudWatch Logs agent).


AWS – CloudHSM

CloudHSM (Hardware Security Module): This is essentially the name of a dedicated physical machine that is separate from all the other AWS hardware, and it is used to store encryption keys. If an outside party gains access to these keys, then your AWS infrastructure is compromised. Hence even AWS employees don’t have physical access to CloudHSM devices, since they are locked in specially controlled rooms that are separate from the rest of the AWS AZ’s hardware.

These keys are only used from inside the CloudHSM device itself. Because of this, the CloudHSM is responsible for decrypting the data it receives and encrypting the data it sends out. CloudHSM has an API that all your other AWS resources can interact with. The AWS resources that interact with CloudHSM are referred to as “CloudHSM clients”.

AWS – Route 53 routing policy types

In Route 53 you can have multiple record sets with the same name (i.e. the same URL). In fact you have to create multiple entries with the same name in order to take advantage of the various routing policies. Here are the available routing policies:

  • Simple
  • Weighted
  • Latency
  • Failover
  • Geolocation

We have already covered Failover.


Latency based routing

One thing you can do is set up the exact same VPC in 2 different regions.

You can then configure Route 53 to route traffic to the VPC that will be the first responder to a given source’s request. This usually means that the VPC that is geographically closer to the requester’s location will end up handling the request.

This indirectly means that if one VPC goes down, all traffic will fail over to the VPC that is still active, since it essentially becomes the first responder.
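The behaviour described above can be modelled in a few lines: route each request to the healthy region with the lowest measured latency, and failover falls out for free when a region is marked unhealthy. The region names and latency figures are made-up examples, not measurements.

```python
# Toy model of latency-based routing with implicit failover.
def pick_region(latencies_ms, healthy):
    """Return the healthy region with the lowest latency."""
    candidates = {r: ms for r, ms in latencies_ms.items() if healthy.get(r)}
    if not candidates:
        raise RuntimeError("no healthy region available")
    return min(candidates, key=candidates.get)

latencies = {"eu-west-1": 25, "us-east-1": 95}  # example latencies from one requester

print(pick_region(latencies, {"eu-west-1": True, "us-east-1": True}))
# -> eu-west-1 (lowest latency)
print(pick_region(latencies, {"eu-west-1": False, "us-east-1": True}))
# -> us-east-1 (failover: the only healthy region left)
```

In real Route 53 terms, the health flags correspond to health checks attached to each record set, and the latency table is maintained by AWS from its own network measurements rather than by you.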