Posts in Category: aws

AWS – EC2 Default Limits

In the EC2 section of your AWS web console, there is a section called “Limits”. This sets soft limits on things like how many running EC2 instances you are allowed to have at any given time. These limits are in place mainly for AWS’s own benefit, to help them plan capacity for the future. However you can put in a request to have these limits increased as and when needed.
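For example, assuming you already have the AWS CLI configured, you can query your current per-region instance limit from the command line:

```shell
# Show the account's maximum number of running On-Demand instances
# for the currently configured region
aws ec2 describe-account-attributes --attribute-names max-instances
```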


AWS – Creating new users (in IAM)

Once you have logged into the dashboard, you can create new AWS login accounts by clicking on the “Identity and Access Management” link.

When creating a new user, you will be prompted on whether you want an “API access key”; if you do, then the following pair of info gets generated:

  • Access Key ID – a string of characters
  • Secret Access Key – a really long string of characters

The “Secret Access Key” is only displayed this one time, and is not viewable on any other settings page. Hence you are given the option to download it into a text file while it is displayed on screen. If you lose this info, then you would need to regenerate your API keys, which you can do without needing to recreate the user.
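The same user-plus-key creation can also be done from the AWS CLI (the user name “alice” here is just an example); note again that the secret is only shown in the output of this one call:

```shell
# Create the user, then generate an API key pair for them.
aws iam create-user --user-name alice
aws iam create-access-key --user-name alice   # output contains AccessKeyId and SecretAccessKey
```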


AWS – IAM Groups and Roles

Groups and Roles

Roles are universal. I.e. if you create one in one region, it becomes available in all other regions.

 

Let’s say you want to grant a specific permission (e.g. create EC2 instances) to 100 users. You could do this by adding it to each user, one at a time. Now let’s say a couple of weeks later you want to grant another permission to those 100 users. Once again you would need to update each user one at a time. This can get tedious.

A better way is to create a group (with an appropriate name), and then add the 100 users to this group. After that you can assign permissions to the group as a whole, rather than individually. You can also assign even more permissions to the group later, and every member picks them up automatically.
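As a sketch, the group-based approach looks like this from the AWS CLI (group and user names are examples; AmazonEC2FullAccess is one of AWS’s built-in managed policies):

```shell
aws iam create-group --group-name ec2-admins
aws iam add-user-to-group --group-name ec2-admins --user-name alice
# Grant a permission once, to the group, instead of once per user:
aws iam attach-group-policy --group-name ec2-admins \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
```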


AWS – Create a new EC2 instance

To create a new EC2 instance, you need to log in to your dashboard and click on “EC2”; here you’ll find that you are already in a default VPC. Next you:

 

Click on “Launch Instance” to create a new instance -> select OS type (Windows, RHEL, SUSE, etc.) -> select VM size (e.g. t2.micro).

 

On the next screen, you can select the role you want to allocate to your EC2 instance. Remember this originally could not be removed/changed in any way after the machine had been created (AWS has since added the ability to attach or replace a role on a running instance).

On this screen you should also set “Auto-assign Public IP” to “Enable” from the dropdown list. This is so that you can ssh into your EC2 instance using PuTTY.

 

The next screen lets you attach additional EBS devices to your instance. These EBS devices appear as /dev/xvda, /dev/xvdb, /dev/xvdc, /dev/xvdd, etc.
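The same launch flow can be sketched as a single AWS CLI call (the AMI ID and key pair name below are placeholders you would substitute with your own):

```shell
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-keypair \
    --associate-public-ip-address \
    --iam-instance-profile Name=test-role \
    --block-device-mappings 'DeviceName=/dev/xvdb,Ebs={VolumeSize=8}'   # extra EBS volume
```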


AWS – Testing what access permissions an EC2 instance has

In the previous article we covered how to create and log into your instance. During this process, let’s say we assigned a role called “test-role” to this instance.

Next we want to see what other resources our EC2 instance can access. To do this we first ssh into our instance from our laptop:

[ec2-user@LinuxLaptop ~] $ ssh {ipnumber}

Next we need to install the aws cli utilities into our instance:

$ yum install python-pip
$ pip install awscli

After that we could run the following to see if our machine has access to any S3 buckets:

$ aws s3 ls

This command will list all the buckets that exist under your AWS account, assuming that the role assigned to this instance has a minimum of S3 read-only access. The cool thing here is that we never had to configure any credentials on the instance itself; the role supplies them automatically.
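Under the hood, the role provides temporary credentials via the instance metadata service; you can view them from inside the instance (role name “test-role” as assumed above):

```shell
# Prints the temporary AccessKeyId/SecretAccessKey/Token granted by the role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/test-role
```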


AWS – Connecting your local desktop to your AWS account using API Keys

Everything you can do via your AWS web console can also be done from your local Linux desktop, via the command line. First you need to install the AWS CLI:

$ yum install python-pip
$ pip install awscli

Next, via the web console, in IAM, create a new user; during the creation process you will generate your API keys.

Next, on your local desktop’s command line, run the “aws configure” command:

$ aws configure
AWS Access Key ID [None]: AESDRFKSDGJSKDF
AWS Secret Access Key [None]: ljlJLJljGjGLafa5454fasdf6sd5sd6sd5f4a
Default region name [None]: eu-west-1
Default output format [None]:

Note: This ends up creating the ~/.aws folder, which is where this info gets stored.
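The files written into ~/.aws look roughly like this (using the dummy values from above):

```
# ~/.aws/credentials
[default]
aws_access_key_id = AESDRFKSDGJSKDF
aws_secret_access_key = ljlJLJljGjGLafa5454fasdf6sd5sd6sd5f4a

# ~/.aws/config
[default]
region = eu-west-1
```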

These API keys are unique across all of AWS worldwide. This means AWS can determine which AWS account a given pair of API keys is associated with.

After that we could run a command such as “aws s3 ls” to confirm that the setup works.


AWS – Monitor all user AWS activities (CloudTrail)

Logging activities and events is important. There are two kinds of things you want to create logs for:

  • logging of capacity/performance of an EC2 instance, e.g. CPU utilisation – this is done by the CloudWatch service.
  • AWS interaction logging, e.g. which user deleted which EC2 instance – this kind of logging is done by the CloudTrail service.

It’s good practice to log all the activities authorized AWS users are performing on AWS. The users can interact with AWS either via the web console or the AWS CLI; in both cases all activities will get logged by CloudTrail, if you enable it.

CloudTrail tightly integrates with other AWS services. In particular, CloudTrail stores all the logs that it generates in an S3 bucket. You get to choose the name of this S3 bucket as part of setting up the trail.
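As a sketch, enabling a trail from the CLI looks like this (the trail and bucket names are examples; the bucket also needs a policy that allows CloudTrail to write to it):

```shell
aws cloudtrail create-trail --name my-trail --s3-bucket-name my-cloudtrail-logs
aws cloudtrail start-logging --name my-trail
```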


AWS – Amazon Elastic Beanstalk

The Elastic Beanstalk service automates the building of middleware servers (e.g. httpd servers, nginx servers, etc.) and then deploys your app onto them. I.e. it is perfect for setting up a vanilla httpd server, a Rails server, etc. This is ideal if you have a relatively simple application you want to deploy, that doesn’t require a lot of middleware configuration.

Elastic Beanstalk basically does the following:

  • deploys – installs middleware and deploys apps onto it.
  • manages – e.g. OS patching, configuring firewalld, etc.
  • auto scales up/down – adds/removes instances to meet demand.

 

Elastic Beanstalk can install middleware in order to deploy the following types of applications:

  • .net
  • java
  • php
  • python
  • ruby
  • docker containers
  • node.js

Some of the middleware that Elastic Beanstalk can install and configure are:

  • httpd server
  • nginx server
  • passenger server
  • apache tomcat
  • Microsoft IIS

 

Elastic Beanstalk achieves its job by making use of:

  • EC2
  • Simple Notification Service (SNS)
  • S3
  • ELB
  • Auto-Scaling

 

All you need to tell Elastic Beanstalk is what platform your application runs on and where your code is; it takes care of the rest.
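As a sketch, using Elastic Beanstalk’s separate command line tool (awsebcli), a deployment can look like this (application/environment names are examples; run from inside your project directory):

```shell
pip install awsebcli
eb init my-app --platform python --region eu-west-1  # pick platform + region
eb create my-app-env   # builds the environment (EC2, ELB, auto scaling...)
eb deploy              # pushes a new version of the app
```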


AWS – CloudFormation (Infrastructure As Code)

You can manage all your AWS stuff using the AWS GUI web console. However, everything that you can do using the GUI console can also be done by using either:

  • AWS API, which is accessible via a choice of SDKs.
  • AWS CLI (Command Line Interface) – Linux or Powershell

CloudFormation, on the other hand, lets you document your entire AWS infrastructure (e.g. a VPC) in the form of a JSON file.

In other words, CloudFormation is the AWS equivalent of writing a Vagrantfile for Vagrant. The only difference is that you are describing AWS stuff, and it is written using JSON syntax.

You can then store this JSON file in GitHub and quickly build/rebuild new VPC environments from it. The JSON file can specify just part of a VPC environment, or the whole thing.
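A minimal sketch of such a template, defining nothing but a VPC (the CIDR block is chosen arbitrarily):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: a single VPC",
  "Resources": {
    "MyVPC": {
      "Type": "AWS::EC2::VPC",
      "Properties": { "CidrBlock": "10.0.0.0/16" }
    }
  }
}
```

You would then build it with something like “aws cloudformation create-stack --stack-name my-vpc --template-body file://vpc.json”.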


AWS – Managing AWS activities

Identity Access Management (IAM)

IAM is a service that lets you set user and group permissions on what they are allowed/denied to do.

 

It lets you set permissions for resources that belong to the following service categories:

  • computing
  • storage
  • databases
  • applications

These permissions are specified against particular API calls / CLI options / web-console actions.

Here are some examples:

  • give a user (or group) permission to create new EC2 instances
  • deny a user (or group) permission to delete a particular EC2 instance
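A sketch of what a policy document for the second example might look like (the account number and instance ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "arn:aws:ec2:eu-west-1:123456789012:instance/i-0123456789abcdef0"
    }
  ]
}
```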

 

IAM is very granular and lets you set all kinds of permissions.

http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html

 

CloudTrail

This is a service that logs all AWS console/CLI/API activities and who performed them.

It is a logging solution to help identify any security issues.

https://aws.amazon.com/cloudtrail/

 

CloudWatch

This is a monitoring service that monitors various services and resources. It can collect and track metrics, and it can collect logs for various resources, e.g. CPU utilisation on a given EC2 instance, network traffic, etc.
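For example, a sketch of pulling an instance’s CPU utilisation over one hour via the CLI (the instance ID and timestamps are placeholders):

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2017-01-01T00:00:00Z \
    --end-time 2017-01-01T01:00:00Z \
    --period 300 \
    --statistics Average
```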