
AWS get the current user's ARN – BASH


I have already posted on how to get the AWS username from the user's ARN in both Ruby and Python. But what about Bash?

I love to use Bash to quickly whip something together, and the awscli makes it super easy. However, the get-caller-identity method was not introduced until version 1.10 of the cli, so you may need to upgrade your cli first. On a Mac/Linux desktop/server this is easy.

pip install --upgrade awscli

This should upgrade you to the latest version, which at the time of this article was 1.11.2.
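You can confirm which version you ended up with:

aws --version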

Now sts get-caller-identity should be working:

aws sts get-caller-identity

It will return something like this:

{
    "Account": "123456789012",
    "UserId": "Abcdefg123456789XYZ01",
    "Arn": "arn:aws:iam::123456789012:user/bob"
}

Now we can parse that. We can use either jq or --query; I will show both.

First for jq, which is a favorite tool around my shop:

aws sts get-caller-identity --output json | jq -r '.Arn' | cut -f 2 -d '/'

It looks a little messy, but it works as long as you have jq installed. But what about --query?

aws sts get-caller-identity --output text --query 'Arn' | cut -f 2 -d '/'

I use --output text to eliminate the double quotes.
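Putting it together, a minimal script that stashes the username in a variable might look like this (the variable name is arbitrary):

#!/usr/bin/env bash
# Ask STS for the caller's ARN and keep only the username after the last '/'
AWS_USER=$(aws sts get-caller-identity --output text --query 'Arn' | cut -f 2 -d '/')
echo "Running as ${AWS_USER}"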

As you can see, bash, IMHO, is a much easier tool for quickly building small shell scripts with the aws cli.

AWS get the current user's ARN – Ruby


I already wrote a post on how to do this using Python. But here is how to do the same thing in Ruby:

I write a lot of automation scripts, and I switch back and forth between Ruby and Python. So when using the aws sdk in a Ruby script, I want to know who is running the script. I like to use the aws profile(s) for keys rather than having the keys stored in yet another place on the user's machine. I normally either ask the user to enter the profile name or have it passed on the command line. The sdk offers a couple of ways to get at the AWS user associated with the keypair in the profile.

First, there is the IAM GetUser call; however, it requires that the profile have IAM access, which most profiles should not have. So this is not a good way to get at this information.

The second way is using STS, or Security Token Service. This API offers a method called GetCallerIdentity, which returns the Account, ARN, and UserId for the aws credentials used to make the request. So let's see how to do this in Ruby.

First we use the SharedCredentials API to create a new SharedCredentials object for the profile information:

myCredentials = Aws::SharedCredentials.new(profile_name: myProfile)

Next we create a new STS Client object using those credentials:

myStsClient = Aws::STS::Client.new(credentials: myCredentials)

And finally call the get_caller_identity method:

mySts = myStsClient.get_caller_identity()

The returned object then exposes the following elements:

puts "My Account #{mySts.account}"
puts "My ARN #{mySts.arn}"
puts "My User id #{mySts.user_id}"

Now if you use this, you will see that the UserId is not what you were expecting: it returns the unique AWS identifier, not the username. That is well and good, but I want to get the username. Fortunately it is part of the ARN, so we can split it out like so:

puts "My User #{myStsClient.arn.split('/')[-1]}"

Now we have something we can use.

Here it is all put together:

#!/usr/bin/env ruby
require 'aws-sdk'
require 'optparse'

options = { :myProfile => nil }

# Parse the AWS profile name from the command line
parser = OptionParser.new do |opts|
  opts.banner = "Sample STS Script [options]"
  opts.on('-m', '--my-profile myProfile', 'myProfile') do |myProfile|
    options[:myProfile] = myProfile
  end

  opts.on('-h', '--help', 'Displays Help') do
    puts opts
    exit
  end
end

parser.parse!

myProfile = options[:myProfile]

# Build credentials from the named profile and ask STS who we are
myCredentials = Aws::SharedCredentials.new(profile_name: myProfile)
myStsClient = Aws::STS::Client.new(credentials: myCredentials)
mySts = myStsClient.get_caller_identity()

puts "My Account #{mySts.account}"
puts "My ARN #{mySts.arn}"
puts "My User id #{mySts.user_id}"
puts "My User #{mySts.arn.split('/')[-1]}"
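To run it, pass the profile you want to check (the file name here is just a placeholder):

ruby sts_whoami.rb --my-profile dev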

AWS get the current user's ARN – Python


In writing scripts it is good to know who is running them. I create a lot of AWS python scripts; these examples use python3 and boto3. I prefer to use the aws config profile for creating the sessions. It is easier on users and allows multiple key pairs to be used and switched out easily.

myAWSSession = boto3.Session(profile_name=args.myProfile)

That works great for setting up a session, but what if you want to log the user behind that profile? Sometimes the profile names are not helpful (I am looking at you, "default"). The aws iam API has a get_user function that works great, if you have IAM access. But what if you don't want all of your profiles to have IAM access? And you shouldn't.

AWS provides another set of classes called "sts", or "Security Token Service". With this service you can call get_caller_identity, which gives you back the account number, arn, and userid. However, the userid is not the user name you would expect; it is actually the unique AWS user id. But the username is part of the arn. First let's get the data, using the session from above:

mySts = myAWSSession.client('sts').get_caller_identity()
myArn = mySts["Arn"]

Now we have the complete ARN, "arn:aws:iam::123456789012:user/Bob", so we can do a normal split and get Bob out of it:

myUser = myArn.split('/')[-1]

Now myUser = Bob

Super simple and easy. Here it is all put together:

import boto3
import argparse

# Let the user pick which AWS profile to check
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--my-profile", dest="myProfile", default="default", help="My AWS Profile")
args = parser.parse_args()

# Build a session from the profile and ask STS who we are
myAWSSession = boto3.Session(profile_name=args.myProfile)

mySts = myAWSSession.client('sts').get_caller_identity()
myArn = mySts["Arn"]
myAccount = mySts["Account"]
myUser = myArn.split('/')[-1]

print("My profile user: {}".format(myUser))
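Run it against whichever profile you want to check (again, the file name is just a placeholder):

python3 sts_whoami.py --my-profile dev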

How to increase the root volume size using the aws cli


I love using the AWS CLI. It makes everything completely repeatable. I can add the commands to a runbook, and presto, the next time we need to spin something up we have the commands. No fumbling through the GUI, wondering which security groups or subnets to use. When the gremlins attack, you will be in a much better place with the cli commands documented.

Recently, when spinning up a server, I immediately got a warning about disk space on the root partition. That was odd; it did not come up in dev, but then again we don't have Datadog running for our dev instances. I checked it out, and it turns out that we install a lot of packages in this recipe. Just enough to put us over the 80% mark. My standard root partition is 8G. So now what? I couldn't just leave it, so I needed to figure out how to increase the root volume. I terminated the instance and went back into our dev environment.

I am used to adding block device mappings, and do it all the time:

aws ec2 run-instances \
--subnet-id subnet-99999999 \
--security-group-ids sg-99999999 \
--image-id ami-99999999 \
--key-name mykey \
--iam-instance-profile Name=my-default-ec2-role \
--instance-type t2.micro \
--block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":5}}]' \
--output json

The key there is the block device mappings. Root volumes on EBS-backed AMIs are just EBS volumes, so it should be easy enough to change that, right?

Not sure whether this would work, I tested it in our dev account. It turns out it really is that simple: just add another block device mapping for /dev/xvda and boom, it works like a champ.

--block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":5}}, {"DeviceName": "/dev/xvda", "Ebs": { "VolumeSize": 12 }}]'

When the instance booted, it took the new setting and made the root partition 12G instead of the default 8G.
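For completeness, the full run-instances call with the root volume bumped to 12G looks something like this (same placeholder IDs as before):

aws ec2 run-instances \
--subnet-id subnet-99999999 \
--security-group-ids sg-99999999 \
--image-id ami-99999999 \
--key-name mykey \
--iam-instance-profile Name=my-default-ec2-role \
--instance-type t2.micro \
--block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":5}}, {"DeviceName": "/dev/xvda", "Ebs": { "VolumeSize": 12 }}]' \
--output json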

How to find unused RDS instances in AWS


It is not uncommon to have developers run their own development servers in AWS. Sometimes things are too large to run locally. The problem is that those servers sit idle a lot and cost money doing so. This becomes especially true if you have a large database running on RDS.

I have scripted the deployment of a dev RDS database. With our snapshots, that can take up to ten minutes to deploy. IMHO that is fast enough that you can spin one up when you need it. The argument I kept facing was that they "always" need the RDS database. I disagreed and set out to prove it.

I believe in public shaming to change behavior, so I started sending Slack messages when I would see a database that had been up for an extended period of time. But that approach did not work and took a lot of my time. The argument that came back was that they were "using" the database during that time. It was a daily manual process, which was not fun for me, so I set out to automate it and get a little creative as well.

The bash script I am going to explain here can be found on my public gist. First off, why bash? I am using the aws cli for all the heavy lifting, so it made sense. Plus I love me some bash.

First get all the databases that are “available”:

aws rds describe-db-instances --output text --query 'DBInstances[*].{DBInstanceIdentifier:DBInstanceIdentifier,InstanceCreateTime:InstanceCreateTime,DBInstanceStatus:DBInstanceStatus}' | grep available

Now that we have the list, let's loop through it and use some CloudWatch metrics. Not many people realize the wealth of data in CloudWatch. In this script we are going to grab the "DatabaseConnections" metric and look at how many database connections there were in the last hour. In my cron script I look over a 12-hour period; it is kind of hard to say you needed that server when you have not even connected to it in the past 12 hours. You can change that as you need.
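The loop below also leans on a few variables (servers, STARTDATE, UTCDATE, period, MYDATE) that are set up earlier in the gist. Here is a rough sketch of what that setup might look like; the exact values live in the gist, and the relative-date flag assumes GNU date:

# Time window for the CloudWatch query: one hour here, 12 hours in my cron job
UTCDATE=$(date -u +%Y-%m-%dT%H:%M:%S)
STARTDATE=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S)   # GNU date; on macOS use date -u -v-1H instead
period=3600                                               # one datapoint covering the whole window
MYDATE=$(date +%Y%m%d)                                    # used to name the final snapshot

# Grab the available databases (same query as above) and split the output on newlines
IFS=$'\n'
servers=$(aws rds describe-db-instances --output text --query 'DBInstances[*].{DBInstanceIdentifier:DBInstanceIdentifier,InstanceCreateTime:InstanceCreateTime,DBInstanceStatus:DBInstanceStatus}' | grep available)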

for j in $servers
do
  server=$(echo "$j" | cut -f1 -d$'\t')
  update=$(echo "$j" | cut -f3 -d$'\t')
  # Maximum number of connections CloudWatch saw for this instance during the window
  connections=$(aws cloudwatch get-metric-statistics --metric-name DatabaseConnections --start-time $STARTDATE --end-time $UTCDATE --period $period --namespace AWS/RDS --statistics Maximum --dimensions Name=DBInstanceIdentifier,Value=$server --output text --query 'Datapoints[0].{Maximum:Maximum}')
  if [ "$connections" == "0.0" ]
  then
    echo "Server $server has been up since $update"
    echo "There have been $connections maximum connections in the last hour"
    echo "To terminate this instance run one of the following commands:"
    echo "aws rds delete-db-instance --db-instance-identifier $server --final-db-snapshot-identifier ${server}-final-${MYDATE}"
    echo "aws rds delete-db-instance --db-instance-identifier $server --skip-final-snapshot"
    echo "---------------------------------------------------------------------------------"
  fi
done

In my production script I send the message to Slack, but you can change that as you need. Now, when this gets posted in a public channel, it is hard to defend.
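And since I run this from cron every 12 hours, the crontab entry is nothing fancy; something like this (the path and script name are placeholders):

# Check for idle RDS instances every 12 hours
0 */12 * * * /usr/local/bin/find-unused-rds.sh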