Auditing AWS IAM Users

Like any other company with sensitive data, we go through audits pretty regularly. The latest one included some questions about which accounts have access to sensitive data, and how many authentication factors are required to log into them.

As usual, I started digging around in the AWS PowerShell Tools for a way to make this job easier than manually clicking through accounts, and I quickly found Request-IAMCredentialReport and Get-IAMCredentialReport.

These two commands work as a pair. The first, Request-IAMCredentialReport, tells AWS to generate a credential report on its end, and there's some pretty good documentation on how that process works. The most interesting point to me is that AWS will only generate a new report every 4 hours. This is important to note if you're making changes and re-running reports to double-check that they fixed an issue.

The second command, Get-IAMCredentialReport, actually downloads the report that was generated. From what I've seen, if you haven't run Request-IAMCredentialReport in the last 4 hours to have a fresh report, this command will fail.

I found the output of this command most useful when I included the -AsTextArray option, which returns the report as an array of lines with the columns separated by commas. I won't include sample output from our accounts for obvious reasons, but check the documentation for an example of what that looks like.
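
For reference, the first line of that array is a header row naming each column; the script below keys off a few of these (the IAM credential report documentation has the full list):

user,arn,user_creation_time,password_enabled,password_last_used,password_last_changed,...,mfa_active,...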

Now that we've got all of the components to download this report, it's pretty trivial PowerShell work to do some parsing and logic.

The example script below creates an array of maps, one per line of the report, letting you iterate over it and check for different conditions. The condition I'm testing for now is IAM accounts that have passwords enabled but no MFA device activated, but you can see it would be pretty easy to add additional criteria to test against the report.
$awsProfiles = @("FirstProfileName","SecondProfileName");
Set-DefaultAWSRegion us-east-1;
foreach ($awsProfile in $awsProfiles) {
    Write-Host "Running audit on $awsProfile";
    Set-AWSCredentials -ProfileName $awsProfile;

    # Ask AWS to generate a fresh IAM credential report
    $reportResult = Request-IAMCredentialReport -Force;

    # Sleep for 15 seconds to allow the report to generate
    Start-Sleep -s 15

    try {
        # Download the IAM credential report as an array of comma-separated lines
        $credReports = Get-IAMCredentialReport -AsTextArray;
    } catch {
        Write-Host "No credential report exists for this account, please run script again in a few minutes to let one generate";
        exit;
    }

    # Empty list that will contain the parsed, formatted credential report rows
    $credReportList = @();

    # Get the headings from the first line of the report
    $headings = $credReports[0].split(",");

    # Start processing the report, starting after the headings
    for ($i = 1; $i -lt $credReports.length; $i++) {

        # Break up the line of the report by commas
        $splitLine = $credReports[$i].split(",");
        $lineMap = @{};

        # Go through the line of the report and set a map key of the header for that column
        for ($j = 0; $j -lt $headings.length; $j++) {
            $lineMap[$headings[$j]] = $splitLine[$j];
        }

        # Add the formatted line to the final list
        $credReportList += , $lineMap;
    }

    # Iterate over the report, using rules to evaluate the contents
    foreach ($credReport in $credReportList) {
        # Check for users that have an active password, but not an active MFA device
        if ($credReport['password_enabled'] -eq "TRUE" -and $credReport['mfa_active'] -eq "FALSE") {
            Write-Host "ALERT: User: $($credReport['user']) has a password enabled, but no MFA device"
        }
    }
    Write-Host "";
}

This script assumes you have created AWS PowerShell Tools credential profiles that match the names in the array on the first line.

And here is some example output of users I had to go have a chat with today to activate their MFA devices.
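
With the real usernames swapped for a placeholder, those alert lines look like this:

ALERT: User: some.user has a password enabled, but no MFA device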

NOTE: You may need to run this script a couple of times if you haven't generated an IAM credential report in a while.

AWS CodePipeline: Alert on Stage Failure

We've been using AWS CodePipeline for some time now, and for the most part it's a great managed service: easy to get started with and pretty simple to use.

That being said, it does lack some features out of the box that most CI/CD systems have ready for you. The one I'll be tackling today is alerting on a stage failure.

Out of the box, CodePipeline won't alert you when there's a failure at a stage. Unless you go in and literally look at it in the console, you won't know that anything is broken. For example, when I started working on this blog entry I checked one of the pipelines that delivers to our test environment and found it in a failed state.

In this case the failure is because our OpsWorks stacks are set to shut down test instances outside of business hours, but for almost any other failure I would want to alert the team responsible for the change that failed.

For a solution, we'll use these resources:

  • AWS Lambda
  • Boto3
  • AWS SNS Topics
  • CloudFormation
First, we'll need a Lambda function that can list the pipelines in our account, scan their stages, detect failures, and produce alerts. Below is a basic example of what we're using. I'm far from a Python expert, so I understand there are improvements that could be made, particularly around error handling.
import boto3
import logging
import os

def lambda_handler(event, context):
    # Get a CloudWatch logger
    logger = logging.getLogger('mvp-alert-on-cp-failure')
    logger.setLevel(logging.DEBUG)

    sns_topic_arn = os.environ['TOPIC_ARN']

    # Obtain boto3 resources
    logger.info('Getting boto 3 resources')
    code_pipeline_client = boto3.client('codepipeline')
    sns_client = boto3.client('sns')

    logger.debug('Getting pipelines')
    for pipeline in code_pipeline_client.list_pipelines()['pipelines']:
        logger.debug('Checking pipeline ' + pipeline['name'] + ' for failures')
        for stage in code_pipeline_client.get_pipeline_state(name=pipeline['name'])['stageStates']:
            logger.debug('Checking stage ' + stage['stageName'] + ' for failures')
            if 'latestExecution' in stage and stage['latestExecution']['status'] == 'Failed':
                logger.debug('Stage failed! Sending SNS notification to ' + sns_topic_arn)
                failed_actions = ''
                # Collect the names of the actions within this stage that failed
                for action in stage['actionStates']:
                    logger.debug(action)
                    logger.debug('Checking action ' + action['actionName'] + ' for failures')
                    if 'latestExecution' in action and action['latestExecution']['status'] == 'Failed':
                        logger.debug('Action failed!')
                        failed_actions += action['actionName']
                        logger.debug('Publishing failure alert: ' + pipeline['name'] + '|' + stage['stageName'] + '|' + action['actionName'])
                logger.debug('Publishing failure alert: ' + pipeline['name'] + '|' + stage['stageName'] + '|' + failed_actions)
                alert_subject = 'Codepipeline failure in ' + pipeline['name'] + ' at stage ' + stage['stageName']
                alert_message = 'Codepipeline failure in ' + pipeline['name'] + ' at stage ' + stage['stageName'] + '. Failed actions: ' + failed_actions
                logger.debug('Sending SNS notification')
                sns_client.publish(TopicArn=sns_topic_arn, Subject=alert_subject, Message=alert_message)

    return "And we're done!"

If you're looking closely, you're probably wondering where the environment variable named "TOPIC_ARN" comes from, which leads us to the next piece: a CloudFormation template to create this Lambda function.

The CloudFormation template needs to do a few things:

  1. Create the Lambda function. I've chosen to do this using the AWS Serverless Application Model.
  2. Create an IAM role for the Lambda function to execute under.
  3. Create IAM policies that give the role read access to your pipelines and publish access to your SNS topic.
  4. Create an SNS topic subscribed to by the individuals you want to receive the email.
The only really newfangled CloudFormation feature I'm using here is AWS SAM; the rest of these have existed for quite a while. In my opinion, one of the main ideas behind AWS SAM is to package your entire serverless function in a single CloudFormation template, so the example below does all four of these steps.
#############################################
### Lambda function to alert on pipeline failures
#############################################

LambdaAlertCPTestFail:
  Type: AWS::Serverless::Function
  Properties:
    Handler: mvp-alert-on-cp-failure.lambda_handler
    Role: !GetAtt IAMRoleAlertOnCPTestFailure.Arn
    Runtime: python2.7
    Timeout: 300
    Events:
      CheckEvery30Minutes:
        Type: Schedule
        Properties:
          Schedule: cron(0/30 12-23 ? * MON-FRI *)
    Environment:
      Variables:
        STAGE_NAME: Test
        TOPIC_ARN: !Ref CodePipelineTestStageFailureTopic

CodePipelineTestStageFailureTopic:
  Type: "AWS::SNS::Topic"
  Properties:
    DisplayName: MvpPipelineFailure
    Subscription:
      -
        Endpoint: 'pipelineCurator@example.com'
        Protocol: 'email'
    TopicName: MvpPipelineFailure

IAMPolicyPublishToTestFailureTopic:
  Type: "AWS::IAM::Policy"
  DependsOn: MoveToPHIIAMRole
  Properties:
    PolicyName: !Sub "Role=AlertOnCPTestFailure,Env=${AccountParameter},Service=SNS,Rights=Publish"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        -
          Effect: "Allow"
          Action:
            - "sns:Publish"
          Resource:
            - !Ref CodePipelineTestStageFailureTopic
    Roles:
      - !Ref IAMRoleAlertOnCPTestFailure

IAMPolicyGetPipelineStatus:
  Type: "AWS::IAM::Policy"
  DependsOn: MoveToPHIIAMRole
  Properties:
    PolicyName: !Sub "Role=AlertOnCPTestFailure,Env=${AccountParameter},Service=CodePipeline,Rights=R"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        -
          Effect: "Allow"
          Action:
            - "codepipeline:GetPipeline"
            - "codepipeline:GetPipelineState"
            - "codepipeline:ListPipelines"
          Resource:
            - "*"
    Roles:
      - !Ref IAMRoleAlertOnCPTestFailure

IAMRoleAlertOnCPTestFailure:
  Type: "AWS::IAM::Role"
  Properties:
    RoleName: !Sub "Role=AlertOnCPTestFailure,Env=${AccountParameter},Service=Lambda"
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        -
          Effect: "Allow"
          Principal:
            Service:
              - "lambda.amazonaws.com"
          Action:
            - "sts:AssumeRole"
    Path: "/"

#############################################
### End of pipeline failure alerting lambda function
#############################################

And that's about it. A couple of notes on the CloudFormation template:

Alert Frequency

I'm using a cron expression as my schedule, currently set to fire every half hour during business hours, because we don't have overnight staff who would be able to look at pipeline failures. You can easily increase the frequency with something like

cron(0/5 12-23 ? * MON-FRI *)

Lambda Environment Variables

One of the announcements from re:Invent I was most excited about was AWS Lambda environment variables. This is a pretty magical feature that lets you pass values into your Lambda functions. In this case, I'm using it to pass the ARN of an SNS topic that's created in the same CloudFormation template into the Lambda function.

Long story short, that means we can create resources in AWS and pass references to them into code without having to search for them at runtime or hard-code their values.

Environment:
  Variables:
    STAGE_NAME: Test
    TOPIC_ARN: !Ref CodePipelineTestStageFailureTopic
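
On the Python side, the function just reads that value from its environment, exactly as the handler above does:

sns_topic_arn = os.environ['TOPIC_ARN']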

Flowerboxes

The CFT this example comes from contains multiple pipeline management functions, so the flowerboxes ("###############") at the beginning and end of the Lambda function definition are our way of keeping the resources for each Lambda function separated.

SNS Notifications

When you create an SNS topic with an email subscription, the recipient has to confirm it. They'll get an email and have to click the link before they'll receive any notifications.
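
If you want to check programmatically who hasn't confirmed yet, here's a minimal sketch using boto3 (the topic ARN below is a placeholder; use the one from your stack):

import boto3

sns_client = boto3.client('sns')

# Placeholder ARN; substitute the topic created by the template above
topic_arn = 'arn:aws:sns:us-east-1:123456789012:MvpPipelineFailure'

# Unconfirmed subscriptions report 'PendingConfirmation' instead of a real subscription ARN
for sub in sns_client.list_subscriptions_by_topic(TopicArn=topic_arn)['Subscriptions']:
    if sub['SubscriptionArn'] == 'PendingConfirmation':
        print(sub['Endpoint'] + ' has not confirmed the subscription yet')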

Snippets

These are snippets I pulled out of our pipeline management CloudFormation stack. Obviously you'll have to put them into a CloudFormation template that references the SAM transform and has a valid header like the one below:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:......

Happy alerting!

Getting an AWS IAM Certificate ARN

I was recently working on a CloudFormation template that needed an ELB with an HTTPS listener. My company already has a wildcard cert uploaded to IAM for use in staging environments, so I wanted to use that cert rather than create a new one.

The Classic Load Balancer and the newer Application Load Balancer look a little different when it comes to creating HTTPS listeners, but both require you to include the certificate ARN in your template.

I spent some time poking around in the console looking for the ARN of a certificate you've uploaded, with no success. As far as I can tell, there's nowhere besides editing an ELB listener to even see the certificates you've uploaded.

Finally, I turned to the AWS CLI and found "get-server-certificate", which returns the ARN of a certificate uploaded to IAM.

If you already have the AWS CLI set up with your credentials, it's pretty straightforward:

aws iam get-server-certificate --server-certificate-name wildcard-****

And it will kick back the relevant data.

As it turns out, the ARN of a certificate is just the combination of your account number and the name you gave it.
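
For example, for an account numbered 123456789012 with a certificate uploaded as wildcard-example (both placeholders), the ARN looks like:

arn:aws:iam::123456789012:server-certificate/wildcard-example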

And lastly, because I insist on believing that lots of people use PowerShell for AWS management when maybe none of you do, here's the same command in good ol' PS.

(Get-IAMServerCertificate -ServerCertificateName wildcard-***********).ServerCertificateMetadata

Interestingly, the PowerShell tools return a richer object here, so you won't see the certificate metadata by default; you have to drill into the ServerCertificateMetadata property to get the ARN.