Graph Your RI Commitment Over Time (subtitle: HOW LONG AM I PAYING FOR THIS?!?!?)

In my last post I talked about distributing your committed RI spend over time. The goal is to balance two risks: buying too many 1-year RIs (front-loading your spend) and missing out on the savings of committing to 3 years, versus buying too many 3-year RIs (back-loading your spend) and risking a bill you have to foot if your organization goes through major changes.

Our solution for balancing this is a PowerShell snippet that graphs our RI commitment over time.

# Get active RI entries via the AWS Tools for PowerShell
$ri_entries = Get-EC2ReservedInstance -Filter @(@{Name="state";Value="active"});

# Array to hold the relevant RI data
$ri_data = @();

# Calculate monthly cost for RIs
foreach ($ri_entry in $ri_entries) {
    $ri = @{};
    # Recurring hourly charge; approximate a month as 30 days
    $hourly = $ri_entry.RecurringCharges.Amount;
    $monthly = $hourly * 24 * 30 * $ri_entry.InstanceCount;
    $ri.monthly = $monthly;
    $ri.End = $ri_entry.End;
    $ri_data += $ri;
}

# Three years into the future (maximum duration of RIs as of 1.22.2019)
$three_years_out = (get-date).addyears(3);

# Our current date iterator
$current = (get-date);

# Array to hold the commit by month
$monthly_commit = @();

# CSV file name to save output
$csv_name = "ri_commitment-$((get-date).tostring('ddMMyyyy')).csv";

# Remove the CSV if it already exists
if(test-path $csv_name) {
remove-item -force $csv_name;
}

# Insert CSV headers
"date,commitment" | out-file $csv_name -append -encoding ascii;

# Iterate from today to three years in the future
while ($current -lt $three_years_out) {

    # Find the sum of the RIs that are active on this date
    # all RI data -> RIs that expire after $current -> select the monthly cost -> get the sum -> select the sum
    $commit = ($ri_data | ? {$_.End -gt $current} | % {$_.monthly} | measure -sum).sum;

    # Build a row of the CSV
    $output = "$($current),$($commit)";

    # Print the output to standard out for quick review
    write-host $output;

    # Write out to the CSV for deeper analysis
    $output | out-file $csv_name -append -encoding ascii;

    # Increment to the next month and repeat
    $current = $current.addmonths(1);
}
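The core of the script is the decaying-sum calculation: for each month, total the monthly cost of every RI that hasn't expired yet. For readers who prefer Python, here's a minimal sketch of that logic (the RI data below is made up for illustration; in practice it would come from the same describe-reserved-instances data the PowerShell script pulls):

```python
from datetime import date

# Hypothetical active RIs: monthly cost and expiration date
# (in the real script these come from Get-EC2ReservedInstance).
ri_data = [
    {"monthly": 100.0, "end": date(2020, 6, 1)},
    {"monthly": 250.0, "end": date(2021, 1, 1)},
]

def commit_on(ris, day):
    """Sum the monthly cost of the RIs still active on the given date."""
    return sum(ri["monthly"] for ri in ris if ri["end"] > day)

print(commit_on(ri_data, date(2020, 1, 1)))  # 350.0 -- both RIs active
print(commit_on(ri_data, date(2020, 9, 1)))  # 250.0 -- only the second remains
```

Iterating that function month by month produces the same declining commitment curve the CSV captures.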

OK, "snippet" may not be the right word. It's a little lengthy, but at the end it kicks out a CSV in your working directory with each month and your RI commitment for it.

From there it’s easy to create a graph that shows your RI spend commit over time.

That gives you an idea of how much spend you’ve committed to, and for how long.

Our RI Purchase Guidelines

I’ve talked about it a couple of times, but AWS’s recommendation engine is free and borderline magic.

It’s a part of AWS Cost Explorer and ingests your AWS usage data, and spits back reserved instance recommendations

At first glance it feels a little suspect that a vendor has a built-in engine to help you get insight into how to save money, but Amazon is playing the long game. If your use of AWS is more efficient (and you're committing to spend with them), you're more likely to stay a customer and spend more in the long haul.
The Recommendation engine has a few parameters you can tweak. They default to the settings that will save you the most money (and have you commit to the longest term spend with Amazon), but that may not be the right fit for you.
For example, our total workload fluctuates depending on new features that get released, performance improvements for our databases, etc., so we typically buy convertible RIs so we have the option of changing instance type, size, and OS if we need to.
As you click around in these options you’ll notice the total percent savings flies around like a kite. Depending on the options you select your savings can go up and down quite a bit.
Paying upfront, or choosing standard vs. convertible, can give you a 2-3% difference (based on what I've seen), but buying a 3-year RI instead of a 1-year roughly doubles your savings. That can be a big difference if you're willing to commit to the spend.
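To make that concrete, here's the arithmetic with illustrative (made-up) rates; the real numbers for your instance types come from the recommendation engine itself:

```python
# Hypothetical effective hourly rates for one instance type.
# These are NOT real AWS prices, just round numbers for illustration.
on_demand = 0.100        # $/hour on demand
one_year_rate = 0.070    # $/hour with a 1-year RI
three_year_rate = 0.040  # $/hour with a 3-year RI

def savings_pct(ri_rate, od_rate=on_demand):
    """Percent saved vs. running the same instance on demand."""
    return round((1 - ri_rate / od_rate) * 100, 1)

print(savings_pct(one_year_rate))    # 30.0 -- 1-year savings
print(savings_pct(three_year_rate))  # 60.0 -- roughly double
```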
Now, three years is almost forever in AWS terms. Keep in mind Amazon releases a new instance type or family about every year, so a 3-year standard RI feels a little risky to me. Here are the guidelines we're trying out:
  • Buy mostly convertible (the exception is services that will not change)
  • Stay below ~70% RI coverage (we have a couple efficiency projects in the works that will reduce our EC2 running hours)
  • Distribute your spend commit
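The coverage guideline is a quick ratio check: RI-covered instance hours over total instance hours. A sketch, with assumed numbers (your real hours come from Cost Explorer's RI coverage report):

```python
# Hypothetical usage for one month -- replace with your own numbers.
total_instance_hours = 10_000  # all EC2 instance hours this month
ri_covered_hours = 6_500       # hours that ran on reserved capacity

coverage = ri_covered_hours / total_instance_hours * 100
print(f"RI coverage: {coverage:.1f}%")  # RI coverage: 65.0%

# Our (tentative) guideline: stay below ~70% so planned efficiency
# work doesn't leave us paying for capacity we no longer run.
print("Room to buy more RIs" if coverage < 70 else "Hold off on new RIs")
```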
My next post will cover how we distribute our spend commit.

Diving Into (and reducing) Your AWS Costs!

AWS uses a "pay as you go" model for most of its services. You can start using them at any time, you often get a runway of free usage to get up to speed on a service, then they charge you for what you use. No contract negotiations, no figuring out bulk discounts, and you don't have to provision for max capacity.

This model is a double-edged sword. It's great when you're

  • First getting started
  • Working with a predictable workload
  • Working with a modern technology stack (i.e. most of your resources are stateless and can be ephemeral)
But it has some challenges when
  • Your workload is unpredictable
  • Your stack is not stateless (i.e. you have to provision for max capacity)
  • Your environment is complex with a lot of services being used by different teams

It's easy to have your AWS costs run away from you, and you can suddenly find yourself paying much more than you need or want to. We recently found ourselves in that scenario. Obviously I can't show you our actual account costs, but I'll walk you through the process we used to start digging into (and reducing) our costs, using one of my personal accounts.

Step 1: AWS Cost Explorer

Cost Explorer is your first stop for understanding your AWS bill. Navigate to your AWS Billing Dashboard, then launch Cost Explorer. If you haven't been in Cost Explorer before, it doesn't hurt to look at some of the alerts on the home page, but the really interesting data is in Costs and Usage.
My preference is to switch to "stack view".
I find this helps to view your costs in context. If you're looking to cut costs, the obvious place to start is the service that takes up the largest section of the bar. For this account it's ElastiCache.
ElastiCache is pretty straightforward to cut costs for – you either cut your node count or node size – so let's pick a more interesting service like S3.
Once you've picked a service to try to cut costs for, add a service filter on the right-hand side and group by usage type.
Right away we can see that most of our costs are TimedStorage-ByteHrs, which translates to S3 Standard storage, so we'll focus our cost savings on that storage class.

Next we'll go to CloudWatch to see where our storage in this class lives. Open up CloudWatch, open up Metrics, and select S3.


Inside of S3 click on Storage Metrics and search for “StandardStorage” and select all buckets.


Then change your time window to something pretty long (say, 6 months) and your view type to Number.

This will give you a view of specific buckets and how much they're storing. You can quickly skim through to find the buckets storing the most data.
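The "skim" is really just sorting buckets by size. A small sketch, with made-up bucket names and sizes standing in for the CloudWatch numbers:

```python
# Hypothetical bucket sizes (bytes), as they'd appear in CloudWatch's
# BucketSizeBytes metric for the StandardStorage storage type.
bucket_sizes = {
    "logs-archive": 9_500_000_000_000,  # ~9.5 TB
    "app-assets": 120_000_000_000,      # ~120 GB
    "tf-state": 50_000_000,             # ~50 MB
}

# Largest buckets first -- these are the cleanup / lifecycle candidates.
for name, size in sorted(bucket_sizes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {size / 1e12:.3f} TB")
```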

Once you have your largest storage points you can clean them up, or apply S3 lifecycle policies to transition objects to cheaper storage classes.
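A lifecycle policy is just a JSON document attached to the bucket. Here's a sketch of one possible rule set; the bucket prefix, day counts, and storage classes are placeholders you'd tune to your own access patterns:

```python
import json

# Hypothetical rule: move objects under logs/ to Standard-IA after 30 days,
# Glacier after 90, and expire them after a year.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-down-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Saved to a file, this can be applied with the AWS CLI:
#   aws s3api put-bucket-lifecycle-configuration \
#     --bucket my-bucket --lifecycle-configuration file://lifecycle.json
print(json.dumps(lifecycle, indent=2))
```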

After you're done with your largest cost areas, rinse and repeat on other services.
This is a good exercise to do regularly. Even if you have good discipline around cleaning up old AWS resources, costs can still crop up.
Happy cost savings!