AWS PowerShell Tools Snippets: PowerShell Pipes

I’m on another AWS PowerShell Tools rant. Hopefully after reading this blog post you’ll share my appreciation for how useful they are.

PowerShell takes the idea of piping commands together (sending the output of one command directly to the input of another) to a whole new level of useful. If you aren’t familiar with the concept, it’s a great way to make your commands dynamic and flexible. Let’s walk through an example.

Let’s say you want to list all of the OpsWorks instances in an AWS account. You’d probably start by going to the AWS PowerShell Tools cmdlet reference and finding the Get-OPSInstance cmdlet. But running it alone will give you the not-so-helpful output below.
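The command itself is the interesting part (the cmdlet name comes from the AWS Tools for PowerShell reference; the exact error text varies by module version, so consider this a sketch):

# With no stack, layer, or instance IDs, the DescribeInstances call behind
# this cmdlet has nothing to describe, so all you get back is an error
Get-OPSInstance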
Looking a little more closely at the documentation, you’ll find a “StackId” parameter that will return just the instances for that stack. Obviously you could go to the console, find a stack, copy its ID, and paste it into the shell, but that feels like a lot of steps.
Instead you could use two commands, connected by a pipe to link their results together: “Get-OPSStack” piped into “Get-OPSInstance”. In the command below I’m using the “Count” property of PowerShell lists to find the number of instances returned, which is more than the zero returned by my first command.
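Something like the sketch below (the count will obviously vary by account):

# Pipe every stack into Get-OPSInstance, then count the combined results
(Get-OPSStack | Get-OPSInstance).Count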
Pretty cool! I was able to pull back instances without going to the console to look up a Stack ID.
This is a pretty basic example, but once you’re comfortable with the concept the possibilities are endless. Let’s say we wanted to find only the OpsWorks stacks whose names contain the number “901” (we use the “900” range for test instances at my company). You could pipe “Get-OPSStack” into the “Where-Object” cmdlet to filter the OpsWorks stacks before you get the instances.
In the example below, I’ve also used the “ForEach-Object” cmdlet to print only the hostnames, to keep the output a little smaller.
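Here’s a sketch of the combined pipeline (“901” stands in for whatever naming convention you use):

# Keep only the stacks whose names contain "901", pull their instances,
# and print just the hostnames
Get-OPSStack |
    Where-Object { $_.Name -like "*901*" } |
    Get-OPSInstance |
    ForEach-Object { $_.Hostname }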
It’s pretty clear that we’ve got instances from multiple stacks here, so we can make our Where-Object filter a little more sophisticated. Let’s find only the instances that are part of the NodeJS webserver stack.
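Assuming that stack is named something like “901-NodeJS-Webserver” (a made-up name for this sketch), the filter just gets more specific:

# Tighten the filter down to the single stack we care about
Get-OPSStack |
    Where-Object { $_.Name -like "*901*NodeJS*" } |
    Get-OPSInstance |
    ForEach-Object { $_.Hostname }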
You can repeat this pattern of piping commands together as many times as you need to get the results you’re looking for. Eventually these commands get long and complicated enough that it’s better to put them into a script file for some formatting and white space, but for quick-and-dirty work they’re pretty useful.

Building a Docker container for easy SSH into OpsWorks stacks

Part of the concept behind OpsWorks is the ability to create and destroy instances dynamically. If your instances are configured by Chef recipes all the way from base AMI to processing production workloads, this is probably something you do pretty regularly.

But this probably means that the IP addresses behind your instances change regularly. At some point you might get tired of constantly going back to the OpsWorks console to look up an IP address; I know I did.

It turns out it’s not too difficult to generate an ssh config file using boto3 to pull down the instances’ IP addresses. I chose to do this in Python, and an example script is below. In my case, our instances all have private IP addresses, so that’s the property I’m using.

import os
import boto3

# Start each run with a fresh ssh config
ssh_config_filename = '/home/meuser/.ssh/config'
if os.path.exists(ssh_config_filename):
    os.remove(ssh_config_filename)

if not os.path.exists('/home/meuser/.ssh/'):
    os.mkdir('/home/meuser/.ssh/')

# Map each AWS credentials profile to the stacks it should cover
profiles = {
    'NoPHI': [{'StackName': 'My-Dev-Stack',
               'IdentityFile': 'my-dev-private-key.pem',
               'ShortName': 'dev'}],
    'PHI': [{'StackName': 'My-prod-stack',
             'IdentityFile': 'my-prod-private-key.pem',
             'ShortName': 'prod'}]
}

for profile in profiles.keys():
    session = boto3.Session(profile_name=profile)
    opsworks_client = session.client('opsworks')
    opsworks_stacks = opsworks_client.describe_stacks()['Stacks']
    for opsworks_stack in opsworks_stacks:
        for stack in profiles[profile]:
            if opsworks_stack['Name'] == stack['StackName']:
                instances = opsworks_client.describe_instances(
                    StackId=opsworks_stack['StackId'])
                for instance in instances['Instances']:
                    # One Host entry per instance, e.g. "dev-myinstance1"
                    with open(ssh_config_filename, "a") as ssh_config_file:
                        ssh_config_file.write("Host " + (stack['ShortName'] + '-' + instance['Hostname']).lower() + '\n')
                        ssh_config_file.write(" Hostname " + instance['PrivateIp'] + '\n')
                        ssh_config_file.write(" User ubuntu\n")
                        ssh_config_file.write(" IdentityFile " + '/home/meuser/keys/' + stack['IdentityFile'] + '\n')
                        ssh_config_file.write("\n")
                        ssh_config_file.write("\n")

This script will run through each AWS account profile you list, find the instances in the stacks you specify, and let you ssh into them using

ssh dev-myinstance1

if you have an instance in your OpsWorks stack named “myinstance1”. If you run Linux as your working machine, you’re really done at this point. But if you’re on Windows like me, there’s one more step that can make this even easier: running the script inside a Linux Docker container.
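For reference, each instance comes out of the script as a block like this in the generated config (hostname and IP made up):

Host dev-myinstance1
 Hostname 10.0.1.15
 User ubuntu
 IdentityFile /home/meuser/keys/my-dev-private-key.pem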

First, you’ll need to install Docker for Windows. It might be helpful to go through some of their walkthroughs as well if you aren’t familiar with Docker.

Once you have the Docker daemon installed and running, you’ll need to create a Docker image from a Dockerfile that can run the Python script above. I’ve got an example below that uses the ubuntu:latest image, installs Python, copies over your AWS credentials and the private keys for sshing, and runs the Python script.

You will need to put the files being copied over (ssh_config_updater.py, my-prod-private-key.pem, my-dev-private-key.pem, and credentials) in the same directory as the Dockerfile.
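So the build directory (the name itself doesn’t matter) ends up looking like this:

opsworks-manage/
├── Dockerfile
├── credentials
├── my-dev-private-key.pem
├── my-prod-private-key.pem
└── ssh_config_updater.py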

FROM ubuntu:latest
RUN useradd -d /home/meuser -m meuser
RUN apt-get update
RUN apt-get install -y python-pip
RUN pip install --upgrade pip
RUN apt-get install -y vim
RUN apt-get install -y ssh
RUN pip install --upgrade awscli
ADD my-dev-private-key.pem /home/meuser/keys/my-dev-private-key.pem
ADD my-prod-private-key.pem /home/meuser/keys/my-prod-private-key.pem
RUN chmod 600 /home/meuser/keys/*
RUN chown meuser /home/meuser/keys/*
ADD ssh_config_updater.py /home/meuser/ssh_config_updater.py
ADD credentials /home/meuser/.aws/credentials
RUN pip install boto3
USER meuser
WORKDIR /home/meuser
RUN python /home/meuser/ssh_config_updater.py
CMD /bin/bash

Once you have your Dockerfile and build directory set up, you can run the command below with the Docker daemon running.

docker build -t opsworks-manage .

Once that command finishes, you can ssh into your instances with


docker run -it --name opsworks-manage opsworks-manage ssh dev-myinstance1

This creates a running container named opsworks-manage and opens the ssh session. As long as that container is still running, you can re-use it to ssh into other instances using

docker exec -it opsworks-manage ssh dev-myinstance1

A couple of notes: I’m using the default “ubuntu” account AWS builds into its Ubuntu instances for simplicity. That account has full sudo rights, so in practice you should create another account to use for normal management, either through an OpsWorks recipe or by using OpsWorks to create the user account.

Another note: because this example copies ssh keys and credentials files into the Docker image, you should never push this image to a container registry. If you plan on version controlling the Dockerfile, make sure to use a .gitignore file to keep that sensitive information out of source control.
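A minimal .gitignore for that build directory would just be:

# Keep private keys and AWS credentials out of the repo
*.pem
credentials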

Example CloudFormation template for an OpsWorks stack

I recently spent some time struggling through the AWS documentation to create an OpsWorks stack using CloudFormation. The documentation is comprehensive on the available resources, but a little light on examples of how to link them together.

I thought I’d share a sanitized example of what I ended up with; it’s below.

To implement this in your own environment, you will need to swap out the account-specific information — the OpsWorks service role, subnets, security groups, and current AMI IDs — as well as the Chef recipe to run on the Setup lifecycle event.

Lastly, this won’t be much use unless you have a repository of Chef recipes to run on these instances.

This template creates an OpsWorks stack with a single layer and two Ubuntu 16.04 instances. The instances depend on each other so that they get created at different times and end up with different hostnames.

The instances are time-based, with a schedule that keeps them on during Central-time business hours.
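Before you dig into the full template, here’s a heavily trimmed sketch (in YAML) of how the resources link together. Every name, ARN, and recipe below is a placeholder, the schedule is cut down to two hours to keep it short, and the account-specific properties mentioned above are omitted entirely:

Resources:
  MyStack:
    Type: AWS::OpsWorks::Stack
    Properties:
      Name: my-stack
      ServiceRoleArn: arn:aws:iam::111111111111:role/aws-opsworks-service-role
      DefaultInstanceProfileArn: arn:aws:iam::111111111111:instance-profile/aws-opsworks-ec2-role
      DefaultOs: Ubuntu 16.04 LTS
      UseCustomCookbooks: true
      CustomCookbooksSource:
        Type: git
        Url: https://github.com/example/my-chef-recipes.git

  MyLayer:
    Type: AWS::OpsWorks::Layer
    Properties:
      StackId: !Ref MyStack            # links the layer to the stack
      Name: my-layer
      Shortname: my-layer
      Type: custom
      EnableAutoHealing: false
      AutoAssignElasticIps: false
      AutoAssignPublicIps: false
      CustomRecipes:
        Setup:
          - my_cookbook::my_setup_recipe

  Instance1:
    Type: AWS::OpsWorks::Instance
    Properties:
      StackId: !Ref MyStack            # links the instance to the stack
      LayerIds:
        - !Ref MyLayer                 # ...and to the layer
      InstanceType: t2.medium
      AutoScalingType: timer
      TimeBasedAutoScaling:
        Monday: { "14": "on", "15": "on" }   # hour keys are UTC

  Instance2:
    Type: AWS::OpsWorks::Instance
    DependsOn: Instance1               # forces sequential creation, so the
    Properties:                        # two instances get different hostnames
      StackId: !Ref MyStack
      LayerIds:
        - !Ref MyLayer
      InstanceType: t2.medium
      AutoScalingType: timer
      TimeBasedAutoScaling:
        Monday: { "14": "on", "15": "on" }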

https://gist.github.com/LenOtuye/9b62d9b8dbf65e33f84477b6a0f6e40d