Lambda Logging to CloudWatch

If you're an AWS user, either professionally or personally, I can't encourage you enough to try Lambda. Lambda lets you run code in a few different languages (currently Python, Node, and Java) without worrying about the server environment it runs on.
Unfortunately (or fortunately, depending on your perspective) as with any new technology or paradigm, there are caveats to Lambda.
For example, one problem we've solved with Lambda is monitoring web service endpoints. Lambda lets us make an HTTP call to a web service using Python's httplib module. But because the script runs on a server we don't control or configure, it isn't pointed at our DNS servers by default. You can imagine our initial confusion when the Lambda function reported the web service was unavailable, yet we never saw any traffic reach the service.
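As a sketch of the kind of endpoint check involved (httplib is the Python 2 name; the same module is http.client in Python 3, which is what I use here, and the function name is just my example):

```python
import http.client  # the Python 3 name for the httplib module


def check_endpoint(host, path="/", timeout=5):
    """Return the HTTP status code for a GET to host/path, or None on failure.

    DNS resolution errors raise socket.gaierror, a subclass of OSError,
    which is exactly how our problem first surfaced.
    """
    try:
        conn = http.client.HTTPConnection(host, timeout=timeout)
        conn.request("GET", path)
        return conn.getresponse().status
    except OSError:
        return None
```

When DNS can't resolve the host, the request fails before any traffic ever leaves the function, which matches what we were seeing.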
The best way we found to gain insight into what Lambda is actually doing is to log from Lambda to a CloudWatch log stream. This lets you output logs and put retention policies on them. Amazon has been helpful enough to tie the built-in Python logger into CloudWatch, so all you really have to do is create a logging object similar to the example below.

https://gist.github.com/LenOtuye/a7c14d8753d8268ab6b53c6a15535a70.js
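A minimal sketch of that logging object, with a hypothetical handler (the handler name is whatever your Lambda is configured to invoke):

```python
import logging

# Grab the root logger; Lambda wires its output into CloudWatch for you.
logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    """Hypothetical handler; log lines like this end up in CloudWatch."""
    logger.info("Received event: %s", event)
    return "done"
```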

Your logs will then be dumped into a log group named "/aws/lambda/<your function name>".

One thing to note: in my experience you can't control the name of the log stream. Even if you create the logger with logger = logging.getLogger('LogName'), the log stream will still be named after the Lambda function.

To give your Lambda function permission to log to CloudWatch, it needs to run under an IAM role with the appropriate rights. The role's trust policy should allow Lambda to assume it, for example:

https://gist.github.com/LenOtuye/6e3216e129592b59327d1af9751ee1ee.js

And then you will need to give it permissions similar to the following (plus whatever rights your Lambda function needs for its actual work):

https://gist.github.com/LenOtuye/7bbf8922a547f937862055bb74791188.js
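For reference, a typical Lambda trust policy and CloudWatch Logs permissions look roughly like this (standard boilerplate; the gists above contain the versions I actually used):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ],
    "Resource": "arn:aws:logs:*:*:*"
  }]
}
```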

Describing AWS Instances

Amazon uses the idea of “describing” resources using filters quite a bit.

A filter, in Python terms, is a list of dictionaries, and the "Values" entry in each dictionary is itself a list. The documentation is a little light on details, intentionally I believe, because the format is extremely flexible. A few examples of using this in Python and PowerShell are below.

If you wanted to get all of the instances whose Name tag contains "Webserver", you would use

filters = [{'Name': 'tag:Name', 'Values': ['*Webserver*']}]

Here is an example of using this concept in a Python script:


https://gist.github.com/LenOtuye/ed4617b19217a3f903dc0161524dcd91.js
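The Python version boils down to a boto3 describe_instances call. A sketch (the function names are mine, and running it for real assumes boto3 plus AWS credentials are configured):

```python
def build_name_filter(pattern):
    """Build a describe-instances filter matching the Name tag against a wildcard."""
    return [{"Name": "tag:Name", "Values": [pattern]}]


def matching_instance_ids(ec2_client, pattern="*Webserver*"):
    """Return the IDs of instances whose Name tag matches the pattern."""
    response = ec2_client.describe_instances(Filters=build_name_filter(pattern))
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
```

With credentials in place, matching_instance_ids(boto3.client("ec2")) returns the matching instance IDs; note that describe_instances groups instances under reservations, hence the nested loop.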

And here is the same thing in PowerShell: https://gist.github.com/LenOtuye/c7c36710be9a153ea1998045c64806a9.js

And just for extra fun, once you have the instance objects in PowerShell, it's pretty easy to pipe them to other cmdlets, like Where-Object, to filter by instance type: https://gist.github.com/LenOtuye/71d800f6807bda0d501a157ec6a50757.js

Example CloudFormation Template for an OpsWorks Stack

I recently spent some time struggling through the AWS documentation to create an OpsWorks stack using CloudFormation. The documentation is comprehensive for the individual resources, but a little lacking on examples of how to link them together.

I thought I'd share a sanitized version of what I ended up with.

To use this in your own environment, you will need to swap out the account-specific information: the OpsWorks service role, subnets, security groups, current AMI IDs, and the Chef recipe to run on the Setup lifecycle event.

Lastly, this won't be much use unless you have a repository of Chef recipes to run on these instances.

This template creates an OpsWorks stack with a single layer and two Ubuntu 16.04 instances. The instances depend on each other so that they are created at different times and get different hostnames.

The instances are time-based and scheduled to be on during Central Time business hours.

https://gist.github.com/LenOtuye/9b62d9b8dbf65e33f84477b6a0f6e40d.js
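The overall shape of the template is roughly this (a heavily trimmed sketch; all names, ARNs, and instance types are placeholders, and the gist above has the full working version with the schedule and custom recipes):

```json
{
  "Resources": {
    "Stack": {
      "Type": "AWS::OpsWorks::Stack",
      "Properties": {
        "Name": "example-stack",
        "ServiceRoleArn": "arn:aws:iam::123456789012:role/aws-opsworks-service-role",
        "DefaultInstanceProfileArn": "arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
        "DefaultOs": "Ubuntu 16.04 LTS"
      }
    },
    "Layer": {
      "Type": "AWS::OpsWorks::Layer",
      "Properties": {
        "StackId": { "Ref": "Stack" },
        "Type": "custom",
        "Name": "example-layer",
        "Shortname": "example",
        "EnableAutoHealing": "true",
        "AutoAssignElasticIps": "false",
        "AutoAssignPublicIps": "false"
      }
    },
    "Instance1": {
      "Type": "AWS::OpsWorks::Instance",
      "Properties": {
        "StackId": { "Ref": "Stack" },
        "LayerIds": [ { "Ref": "Layer" } ],
        "InstanceType": "t2.medium",
        "AutoScalingType": "timer"
      }
    },
    "Instance2": {
      "Type": "AWS::OpsWorks::Instance",
      "DependsOn": "Instance1",
      "Properties": {
        "StackId": { "Ref": "Stack" },
        "LayerIds": [ { "Ref": "Layer" } ],
        "InstanceType": "t2.medium",
        "AutoScalingType": "timer"
      }
    }
  }
}
```

The DependsOn on the second instance is what forces the staggered creation, and AutoScalingType of "timer" is what makes an instance time-based.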

AWS CloudFormation Template Parameters

An AWS CloudFormation template represents a stack (a grouping) of resources. Most companies will want to deploy these stacks into multiple environments ("Development", "Staging", "Production", etc.).

CloudFormation provides an easy way to do so with a combination of template parameters and mappings.


The first step is to create a parameter in your CloudFormation template:

https://gist.github.com/LenOtuye/b6273550d85bc0091b7c9c6437f7d21f.js
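A parameter along these lines (the name "Environment" and the allowed values are my stand-ins for whatever your environments are called):

```json
"Parameters": {
  "Environment": {
    "Type": "String",
    "AllowedValues": [ "Development", "Staging", "Production" ],
    "Default": "Development",
    "Description": "The environment to deploy this stack into"
  }
}
```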
The next step is to create a Mappings section. This holds the different values to be used in each environment.
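Something like this, where the top-level keys match the parameter's allowed values (the map name "EnvironmentMap" and the subnet IDs are placeholders of mine):

```json
"Mappings": {
  "EnvironmentMap": {
    "Development": { "SubnetId": "subnet-11111111" },
    "Staging":     { "SubnetId": "subnet-22222222" },
    "Production":  { "SubnetId": "subnet-33333333" }
  }
}
```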

https://gist.github.com/LenOtuye/e12c47b7ee95a43b78af2faea39be647.js

Then you can combine the parameter and mappings to dynamically select the subnet, like below:


https://gist.github.com/LenOtuye/4fe4f0b041a890992159b5b7fb397510.js

The advantage is that you can dynamically select values in the CFT without changing the template at all. It also allows users who aren't familiar with CloudFormation to deploy resources into different environments.
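The selection itself uses the Fn::FindInMap intrinsic function; assuming a parameter named "Environment" and a mapping named "EnvironmentMap" (both hypothetical names of mine), a resource property would look like:

```json
"SubnetId": { "Fn::FindInMap": [ "EnvironmentMap", { "Ref": "Environment" }, "SubnetId" ] }
```

CloudFormation resolves the Ref to the parameter value at deploy time, then looks up that key in the map, so the same template works in every environment.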