Get started with AWS Lambda

A first-hand, step-by-step look at the ease and simplicity of Amazon's "function as a service" platform

Why would a developer use AWS Lambda? In a word, simplicity. AWS Lambda—and other event-driven, “function-as-a-service” platforms such as Microsoft Azure Functions, Google Cloud Functions, and IBM OpenWhisk—simplify development by abstracting away everything in the stack below the code. Developers write functions that respond to certain events (a form submission, a webhook, a row added to a database, etc.), upload their code, and pay only when that code executes.

In “How serverless changes application development” I covered the nuts and bolts of how a function-as-a-service (FaaS) runtime works and how that enables a serverless software architecture. Here we’ll take a more hands-on approach by walking through the creation of a simple function in AWS Lambda and then discuss some common design patterns that make this technology so powerful.

AWS Lambda, the original FaaS runtime, was first announced at AWS re:Invent in 2014.  The most common example used to explain how the event-driven, compute-on-demand platform works remains this one, the resizing of an image uploaded to Amazon S3:

[Diagram: Amazon's canonical S3 image-resizing example (Amazon)]

A picture gets uploaded to an S3 bucket, triggering an event that executes a Lambda function.  Prior to the event being triggered, the function sits in a file on disk; no CPU resources are used (or billed) until the work arrives. Once the trigger fires, the function is loaded into the Lambda runtime and passed information about the event. In this example, the function reads the image file from S3 into memory and creates thumbnails of varying sizes, which it then writes out to a second S3 bucket.

Let’s take a closer look. We won’t go to the trouble of implementing the image resizing code, but we’ll create the skeleton of the Lambda code needed to implement this example, set up the trigger, and test our code. We’ll also dig into the CloudWatch logs to debug a little permissions issue I ran into.

Creating an AWS Lambda function and trigger

There are many ways to create a Lambda function, including plug-ins for IDEs like Eclipse and tools like the Serverless Framework. But the easiest way to start is to use one of the blueprints provided by AWS. If we go to the AWS Lambda console and click on Create New Function, we get the following:

[Screenshot: the AWS Lambda Create New Function page (IDG)]

We’ll be using Node.js to create a function that reacts to an S3 event, so we’ll choose Node.js 6.10 from the Select Runtime menu and enter S3 into the Filter dialog:

[Screenshot: filtering the blueprints by the Node.js 6.10 runtime and the keyword S3 (IDG)]

Clicking the s3-get-object blueprint takes us to the Configure Triggers page:

[Screenshot: the Configure Triggers page (IDG)]

Here we set the bucket we’ll use to generate the events (infoworld.walkthrough) and set the event type to trigger whenever a new object is created in that bucket. We could further filter the events to fire only when certain prefixes or suffixes in object names are present, but we’ll skip that and click the check box to enable the trigger before pushing the Next button.

That creates the skeleton of a function to be created based on the blueprint:

[Screenshot: the function skeleton generated from the s3-get-object blueprint (IDG)]

We’ve given our function the name infoworldWalkthrough. Although we’ll be looking at the code more closely in a moment, you can see that it automatically retrieves information about the object that caused the trigger.

Further down that same configuration page, we need to set some permissions:

[Screenshot: the function's permissions settings (IDG)]

Every function must have an IAM role assigned to it so that we can control its access to AWS resources. Here we’ve asked the system to create a new role called infoworldRole and given that role read-only permissions to S3. If we were going to implement the full canonical example and generate the thumbnails, we’d also want to add S3 write permissions. However, because we will only be reading information about the triggered S3 object, the read-only permission should be sufficient.
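For context, an S3 read-only grant boils down to an IAM policy statement along these lines. This is a sketch patterned on AWS's managed AmazonS3ReadOnlyAccess policy; the exact managed policy may differ in detail:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "*"
    }
  ]
}
```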

Finally, we need to pay close attention to some Advanced Settings:

[Screenshot: the Advanced Settings section (IDG)]

The most important items here are in the top section, where we set the amount of memory and the execution timeout. Remember that the Lambda runtime draws on an assembly line of containers, which are preloaded with the various language runtimes. When an event fires, Lambda loads our code into one of these containers and executes our function. The memory and timeout settings dictate how big that container will be and how much time our function will have to execute. For our purposes, the defaults of 128MB and 3 seconds will be fine, though other use cases commonly call for different settings.

Pressing Next takes us to a screen where we can review all of the settings we’ve entered so far:

[Screenshot: the review page summarizing the function's settings (IDG)]

Pressing the Create Function button will take our input and create our function in AWS Lambda.

Examining our AWS Lambda code

Here’s the default code that is created for us by the blueprint:

[Screenshot: the default code generated by the s3-get-object blueprint (IDG)]

On lines 14 and 15, our Lambda function extracts the name of the bucket and the object name (also called the key) that caused the trigger. It then uses the S3 API to get more information about the object and (if that goes smoothly) outputs its content type. We haven’t done so here, but we could easily include the code that then reads in the object and generates the thumbnails accordingly.

Testing our AWS Lambda code

Now let’s go to the S3 console for the bucket in question, which in this case starts out completely empty:

[Screenshot: the empty infoworld.walkthrough bucket in the S3 console (IDG)]

And we’ll upload a PNG of the InfoWorld logo to that bucket:

[Screenshot: uploading the InfoWorld logo PNG to the bucket (IDG)]

And then . . . what exactly?

It’s not clear from the S3 console whether our function has executed, and if you go to the Lambda console, you’ll find a similar lack of information. However, every Lambda function logs information via CloudWatch, so if we check CloudWatch we’ll see that we now have a new log group for our function:

[Screenshot: the new CloudWatch log group for the function (IDG)]

And examining this log reveals that access to the S3 bucket was denied:

[Screenshot: the CloudWatch log entry showing access to the S3 bucket was denied (IDG)]

For some mysterious reason, when our code tried to read information about the S3 object, it was denied access to that data. But why? Didn’t we set up the IAM role so that our function had read-only permissions on our S3 buckets? Let’s double-check that in the IAM console:

[Screenshot: the infoworldRole role in the IAM console (IDG)]

Yes, in fact the role has a policy. So let’s take a look at that policy:

[Screenshot: the policy attached to the role (IDG)]

Oddly, we have permissions to create logs in CloudWatch, but there’s no mention of S3 anywhere. Somehow our S3 read-only permissions policy didn’t take. Let’s fix that.

If we press the Attach Policy button, we’ll see this screen:

[Screenshot: the Attach Policy screen (IDG)]

By selecting the AmazonS3FullAccess option and pressing the Attach Policy button, we should be giving our function all of the permissions it needs.

Instead of testing the function by manually adding a PNG file to the S3 bucket as we did before, this time we’ll use the test hooks built into Lambda. Back to the homepage for our function:

[Screenshot: the homepage for our function in the Lambda console (IDG)]

Now if we press the Test button, we’ll get a dialog that lets us choose from among many sample events. We want to test an S3 put. We’ll need to edit the values in the S3 key and bucket name fields to correspond to the names of our image file and bucket, respectively:

[Screenshot: editing the S3 Put sample event's key and bucket name fields (IDG)]
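Trimmed down to just the fields our blueprint code reads, that sample event has roughly this shape. The key value below is illustrative and should match the name of the file we uploaded:

```javascript
// A minimal S3 Put test event, reduced to the fields the blueprint
// reads. The key is a stand-in for the actual uploaded file's name.
const s3PutTestEvent = {
  Records: [
    {
      eventSource: 'aws:s3',
      eventName: 'ObjectCreated:Put',
      s3: {
        bucket: { name: 'infoworld.walkthrough' },
        object: { key: 'infoworld-logo.png' },
      },
    },
  ],
};
```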

There are all kinds of other fields in the event that could be set here, but since we know our code only looks at the key and the bucket name, we can ignore the rest. Pressing the Save and Test button will trigger the event and cause our function to execute. Unlike last time, when we triggered the event through the S3 console, this time we see live feedback. We also get the relevant portion of the CloudWatch log right there in the Lambda UI:

[Screenshot: the execution result and CloudWatch log output in the Lambda UI (IDG)]

You can see that our code executed and identified the content type as expected.

IDE integrations and command-line tools like the Serverless Framework accelerate this process dramatically. Still, this walkthrough has shown the basic steps involved in creating a function with the right permissions, setting up the event, and debugging the code through CloudWatch, along with two different ways of triggering the event so that the function can be tested.
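For comparison, a tool like the Serverless Framework collapses most of what we clicked through into a single config file. This hypothetical serverless.yml is a sketch based on the framework's v1 syntax, reusing the names from our walkthrough:

```yaml
# Sketch of a serverless.yml that would recreate this walkthrough's
# function, trigger, and permissions in one deploy (names illustrative).
service: infoworld-walkthrough

provider:
  name: aws
  runtime: nodejs6.10
  memorySize: 128
  timeout: 3
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource: arn:aws:s3:::infoworld.walkthrough/*

functions:
  infoworldWalkthrough:
    handler: index.handler
    events:
      - s3:
          bucket: infoworld.walkthrough
          event: s3:ObjectCreated:*
```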

Let’s wrap up by looking at some common Lambda design patterns.

AWS Lambda design patterns

A number of design patterns have emerged for serverless application architectures. Back in December at AWS re:Invent, a session titled Serverless Architectural Patterns and Best Practices highlighted four such patterns. Here I’ll introduce my two favorites, because they represent low-hanging fruit for any organization wanting to get started with serverless architectures.

First, it is easy to build web applications using S3 and CloudFront for static content and API Gateway backed by Lambda and DynamoDB for dynamic needs:

[Diagram: the serverless web application pattern (Amazon)]

That basic pattern can be locked down tightly with security at multiple levels:

[Diagram: the web application pattern with security applied at multiple levels (Amazon)]

The bulk of the content for a web application tends to be read-only for all users, and this content can be served cheaply from S3 and CloudFront. Access to dynamic data can take advantage of the IAM hooks into API Gateway, along with IAM roles for the individual Lambda functions that interact with DynamoDB.

My second favorite use case—one implemented by Capital One for its Cloud Custodian project—is to set up automation hooks using Lambda. In Capital One’s implementation, CloudWatch log events trigger Lambda functions that run checks against compliance and policy rules specific to Capital One. When potential issues are found, the function generates notifications through Amazon SNS, which can be configured to send SMS messages, emails, and alerts through a number of other channels so that the right people learn of policy violations that require their attention.

[Diagram: the automation pattern, with CloudWatch events triggering Lambda checks and SNS notifications (Amazon)]

I like this automation pattern because it adds enormous value to an existing process without disturbing that process in any way. System compliance is automated without touching the systems being monitored. And like the previous pattern, it offers an easy way for an organization to get its feet wet with serverless.

Thinking outside the server

As we’ve seen, setting up a Lambda function, configuring an event, applying security policies, and testing the results is a snap—even without an IDE or command-line tools. Microsoft, Google, and IBM offer similarly easy onboarding for their FaaS runtimes. Plus, design patterns are emerging that will undoubtedly pave the way to even higher orders of tooling and reuse.

Serverless application architectures represent a very different mindset. The pieces of code are smaller, they execute only when triggered to reduce cost, and they are tied together through loosely coupled events instead of statically defined APIs. Serverless enables far more rapid development cycles than were possible previously, and with simple automation and web application design patterns to draw on, it is easy to get started with low risk.
