Serverless Image Resizing (AWS Lambda and Amazon S3)

This post will walk you through creating different image sizes after the original image is uploaded to AWS S3 storage. The goal is to show you how to make use of an AWS Lambda function, which means no servers (EC2 instances) are required to perform the image resizing tasks. When an image gets uploaded to the original image bucket, the Lambda function is triggered, which in turn resizes the image into different sizes, e.g. profile, cover and thumbnail sizes. Once Lambda is done with the resizing, it uploads the different images to their corresponding buckets.

Before we dive into it, let’s cover the S3 and Lambda services: what they are and what they do.

S3 storage


S3 stands for Simple Storage Service, an internet storage service (effectively unlimited storage, since you pay as you go). In a nutshell, S3 provides a simple web services interface which you can use to store and retrieve your data (objects), and more importantly, those tasks can be done from anywhere on the web, at any time. Your data, so-called objects, are stored in containers called buckets in S3. When you create a bucket, you have to specify which AWS region you want the bucket to be created in.

Objects are the data (files) you store in a bucket. An object is broken down into the actual data you are storing and the metadata, which consists of a collection of key-value pairs describing your data.
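As a quick sketch, here is how storing and retrieving an object with its metadata looks using the boto3 Python library (the bucket and key names are examples, and this assumes your AWS credentials are configured):

```python
import boto3

# Create an S3 client (uses credentials from ~/.aws/credentials)
s3 = boto3.client('s3')

# Store an object: Body is the actual data, Metadata holds the
# key-value pairs describing it
s3.put_object(
    Bucket='my-example-bucket',
    Key='images/kalilou.jpg',
    Body=open('kalilou.jpg', 'rb'),
    Metadata={'author': 'kalilou', 'kind': 'original'},
)

# Retrieve the object: the response contains both the data and the metadata
response = s3.get_object(Bucket='my-example-bucket', Key='images/kalilou.jpg')
data = response['Body'].read()
metadata = response['Metadata']
```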

For a more in-depth look at S3, please see the AWS S3 documentation.

Lambda (Serverless)


As the name serverless indicates, no servers on your side are required. You basically write your functions in your preferred programming language, upload the code to AWS Lambda, and the Lambda service runs your code on your behalf using AWS infrastructure. In a nutshell, AWS Lambda is a compute service. At the time of writing, only the following programming languages are supported: Python, Java and Node.js. Lambda provides a standard runtime and environment in which your code runs, so you are only responsible for your code.

Let’s talk a bit about not having Lambda in a case where you want to resize your images when they get uploaded to S3. Say you want a profile, cover and thumbnail image out of the original image. You might have to provision a fleet of proxy servers to capture those uploads to the S3 bucket; for each upload captured, a job is put in a queue to be processed, and maybe a second fleet of servers reads and processes those jobs from the queues. On top of that, you have to figure out how many servers you will need, make sure the servers are coping with the load, and more importantly, you will need to set up a monitoring system for 24/7 monitoring.

With Lambda, on the other hand, you only need the function responsible for resizing your image. By having a data event trigger on your original S3 bucket, for any upload to the bucket, the original image is resized into profile, cover and thumbnail sizes by your Lambda function and then uploaded to the corresponding buckets.

For more info on AWS Lambda, please see the AWS Lambda documentation.

Let’s get started by setting up the required AWS resources

S3 Buckets using AWS console

Create four S3 buckets with the following names (you can use any names you want; just note that S3 bucket names must be globally unique):

  1. ttrserverlessimages
  2. ttrserverlessimagescover
  3. ttrserverlessimagesprofile
  4. ttrserverlessimagesthumbnail

For more on how to create S3 buckets, please see the AWS documentation.
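If you prefer the command line over the console, the same four buckets can be created with awscli (which we install later in this post); remember to adjust the names, since they must be globally unique:

```shell
aws s3 mb s3://ttrserverlessimages
aws s3 mb s3://ttrserverlessimagescover
aws s3 mb s3://ttrserverlessimagesprofile
aws s3 mb s3://ttrserverlessimagesthumbnail
```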

SSH Key-Pair using AWS console 

Create an SSH key pair and the private key will be downloaded. Move the key to your ~/.ssh/ directory and set its permissions to 600 ( $ chmod 600 ~/.ssh/your_private_key ). In my case I have created a key pair called tutorial-serverless.pem

Create an Amazon Linux Machine

Launch an EC2 instance (I will be using the Amazon Linux AMI for this tutorial); for how to launch an EC2 instance, please see the AWS documentation. Make sure you assign the SSH key pair you’ve generated and choose the appropriate VpcID and subnetID (the subnet you choose should have the flag auto-assign public IP enabled).

The steps above are pretty much manual, using the AWS console, but you may want to use CloudFormation or Terraform to create all those resources (S3 buckets and EC2 instance) on your behalf. I will be using CloudFormation to automatically create those resources for me. You can also use Terraform to achieve the same goal; for more on Terraform, please see one of my previous posts.

Make sure you get your default VpcID and subnetID (replace the ones in the Python code below with yours), and that the subnet flag auto-assign public IP is enabled.

Before we start, there will be only one manual step: creating the SSH key pair using the AWS console. Let’s now talk about CloudFormation, a service that allows developers and sysadmins to easily create and manage their collections of AWS resources; in other words, CloudFormation lets you adopt the infrastructure-as-code concept. All you need to do is specify the resources in a JSON file called a template and upload it to CloudFormation. I will skip writing the JSON file by hand, which can be a bit tedious to maintain and read; instead I will use a Python library called troposphere, which in turn will generate the template for me. These templates can be versioned as well, and just as a note, all this template development is done on your workstation, in my case my Mac.

Install troposphere 

Create your work directory 

Create a Python file
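The three steps above might look like this (the directory and file names are examples I use below):

```shell
# Install the troposphere library
sudo pip install troposphere

# Create the work directory
mkdir ~/serverless-image-resize && cd ~/serverless-image-resize

# Create the Python file that will describe the AWS resources
touch resources.py
```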

It should contain the following Python code, describing the AWS resources and generating the CloudFormation template.
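A minimal sketch of such a troposphere script is below. The AMI ID is an example (Amazon Linux AMIs are region-specific), and the subnet ID and key pair name are the ones from earlier in this tutorial; replace them with yours:

```python
from troposphere import Tags, Template
from troposphere.ec2 import Instance
from troposphere.s3 import Bucket

t = Template()

# The four S3 buckets (logical IDs must be alphanumeric)
for name in ["ttrserverlessimages",
             "ttrserverlessimagescover",
             "ttrserverlessimagesprofile",
             "ttrserverlessimagesthumbnail"]:
    t.add_resource(Bucket(name, BucketName=name))

# The Amazon Linux EC2 instance used to build the Lambda package
t.add_resource(Instance(
    "ServerlessWorkstation",
    ImageId="ami-xxxxxxxx",        # replace with an Amazon Linux AMI for your region
    InstanceType="t2.micro",
    KeyName="tutorial-serverless",
    SubnetId="subnet-d9ed25af",    # replace with your subnet ID
    Tags=Tags(Name="serverless-tutorial"),
))

# Print the generated CloudFormation template as JSON
print(t.to_json())
```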

Now run the following command to generate the template
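Assuming the file is named resources.py, generating the template is just a matter of redirecting the script’s output:

```shell
python resources.py > template.json
```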

The output file template.json should contain the generated CloudFormation template.

Now that the template is done, go to the AWS console –> CloudFormation service and create the stack. After you’re done with the stack creation, you should see something similar to the below.

[Screenshot: CloudFormation stack created]


I assume everything is set up. Awesome. You should see the following resources created.

EC2 instance Amazon Linux Machine

[Screenshot: the EC2 instance in the AWS console]

Subnet: this was not created by CloudFormation, but this is where you can get the subnetID subnet-d9ed25af
[Screenshot: the subnet in the AWS console]


VPC: this was not created by CloudFormation, but this is where you can get the VpcID vpc-cb0bf3af

[Screenshot: the VPC in the AWS console]


The SSH Key-Pair tutorial-serverless

[Screenshot: the SSH key pair tutorial-serverless]


The private key downloaded to my computer

[Screenshot: the downloaded private key]


S3 Buckets


Let’s get started by setting up the AWS Lambda function

Now that we have everything set up, let’s start implementing our Lambda function. Since it will be Python code, we will use the Amazon Linux machine (EC2 instance), where the function will be implemented, packaged and then uploaded to AWS Lambda.

SSH to the Amazon Linux machine (the EC2 instance, using its public IP)

First add the private key
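This might look like the following (assuming the SSH agent is running and the key name from earlier):

```shell
ssh-add ~/.ssh/tutorial-serverless.pem
```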

Now SSH to the Amazon Linux machine (EC2 instance)
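ec2-user is the default user on the Amazon Linux AMI; replace the placeholder with your instance’s public IP:

```shell
ssh ec2-user@<public-ip>
```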

Now you should be logged in on the machine

Install these RPM packages
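These are the packages typically needed on Amazon Linux to compile Pillow, the Python imaging library we install below (this list is an assumption; adjust it if pip complains about missing headers):

```shell
sudo yum install -y gcc python27-devel libjpeg-devel zlib-devel libpng-devel
```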

Create and activate a python environment 
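A sketch of this step using virtualenv; the directory name matches the image-resize folder referenced later in this post:

```shell
sudo pip install virtualenv
virtualenv ~/image-resize
source ~/image-resize/bin/activate
```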

Install the following python package dependencies
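The two dependencies the function needs are boto3 (the AWS SDK for Python) and Pillow (for the actual resizing):

```shell
pip install boto3 Pillow
```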

Now that the dependencies are installed, let’s write the image resizing function 

It should contain the following.
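Here is a sketch of what an image_resize.py could look like. The handler name, target sizes and resampling choice are my assumptions; the destination bucket names are the ones created earlier:

```python
from __future__ import print_function

import os
import tempfile

import boto3
from PIL import Image

s3 = boto3.client('s3')

# Target sizes (width, height) and the bucket each size is uploaded to.
# The pixel dimensions here are examples; pick whatever fits your app.
SIZES = {
    'cover': ((1920, 1080), 'ttrserverlessimagescover'),
    'profile': ((400, 400), 'ttrserverlessimagesprofile'),
    'thumbnail': ((150, 150), 'ttrserverlessimagesthumbnail'),
}


def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Download the original image to Lambda's temporary storage
        download_path = os.path.join(tempfile.gettempdir(), os.path.basename(key))
        s3.download_file(bucket, key, download_path)

        for name, (size, target_bucket) in SIZES.items():
            # Resize a fresh copy of the original, preserving the aspect ratio
            image = Image.open(download_path)
            image.thumbnail(size, Image.ANTIALIAS)

            resized_path = os.path.join(
                tempfile.gettempdir(),
                '{0}-{1}'.format(name, os.path.basename(key)))
            image.save(resized_path)

            # Upload the resized image to its corresponding bucket
            s3.upload_file(resized_path, target_bucket, key)
```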

Now we will start packaging the code and then upload it to AWS Lambda 
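Packaging a Python Lambda function means zipping the site-packages of the virtualenv together with your own code. A sketch, assuming the image-resize virtualenv from above and an image_resize.py in the home folder:

```shell
# Zip the installed dependencies
cd ~/image-resize/lib/python2.7/site-packages
zip -r9 ~/image_resize.zip .

# Add the function code itself to the archive
cd ~
zip -g image_resize.zip image_resize.py
```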

Now you should have a zip file in the home folder $ ls ~/

In order to be able to upload the Lambda function, we will install awscli (the AWS command line interface)
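```shell
sudo pip install awscli
```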

Before using awscli, we need to create the credentials and config files under ~/.aws, since awscli and the Python library boto3 will use those files to authenticate with AWS when creating and managing resources on our behalf.

The credentials file should contain your AWS access key ID and secret access key, and the config file should contain your default region.
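The two files might look like this (the keys and region below are placeholders; use your own):

```
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = eu-west-1
```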

Create a new role called lambda-image-resize-s3 in AWS IAM; for the role type, choose AWS Service Roles and then AWS Lambda. Finally, in Attach Policy, choose AWSLambdaExecute. Remember the Role ARN, as you will need it later on.

Let’s now upload the zip file to AWS Lambda using awscli (change the Role ARN arn:aws:iam::random_number:role/lambda-image-resize-s3 to the correct one from your AWS console in the IAM section).
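A sketch of the upload command; the handler value assumes the function file is called image_resize.py with a function named handler, and the timeout and memory values are examples:

```shell
aws lambda create-function \
  --function-name image_resize \
  --runtime python2.7 \
  --role arn:aws:iam::random_number:role/lambda-image-resize-s3 \
  --handler image_resize.handler \
  --timeout 60 \
  --memory-size 256 \
  --zip-file fileb://image_resize.zip
```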

You should now be able to see the Lambda function image_resize in AWS Lambda

[Screenshot: the image_resize function in AWS Lambda]


To test, upload an image to the S3 bucket ttrserverlessimages; I will upload an image called kalilou.jpg

[Screenshot: kalilou.jpg uploaded to the ttrserverlessimages bucket]


Now we will test our Lambda function by triggering it manually. Before that, let’s have some test data. Create the test data file ( $ vim image-resize/input.txt ), which should contain the following.
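The test data mimics the event notification S3 sends to Lambda on an upload; a minimal sketch, keeping only the fields the function reads:

```json
{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "ttrserverlessimages" },
        "object": { "key": "kalilou.jpg" }
      }
    }
  ]
}
```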

Now trigger the Lambda function manually by passing the data above.
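This can be done with aws lambda invoke; the path assumes you are in the home folder, and the function’s response is written to output.txt:

```shell
aws lambda invoke \
  --function-name image_resize \
  --payload file://image-resize/input.txt \
  output.txt
```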

Now you should be able to see that the image has been resized into the different formats and uploaded to the corresponding S3 buckets (ttrserverlessimagescover, ttrserverlessimagesprofile and ttrserverlessimagesthumbnail).

Let’s now put a data event trigger on the S3 bucket ttrserverlessimages, which basically means that whenever an image is uploaded, the Lambda function will be triggered, which in turn will resize the image into the different formats (cover, profile and thumbnail sizes) and then upload them to the corresponding buckets (ttrserverlessimagescover, ttrserverlessimagesprofile and ttrserverlessimagesthumbnail).

Now let’s add the permission that grants S3 the right to invoke the Lambda function. Run the following command.
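A sketch of the permission command; the statement ID is an arbitrary identifier of my choosing:

```shell
aws lambda add-permission \
  --function-name image_resize \
  --statement-id s3-invoke-image-resize \
  --action "lambda:InvokeFunction" \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::ttrserverlessimages
```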

Next, configure a notification on the bucket ttrserverlessimages: go to the bucket in S3 and choose ObjectCreated (All) as the event type. Once that’s done, you should notice that whenever you upload an image to the bucket ttrserverlessimages, resized images (cover, profile and thumbnail) are uploaded to the buckets ttrserverlessimagescover, ttrserverlessimagesprofile and ttrserverlessimagesthumbnail.
