AWS Lambda has become very popular. Most of the time, working with it is fast and simple (but not always).

Once you have a lot of Lambda functions, managing them becomes hard. And when developers can simply edit the code inline in the console, Lambda can turn into the worst use of a managed service you ever chose.

So again, the most important thing to do when you start working with Lambda is to have a proper way to deliver the code from Git to AWS.

In this post, I will demonstrate how to use Docker, the AWS CLI, a Jenkins pipeline, and some basic Makefile scripting to build a proper delivery pipeline for a Python-based Lambda. I'm assuming you've created the Lambda infrastructure in Terraform, as I usually do (and I promise to write a post about that in the future).

The boilerplate is on GitHub: lambda-delivery

So let's start with a Docker image that we can use to create our Lambda artifact zip file, using this Dockerfile:

FROM python:3.6.8-slim-stretch

RUN apt-get update \
  && apt-get install -y zip make \
  && pip install awscli

For this example, we will simply build it and push it to Docker Hub:

docker build -t omerha/python:3.6.8-lambda .
docker push omerha/python:3.6.8-lambda

This image has the AWS CLI, zip, and make for running the build and deploying to Lambda.

The Lambda function code will be in ./function/lambda_function.py, and the handler name will be lambda_handler for convenience. There is also a logger, controlled by the LOG_LEVEL environment variable.
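
As a rough sketch, assuming a minimal handler (the body and logger setup here are illustrative, not the repository's exact code), the function might look like this:

import logging
import os

# Verbosity is controlled by the LOG_LEVEL environment variable (defaults to INFO)
logger = logging.getLogger()
logger.setLevel(os.environ.get('LOG_LEVEL', 'INFO'))

def lambda_handler(event, context):
    # Illustrative handler body
    logger.info('Received event: %s', event)
    return {'statusCode': 200}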

Packaging the Lambda as a zip is required when you use non-default libraries that the plain Amazon Lambda runtime does not include. This is why I always prefer to bundle my function together with all the necessary libraries in the zip file.
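
For example, if the function depended on the requests library (a hypothetical dependency, just for illustration), requirements.txt would contain:

requests==2.21.0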

Deployment can easily be done from the developer's machine or from Jenkins, both using Docker, or from the developer's machine using virtualenv (the Makefile supports that as well). In this post, though, I will focus on deploying with Docker and Jenkins.

Make sure to update the Makefile variables properly:

PROJECT = lambda-delivery
VIRTUAL_ENV = venv
FUNCTION_NAME = delivery
AWS_REGION = eu-west-2
FUNCTION_HANDLER = lambda_handler
LAMBDA_ROLE = <your lambda arn role>
LAMBDA_RUNTIME = python3.6
LAMBDA_TIMEOUT = 3
LAMBDA_MEMORY_SIZE = 3000

To build the zip file with the libraries from the requirements.txt file, I will run:

docker run -w /mnt \
  -v `pwd`:/mnt omerha/python:3.6.8-lambda make docker_build

Assuming everything is OK, the root of the project will contain a zip with the code and all the libraries. Note that the zip packs all files flat at its root, which is what Lambda expects.

Let's break down make docker_build (a sketch of these targets follows the list):

  1. clean_package: clean the previous package folder at ./package/*
  2. build_package_tmp: recreate the ./package/tmp directory
  3. copy_lambda: copy all ./function/*.py files to the ./package/tmp directory
  4. docker_install_libs: install the requirements.txt libraries directly into ./package/tmp
  5. zip: zip the entire ./package/tmp and save the archive in the root directory of the repository
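
As promised, here is a minimal sketch of how these targets could be wired together in the Makefile; the exact recipes in the repository may differ, and the zip name is assumed to follow $(PROJECT):

# Hypothetical recipes; adjust to your layout
clean_package:
	rm -rf ./package/*

build_package_tmp:
	mkdir -p ./package/tmp

copy_lambda:
	cp ./function/*.py ./package/tmp/

docker_install_libs:
	pip install -r requirements.txt -t ./package/tmp/

zip:
	cd ./package/tmp && zip -r ../../$(PROJECT).zip .

docker_build: clean_package build_package_tmp copy_lambda docker_install_libs zip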

Now that we have a zipped Lambda ready for deployment, we can run:

docker run -w /mnt -v `pwd`:/mnt omerha/python:3.6.8-lambda make deploy

I'm assuming this is running on an EC2 instance with the proper IAM role attached. If the deploy process runs on a local machine, the container will need the proper environment variables for accessing the Lambda API:

docker run -e AWS_ACCESS_KEY_ID=<your aws key>\
  -e AWS_SECRET_ACCESS_KEY=<your aws secret> \
  -w /mnt -v `pwd`:/mnt omerha/python:3.6.8-lambda make deploy
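
Under the hood, the deploy target (and the lambda_update target that the Jenkinsfile below uses) is presumably a thin wrapper around the AWS CLI; a minimal sketch, again assuming the $(PROJECT).zip naming, might be:

lambda_update:
	aws lambda update-function-code \
	  --region $(AWS_REGION) \
	  --function-name $(FUNCTION_NAME) \
	  --zip-file fileb://$(PROJECT).zip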

And now for the interesting part. I usually use Jenkins shared pipelines, so the Jenkinsfile in the repo just calls the proper shared pipeline, but that's for another post.

So, look at the Jenkinsfile (it's in the repository):

pipeline {
    agent any
    environment {
      REGISTRY='omerha'
      IMAGE='python'
      TAG='3.6.8-lambda'
    }
    stages {
      stage('Pull docker image'){
        steps {
          script {
            sh("sudo docker pull $REGISTRY/$IMAGE:$TAG")
          }
        }
      }
      stage('Build Lambda'){
        steps {
          script {
            sh("sudo docker run -w /mnt -v `pwd`:/mnt $REGISTRY/$IMAGE:$TAG make docker_build")
          }
        }
      }
      stage('Deploy Lambda'){
        steps {
          script {
            sh("sudo docker run -w /mnt -v `pwd`:/mnt $REGISTRY/$IMAGE:$TAG make lambda_update")
          }
        }
      }
    }
    post {
      success {
        cleanWs()
      }
    }
}

There you have it: a boilerplate for doing continuous delivery for AWS Lambda.

There is one missing step: if you want artifact versioning, you will need to upload the zips to S3 and tell the Lambda API to take the code from S3 instead.
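
Something like this, with a placeholder bucket name and a commit-based key (both hypothetical), would do it:

aws s3 cp $(PROJECT).zip s3://my-artifacts-bucket/$(PROJECT)/$(GIT_COMMIT).zip
aws lambda update-function-code \
  --region $(AWS_REGION) \
  --function-name $(FUNCTION_NAME) \
  --s3-bucket my-artifacts-bucket \
  --s3-key $(PROJECT)/$(GIT_COMMIT).zip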