Running Containerized Spring Boot on AWS Lambda

    During AWS re:Invent 2020, Container Image support for AWS Lambda was announced. This means that we can build container images and run them in a serverless fashion!

    The main use case for containers on Lambda is to make it easier to include dependencies external to your application. The maximum size for container images on AWS Lambda is 10 GB – much more than the 250 MB deployment-package limit we had to work with before, which helps when bundling large external dependencies. Furthermore, while many languages are supported natively and custom runtimes can be added explicitly, including anything more than a custom runtime has been hard to do.

    At first glance, this new deployment method sounds great – but there are still a number of unanswered questions. These relate to the ease of development with containers, and to the performance of the application once it is deployed. In this post, we will focus on the former; the performance implications will get their own post later.

    In this post, we will create a very simple Spring Boot application that we will deploy both ‘traditionally’, and as a Docker container, using the AWS Cloud Development Kit as our Infrastructure-as-Code solution.

    The Spring Boot application

    AWSLabs has provided some great examples of how to develop Spring Boot applications and run them on AWS Lambda. By following their Spring Boot 2 quickstart, you get an application that behaves as expected from a Spring Boot application when placed behind an API Gateway. That is, the Spring Boot application will use its controller methods to handle requests that match the requested URL.

    Based on this template, we add one controller with a single RequestMapping:

    @RestController
    public class SimpleEndpointController {
      @RequestMapping(value = "${LAMBDA_RUN_METHOD:local}/slow")
      public double handleBenchmarkRequest() throws InterruptedException {
        return Math.tan(Math.atan(Math.tan(Math.atan(Math.tan(Math.atan(Math.random()))))));
      }
    }
    This endpoint will be used in the next post to analyze the performance of both deployment methods.

    With this, we have a Spring Boot application that can be run on our local machine like any Spring Boot application, or packaged and uploaded to AWS Lambda to be invoked by an API Gateway.
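    For reference, the handler that Lambda invokes closely follows the AWSLabs Spring Boot 2 quickstart mentioned above. A sketch of it looks roughly like this (the class names come from the aws-serverless-java-container library; the Application class name is assumed to be your Spring Boot entrypoint):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import com.amazonaws.serverless.exceptions.ContainerInitializationException;
    import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
    import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
    import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

    public class StreamLambdaHandler implements RequestStreamHandler {
      private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

      static {
        try {
          // Boot the Spring application once, during class initialization.
          handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
          throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
      }

      @Override
      public void handleRequest(InputStream input, OutputStream output, Context context)
          throws IOException {
        // Translate the API Gateway proxy event into an HTTP request for Spring.
        handler.proxyStream(input, output, context);
      }
    }

    This is the class we will reference as the handler in both the Dockerfile and the CDK stack below.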

    The containerized application

    The new feature of AWS Lambda is that it can also run a containerized application. Any compatible container, such as Docker containers, can be run this way, as long as it includes the runtime interface client provided by AWS.

    A container must include three major components: our application code, a client for the AWS Lambda Runtime API, and an entrypoint script. We will also add a fourth component: the Runtime Interface Emulator, which allows us to test the Lambda container locally.

    The architecture of a Lambda Container

    When the container is invoked, the entrypoint is run. If the environment variable AWS_LAMBDA_RUNTIME_API has not been set, we run the Runtime Interface Emulator, which acts as a wrapper around the Runtime Interface Client. The Runtime Interface Client is responsible for fetching open requests from either the Runtime API (when running in a real Lambda environment) or the Runtime Interface Emulator (when running locally). It then invokes our application’s handler, which we pass to it when we start the Interface Client.

    To set this up, we have created the following Dockerfile to run our application:

    # We use a Java 12 image, but any image could serve as a base image.
    FROM openjdk:12
    # Add the lambda-runtime-interface-emulator to enable local testing.
    ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
    RUN chmod +x /usr/bin/aws-lambda-rie
    # Add the entrypoint script (the script name is illustrative).
    ADD container/entrypoint.sh /entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]
    # Add the JAR to a known path.
    ENV JAR_DIR="/jar"
    ADD target/* $JAR_DIR/
    # Set our handler as the default command, passed as an argument to the entrypoint.
    CMD ["nl.p4c.lambdacontainers.handlers.StreamLambdaHandler::handleRequest"]

    It uses a Java 12 base image, adds the Runtime Interface Emulator and an entrypoint script, and copies our packaged Java application to the path set in the environment variable $JAR_DIR.

    When this container starts, it will run the following entrypoint script:

    #!/usr/bin/env bash
    # The handler (Class::method) is passed as the first argument via the Dockerfile's CMD.
    HANDLER="$1"
    if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
        # Running locally: wrap the Runtime Interface Client in the Runtime Interface Emulator.
        exec /usr/bin/aws-lambda-rie /usr/bin/java -cp "$JAR_DIR/*" "com.amazonaws.services.lambda.runtime.api.client.AWSLambda" "$HANDLER"
    else
        # Running on AWS Lambda: start the Runtime Interface Client directly.
        exec /usr/bin/java -cp "$JAR_DIR/*" "com.amazonaws.services.lambda.runtime.api.client.AWSLambda" "$HANDLER"
    fi

    According to the documentation, AWS will set the environment variable $AWS_LAMBDA_RUNTIME_API when running in a real Lambda environment. If this variable is not set, we execute the runtime emulator we included in our Dockerfile (aws-lambda-rie). This will help us test this locally later.

    Next, the Java Runtime Interface Client is run. This interface client is packaged in our application JAR by adding it to our project’s dependencies:
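    The dependency in question is AWS’s aws-lambda-java-runtime-interface-client; a Maven declaration looks like this (the version shown is illustrative – pin the latest release):

    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-runtime-interface-client</artifactId>
        <!-- Illustrative version; use the latest release. -->
        <version>1.0.0</version>
    </dependency>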


    When we build this Docker container, we end up with a container that can be deployed to AWS or run locally.

    Running it locally

    We can run the container locally with Docker to test that it works correctly.

    To do so, we first build the package and container, then run the container:

    mvn package
    docker build . -t p4c/lambdacontainers:local
    docker run -p 9000:8080 --rm --name lambdacontainers p4c/lambdacontainers:local

    After this, the container is reachable on port 9000. However, it exposes the Lambda invocation API, not plain HTTP. Thus, to call our endpoint (/local/slow), we can’t simply navigate to http://localhost:9000/local/slow in a browser. Instead, we need to send the correct payload to port 9000.

    Using the SAM CLI, we can generate event payloads as if they’re generated by various services.

    In our case, we want to simulate an API Gateway event, so we generate a payload by running sam local generate-event apigateway aws-proxy. This generates a large JSON document, in which we only modify the path and httpMethod fields to match the endpoint we actually want to invoke: /local/slow and GET, respectively.
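    For reference, these are the two fields we change in the generated event (all other generated fields are left untouched):

    {
      "path": "/local/slow",
      "httpMethod": "GET"
    }

    We save the full modified event as a JSON file so we can pass it to curl below.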

    We can then invoke the local endpoint like this:

    curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d "$(cat resources/local_test_payload.json)"

    Deployment to AWS

    At Profit4Cloud, we strongly prefer defining deployments as Infrastructure-as-Code, and for this project we will use the Cloud Development Kit (CDK). It enables us to define our infrastructure as Java code.

    We create a small number of resources for this project: one RestApi and two Functions. Each Function creates a Lambda function from our local code, and the RestApi creates an API Gateway that serves as the HTTP interface to our functions.

    The RestApi definition is short, and will create a functional API Gateway when deployed:

    RestApi api = RestApi.Builder.create(this, "lambdacontainers-api")
        .build();

    Much more interesting is how we define our Lambda functions. We will deploy our code in two ways: once as a ‘plain Jar’, and once as a Docker Container.

    First, the Function definition for the plain Java Lambda. The following is the standard way to create a Lambda Function for a Java project in AWS CDK. We define a runtime of Java 11 and add the code from our Spring Boot project. We also need to define a handler for our code, which is the default StreamLambdaHandler created from the AWSLabs example we saw earlier.

    Function function = Function.Builder.create(this, "lambdacontainers-lambda-plain")
        .runtime(Runtime.JAVA_11)
        // The asset path is illustrative; point it at your packaged JAR.
        .code(Code.fromAsset("target/lambda-containers.jar"))
        .handler("nl.p4c.lambdacontainers.handlers.StreamLambdaHandler::handleRequest")
        .environment(Map.of("LAMBDA_RUN_METHOD", "plain"))
        .build();
    Integration plainIntegration = LambdaIntegration.Builder.create(function).build();
    Resource resource = api.getRoot().addResource("plain");
    resource.addMethod("GET", plainIntegration);

    The other function uses our containerized version of the application. It is only passed a directory containing a Dockerfile. When we deploy this stack, CDK will build this Dockerfile and upload the resulting image to AWS.

    Because our entrypoint, which is analogous to the handler in the previous function, was already defined in the container definition, we do not need to define it here. Furthermore, as the runtime is defined by the Docker container, we also have no need to define a runtime in our stack. Our infrastructure can be blissfully unaware of the implementation of the application!

    Function function = DockerImageFunction.Builder.create(this, "lambdacontainers-lambda-container")
        // The directory containing the Dockerfile; the path is illustrative.
        .code(DockerImageCode.fromImageAsset("."))
        .environment(Map.of("LAMBDA_RUN_METHOD", "container"))
        .build();
    Integration integration = LambdaIntegration.Builder.create(function).build();
    Resource resource = api.getRoot().addResource("container");
    resource.addMethod("GET", integration);

    The only real difference between both functions is in how the .code(...) is defined, and that we do not configure a runtime for the DockerImageFunction. Both Functions have the same timeout and memorySize, and are added to the api with a LambdaIntegration. After this, the api will have two resources: plain, which proxies to the plain Java Function, and container, which proxies to our Docker Function.

    Finally, both functions have an environment variable that describes their deployment method. This environment variable will come in useful in our next blog post, where we will analyze the performance of each deployment method. While there are clear advantages to using a containerized deployment, there might still be tradeoffs that would make it an unsuitable solution for some use cases.

    Wrapping up

    We have made a Spring Boot application that can be run locally or deployed to AWS, both as a plain Java project and as a Docker container. While a very simple project, it’s a great way to demonstrate the changes required to deploy containers to AWS Lambda.

    In the next blog post, we will analyze the performance differences of these two deployment methods, so keep an eye on our LinkedIn to see when it is released!

    Erik Steenman

    Erik Steenman has been working at Profit4Cloud since 2018 as a Software Engineer specializing in AWS. Erik is AWS CSA/P, OCA, and Azure Developer Associate certified.