HOW TO: Extend AWS LocalStack

Last Modified: 2020-04-01

What is LocalStack?

LocalStack is a project that allows you to simulate various Amazon Web Services (AWS) offerings on virtually any machine you choose. Typically, however, LocalStack is intended to aid local development by simulating a production-like AWS environment. Additionally, LocalStack can be used within a well-formed automated test pipeline to simulate AWS services.

Why Extend LocalStack?

First of all, let me state that LocalStack out of the box (or Docker container) is excellent as-is. In many smaller projects you might find that the entire ecosystem is encapsulated in a single docker-compose.yml. LocalStack is no different: you simply add a small YAML section to your own docker-compose.yml like so (please read the LocalStack documentation for more options):

...
  aws-localstack:
    image: localstack/localstack
    ports:
      - "4567-4599:4567-4599"
      - "7070:8080"
    environment:
      - SERVICES=apigateway,lambda,s3
      - DATA_DIR=/tmp/localstack/data
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "~/localstack:/tmp/localstack"
...
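
With that in place, spinning up the stock LocalStack container and sanity-checking the mock services only takes a couple of commands. A minimal sketch, assuming awscli-local (the awslocal wrapper used later in this article) is installed on the developer's machine:

# Start only the LocalStack service in the background
docker-compose up -d aws-localstack

# Verify the mock S3 service responds (an empty bucket list at first)
awslocal s3 ls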

What if, however, you are working on a larger project? In many cases there are certain shared AWS services or pieces of infrastructure that are simply assumed to be in place when an application starts. A simple example follows:

In this scenario our application relies on several S3 buckets already existing. It is not this particular application's responsibility to create the buckets. One typical way to handle this would be for a developer to add LocalStack to their own docker-compose.yml as illustrated above. At that point they would need to execute an init script or run manual commands to create the required buckets before the application can be expected to operate correctly. How might we accomplish this?

  1. We could simply add some documentation to the README.md file instructing the developers to issue a few one-time commands like so:
# Execute this command after starting LocalStack
awslocal s3 mb s3://my-app-bucket-1
...
  2. We could go a step further and create a setup.sh script that contains all of these commands. Then a developer simply needs to invoke it (a sketch of its contents follows this list).
# Execute this command after starting LocalStack
./setup.sh

OK, that is a little better: at least developers working with this won't have to keep up with changes, they just call setup.sh. However, the developers still must do something.

  3. We could perform this setup within the application codebase. This, however, requires the dreaded if (runningLocal) style conditional check with local-only code to set things up. Although I've personally seen this approach, I'm not sure it is really ideal for larger projects.
  4. Finally, what if we simply extend LocalStack's Docker container to auto-provision some resources for us? How would this look? Well, my proposal here is to hijack the startup sequence of the container image. In this case we'll create a custom bash script to use on startup so that we can provision the required AWS resources as part of the LocalStack container's startup.
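
For reference, the setup.sh from option 2 might look something like the sketch below. The bucket names are placeholders for whatever your application expects, and awslocal is assumed to be installed on the developer's machine:

#!/bin/bash
# setup.sh - one-time provisioning of the shared local AWS resources (sketch)
set -eo pipefail

# awslocal targets the local mock endpoints automatically
awslocal s3 mb s3://my-app-bucket-1
awslocal s3 mb s3://my-app-bucket-2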

Extending The LocalStack Docker Container

LocalStack's Docker image by default copies a custom docker-entrypoint.sh and simply sets it as the container's entrypoint: ENTRYPOINT ["docker-entrypoint.sh"]. Without boring you with the details, this script essentially starts any declared mock AWS services asynchronously. Once the message 'Ready.' appears in the log output, everything is ready to go. It is at this point that we want to invoke our initialization steps.
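
If you're curious, you can confirm the entrypoint yourself by inspecting the image we pin to below:

# Pull the pinned LocalStack image and print its ENTRYPOINT
docker pull localstack/localstack:0.10.9
docker inspect --format '{{json .Config.Entrypoint}}' localstack/localstack:0.10.9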

Given these factors, we can easily create our own startup sequence. First, let's extend the LocalStack image by creating a file named Dockerfile:

FROM localstack/localstack:0.10.9

# Next, let's rename the existing LocalStack `docker-entrypoint.sh`
RUN mv /usr/local/bin/docker-entrypoint.sh /usr/local/bin/localstack-entrypoint.sh
# Last, all we need to do is copy our own script file named `docker-entrypoint.sh`
ADD bin/docker-entrypoint.sh /usr/local/bin/

Since the base image already specifies this script as the ENTRYPOINT, we do not need to do anything further with the Dockerfile. Now let's take a look at what the new bin/docker-entrypoint.sh we copy into the container might look like:

#!/bin/bash
set -eo pipefail
shopt -s nullglob

# Call the renamed entrypoint script to get things started as usual
localstack-entrypoint.sh &

# Wait until 'Ready.' appears in the log output
until grep -q "^Ready\.$" /tmp/localstack_infra.log 2>/dev/null ; do
  sleep 7
done

# Perform initialization steps HERE
awslocal s3 mb s3://my-app-bucket-1

# Finally - tail the output -> STDOUT as expected for a container (copied from localstack-entrypoint.sh)
tail -qF -n 0 /tmp/localstack_infra.log /tmp/localstack_infra.err

Most of this is pretty self-explanatory. The important takeaway is that we invoke the existing start script, wait for 'Ready.', and then perform our initialization steps. This is a crude example that simply creates an S3 bucket, which is an idempotent action, so issuing it will work even when the bucket already exists.
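
If you'd like to try the extended image on its own before wiring it into docker-compose, something along these lines should work (the image tag my-localstack and the trimmed-down SERVICES list are just for illustration; the dummy credentials are arbitrary values that LocalStack accepts):

# Build the extended image from the subdirectory containing the Dockerfile
docker build -t my-localstack ./aws-localstack

# Run it with only S3 enabled, exposing the S3 port
docker run --rm -d --name my-localstack -p 4572:4572 -e SERVICES=s3 my-localstack

# After the 'Ready.' initialization completes, the bucket should already exist
AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 \
  aws --endpoint-url=http://localhost:4572 s3 ls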

At this point we have an extended LocalStack container that will automatically create a bucket named my-app-bucket-1. We could choose to push this container image for others to simply pull and use, or, as in this simple example, include it as part of the shared docker-compose.yml that this ecosystem uses across projects. Here is our resulting shared docker-compose.yml:

...
  aws-localstack:
    build:
      context: ./aws-localstack
      dockerfile: Dockerfile
    ports:
      - "4567-4599:4567-4599"
      - "7070:8080"
    environment:
      - SERVICES=apigateway,lambda,s3
      - DATA_DIR=/tmp/localstack/data
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "~/localstack:/tmp/localstack"
...

Notice the slight change from earlier. Instead of defining the image to run, we specify the build parameters to instruct docker-compose to build and run our new Dockerfile from a subdirectory named aws-localstack. At this point a developer simply needs to start the containers as usual and their S3 buckets will be ready for them.
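
In other words, a developer's day-to-day workflow is now roughly:

# Rebuild the extended image (picking up any new provisioning steps) and start everything
docker-compose up -d --build

# Optionally watch the startup; the buckets are created once 'Ready.' appears
docker-compose logs -f aws-localstack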

In conclusion, this isn't the only way to handle this scenario of course, but it is a simple example of how to modify the startup behavior of an existing Docker image. This LocalStack example could be expanded to automatically execute CloudFormation, Terraform, or other AWS automation tools rather than direct awslocal CLI commands. You could embed them in this project as IaC (Infrastructure as Code) or pull them from an existing project. The possibilities are endless.
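
For instance, the initialization step in our docker-entrypoint.sh could be swapped for a CloudFormation deployment instead of individual bucket commands. A rough sketch, assuming cloudformation is added to SERVICES and a template file such as /opt/templates/local-infra.yml is baked into the image (both are hypothetical here):

# Inside docker-entrypoint.sh, after 'Ready.' appears:
awslocal cloudformation create-stack \
  --stack-name local-shared-infra \
  --template-body file:///opt/templates/local-infra.yml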

You can find this demo over at my GitHub repository.