Context
We are using AWS Aurora to manage our databases, and I want to set up automatic rotation of database passwords. I start by storing them in AWS Secrets Manager. AWS actually releases publicly usable rotation functions for PostgreSQL, but I want to add the extra fields connstring and host_ro so clients can directly pull both the read-only Aurora DNS name and a full PostgreSQL connection string.
Idea
I decided to fork AWS’s example code and deploy my own Lambda.
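The change to the fork itself is small. As a rough sketch (the helper name and the reader-endpoint parameter are my own inventions; the username/password/host/port/dbname keys are the ones AWS's rotation template already stores in the secret), the forked code just needs to add the two extra keys before the new secret version is written:
# Hypothetical helper in the forked rotation lambda; names are illustrative.
def add_custom_fields(secret_dict, reader_endpoint):
    """Add the read-only Aurora endpoint and a full connection string."""
    secret_dict["host_ro"] = reader_endpoint
    # Assumes the password is URL-safe; a real version should escape it.
    secret_dict["connstring"] = "postgresql://{0}:{1}@{2}:{3}/{4}".format(
        secret_dict["username"],
        secret_dict["password"],
        secret_dict["host"],
        secret_dict.get("port", 5432),
        secret_dict["dbname"],
    )
    return secret_dict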
Problem
I already have automation around Terraform and CircleCI and would prefer to keep using them. I would also like to keep the pattern of using Terraform only for the AWS infrastructure, while using the Lambda upload API for CI/CD. Unfortunately, this Lambda is more complicated than a single file: it includes compiled libraries and pip dependencies. I also want to test this Lambda locally.
Terraform Solution
There were some good starting points in this GitHub repository. The only major change I needed to make was to include this trick so that my lambda is initially deployed empty, which takes Terraform out of my direct CI/CD path.
# From https://amido.com/blog/terraform-does-not-need-your-code-to-provision-a-lambda-function
data "archive_file" "dummy" {
type = "zip"
output_path = "${path.module}/lambda_function_payload.zip"
source {
content = "hello"
filename = "dummy.txt"
}
}
This is a dummy zip file I can reference in my Terraform config during provisioning only; it exists just to create the lambda function. After that, I can use the AWS CLI to update the lambda.
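For reference, this is roughly how the dummy archive gets wired into the function resource. The function name, role, and handler below are placeholders rather than my exact config; the lifecycle block is what keeps later CLI uploads from being reverted by Terraform.
resource "aws_lambda_function" "rotation" {
  function_name = "my-lambda-name"
  role          = aws_iam_role.rotation.arn   # placeholder role
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.8"

  # Provision once with the dummy payload; real code comes from CI/CD.
  filename = data.archive_file.dummy.output_path

  lifecycle {
    # Ignore the code so "terraform apply" doesn't clobber CLI uploads.
    ignore_changes = [filename, source_code_hash]
  }
}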
Lambda Code solution
The lambda code now gets a lot trickier. There is a sample file here that I can easily fork and upload to lambda. When I upload this file I get an obvious error from lambda:
Unable to import module 'lambda_function': No module named pg
Obviously AWS doesn't bundle the entire pip ecosystem into the Lambda runtime. AWS documents this part, and managing dependencies with virtualenv is a good way to manage Python anyway.
virtualenv v-env
source v-env/bin/activate
pip install PyGreSQL
cd v-env/lib/python3.8/site-packages
zip -r9 ${OLDPWD}/function.zip .
This works great to install the pip package, but now I have another problem when I run my lambda.
Unable to find library libpq.so
Now it gets tricky. I need compiled libraries that I would normally get with apt-get or yum install if this was a Docker container. I find these libraries and put them in lib/. Eventually the lambda deploys, but now I need to figure out a good way to package up this lambda.
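Lambda includes the function's lib/ directory (/var/task/lib) on LD_LIBRARY_PATH, which is why dropping the .so files there works. A rough way to find out which libraries are actually needed is to run ldd against libpq inside a Debian-based image; the path below is illustrative and may differ elsewhere.
# List the shared libraries libpq drags in, so they can be copied into lib/
ldd /usr/lib/x86_64-linux-gnu/libpq.so.5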
Lambda packaging with Docker
Lambdas aren’t docker files, but I can still use docker to create my lambda. This gives me the great advantage of not depending upon any libraries in the host system. I can even get the PostgreSQL libraries I need with docker!
# Used to build the lambda artifact
FROM postgres:10.12 as pg
FROM python:3.8.2-buster as py
RUN apt-get update && \
    apt-get install --no-install-recommends -y \
    ca-certificates=20* \
    zip=3* \
    unzip=6.* && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /package
# All of these appear required by libpq
COPY --from=pg /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libpq.so.5 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libssl.so.1.1 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libldap-2.4.so.2 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libsasl2.so.2 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libgnutls.so.30 lib/
COPY --from=pg /lib/x86_64-linux-gnu/libidn.so.11 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libnettle.so.6 lib/
COPY --from=pg /usr/lib/x86_64-linux-gnu/libhogweed.so.4 lib/

WORKDIR /pycode
RUN python -m venv venv
COPY requirements.txt .
RUN . venv/bin/activate && pip install -r requirements.txt && deactivate

WORKDIR /package
RUN cp -r /pycode/venv/lib/python3.8/site-packages/* .
RUN ls -la /package
COPY lambda_function.py .
RUN zip -r /lambda.zip .
ENTRYPOINT ["cat", "/lambda.zip"]
I start FROM postgres:10.12 for the sole purpose of copying out the .so files I need. I then start FROM python:3.8.2-buster to build my lambda and package everything I need into the zip file I want. I install the explicit dependencies I need from requirements.txt. I end with ENTRYPOINT ["cat", "/lambda.zip"], so I now have a Dockerfile that, when run, prints the lambda zip to stdout! I can docker run mypackage > lambda.zip and generate a zip file from any environment.
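Concretely, producing the artifact looks something like this; the image name, tag, and Dockerfile name are placeholders.
# Build the packaging image, then cat the zip out of it
docker build -t my-package-name:local -f Dockerfile.package .
docker run --rm my-package-name:local > lambda.zip
unzip -l lambda.zip   # quick sanity check of the contents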
Lambda running with Docker
The only step left is running my lambda locally. Of course I could just run the Python file, but I want to run it as close to the real lambda as possible. Lucky for me, the docker image lambci/lambda lets me do this! From their site:
A sandboxed local environment that replicates the live AWS Lambda environment almost identically — including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors — even the user and running process are the same.
ARG TAG
FROM my-package-name:${TAG} as source
FROM debian:stretch-slim as extractor
RUN apt-get update && \
    apt-get install --no-install-recommends -y \
    ca-certificates=20* \
    unzip=6.* && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /package
COPY --from=source /lambda.zip /lambda.zip
RUN unzip /lambda.zip
FROM lambci/lambda:python3.8
COPY --from=extractor /package/* /var/task/
I start by taking the docker image tag as a parameter to the build. I use FROM debian just to unzip the lambda, then switch back to FROM lambci to package my runnable. In my Makefile I first package my lambda, then run this docker image to copy the lambda into lambci/lambda and run it locally, as close to the real lambda as I can get.
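Roughly, those Makefile steps boil down to something like this. The image names, Dockerfile names, and empty test event are placeholders; the handler name matches the lambda_function.py the packaging Dockerfile copies in.
TAG=local
docker build -t my-package-name:${TAG} -f Dockerfile.package .
docker build -t my-package-run:${TAG} --build-arg TAG=${TAG} -f Dockerfile.run .
# lambci/lambda takes the handler as the command and the event JSON as its argument;
# '{}' stands in for a real rotation event (SecretId, ClientRequestToken, Step)
docker run --rm my-package-run:${TAG} lambda_function.lambda_handler '{}'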
Lambda deploying with Docker
Just like I build and run locally with Docker, I can also deploy with Docker. This removes the need to manage the correct AWS CLI version on the host.
# Used to deploy the lambda artifact with the AWS CLI
ARG TAG
FROM my-package-name:${TAG} as source
FROM amazon/aws-cli:2.0.7
COPY --from=source /lambda.zip /aws/lambda.zip
My Makefile again runs the package step first, then runs the deploy step, taking a build argument of the tag. The only trick is giving my AWS docker image the correct credentials. I can do this with both environment variables and a read-only mount of the ~/.aws directory. This is what the command looks like in bash.
function aws_docker() {
  docker run --rm -v "$HOME/.aws:/root/.aws:ro" \
    -e AWS_ACCESS_KEY_ID \
    -e AWS_SECRET_ACCESS_KEY \
    -e AWS_DEFAULT_REGION=us-west-2 \
    "my-package-name:${TAG}" "$@"
}
The $HOME mount is how I get credentials locally, and the -e variables are how I pass credentials in CI/CD. I can then upload the lambda by passing the path to my lambda.zip, which was copied into the image's /aws working directory.
aws_docker lambda update-function-code \
  --function-name my-lambda-name \
  --zip-file fileb://lambda.zip
An alternative flow is to use the amazon/aws-cli image directly, with volume mounts to point to the lambda zip file. This is reasonable, but I was trying to use the docker executor for CircleCI, which does not support volume mounts.
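For completeness, that alternative would look roughly like this, mounting the zip into the stock image instead of baking it in.
# Same upload, but with the stock aws-cli image and a volume-mounted zip
docker run --rm \
  -v "$HOME/.aws:/root/.aws:ro" \
  -v "$PWD/lambda.zip:/aws/lambda.zip:ro" \
  amazon/aws-cli:2.0.7 lambda update-function-code \
  --function-name my-lambda-name \
  --zip-file fileb://lambda.zip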
Overall
With this setup I have a repeatable, isolated environment to develop, test, and deploy lambdas that also works great for CircleCI and fits with our current Terraform/deployment pipelines. It is also resilient to any on-host changes or misconfiguration. So far, it's worked pretty well. Let me know in the comments if you have any suggestions for this workflow.