Although containers are lightweight and provide many benefits, deciding how to deploy them can be difficult. There are many ways to deploy and run Docker containers, but some are geared toward orchestrating and managing fleets of containers and may not suit the simple use case of running just one.
In this article, I'll show you how to deploy a Docker container using a serverless service on AWS.
Prerequisites/Requirements
The following tools and skills are required to follow along with this tutorial:
Knowledge of Docker, and having Docker installed locally.
An AWS account with credentials that can make API calls via the CLI. As a best practice, limit the privileges to exactly what needs to be done.
The AWS CLI installed locally.
A Python virtual environment manager such as uv (optional).
Serverless with AWS Lambda
Containers provide a lightweight, consistent, and resource-friendly way to run applications. Serverless removes the overhead of managing the underlying infrastructure the container runs on. So, as you can probably begin to see, combining these tools lets you deploy applications efficiently, focus on business logic, and give your product a competitive edge.
Lambda is an AWS service that enables you to go serverless. With Lambda, you're billed only for the number of requests the function handles, the memory you allocate to it, and the duration of each invocation.
In addition to removing operational overhead, Lambda can also save you money because you don't pay for idle resources: a function only runs when a request triggers it.
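To make the pricing model concrete, Lambda cost is roughly a per-request fee plus a per-GB-second fee for compute time. The rates below are assumptions for illustration only, not current AWS pricing (rates vary by region and change over time):

```python
# Rough Lambda cost estimate. The rates below are illustrative
# assumptions, not current AWS pricing -- always check the pricing page.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # assumed $ per request
PRICE_PER_GB_SECOND = 0.0000166667     # assumed $ per GB-second

def estimate_monthly_cost(requests, memory_mb, avg_duration_ms):
    """Estimate a month's Lambda cost, ignoring the free tier."""
    gb_seconds = requests * (memory_mb / 1024) * (avg_duration_ms / 1000)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Example: 1M requests/month, 128 MB memory, 100 ms average duration
cost = estimate_monthly_cost(1_000_000, 128, 100)
```

Notice how both memory and duration feed into the compute charge, which is why right-sizing the memory setting matters.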
How to build, run and test containers locally
Docker is a tool that helps you package software into portable, standardized, and shareable units that contain everything an application needs to run: libraries, runtimes, system tools, and application code. These units are called containers.
In this section, I walk you through building the Docker image, running the container, and testing it after it’s running.
You can find the project you'll use in the tutorial's GitHub repository.
Create a Docker image
To run a Docker container, you first need to create an image. The image is a template, like a class, from which you create containers, the instances of that class.
The application code is in lambda_function.py:
def lambda_handler(event, context):
    try:
        name = event["name"]
        message = f"Hello, {name}!"
        return {
            "statusCode": 200,
            "body": message
        }
    except Exception as e:
        return {
            "statusCode": 400,
            "body": {"error": str(e)}
        }
As you can see from the code above, this is a very basic Python application. It expects an HTTP POST request with a JSON payload containing the key name and a string value, and it returns a greeting with the name it received. The application has only one function, which also serves as its entry point.
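Before building the image, you can sanity-check the handler by calling it directly as a plain Python function. A minimal sketch (the handler is inlined here so the snippet is self-contained):

```python
# Inlined, self-contained version of the handler for local sanity-checking.
def lambda_handler(event, context):
    try:
        name = event["name"]
        return {"statusCode": 200, "body": f"Hello, {name}!"}
    except Exception as e:
        return {"statusCode": 400, "body": {"error": str(e)}}

# Invoke it the way Lambda would: a dict event and a (here unused) context.
result = lambda_handler({"name": "Janet"}, None)
print(result)  # {'statusCode': 200, 'body': 'Hello, Janet!'}
```

A missing name key takes the error path and returns a 400 response instead.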
To create a Docker image, you'll need a Dockerfile to serve as the blueprint for the image. For this particular case, the Dockerfile you will use is also very basic. Each line in a Dockerfile is called a directive, and it provides an instruction to Docker when creating an image. So building a Docker image means creating a template for a container by following the instructions or directives in the Dockerfile.
# Dockerfile
FROM public.ecr.aws/lambda/python:3.12
# Copy function code... LAMBDA_TASK_ROOT is /var/task, the working directory set in the base image
COPY lambda_function.py ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler - lambda_handler
CMD ["lambda_function.lambda_handler"]
A Dockerfile usually starts with a base image. To deploy an application as a Docker container on AWS Lambda, the base image must be of a specific type that matches the application runtime. Here you need the Python runtime, so the base image is public.ecr.aws/lambda/python:3.12. It's fine to use a different Python version.
The next directive copies the lambda_function.py file to a specific path in the base image. That path is referenced through an environment variable that's already defined in the base image and points to /var/task, the directory your code runs from.
The last directive is just a command to start the application when the container runs.
Now, you can run the build command from the root directory of the project:
docker build -t lambda-docker:1.0.0 .


Run the Docker container
Next, let’s create a running container from this image.
docker run -it --rm -p 8080:8080 lambda-docker:1.0.0
The above command creates a container and runs it in interactive mode so that you can see the logs generated by the application in the container. Port 8080 on the host is also mapped to port 8080 in the container (the port defined by the AWS base image). Once you stop the running process with CTRL + C, the container is automatically removed.

Test the running container
Now verify that the application running inside the container can receive and process requests. To do this, use the test.py file:
import requests

# Local endpoint exposed by the Lambda Runtime Interface Emulator
# included in the AWS base image.
url = "http://localhost:8080/2015-03-31/functions/function/invocations"
data = {
    "name": "Janet"
}
response = requests.post(url, json=data)
print("Status Code:", response.status_code)
print("Response Body:", response.json())
The script uses the Python requests library. Install it inside a virtual environment to isolate it from your system-wide packages; this helps prevent version conflicts between the libraries you install.
If you're using uv to manage your virtual environment, just run:
uv add requests
Then run test.py from within the virtual environment:
uv run python3 test.py

You should see a 200 status code and the greeting in the response body.

How to push your image to the Amazon Elastic Container Registry (ECR)
Now that you have a working Docker image to deploy to Lambda, the next step is to push the image to a Docker registry. For this use case, you'll push it to Amazon ECR, a container registry for storing Docker images.
To push your Docker image, you first need to tag it, which simply means naming the image in a specific way.
Currently, the image is tagged lambda-docker:1.0.0. To tag it the AWS way, first create an ECR repository. Let's use the AWS CLI for this (you'll need AWS credentials configured locally, which you can set up by running the aws configure command).
Environment variable setup
export AWS_PROFILE=
AWS_REGION=
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REPO_NAME=lambda-docker
TAG=1.0.0
The commands above export AWS_PROFILE so the CLI targets the correct AWS account for API calls. The other variables specify the region, account ID, ECR repository name, and image tag.
Create and validate ECR repository
Now, create the ECR repository:
aws ecr create-repository \
--repository-name "$REPO_NAME" \
--region "$AWS_REGION"
Authenticate Docker to Amazon ECR:
aws ecr get-login-password --region "$AWS_REGION" \
| docker login \
--username AWS \
--password-stdin "$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"
Tag and push the Docker image
Now, tag the Docker image:
docker tag $REPO_NAME:$TAG \
$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:$TAG
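The fully qualified ECR image name always follows the same pattern, so it can help to see it spelled out. A small, hypothetical helper (not part of the project) that assembles the URI from its parts:

```python
def ecr_image_uri(account_id, region, repo, tag):
    """Build a fully qualified ECR image URI (hypothetical helper)."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

# Example with a made-up account ID:
uri = ecr_image_uri("123456789012", "us-east-1", "lambda-docker", "1.0.0")
print(uri)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda-docker:1.0.0
```

This is exactly the name the docker tag command above constructs from the shell variables.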
Push the image to the ECR repository you created:
docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:$TAG
And that’s it! Your image is now in ECR.


How to deploy your Docker image to Lambda
With your image now in ECR, you can create a Lambda function. Go to the Lambda console and click Create function.
Create a lambda function

Select Container image and browse to the ECR repository you created.

Next, select an image:

Leave the other settings as default and click Create.

After it's created, go to the function's page.

Test deployment
Now, let’s test the deployment. For this, simply use the Lambda console's Test tab. Provide the required details, including the JSON payload for your request.


And that’s it! You have successfully deployed a Docker container on AWS by leveraging ECR and Lambda. You can go a step further by integrating API Gateway to make the function accessible from the internet.
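If you do put API Gateway in front of the function, note that with the proxy integration the JSON payload arrives as a string under event["body"] rather than as top-level keys. A sketch of how the handler might adapt (assuming the proxy integration event shape; not part of the tutorial's code):

```python
import json

def lambda_handler(event, context):
    try:
        # API Gateway proxy integration wraps the payload in a JSON string;
        # fall back to the raw event for direct invocations.
        payload = json.loads(event["body"]) if "body" in event else event
        name = payload["name"]
        return {"statusCode": 200, "body": f"Hello, {name}!"}
    except Exception as e:
        return {"statusCode": 400, "body": json.dumps({"error": str(e)})}
```

The fallback keeps the Test tab and local invocations working unchanged.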
Clean up
Remember to delete the ECR repository and the Lambda function you created to avoid additional charges.
Conclusion
Deploying your Docker container on AWS Lambda is an efficient way to quickly run your application without having to worry about managing servers or platforms.
Thanks for reading!