
Photo by author
# Introduction
Building your own local AI hub gives you the freedom to automate tasks, process private data, and create custom assistants, all without relying on the cloud or dealing with monthly fees. In this article, I’ll walk you through how to build a self-hosted AI workflow hub on a home server, giving you full control, greater privacy and powerful automation.
We will combine Docker for packaging software, Ollama for running local machine learning models, n8n for building visual automations, and Portainer for easy management. This setup is best suited to a moderately powerful x86-64 system, such as a mini PC or an older desktop with at least 8 GB of RAM, which can comfortably handle several services at once.
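If you are not sure whether your hardware qualifies, a quick terminal check covers all three requirements (free, nproc, and uname ship with virtually every Linux distribution):
free -h    # total RAM; aim for at least 8 GB
nproc      # number of CPU cores
uname -m   # architecture; x86_64 means x86-64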
# Why create a local AI hub?
When you self-host your tools, you move from being a consumer of services to an owner of the infrastructure, and that’s powerful. A local hub is private (your data never leaves your network), cost-effective (there are no application programming interface (API) fees) and fully customizable.
At the core of this hub is a powerful trio of tools:
- Ollama acts as your private, on-device AI brain, running models for text generation and analysis.
- n8n acts as the nervous system, connecting Ollama to other apps (such as calendar, email, or files) to create automated workflows.
- Docker is the foundation, packaging each tool into an isolated, easily managed container.
// Basic Components of Your Self-Hosted AI Hub
| Tool | Main role | Key benefit for your hub |
|---|---|---|
| Docker / Portainer | Containerization and management | Isolates apps, simplifies deployment, and provides a visual management dashboard. |
| Ollama | Local large language model (LLM) server | Runs AI models locally for privacy. Provides an API for other tools to use. |
| n8n | Workflow automation platform | Visually connects Ollama to other services (APIs, databases, files) to create powerful automations. |
| Nginx Proxy Manager | Secure access and routing | Provides a secure web gateway to your services with easy SSL certificate setup. |
# Preparing your server foundation
First, make sure your server is ready. We recommend a clean install of Ubuntu Server LTS or a similar Linux distribution. Once installed, connect to your server via secure shell (SSH). The first and most important step is to install Docker, which will run all of our later tools.
// Installing Docker and Docker Compose
Run the following commands in your terminal to install Docker and Docker Compose. Docker Compose is a tool that lets you define and manage multi-container applications with a simple YAML file.
sudo apt update && sudo apt upgrade -y
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
// Verify and configure permissions
Verify the installation and add your user to the docker group so you can run commands without sudo:
sudo docker version
sudo usermod -aG docker $USER
You will need to log out and then log back in for this to take effect.
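After logging back in, a quick sanity check confirms the group change took effect; both commands below are standard Docker CLI and should now run without sudo:
docker version                 # should print client and server versions without permission errors
docker run --rm hello-world    # pulls a tiny test image and prints a greeting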
// Managing containers with Portainer
Instead of relying only on the command line, we’ll deploy Portainer, a web-based graphical user interface (GUI) for managing Docker. Create a directory for it and a docker-compose.yml file with the following commands.
mkdir -p ~/portainer && cd ~/portainer
nano docker-compose.yml
Paste the following configuration into the file. It tells Docker to download the Portainer image, restart the container automatically, and expose its web interface on port 9000.
services:
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
ports:
- "9000:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer_data:/data
volumes:
portainer_data:
Save the file (Ctrl+X, then Y, then Enter). Now deploy Portainer:
docker compose up -d
Once the container is up, visit http://YOUR_SERVER_IP:9000 in your browser.
If the Portainer container isn’t running (for example, after a reboot), you can start it with the following command:
sudo docker start portainer
Create an admin account:

After creating an account, you will see the Portainer dashboard.
This is your mission control for all other containers. You can start, stop, view logs, and manage every other service from here.
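If you ever need the same controls from the terminal, the standard Docker CLI equivalents look like this:
docker ps                  # list running containers
docker logs portainer      # view a container's logs
docker restart portainer   # restart a container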
# Installing Ollama: Your Local AI Engine
Ollama is a tool designed to easily run open-source large language models (LLMs) such as Llama 3.2 or Mistral locally. It provides a simple API that n8n and other apps can use.
// Deploying Ollama with Docker
While Ollama can be installed directly, using Docker ensures consistency. Create a new directory and a docker-compose.yml file for it with the following commands.
mkdir -p ~/ollama && cd ~/ollama
nano docker-compose.yml
Use the following configuration. The volumes line is important because it permanently stores your downloaded machine learning models, so you don’t lose them if the container restarts.
services:
ollama:
image: ollama/ollama:latest
container_name: ollama
restart: unless-stopped
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
volumes:
ollama_data:
Deploy it: docker compose up -d
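Before pulling any models, you can confirm the API is reachable; Ollama answers a plain GET on its root with a short status message:
curl http://localhost:11434
# Expected output: Ollama is running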
// Pulling and running your first model
Once the container is running, you can pull a model. Let’s start with a capable but efficient model like Llama 3.2.
This command executes ollama pull llama3.2 inside the running container:
docker exec -it ollama ollama pull llama3.2
Task Demonstration: Querying Ollama
Now you can interact directly with your local AI. The following command sends a prompt to the model running inside the container.
docker exec -it ollama ollama run llama3.2 "Write a short haiku about technology."
You should see a generated poem in your terminal. More importantly, Ollama’s API is now available for n8n to use.
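To see what n8n will talk to, here is a minimal sketch of a direct call to Ollama’s standard /api/generate endpoint ("stream": false asks for a single complete JSON response instead of a stream):
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Write a short haiku about technology.",
  "stream": false
}'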
# Integrating n8n for intelligent automation
n8n is a visual workflow automation tool: you drag and drop nodes to build a sequence such as “When I save a document, summarize it with Ollama, then send the summary to my notes app.”
// Deploying n8n with Docker
Create a directory for n8n. We’ll use a compose file that includes a PostgreSQL database for n8n to store your workflows and execution data.
mkdir -p ~/n8n && cd ~/n8n
nano docker-compose.yml
Now paste the following inside the YAML file:
services:
n8n:
image: n8nio/n8n:latest
container_name: n8n
restart: unless-stopped
ports:
- "5678:5678"
environment:
- N8N_PROTOCOL=http
- WEBHOOK_URL=http://YOUR_SERVER_IP:5678/
- N8N_ENCRYPTION_KEY=your_secure_encryption_key_here
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=db
- DB_POSTGRESDB_PORT=5432
- DB_POSTGRESDB_DATABASE=n8n
- DB_POSTGRESDB_USER=n8n
- DB_POSTGRESDB_PASSWORD=your_secure_db_password
volumes:
- n8n_data:/home/node/.n8n
depends_on:
- db
db:
image: postgres:17-alpine
container_name: n8n_db
restart: unless-stopped
environment:
- POSTGRES_USER=n8n
- POSTGRES_PASSWORD=your_secure_db_password
- POSTGRES_DB=n8n
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
n8n_data:
postgres_data:
Replace YOUR_SERVER_IP and the placeholder passwords with your own values, then deploy with docker compose up -d. Access n8n at http://YOUR_SERVER_IP:5678.
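For the placeholder secrets, one convenient approach is to generate random values with openssl (present on virtually all Linux systems); a minimal sketch:
openssl rand -hex 32      # a strong value for N8N_ENCRYPTION_KEY
openssl rand -base64 24   # a strong value for the two matching database passwords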
Task Demonstration: Building Your First AI Workflow
Let’s create a simple workflow where n8n uses Ollama to act as a creative writing assistant.
- In the n8n editor, add a “Schedule Trigger” node and set it to run manually for testing.
- Add an “HTTP Request” node. Configure it to call your Ollama API:
- Method: POST
- URL: http://YOUR_SERVER_IP:11434/api/generate
- Set the body content type to JSON.
- Enter in the JSON body: {"model": "llama3.2", "prompt": "Generate three ideas for a sci-fi short story.", "stream": false} (setting "stream": false returns one complete JSON response instead of a stream).
- Add a “Set” node to extract only the text from Ollama’s JSON response, setting the value to {{ $json.response }}.
- Add a “Code” node with a simple line like return [{ json: { story_ideas: $input.item.json } }]; to format the data.
- Finally, add a “Send Email” node (configured with your email service) or a “Save to File” node to output the results.
Click “Execute Workflow”. n8n sends the prompt to your local Ollama container, receives the ideas, and processes them. You’ve just created a private, automated AI assistant. If anything fails, you can reproduce the API call outside n8n, as shown below.
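Reproducing the HTTP Request node’s call from the server helps isolate problems; this sketch assumes jq is installed (sudo apt install jq) and mimics the Set node by extracting only the response field:
curl -s http://YOUR_SERVER_IP:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Generate three ideas for a sci-fi short story.", "stream": false}' \
  | jq -r '.response'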
# Securing Your Hub with Nginx Proxy Manager
Now you have services on different ports (Portainer: 9000, n8n: 5678). Nginx Proxy Manager (NPM) lets you reach them through clean subdomains (like portainer.home.net) with free Secure Sockets Layer (SSL) encryption from Let’s Encrypt.
// Deploying Nginx Proxy Manager
Create a final directory for NPM.
mkdir -p ~/npm && cd ~/npm
nano docker-compose.yml
Paste the following code into your YAML file:
services:
app:
image: 'jc21/nginx-proxy-manager:latest'
container_name: nginx-proxy-manager
restart: unless-stopped
ports:
- '80:80'
- '443:443'
- '81:81'
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
Deploy with docker compose up -d. (The data and letsencrypt folders are bind-mounted from the current directory, so no named volumes are needed.)
Visit the admin panel at http://YOUR_SERVER_IP:81. Log in with the default credentials (admin@example.com / changeme) and change them immediately.

Task Demonstration: Securing n8n access
- In your home router, forward ports 80 and 443 to your server’s internal Internet Protocol (IP) address. These are the only ports you need to forward.
- In the NPM admin panel (http://YOUR_SERVER_IP:81), go to Hosts -> Proxy Hosts -> Add Proxy Host.

- For n8n, fill in the details:
- Domain: n8n.yourdomain.com (or a subdomain that points to your home IP)
- Scheme: http
- Forward hostname / IP: n8n (this works when NPM and n8n share a Docker network; otherwise use your server’s IP).
- Forward port: 5678
- Click the SSL tab, request a Let’s Encrypt certificate, and enable Force SSL.
Now you can securely access n8n at https://n8n.yourdomain.com. Repeat for Portainer (portainer.yourdomain.com forwarding to portainer on port 9000).
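You can verify the proxy and certificate from any machine with a quick header check (replace the domain with your own):
curl -I https://n8n.yourdomain.com
# Look for an HTTP/2 200 (or a redirect) and no certificate warnings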
# The result
You now have a fully functional, private AI automation hub. Your next steps may be:
- Model exploration: Experiment with different models, such as Mistral for speed or Code Llama for programming tasks.
- Advanced n8n workflows: Connect your hub to external APIs (Google Calendar, Telegram, RSS feeds) or internal services (such as a local file server).
- Monitoring: Add a tool like Uptime Kuma (also deployable via Docker; see the sketch below) to monitor the health of all your services.
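As a sketch, Uptime Kuma’s documented single-container deployment follows the same pattern as the services above (image name and port taken from the project’s README; adjust the host port if 3001 is already in use):
docker run -d --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:1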
This setup turns your modest hardware into a powerful, private digital brain. You control the software, own the data, and pay no ongoing fees. The skills you’ve learned in managing containers, orchestrating services, and automating with AI are the foundation for a modern, independent tech infrastructure.
// About the author
Shittu Olamide is a software engineer and technical writer with a knack for simplifying complex concepts and a keen eye for detail, passionate about leveraging modern technology to craft compelling narratives. You can also find Shittu on Twitter.