
In the deserts of centralized AI, where corporate clouds hoard the computational spice, your home lab is a Fremen stronghold – an oasis of creation.
This Dune-inspired series has guided you through building an AI art homelab. Part 1 laid the hardware foundation, and Part 2 launched Stable Diffusion with Podman. Now, in this final chapter, we scale the vision with podman-compose, a tool that simplifies and expands your AI empire. The spice must flow – let's make it happen.
In Part 2, a Containerfile launched an instance of Stable Diffusion, a cornerstone of AI art. To scale efficiently, I refactored it into a multi-stage build: a reusable base image with CUDA (NVIDIA's GPU framework), PyTorch (a machine learning library), and Rust (pinned to version 1.18 for stability). This base carries the heavy AI dependencies, avoiding duplicated libraries across services like music generation or chatbots.
Still, a single container is like a lone Fremen in the desert – capable but limited. Managing multiple containers, networks, and parameters becomes a slog. Enter podman-compose, the orchestration spice, offering your home lab ease, readability, and extensibility.
In the Dune universe, the spice brings clarity and connection. For AI home labs, podman-compose does the same, surpassing a lone Containerfile for local workflows.
- Ease: A single `podman-compose up` can spin up your entire service stack – no need for ancient scrolls of Bash history. Like a stillsuit, it makes deployment a safe, smooth effort.
- Readability: One file clearly declares services, ports, and volumes. Teams can grasp a setup at a glance, and for me, it documents itself. This is the navigator's hologram.
- Scalability: A single compose file supports multiple services. Podman's pod architecture and Docker Compose compatibility make it a bridge to cloud platforms like OpenShift or AWS. This is the power of your desert.
Here is a simple podman-compose file built on the repo's latest project structure for the Stable Diffusion web UI (note the `dockerfile` line pointing to the new Containerfile location, and the bind mounts):
```yaml
version: '3.8'
services:
  webui:
    build:
      context: .
      dockerfile: podman/Containerfile
      target: webui
    pull_policy: never
    image: webui:latest
    ports:
      - "7860:7860"
    volumes:
      - ./mounts/models:/app/models:Z
      - ./outputs:/app/outputs:Z
    environment:
      - PYTHONUNBUFFERED=1
      - COMMANDLINE_ARGS=--opt-sdp-attention --xformers --api
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
Run `podman-compose up` and your AI art studio is live! To add more services, like Ollama (with Open WebUI), simply append a new named service section to the file:
```yaml
  ollama:
    image: ghcr.io/open-webui/open-webui:ollama
    ports:
      - "8080:8080"
    volumes:
      - ./mounts/ollama:/root/.ollama:Z
      - ./mounts/ollama-data:/app/backend/data:Z
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
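To make the indentation explicit, here is a minimal sketch of how that fragment nests alongside `webui` in the full file (the per-service settings are abbreviated; see the blocks above for the real values):

```yaml
version: '3.8'
services:
  webui:
    image: webui:latest
    # ... build, ports, volumes, and GPU settings as shown earlier ...
  ollama:
    image: ghcr.io/open-webui/open-webui:ollama
    # ... ports, volumes, and GPU settings as shown above ...
```

Both services sit at the same depth under `services:`; a common mistake is pasting the new service at the top level, which podman-compose will reject.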
You can see my full podman-compose.yml, which adds ComfyUI and ACE-Step, in the repo.
Make sure your system meets the requirements from Part 2 (Podman and CUDA). Clone the repo:
```shell
git clone https://github.com/thecodenomad/ai-homelab
cd ai-homelab
podman-compose up
```
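Before the first `up`, I like to pre-create the host directories used as bind mounts so they are owned by my user rather than created on the fly by the container runtime. This is a habit of mine, not a repo requirement – a quick sketch (directory names match the volumes in the compose examples above):

```shell
#!/bin/sh
# Pre-create the bind-mount directories referenced by the compose file,
# so the containers find user-owned paths instead of creating them as root.
mkdir -p mounts/models/Stable-diffusion \
         mounts/ollama \
         mounts/ollama-data \
         outputs
# List what was created as a sanity check.
ls -d mounts outputs
```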
If you don't already have a model, you'll need one before you can start generating images. AI art generation requires significant storage: the `webui` service builds a ~9GB base image, which, combined with models such as Stable Diffusion (~4GB) and ACE-Step (~2GB, if used), means you'll need roughly 20GB in total. Make sure you have enough disk space!
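To sanity-check that estimate before pulling anything, here is a small sketch; the 20GB threshold comes from the rough sizes quoted above, so adjust it to match your model set:

```shell
#!/bin/sh
# Warn if the current filesystem has less free space than the rough
# total needed for the base image plus models (~20GB).
REQUIRED_GB=20
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
avail_gb=$((avail_kb / 1024 / 1024))
if [ "$avail_gb" -lt "$REQUIRED_GB" ]; then
    echo "WARNING: only ${avail_gb}GB free, ~${REQUIRED_GB}GB recommended"
else
    echo "OK: ${avail_gb}GB free"
fi
```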
To fit the updated folder structure in the repo, we'll tweak the curl command from Part 2:
```shell
curl -Lo mounts/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors
```
Once that completes, you can run `podman-compose up webui` to bring up the Stable Diffusion service. If the download fails, retry the curl command after freeing up space (check with `df -h`).
Note: If you choose to run ACE-Step, it will automatically download the music generation model during its first generation if it isn't found.
You can find the full code for this project at https://github.com/thecodenomad/ai-homelab.
Running multiple services (e.g., WebUI and ACE-Step) on a single GPU can cause out-of-memory errors, especially with less than 8GB of VRAM. To avoid this, run one service at a time (e.g., `podman-compose up webui`). If errors persist, restart the service with `podman-compose restart`. The nvidia-container-toolkit installed in Part 1 helps GPU passthrough avoid these problems.
Note: I've been able to generate music with ACE-Step and images with WebUI on my RTX 3060 12GB, but your mileage may vary.
Additional troubleshooting tips
- Model download fails: Retry the curl command after freeing up space (check with `df -h`).
- GPU errors: Verify the nvidia-container-toolkit (`nvidia-ctk --version`) and driver (`nvidia-smi`) per Part 1.
- Service fails: Check the logs with `podman-compose logs`.
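The tips above can be bundled into a quick triage script – a sketch assuming only standard tools, where any "missing" line points you at the matching tip:

```shell
#!/bin/sh
# Quick triage for the failure modes listed above: missing GPU tooling,
# an absent compose binary, or low disk space.
for tool in nvidia-smi nvidia-ctk podman-compose curl; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "missing: $tool"
    fi
done
# Free space on the current filesystem (see the df -h tip above).
df -h . | awk 'NR==2 {print "free space: " $4}'
```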
podman-compose transforms your homelab into a strong AI stronghold, uniting services with Fremen precision. With this tool running, your creations – art, music, or chatbots – flow freely, free of corporate clouds. Try podman-compose in your home lab, share your setup on X with #AIHomelab, and fuel your decentralized AI journey.