The explosion of local AI tools has created a new problem: setting up a complete local AI development environment means installing and configuring multiple independent services, each with its own dependencies, configuration, and networking requirements. Harbor solves this with a single docker compose up command that spins up an entire pre-wired AI stack on your local machine.
Developed as an open-source project, Harbor packages the most popular local AI tools into a cohesive, containerized stack. With one command, you get Ollama serving local LLMs, Open WebUI providing a ChatGPT-style chat interface, ComfyUI for image generation workflows, and optional components like ChromaDB for vector storage, PostgreSQL for persistence, and various monitoring and management tools.
Harbor has become particularly popular among AI developers and enthusiasts who want to experiment with local AI models without spending hours on setup. It also serves as a reference architecture for teams deploying AI infrastructure, demonstrating how to wire together the various components of a modern AI stack. The project’s Docker Compose configuration is designed to be both comprehensive and understandable, making it easy to learn from and adapt.
How Does Harbor’s Architecture Connect the Components?
Harbor’s architecture is designed around the principle of “pre-wired connectivity” – every component is configured to work with every other component out of the box.
```mermaid
graph TD
    A[Docker Compose] --> B[Harbor Network]
    B --> C[Ollama]
    B --> D[Open WebUI]
    B --> E[ComfyUI]
    B --> F[ChromaDB]
    B --> G[PostgreSQL]
    H[User Browser] --> D
    H --> E
    C --> I[Local LLMs]
    D --> C
    E --> C
    D --> G
    D --> F
```
The critical wiring that Harbor handles automatically includes: Open WebUI connecting to Ollama for model inference, ComfyUI reaching Ollama for text encoding, both services sharing the same GPU resources, ChromaDB providing vector storage for RAG workflows, and PostgreSQL persisting chat history and user data.
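The pattern behind this wiring is plain Docker Compose: services join a shared user-defined network and reach each other by service name. The sketch below is illustrative rather than copied from Harbor's actual compose file (the service, network, and volume names are assumptions), but OLLAMA_BASE_URL is the real environment variable Open WebUI uses to locate an Ollama backend:

```yaml
services:
  ollama:
    image: ollama/ollama
    networks: [harbor]
    volumes:
      - ollama-data:/root/.ollama          # model weights survive container restarts

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    networks: [harbor]
    environment:
      # Compose DNS resolves "ollama" to the Ollama container on the shared network
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"                        # chat UI reachable at http://localhost:3000
    depends_on:
      - ollama

networks:
  harbor:

volumes:
  ollama-data:
```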
What Components Are Included in the Harbor Stack?
Harbor offers a modular set of components that can be enabled or disabled based on your needs.
| Component | Purpose | Default | Docker Image |
|---|---|---|---|
| Ollama | Local LLM serving | Enabled | ollama/ollama |
| Open WebUI | Chat interface | Enabled | ghcr.io/open-webui/open-webui |
| ComfyUI | Image generation | Disabled | comfyui/comfyui |
| ChromaDB | Vector database | Disabled | chromadb/chroma |
| PostgreSQL | Relational database | Disabled | postgres:16 |
| pgAdmin | Database management | Disabled | dpage/pgadmin4 |
| Watchtower | Auto-update containers | Disabled | containrrr/watchtower |
Each component is defined as a Docker Compose service with pre-configured environment variables, volume mounts, and network settings. Enabling additional components typically requires uncommenting a few lines in the docker-compose.yml file.
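Compose profiles are a cleaner alternative to commenting lines in and out: a service tagged with a profile stays dormant until that profile is explicitly requested. A minimal sketch (the profile name rag is an assumption, not taken from Harbor's configuration):

```yaml
services:
  chromadb:
    image: chromadb/chroma
    networks: [harbor]
    profiles: ["rag"]    # skipped by a plain "docker compose up"
```

With this in place, docker compose --profile rag up -d starts ChromaDB alongside the default services, while a plain docker compose up leaves it out.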
How Do You Configure Harbor for Different Use Cases?
Harbor’s flexibility comes from its environment variable configuration and Docker Compose profiles.
| Use Case | Enabled Components | Configuration Notes |
|---|---|---|
| Basic LLM chat | Ollama + Open WebUI | Set OLLAMA_MODELS for default models |
| RAG prototype | + ChromaDB | Configure embedding model in Open WebUI |
| Image generation | + ComfyUI | Requires GPU, set COMFYUI_MODELS |
| Full development | All components | Requires 16GB+ RAM, GPU recommended |
| Lightweight | Open WebUI only | Use Ollama on separate machine |
| CI testing | Ollama only | Minimal resource footprint |
Configuration is handled through a .env file in the project root. Harbor includes a comprehensive .env.example file with detailed comments explaining each setting.
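Docker Compose reads the .env file automatically and substitutes its values anywhere the compose file uses ${VAR} syntax. As a sketch of how a setting flows through (OLLAMA_PORT is an illustrative name, not necessarily one Harbor defines):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      # ${VAR:-default} falls back to 11434 when the .env file leaves it unset
      - "${OLLAMA_PORT:-11434}:11434"
```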
What Hardware Requirements Does Harbor Have?
The hardware requirements vary significantly depending on which components you enable and which models you run.
| Configuration | Minimum RAM | Recommended RAM | GPU | Storage |
|---|---|---|---|---|
| Harbor + Ollama (chat) | 8 GB | 16 GB | Optional | 10-50 GB |
| Harbor + Ollama (RAG) | 16 GB | 32 GB | Optional | 20-100 GB |
| Harbor + Ollama + ComfyUI | 16 GB | 32 GB | 8GB+ VRAM | 50-200 GB |
| Harbor full stack | 32 GB | 64 GB | 12GB+ VRAM | 100-500 GB |
Ollama can run on CPU-only systems for small models (3B-13B parameters), but larger models (30B+) and ComfyUI image generation require a capable GPU. Harbor automatically exposes GPU devices to containers when available.
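In Docker Compose, GPU passthrough is expressed as a device reservation and requires the NVIDIA Container Toolkit on the host. Harbor's own file may differ in detail, but the standard form looks like this:

```yaml
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # hand every detected GPU to the container
              capabilities: [gpu]
```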
FAQ
What is Harbor? Harbor is an open-source containerized LLM toolkit that spins up a complete pre-wired AI stack – including Ollama, Open WebUI, ComfyUI, and more – with a single Docker Compose command for local AI development.
What components are included in the Harbor stack? Harbor includes Ollama for local model serving, Open WebUI for a ChatGPT-like interface, ComfyUI for Stable Diffusion workflows, and optional components like ChromaDB for vector storage, PostgreSQL for persistence, and monitoring tools.
How do I get started with Harbor? You need Docker and Docker Compose installed. Clone the repository, optionally edit the configuration, and run docker compose up. Harbor will download and start all components automatically.
Can I customize which components Harbor deploys? Yes, Harbor uses composable Docker Compose profiles and environment variables. You can enable or disable individual components, configure ports, set resource limits, and use different model backends through configuration.
Is Harbor suitable for production deployment? Harbor is primarily designed for local development and prototyping. For production use, it provides a solid foundation that can be adapted with additional security, scaling, and monitoring infrastructure.
Further Reading
- Harbor GitHub Repository – Source code, configuration, and setup guide
- Ollama Official Site – Local LLM runtime for model serving
- Open WebUI Documentation – ChatGPT-like interface for local LLMs
- ComfyUI GitHub Repository – Node-based Stable Diffusion workflow tool
- Docker Compose Documentation – The orchestration tool Harbor is built on