Harbor: One-Command Containerized LLM Stack for Local AI Development

Harbor is a containerized LLM toolkit that spins up a complete, pre-wired AI stack (Ollama, Open WebUI, ComfyUI, and more) with a single command.

The explosion of local AI tools has created a new problem: setting up a complete local AI development environment means installing and configuring multiple independent services, each with its own dependencies, configuration, and networking requirements. Harbor solves this with a single docker compose up command that spins up an entire pre-wired AI stack on your local machine.

Developed as an open-source project, Harbor packages the most popular local AI tools into a cohesive, containerized stack. With one command, you get Ollama serving local LLMs, Open WebUI providing a ChatGPT-compatible chat interface, ComfyUI for image generation workflows, and optional components like ChromaDB for vector storage, PostgreSQL for persistence, and various monitoring and management tools.
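A typical first run looks like the following sketch. The repository URL here is illustrative, and the exact defaults (ports, enabled services) may differ from the actual project:

```shell
# Hypothetical quick start -- the repository URL is an illustrative placeholder.
git clone https://github.com/example/harbor.git
cd harbor

# Review the defaults (ports, enabled services, model directories) before first boot.
cp .env.example .env

# Pull images and start the pre-wired stack in the background.
docker compose up -d

# Confirm the services came up; the chat interface is typically on a localhost port.
docker compose ps
```

From there, the individual services are reached through their published ports in a browser, with no further wiring required.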

Harbor has become particularly popular among AI developers and enthusiasts who want to experiment with local AI models without spending hours on setup. It also serves as a reference architecture for teams deploying AI infrastructure, demonstrating how to wire together the various components of a modern AI stack. The project’s Docker Compose configuration is designed to be both comprehensive and understandable, making it easy to learn from and adapt.


How Does Harbor’s Architecture Connect the Components?

Harbor’s architecture is designed around the principle of “pre-wired connectivity” – every component is configured to work with every other component out of the box.

```mermaid
graph TD
    A[Docker Compose] --> B[Harbor Network]
    B --> C[Ollama]
    B --> D[Open WebUI]
    B --> E[ComfyUI]
    B --> F[ChromaDB]
    B --> G[PostgreSQL]
    H[User Browser] --> D
    H --> E
    C --> I[Local LLMs]
    D --> C
    E --> F
    D --> G
    D --> F
```

The critical wiring that Harbor handles automatically includes:

- Open WebUI connecting to Ollama for model inference
- ComfyUI reaching Ollama for text encoding
- Both services sharing the same GPU resources
- ChromaDB providing vector storage for RAG workflows
- PostgreSQL persisting chat history and user data
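The shape of that pre-wiring can be sketched in Compose terms. This is an illustrative fragment, not Harbor's actual file; the service names, ports, and environment variables are assumptions:

```yaml
# Illustrative sketch of the pre-wired pattern (not Harbor's actual compose file).
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama   # model weights persist across restarts
    networks: [harbor]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Containers resolve each other by service name on the shared network,
      # so the UI finds the model server without any manual configuration.
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"                 # chat interface published on the host
    depends_on: [ollama]
    networks: [harbor]

networks:
  harbor: {}

volumes:
  ollama-data: {}
```

The key design choice is the shared user-defined network: Compose's built-in DNS lets each service address the others by name, which is what makes "pre-wired connectivity" possible without hardcoded IPs.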


What Components Are Included in the Harbor Stack?

Harbor offers a modular set of components that can be enabled or disabled based on your needs.

| Component | Purpose | Default | Docker Image |
|-----------|---------|---------|--------------|
| Ollama | Local LLM serving | Enabled | `ollama/ollama` |
| Open WebUI | Chat interface | Enabled | `ghcr.io/open-webui/open-webui` |
| ComfyUI | Image generation | Disabled | `comfyui/comfyui` |
| ChromaDB | Vector database | Disabled | `chromadb/chroma` |
| PostgreSQL | Relational database | Disabled | `postgres:16` |
| pgAdmin | Database management | Disabled | `dpage/pgadmin4` |
| Watchtower | Auto-update containers | Disabled | `containrrr/watchtower` |

Each component is defined as a Docker Compose service with pre-configured environment variables, volume mounts, and network settings. Enabling additional components typically requires uncommenting a few lines in the docker-compose.yml file.
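An optional service shipped in this style might look like the fragment below. The layout is an assumption for illustration; the point is that enabling a component is a matter of removing comment markers, not writing new configuration:

```yaml
# Illustrative fragment: an optional service shipped disabled.
# Removing the leading '#' characters enables ChromaDB on the shared network.
services:
  ollama:
    image: ollama/ollama
    networks: [harbor]

  # chromadb:
  #   image: chromadb/chroma
  #   ports:
  #     - "8000:8000"
  #   networks: [harbor]
```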


How Do You Configure Harbor for Different Use Cases?

Harbor’s flexibility comes from its environment variable configuration and Docker Compose profiles.

| Use Case | Enabled Components | Configuration Notes |
|----------|--------------------|---------------------|
| Basic LLM chat | Ollama + Open WebUI | Set `OLLAMA_MODELS` for default models |
| RAG prototype | + ChromaDB | Configure embedding model in Open WebUI |
| Image generation | + ComfyUI | Requires GPU, set `COMFYUI_MODELS` |
| Full development | All components | Requires 16 GB+ RAM, GPU recommended |
| Lightweight | Open WebUI only | Use Ollama on a separate machine |
| CI testing | Ollama only | Minimal resource footprint |
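With Compose profiles, each use case maps to a flag at startup. A sketch, assuming profile names match the component names (the actual profile names may differ):

```shell
# Baseline chat stack: only the always-on services (Ollama + Open WebUI).
docker compose up -d

# RAG prototype: additionally enable the vector database.
docker compose --profile chromadb up -d

# Full development stack: enable several profiles at once.
docker compose --profile comfyui --profile chromadb --profile postgres up -d
```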

Configuration is handled through a .env file in the project root. Harbor includes a comprehensive .env.example file with detailed comments explaining each setting.
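A `.env` file in this style might look like the following. The variable names here are illustrative assumptions, not necessarily Harbor's documented keys:

```shell
# Illustrative .env fragment (key names are assumptions).

# Host port for the Open WebUI chat interface.
WEBUI_PORT=3000

# Models Ollama should pull on first start, comma-separated.
OLLAMA_MODELS=llama3.1:8b,nomic-embed-text

# Host directory where model weights are stored.
MODELS_DIR=./data/models
```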


What Hardware Requirements Does Harbor Have?

The hardware requirements vary significantly depending on which components you enable and which models you run.

| Configuration | Minimum RAM | Recommended RAM | GPU | Storage |
|---------------|-------------|-----------------|-----|---------|
| Harbor + Ollama (chat) | 8 GB | 16 GB | Optional | 10-50 GB |
| Harbor + Ollama (RAG) | 16 GB | 32 GB | Optional | 20-100 GB |
| Harbor + Ollama + ComfyUI | 16 GB | 32 GB | 8 GB+ VRAM | 50-200 GB |
| Harbor full stack | 32 GB | 64 GB | 12 GB+ VRAM | 100-500 GB |

Ollama can run on CPU-only systems for small models (3B-13B parameters), but larger models (30B+) and ComfyUI image generation require a capable GPU. Harbor automatically exposes GPU devices to containers when available.
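GPU exposure of this kind is usually done with a Compose device reservation. The fragment below shows the standard pattern for NVIDIA GPUs (it assumes the NVIDIA Container Toolkit is installed on the host; Harbor's actual configuration may differ):

```yaml
# Standard Compose pattern for handing an NVIDIA GPU to a service.
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # or a specific number of GPUs
              capabilities: [gpu]
```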


FAQ

What is Harbor? Harbor is an open-source containerized LLM toolkit that spins up a complete pre-wired AI stack – including Ollama, Open WebUI, ComfyUI, and more – with a single Docker Compose command for local AI development.

What components are included in the Harbor stack? Harbor includes Ollama for local model serving, Open WebUI for a ChatGPT-like interface, ComfyUI for Stable Diffusion workflows, and optional components like ChromaDB for vector storage, PostgreSQL for persistence, and monitoring tools.

How do I get started with Harbor? You need Docker and Docker Compose installed. Clone the repository, optionally edit the configuration, and run docker compose up. Harbor will download and start all components automatically.

Can I customize which components Harbor deploys? Yes, Harbor uses composable Docker Compose profiles and environment variables. You can enable or disable individual components, configure ports, set resource limits, and use different model backends through configuration.

Is Harbor suitable for production deployment? Harbor is primarily designed for local development and prototyping. For production use, it provides a solid foundation that can be adapted with additional security, scaling, and monitoring infrastructure.

