# Installation

> **Not Yet Public**
>
> OWL is currently in development and not open for public installation. This documentation is for presentation and demonstration purposes.

Get OWL running with a single command.
## One-Line Install (Recommended)

```bash
curl -sSL https://raw.githubusercontent.com/kavinps/owl/master/install.sh | bash
```
This script automatically:
- Installs Ollama if not present
- Pulls the required models (`llama3.2` and `nomic-embed-text`)
- Installs the uv package manager
- Installs OWL globally
After installation, run from any directory:

```bash
owl
```
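If the `owl` command is not found right away, your shell may just need a restart to pick up the new PATH entry. A quick sanity check, as a sketch:

```shell
# Sanity check (sketch): confirm the installer put `owl` on your PATH.
if owl_path=$(command -v owl 2>/dev/null); then
  echo "owl found at $owl_path"
else
  echo "owl not on PATH -- open a new shell or re-run the installer"
fi
```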
## Manual Installation

If you prefer manual setup:

### 1. Install Ollama

```bash
# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# macOS
brew install ollama
```
### 2. Pull Models

```bash
# Start Ollama
ollama serve

# Pull models (in another terminal)
ollama pull llama3.2          # Chat model
ollama pull nomic-embed-text  # Embeddings for knowledge base
```
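You can confirm both models landed locally with `ollama list`. A guarded sketch (it only checks when `ollama` is on PATH and the daemon is running):

```shell
# Confirm the required models are available locally (sketch).
if command -v ollama >/dev/null 2>&1; then
  ollama list | grep -E 'llama3\.2|nomic-embed-text' \
    || echo "models not found -- re-run the pulls above"
else
  echo "ollama not on PATH"
fi
```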
### 3. Install OWL

Using uv (recommended):

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install OWL globally
uv tool install git+https://github.com/kavinps/owl.git
```

Or using pip:

```bash
pip install git+https://github.com/kavinps/owl.git
```
## Verify Installation

```bash
owl
```

You should see:

```
 (o,o)
 /)_)   OWL - A wise AI with a growing soul
  " "

Project: your-project-name
Type /help for commands, or just chat!
```
## Commands

```bash
owl         # Start interactive session (auto-starts daemon)
owl stop    # Stop daemon
owl status  # Check daemon status
```
## Data Directory

OWL stores data in `~/.owl/`:

```
~/.owl/
├── config.yaml     # Configuration
├── soul.yaml       # OWL's evolving character
├── memory.md       # Human-readable memory
├── memory/
│   └── owl.db      # SQLite database
├── knowledge/
│   └── chroma/     # Vector database
└── owl.sock        # Unix socket (when running)
```
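Because everything OWL remembers lives under `~/.owl/`, backing it up is a single archive command. A sketch, assuming the default location:

```shell
# Back up the OWL data directory (sketch; skips cleanly if it doesn't exist yet).
if [ -d "$HOME/.owl" ]; then
  tar -czf "owl-backup-$(date +%Y%m%d).tar.gz" -C "$HOME" .owl
  echo "backup written"
else
  echo "nothing to back up yet -- ~/.owl is created on first run"
fi
```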
## Configuration

Create `~/.owl/config.yaml` to customize:

```yaml
llm:
  provider: ollama
  host: http://localhost:11434
  model: llama3.2   # Or any Ollama model
  timeout: 180
  temperature: 0.7

daemon:
  log_level: INFO
```
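To bootstrap that file from the shell, a heredoc works. This is a sketch; `OWL_HOME` here is just a local shell variable for the snippet (not an option OWL itself reads), and the values mirror the defaults above:

```shell
# Write the config shown above (sketch; OWL_HOME is a snippet-local variable).
OWL_HOME="${OWL_HOME:-$HOME/.owl}"
mkdir -p "$OWL_HOME"
cat > "$OWL_HOME/config.yaml" <<'EOF'
llm:
  provider: ollama
  host: http://localhost:11434
  model: llama3.2
  timeout: 180
  temperature: 0.7

daemon:
  log_level: INFO
EOF
echo "wrote $OWL_HOME/config.yaml"
```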
## Next Steps

- Quick Start - Start using OWL
- Configuration - Advanced settings