# Installation

## Prerequisites

### 1. Python 3.10+

```bash
python3 --version
# Python 3.10.0 or higher required
```
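If you prefer to check from Python itself, `sys.version_info` gives the same answer. This is a minimal sketch; the helper name is illustrative and not part of OWL Watch:

```python
import sys

def meets_minimum(version=(3, 10)):
    """Return True if the running interpreter satisfies the minimum version."""
    return sys.version_info[:2] >= version

if __name__ == "__main__":
    print("OK" if meets_minimum() else "Python 3.10+ is required")
```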
### 2. Ollama

OWL Watch uses Ollama for local AI inference.

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama service
ollama serve

# Pull a model (in a new terminal)
ollama pull llama3.2
```

Verify that Ollama is running:

```bash
curl http://localhost:11434/api/tags
```
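The same health check can be scripted. This sketch hits the `/api/tags` endpoint shown above; the function name is illustrative, not part of OWL Watch:

```python
import json
import urllib.error
import urllib.request

def ollama_is_running(host="http://localhost:11434"):
    """Return True if the Ollama API answers on /api/tags with valid JSON."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=2) as resp:
            json.load(resp)  # a parseable JSON body means the API is healthy
            return True
    except (OSError, json.JSONDecodeError):
        # OSError covers connection refused, DNS failure, and timeouts
        return False

if __name__ == "__main__":
    print("Ollama is running" if ollama_is_running() else "Ollama is not reachable")
```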
## Install OWL Watch

### Option 1: pip (Recommended)

```bash
pip install owl-watch
```

### Option 2: From Source

```bash
git clone https://github.com/anthropics/owl-watch.git
cd owl-watch
pip install -e .
```

### Option 3: pipx (Isolated Environment)

```bash
pipx install owl-watch
```

## Verify Installation

```bash
owl-watch --help
```
You should see:

```text
usage: owl-watch [-h] [--project PROJECT] [--port PORT] [--model MODEL]
                 [--no-server] [-v]
                 {study} ...
```
## Configuration

OWL Watch stores its configuration in `~/.owl-watch/config.json`:

```json
{
  "ollama": {
    "host": "http://localhost:11434",
    "model": "llama3.2"
  },
  "server": {
    "port": 8080,
    "buffer_timeout": 2.0
  },
  "alerting": {
    "webhook": null,
    "severities": ["critical", "high"],
    "cooldown": 300
  }
}
```
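A loader that merges a user's `config.json` over these defaults might look like the following sketch. The function is illustrative; OWL Watch's actual loader may differ:

```python
import json
from pathlib import Path

DEFAULTS = {
    "ollama": {"host": "http://localhost:11434", "model": "llama3.2"},
    "server": {"port": 8080, "buffer_timeout": 2.0},
    "alerting": {"webhook": None, "severities": ["critical", "high"], "cooldown": 300},
}

def load_config(path=Path.home() / ".owl-watch" / "config.json"):
    """Merge the user's config.json over the defaults, one section at a time."""
    config = {section: dict(options) for section, options in DEFAULTS.items()}
    path = Path(path)
    if path.exists():
        user = json.loads(path.read_text())
        for section, options in user.items():
            config.setdefault(section, {}).update(options)
    return config
```

Merging per section means a user who only sets `server.port` still inherits the default `buffer_timeout`.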
### Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `ollama.host` | string | `http://localhost:11434` | Ollama server URL |
| `ollama.model` | string | `llama3.2` | LLM model to use |
| `server.port` | number | `8080` | Dashboard port |
| `server.buffer_timeout` | number | `2.0` | Seconds to wait for complete stack traces |
| `alerting.webhook` | string | `null` | Webhook URL for notifications |
| `alerting.severities` | array | `["critical", "high"]` | Severity levels to alert on |
| `alerting.cooldown` | number | `300` | Seconds between alerts for the same error |
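The `cooldown` option behaves like a per-error rate limiter: once an alert fires for a given error, repeats are suppressed until the window elapses. The class below is an illustrative sketch of that behavior, not OWL Watch's implementation:

```python
import time

class AlertCooldown:
    """Suppress repeat alerts for the same error within `cooldown` seconds."""

    def __init__(self, cooldown=300):
        self.cooldown = cooldown
        self.last_sent = {}  # error key -> timestamp of the last alert

    def should_alert(self, error_key, now=None):
        """Return True (and record the time) only if the window has elapsed."""
        now = time.time() if now is None else now
        last = self.last_sent.get(error_key)
        if last is not None and now - last < self.cooldown:
            return False  # still inside the cooldown window
        self.last_sent[error_key] = now
        return True
```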
### Webhook Configuration

To send alerts to Slack, Discord, or other services, set the `webhook` URL:

```json
{
  "alerting": {
    "webhook": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
    "severities": ["critical", "high"],
    "cooldown": 300
  }
}
```
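Slack's incoming webhooks accept a JSON body with a `text` field; Discord and other services expect different shapes, and OWL Watch's actual payload format is not documented here, so treat this as a sketch under those assumptions:

```python
import json
import urllib.request

def build_payload(severity, message):
    """Format an alert as a Slack-style webhook body (the shape is an assumption)."""
    return {"text": f"[{severity.upper()}] {message}"}

def send_alert(webhook_url, severity, message):
    """POST the alert to the configured webhook URL and return the HTTP status."""
    body = json.dumps(build_payload(severity, message)).encode()
    req = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```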
## Data Storage

OWL Watch stores its data in `~/.owl-watch/`:

| File | Purpose |
|---|---|
| `config.json` | User configuration |
| `investigations.json` | Investigation history |
| `profiles/<name>.json` | Project profiles |
| `debug.log` | Debug logs for troubleshooting |
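These file formats are internal to OWL Watch. If you want to peek at past investigations, a defensive read like this sketch avoids crashing on a missing or malformed file (it assumes `investigations.json` holds a JSON array, which is not confirmed by the docs above):

```python
import json
from pathlib import Path

def read_investigations(base=Path.home() / ".owl-watch"):
    """Return the stored investigation history, or [] if the file is absent or invalid."""
    path = Path(base) / "investigations.json"
    try:
        data = json.loads(path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return []
    return data if isinstance(data, list) else []
```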
## Troubleshooting

### Ollama not running

```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve
```

### Model not found

```bash
# List available models
ollama list

# Pull the required model
ollama pull llama3.2
```
### Port already in use

Use a different port:

```bash
owl-watch app.log --port 9000
```

Or set it in `~/.owl-watch/config.json`:

```json
{
  "server": {
    "port": 9000
  }
}
```
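To check whether a port is actually free before editing the config, you can probe it with a plain socket. This helper is illustrative and not part of the OWL Watch CLI:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is accepting TCP connections on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 on success, i.e. something is listening
        return sock.connect_ex((host, port)) != 0
```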
## Next Steps

- Quick Start - Start monitoring your first log file