# CodeCompanion + Ollama Integration Summary

## What Was Done
Your CodeCompanion configuration has been updated to support Ollama models alongside your existing Claude adapters. Here's what changed:
### 1. Added Ollama Adapter (`lua/shelbybark/plugins/codecompanion.lua`)

- Reads the `OLLAMA_ENDPOINT` environment variable
- Defaults to `http://localhost:11434` if the variable is not set
- Uses `mistral` as the default model (configurable)
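For reference, here is a minimal sketch of what that adapter definition might look like, assuming CodeCompanion's `adapters.extend` helper; the exact field names in the actual config file may differ:

```lua
-- Sketch only: extend CodeCompanion's built-in ollama adapter so the
-- endpoint comes from OLLAMA_ENDPOINT, falling back to the local default.
require("codecompanion").setup({
  adapters = {
    ollama = function()
      return require("codecompanion.adapters").extend("ollama", {
        env = {
          url = os.getenv("OLLAMA_ENDPOINT") or "http://localhost:11434",
        },
        schema = {
          model = {
            default = "mistral", -- configurable default model
          },
        },
      })
    end,
  },
})
```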
### 2. Added Ollama Keymaps

- `<leader>cll` toggles the Ollama chat (normal and visual modes)
- Works alongside the existing Claude keymaps
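The mapping itself is a plain `vim.keymap.set` call; a sketch along these lines (the exact command string in your config may differ):

```lua
-- Open/toggle a chat buffer against the ollama adapter in normal and
-- visual mode. Adjust the command if your config uses a different one.
vim.keymap.set({ "n", "v" }, "<leader>cll", "<cmd>CodeCompanionChat ollama<cr>", {
  desc = "CodeCompanion chat (Ollama)",
  silent = true,
})
```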
### 3. Created Documentation

- `docs/OLLAMA_SETUP.md` - comprehensive setup guide
- `docs/OLLAMA_QUICK_SETUP.md` - quick reference for other machines
- `docs/ollama_env_example.sh` - shell configuration example
## How It Works

### On Your Main Machine (with Ollama)

```sh
# Ollama runs locally; CodeCompanion uses http://localhost:11434 by default
nvim
# Press <leader>cll to chat with Ollama
```
### On Other Machines (without Ollama)

```sh
# Set the environment variable to your Ollama server's Tailscale IP
export OLLAMA_ENDPOINT="http://100.123.45.67:11434"
nvim
# Press <leader>cll to chat with Ollama via Tailscale
```
## Key Features

- ✅ **Network access**: reach Ollama from any machine on your Tailscale network
- ✅ **Fallback support**: keep Claude as the primary adapter and use Ollama as an alternative
- ✅ **Easy switching**: use keymaps to switch between models instantly
- ✅ **Environment-based**: the configuration adapts to each machine automatically
- ✅ **No code changes**: just set an environment variable on other machines
## Next Steps

### 1. On Your Ollama Server Machine

Ensure Ollama is exposed to the network:

```sh
# Check how Ollama is currently bound
ps aux | grep ollama

# If needed, set it to listen on all interfaces
sudo systemctl edit ollama
# Add: Environment="OLLAMA_HOST=0.0.0.0:11434"
# Save and exit, then:
sudo systemctl restart ollama

# Find your Tailscale IP
tailscale ip -4
```
### 2. On Other Machines

Add this to your shell config (`~/.zshrc`, `~/.bashrc`, etc.):

```sh
export OLLAMA_ENDPOINT="http://YOUR_TAILSCALE_IP:11434"
```
### 3. Test It

```sh
# Verify the connection
curl http://YOUR_TAILSCALE_IP:11434/api/tags

# Start Neovim and press <leader>cll
nvim
```
## Configuration Details

### Ollama Adapter Settings

- **Location**: `lua/shelbybark/plugins/codecompanion.lua` (lines 35-45)
- **Default model**: `mistral` (change to your preference)
- **Endpoint**: read from the `OLLAMA_ENDPOINT` environment variable
- **Fallback**: `http://localhost:11434`
### Available Models to Try

- `mistral` - fast, good quality (recommended)
- `neural-chat` - optimized for conversation
- `dolphin-mixtral` - larger, higher quality
- `llama2` - general purpose
- `orca-mini` - very fast, lightweight

Pull models with `ollama pull <model-name>`.
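To default to one of these instead of `mistral`, the model name is just a schema override on the adapter; a minimal sketch in the same shape as the adapter above (the model chosen here is only an example):

```lua
-- Sketch: make dolphin-mixtral the ollama adapter's default model.
-- Use any model you have already pulled locally.
require("codecompanion").setup({
  adapters = {
    ollama = function()
      return require("codecompanion.adapters").extend("ollama", {
        schema = {
          model = { default = "dolphin-mixtral" },
        },
      })
    end,
  },
})
```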
## Troubleshooting

### Connection Issues

```sh
# Test that Ollama is running
curl http://localhost:11434/api/tags

# Test Tailscale connectivity
ping 100.x.x.x  # use your Tailscale IP

# Check that Ollama is bound to the network
sudo netstat -tlnp | grep 11434
```
### Model Issues

```sh
# List the available models
curl http://localhost:11434/api/tags | jq '.models[].name'

# Pull a model
ollama pull mistral
```
## Files Modified/Created

- ✏️ `lua/shelbybark/plugins/codecompanion.lua` - added the Ollama adapter and keymaps
- ✨ `docs/OLLAMA_SETUP.md` - comprehensive setup guide
- ✨ `docs/OLLAMA_QUICK_SETUP.md` - quick reference
- ✨ `docs/ollama_env_example.sh` - shell config example
- 📄 `docs/INTEGRATION_SUMMARY.md` - this file
## Support

For issues or questions:

- Check the troubleshooting section in `docs/OLLAMA_SETUP.md`
- Verify Ollama is running: `curl http://localhost:11434/api/tags`
- Verify Tailscale connectivity: `tailscale status`
- Check CodeCompanion messages in Neovim with `:messages`