# Implementation Checklist

## ✅ Completed

- Added Ollama adapter to the CodeCompanion configuration (sketched below)
- Configured environment variable support (`OLLAMA_ENDPOINT`)
- Added keymaps for Ollama (`<leader>cll`)
- Created comprehensive documentation
- Created quick setup guide for other machines
- Created shell configuration example
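For context, the adapter wiring in `lua/shelbybark/plugins/codecompanion.lua` looks roughly like the minimal sketch below. This is not the exact file contents: the `extend()` helper and the `env.url`/`schema.model` keys follow CodeCompanion's adapter API, but exact key names can vary between plugin versions, and `mistral` is only an example default.

```lua
-- Minimal sketch of the Ollama adapter wiring (not the exact repo file).
-- OLLAMA_ENDPOINT falls back to localhost when the variable is unset.
require("codecompanion").setup({
  adapters = {
    ollama = function()
      return require("codecompanion.adapters").extend("ollama", {
        env = {
          -- Remote machines point this at the Tailscale IP via OLLAMA_ENDPOINT
          url = os.getenv("OLLAMA_ENDPOINT") or "http://localhost:11434",
        },
        schema = {
          model = {
            default = "mistral", -- example default; the real config may differ
          },
        },
      })
    end,
  },
})
```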
## 📋 To Do

### On Your Ollama Server Machine

- Verify Ollama is installed: `ollama --version`
- Ensure Ollama listens on all interfaces: `sudo systemctl edit ollama`, add `Environment="OLLAMA_HOST=0.0.0.0:11434"`, then `sudo systemctl restart ollama`
- Pull your preferred model: `ollama pull mistral` (or `ollama pull neural-chat`)
- Find your Tailscale IP: `tailscale ip -4` and note it down (`100.x.x.x`)
- Test that Ollama is accessible: `curl http://localhost:11434/api/tags`
### On Your Main Machine (with Ollama)

- Reload your Neovim config: `:source ~/.config/nvim/init.lua` (or restart Neovim)
- Test the Ollama integration: `:CodeCompanionChat ollama Toggle`
- Verify it works by sending a message
### On Other Machines (without Ollama)

- Add to your shell config (`~/.zshrc`, `~/.bashrc`, etc.): `export OLLAMA_ENDPOINT="http://100.x.x.x:11434"`, replacing `100.x.x.x` with your Tailscale IP
- Reload your shell: `source ~/.zshrc` (or your shell config)
- Test the connection: `curl http://100.x.x.x:11434/api/tags`
- Start Neovim and test: `:CodeCompanionChat ollama Toggle` (a keymap sketch follows this list)
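Rather than typing the toggle command each time, the `<leader>cll` mapping listed under Completed can be reproduced with something along these lines. This is a hypothetical sketch; the actual mapping in the repo's CodeCompanion config may be defined differently.

```lua
-- Hypothetical keymap sketch for toggling the Ollama chat buffer;
-- the repo's actual <leader>cll mapping may differ from this.
vim.keymap.set({ "n", "v" }, "<leader>cll", function()
  vim.cmd("CodeCompanionChat ollama Toggle")
end, { desc = "CodeCompanion: toggle Ollama chat" })
```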
## 🔧 Optional Customizations

- Change the default Ollama model in `lua/shelbybark/plugins/codecompanion.lua` (line 40)
- Add more Ollama adapters for different models (see the sketch after this list)
- Create machine-specific configs if needed
- Set up Ollama to run on a GPU for faster inference
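One way to add another adapter is to register a second name that reuses the same endpoint but defaults to a different model. The sketch below is illustrative only: the adapter name `ollama_neural_chat` and the `neural-chat` default are assumptions, not part of the existing config.

```lua
-- Hypothetical extra adapter registered alongside the existing "ollama" one;
-- "ollama_neural_chat" and the "neural-chat" default are illustrative only.
require("codecompanion").setup({
  adapters = {
    -- ollama = ... (existing adapter, unchanged)
    ollama_neural_chat = function()
      return require("codecompanion.adapters").extend("ollama", {
        env = {
          url = os.getenv("OLLAMA_ENDPOINT") or "http://localhost:11434",
        },
        schema = {
          model = { default = "neural-chat" },
        },
      })
    end,
  },
})
```

Following the same pattern as the default adapter, it should then be selectable with `:CodeCompanionChat ollama_neural_chat Toggle`.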
## 📚 Documentation Files

- `docs/OLLAMA_SETUP.md` - Full setup guide with troubleshooting
- `docs/OLLAMA_QUICK_SETUP.md` - Quick reference for other machines
- `docs/ollama_env_example.sh` - Shell configuration example
- `docs/INTEGRATION_SUMMARY.md` - Overview of changes
## 🧪 Testing

### Test 1: Local Ollama Access

```sh
# On your Ollama server machine
curl http://localhost:11434/api/tags
# Should return JSON with available models
```
### Test 2: Network Access via Tailscale

```sh
# On another machine
export OLLAMA_ENDPOINT="http://100.x.x.x:11434"
curl "$OLLAMA_ENDPOINT/api/tags"
# Should return JSON with available models
```
### Test 3: Neovim Integration

```
# In Neovim on any machine
:CodeCompanionChat ollama Toggle
# Should open chat window
# Type a message and press Enter
# Should get response from Ollama
```
## 🆘 Quick Troubleshooting

| Problem | Quick Fix |
|---|---|
| "Connection refused" | Check that Ollama is running: `curl http://localhost:11434/api/tags` |
| "Model not found" | Pull the model: `ollama pull mistral` |
| "Can't reach server" | Verify Tailscale: `tailscale status` |
| "Slow responses" | Try a smaller model or check server resources |
## 📞 Support Resources

- Ollama issues: https://github.com/ollama/ollama/issues
- Tailscale help: https://tailscale.com/kb/
- CodeCompanion: https://github.com/olimorris/codecompanion.nvim
**Status:** Ready to use! Follow the "To Do" section to complete setup.