# Quick Reference Card

## 🎯 At a Glance

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚       CodeCompanion + Ollama + Tailscale Integration       β”‚
β”‚                    Quick Reference Card                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## ⌨️ Keymaps

| Keymap | Action | Mode |
|--------|--------|------|
| `<leader>cll` | Chat with Ollama | Normal, Visual |
| `<leader>cc` | Chat with Claude Haiku | Normal, Visual |
| `<leader>cs` | Chat with Claude Sonnet | Normal, Visual |
| `<leader>co` | Chat with Claude Opus | Normal, Visual |
| `<leader>ca` | Show CodeCompanion actions | Normal, Visual |
| `<leader>cm` | Show current model | Normal |

## πŸ”§ Setup Checklist

### On Ollama Server

- [ ] `sudo systemctl edit ollama` β†’ add `Environment="OLLAMA_HOST=0.0.0.0:11434"`
- [ ] `sudo systemctl restart ollama`
- [ ] `ollama pull mistral` (or your preferred model)
- [ ] `tailscale ip -4` β†’ note the IP (e.g., 100.123.45.67)

### On Other Machines

- [ ] Add to `~/.zshrc` (or `~/.bashrc`):

  ```bash
  export OLLAMA_ENDPOINT="http://100.123.45.67:11434"
  ```

- [ ] `source ~/.zshrc` (reload shell)
- [ ] `curl $OLLAMA_ENDPOINT/api/tags` (test connection)
- [ ] Start Neovim and press `<leader>cll`

## πŸ§ͺ Quick Tests

```bash
# Test Ollama is running
curl http://localhost:11434/api/tags

# Test remote access
curl http://100.x.x.x:11434/api/tags

# Test Tailscale
tailscale status
ping 100.x.x.x

# List models
ollama list

# Pull a model
ollama pull mistral
```

## πŸ“Š Model Comparison

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Model        β”‚ Size β”‚ Speed β”‚ Quality β”‚ Best For     β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ orca-mini    β”‚ 3B   β”‚ ⚑⚑⚑   β”‚ ⭐⭐      β”‚ Quick answersβ”‚
β”‚ mistral      β”‚ 7B   β”‚ ⚑⚑    β”‚ ⭐⭐⭐    β”‚ Coding       β”‚
β”‚ neural-chat  β”‚ 7B   β”‚ ⚑⚑    β”‚ ⭐⭐⭐    β”‚ Chat         β”‚
β”‚ llama2       β”‚ 7B   β”‚ ⚑⚑    β”‚ ⭐⭐⭐    β”‚ General      β”‚
β”‚ dolphin-mix  β”‚ 8x7B β”‚ ⚑     β”‚ ⭐⭐⭐⭐  β”‚ Complex      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## πŸ” Troubleshooting Quick Fixes

| Problem | Quick Fix |
|---------|-----------|
| Connection refused | `ps aux \| grep ollama` (check if running) |
| Model not found | `ollama pull mistral` |
| Can't reach remote | `ping 100.x.x.x` (check Tailscale) |
| Env var not working | `echo $OLLAMA_ENDPOINT` (verify it's set) |
| Slow responses | Try a smaller model: `ollama pull orca-mini` |

## πŸ“ Important Files

| File | Purpose |
|------|---------|
| `lua/shelbybark/plugins/codecompanion.lua` | Main config (modified) |
| `docs/OLLAMA_SETUP.md` | Full setup guide |
| `docs/TROUBLESHOOTING.md` | Detailed troubleshooting |
| `docs/ARCHITECTURE.md` | Network diagrams |
| `docs/IMPLEMENTATION_CHECKLIST.md` | Step-by-step checklist |

## 🌐 Network Setup

```
Machine A (Ollama Server)
β”œβ”€ Ollama: http://localhost:11434
β”œβ”€ Tailscale IP: 100.123.45.67
└─ OLLAMA_HOST=0.0.0.0:11434

Machine B (Client)
β”œβ”€ OLLAMA_ENDPOINT=http://100.123.45.67:11434
└─ Connects via Tailscale VPN

Machine C (Client)
β”œβ”€ OLLAMA_ENDPOINT=http://100.123.45.67:11434
└─ Connects via Tailscale VPN
```

## πŸ’Ύ Environment Variable

```bash
# Add to ~/.zshrc or ~/.bashrc
export OLLAMA_ENDPOINT="http://100.123.45.67:11434"

# fish users: ~/.config/fish/config.fish uses `set` instead of `export`
# set -x OLLAMA_ENDPOINT http://100.123.45.67:11434

# Then reload
source ~/.zshrc  # or ~/.bashrc
```

## πŸš€ Usage Flow

```
1. Press <leader>cll
   ↓
2. CodeCompanion opens chat window
   ↓
3.
   Reads OLLAMA_ENDPOINT env var
   ↓
4. Connects to Ollama server
   ↓
5. Type message and press Enter
   ↓
6. Ollama generates response
   ↓
7. Response appears in Neovim
```

## πŸ“ž Help Commands

```bash
# Check Ollama status
sudo systemctl status ollama

# View Ollama logs
journalctl -u ollama -f

# List available models
ollama list

# Pull a model
ollama pull <model-name>

# Check Tailscale
tailscale status

# Find your Tailscale IP
tailscale ip -4

# Test connection
curl http://localhost:11434/api/tags
curl http://100.x.x.x:11434/api/tags
```

## ⚑ Performance Tips

1. **Use 7B models** for the best balance (mistral, neural-chat)
2. **Avoid 13B+ models** on slow networks
3. **Monitor latency**: `ping 100.x.x.x` (should be < 50 ms)
4. **Run on a GPU** if available for faster inference
5. **Close other apps** to free up resources

## πŸ” Security Checklist

- βœ… Ollama only accessible via Tailscale
- βœ… All traffic encrypted end-to-end
- βœ… Uses private Tailscale IPs (100.x.x.x)
- βœ… No exposure to the public internet
- βœ… Firewall rules can further restrict access

## πŸ“‹ Common Commands

```bash
# Start Ollama
ollama serve

# Or with systemd
sudo systemctl start ollama

# Pull a model
ollama pull mistral

# List models
ollama list

# Remove a model
ollama rm mistral

# Test connection
curl http://localhost:11434/api/tags | jq '.models[].name'

# Check Tailscale
tailscale status

# Restart Ollama
sudo systemctl restart ollama
```

## πŸŽ“ Learning Resources

- Ollama: https://github.com/ollama/ollama
- Tailscale: https://tailscale.com/kb/
- CodeCompanion: https://github.com/olimorris/codecompanion.nvim
- Neovim: https://neovim.io/

## πŸ“ Notes

- Default model: `mistral` (change in codecompanion.lua line 40)
- Default endpoint: `http://localhost:11434` (override with env var)
- Keymaps use `<leader>` (usually `\` or `,`)
- All documentation in the `docs/` folder

---

**Print this card and keep it handy!**

**Last Updated**: 2026-02-05
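## βš™οΈ systemd Override (Expanded)

The first server-side checklist item expands to a systemd drop-in: `sudo systemctl edit ollama` opens an override file, and the `Environment=` line from the checklist goes into its `[Service]` section. The file contents would look like this (the path shown is where systemd stores such overrides):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After saving, `sudo systemctl restart ollama` (the next checklist item) picks up the new environment, and Ollama listens on all interfaces instead of only localhost.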
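## 🧰 Helper Script (Sketch)

The quick tests and the environment-variable fallback above can be folded into one small helper. This is a sketch, not part of the repo: the function names (`resolve_endpoint`, `check_ollama`) are illustrative assumptions, but the endpoint precedence (use `OLLAMA_ENDPOINT` if set, otherwise the localhost default) matches the behavior described on this card.

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the quick tests above.

# resolve_endpoint: same precedence the card describes —
# OLLAMA_ENDPOINT if set, otherwise the localhost default.
resolve_endpoint() {
  printf '%s\n' "${OLLAMA_ENDPOINT:-http://localhost:11434}"
}

# check_ollama: curl the tags API; a non-zero exit means unreachable.
# -f makes curl fail on HTTP errors, --max-time avoids hanging on a dead host.
check_ollama() {
  curl -fsS --max-time 5 "$(resolve_endpoint)/api/tags" > /dev/null
}

# Print the endpoint currently in use (localhost default when the env var is unset)
resolve_endpoint
```

Run `check_ollama || echo "unreachable"` from any machine on the tailnet to verify the whole path end to end.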