# Troubleshooting Common Issues
This guide helps you resolve common problems you might encounter when installing or using Ollama.
## Installation Issues

### Problem: “App Can’t Be Opened” (macOS)

**Symptoms:** macOS shows a security warning preventing Ollama from launching.

**Solution:**

- Go to System Settings → Privacy & Security
- Scroll down to the Security section
- Look for the message about Ollama being blocked
- Click “Open Anyway”
- Confirm by clicking “Open” in the dialog

**Alternative:**

```bash
# Remove quarantine attribute
xattr -d com.apple.quarantine /Applications/Ollama.app
```

### Problem: “Windows Protected Your PC” (Windows)
**Symptoms:** SmartScreen prevents installation

**Solution:**

- Click “More info” on the SmartScreen dialog
- Click “Run anyway”
- Proceed with installation
### Problem: Linux Installation Script Fails

**Symptoms:** `curl: command not found` or permission errors

**Solution:**

Install curl first:

```bash
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install curl

# Fedora/RHEL
sudo dnf install curl

# Arch
sudo pacman -S curl
```

Then run the installation script with proper permissions:

```bash
curl -fsSL https://ollama.com/install.sh | sudo sh
```

## Model Download Issues
### Problem: Download Fails or Stalls

**Symptoms:** Model download stops at a certain percentage or shows errors

**Solutions:**

**1. Check Internet Connection**

```bash
# Test connection to the Ollama registry
ping registry.ollama.ai
```

**2. Restart Download**
The download should resume automatically. If not:
**CLI Method:**

```bash
# Stop Ollama
# macOS/Linux: find and kill the process
pkill ollama

# Windows: Task Manager → End Ollama process

# Restart and try again
ollama pull llama3.2
```

**GUI Method:**

- Restart the Ollama application
- Try downloading the model again
**3. Check Disk Space**

```bash
# macOS/Linux
df -h
```

```powershell
# Windows
Get-PSDrive C
```

Ensure you have at least 5 GB free before downloading models.
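If space is tight, it can help to see how much the models you already have are using. Assuming the default data directory, downloaded models live under `~/.ollama/models` on macOS/Linux:

```bash
# Show total disk usage of downloaded models (macOS/Linux)
du -sh ~/.ollama/models
```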
**4. Use Smaller Model**

If the 3B model is too large:

```bash
ollama pull llama3.2:1b
```

This downloads a smaller 1.3 GB model.
### Problem: “Model Not Found” Error

**Symptoms:** Error when trying to run a model

**Solution:**

Verify the model is actually downloaded:

```bash
ollama list
```

If it is not listed, download it:

```bash
ollama pull llama3.2
```

Check for typos in the model name:

- ✓ Correct: `llama3.2`
- ✗ Wrong: `llama-3.2`, `llama32`, `llama 3.2`
## Performance Issues

### Problem: Responses Are Very Slow

**Symptoms:** Each word takes 5+ seconds to generate

**Diagnostic Steps:**

**1. Check Available RAM**

Your system needs enough free RAM for the model:

- Llama 3.2 (1B): needs ~3-4 GB available RAM
- Llama 3.2 (3B): needs ~4-6 GB available RAM

```bash
# macOS
vm_stat

# Linux
free -h
```

```powershell
# Windows (PowerShell)
Get-ComputerInfo | Select-Object CsTotalPhysicalMemory, CsNumberOfProcessors
```

**2. Close Other Applications**

Close browser tabs, large applications, etc. to free up RAM.

**3. Try Smaller Model**

Use the 1B variant if the 3B is too slow:

```bash
ollama pull llama3.2:1b
```

**4. Check CPU Usage**

Open Task Manager (Windows) or Activity Monitor (macOS) and verify Ollama is using CPU resources. If CPU usage is very low, there may be a configuration issue.
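You can also check whether Ollama has offloaded the model to a GPU. In recent Ollama versions, `ollama ps` lists loaded models along with the processor they are running on:

```bash
# Show loaded models and whether they run on CPU or GPU
ollama ps
```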
**Expected Performance Benchmarks:**
| Hardware | Llama 3.2 (1B) | Llama 3.2 (3B) |
|---|---|---|
| Apple M1/M2/M3 | 10-30 tokens/sec | 5-15 tokens/sec |
| Modern Intel/AMD (16GB RAM) | 5-15 tokens/sec | 2-8 tokens/sec |
| Minimum Spec (8GB RAM) | 2-8 tokens/sec | 1-4 tokens/sec |
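To compare your own numbers against this table, you can ask Ollama to print timing statistics. In recent CLI versions, the `--verbose` flag reports an eval rate in tokens per second after each response:

```bash
# Print timing statistics (including tokens/sec) after each response
ollama run llama3.2 --verbose
```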
### Problem: Ollama Uses Too Much Memory

**Symptoms:** System becomes sluggish, swap usage is high

**Solution:**

**1. Limit Context Length**

In the GUI settings, or from inside a CLI session:

```bash
ollama run llama3.2
>>> /set parameter num_ctx 2048
```

Llama 3.2 supports context windows up to 128K tokens; capping the context at 2K tokens substantially reduces memory use. (See the API sketch after this list for setting it programmatically.)

**2. Use Quantized Model**

The default Llama 3.2 uses Q4_K_M quantization. Verify:

```bash
ollama show llama3.2
```

**3. Switch to Smaller Model**

```bash
ollama run llama3.2:1b
```

**4. Close and Restart Ollama**

Sometimes memory isn’t released properly. Restart the application.
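As referenced in step 1, a minimal sketch of limiting context length per request through the REST API, assuming the server is on the default port (`num_ctx` is the option name Ollama's API uses):

```bash
# Request a completion with a 2K-token context window
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "options": { "num_ctx": 2048 }
}'
```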
## Connection Issues

### Problem: CLI Can’t Connect to Server

**Symptoms:** `Error: could not connect to ollama server`

**Solutions:**

**1. Verify Ollama is Running**

Check if the Ollama process is active:

```bash
# macOS/Linux
ps aux | grep ollama
```

```powershell
# Windows (PowerShell)
Get-Process | Where-Object {$_.ProcessName -like "*ollama*"}
```

If it is not running, start it:

- GUI: Launch the Ollama application
- CLI: `ollama serve` (runs in the foreground)
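On Linux, the official install script also registers a systemd service, so you can manage the server that way (assuming the standard install, which names the service `ollama`):

```bash
# Check whether the service is running, and restart it if needed
sudo systemctl status ollama
sudo systemctl restart ollama
```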
**2. Check Port 11434**

Ollama listens on port 11434 by default. Verify nothing else is using it:

```bash
# macOS/Linux
lsof -i :11434
```

```powershell
# Windows (PowerShell)
netstat -ano | findstr :11434
```

**3. Test Connection**

```bash
curl http://localhost:11434/api/tags
```

This should return a JSON response listing available models.
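If you script against the server, a simple health check can wait for it to come up. This is a sketch assuming curl is installed and the default port is used:

```bash
# Poll the server until it responds, or give up after 30 seconds
for i in $(seq 1 30); do
  if curl -sf http://localhost:11434/api/tags > /dev/null; then
    echo "Ollama is up"
    break
  fi
  sleep 1
done
```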
**4. Check Firewall**

Ensure your firewall allows Ollama:

- macOS: System Settings → Network → Firewall
- Windows: Windows Security → Firewall & network protection
- Linux: `sudo ufw status` (if using UFW)
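Local use does not require opening any port, but if other machines on your network need to reach the server, a UFW rule like the following opens it (an example; adjust to your own firewall setup):

```bash
# Allow inbound connections to Ollama's default port
sudo ufw allow 11434/tcp
```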
### Problem: GUI Won’t Start

**Symptoms:** Ollama icon appears but window doesn’t open

**Solutions:**

**1. Reset Ollama**

macOS:

```bash
rm -rf ~/.ollama/logs
killall Ollama
open -a Ollama
```

Windows:

```powershell
Remove-Item -Recurse $env:USERPROFILE\.ollama\logs
taskkill /F /IM ollama.exe
Start-Process "$env:LOCALAPPDATA\Programs\Ollama\Ollama.exe"
```

**2. Check Logs**

macOS/Linux:

```bash
cat ~/.ollama/logs/server.log
```

Windows:

```powershell
Get-Content $env:USERPROFILE\.ollama\logs\server.log
```

Look for error messages indicating the problem.
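To surface only the interesting lines in a long log, filtering for errors is often enough. A simple sketch for macOS/Linux:

```bash
# Show the most recent error lines from the server log
grep -i "error" ~/.ollama/logs/server.log | tail -20
```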
**3. Reinstall Ollama**

As a last resort:

- Uninstall Ollama
- Delete the data directory (`~/.ollama` or `%USERPROFILE%\.ollama`)
- Reinstall from ollama.com
- Re-download models

**Warning:** Deleting `~/.ollama` removes all downloaded models. You’ll need to re-download them (2-5 GB per model).

## Model Quality Issues
### Problem: Responses Are Nonsensical or Repetitive

**Symptoms:** Model outputs gibberish, repeats phrases, or provides irrelevant answers

**Solutions:**

**1. Adjust Temperature**

Try lowering the temperature for more focused responses:

GUI: Settings → Temperature → Set to 0.5-0.7

CLI (from inside a session):

```bash
ollama run llama3.2
>>> /set parameter temperature 0.6
```

**2. Reset Context**

Sometimes the context becomes corrupted:

- GUI: Start a new conversation (new chat button)
- CLI: Exit (`/bye`) and restart (`ollama run llama3.2`)

**3. Verify Model Integrity**

Re-download the model:

```bash
ollama rm llama3.2
ollama pull llama3.2
```

**4. Check Quantization**

The default Q4_K_M should work well. If quality is poor, try a less aggressive (higher-precision) quantization, for example:

```bash
ollama pull llama3.2:3b-instruct-q5_K_M
```

Check ollama.com/library/llama3.2 for the exact tags available.

## Python Integration Issues
(For Part 2 of the workshop)

### Problem: ollama Python Library Not Found

**Symptoms:** `ModuleNotFoundError: No module named 'ollama'`

**Solution:**

Install the library:

```bash
pip install ollama
```

Or with a specific Python version:

```bash
python3 -m pip install ollama
```

### Problem: Python Can’t Connect to Ollama
**Symptoms:** Connection errors in Python scripts

**Solution:**

Ensure the Ollama server is running:

```python
import ollama

# Test connection by listing available models
try:
    ollama.list()
    print("Connected successfully!")
except Exception as e:
    print(f"Connection failed: {e}")
```

If the connection fails:

- Start the Ollama application
- Or run `ollama serve` in a terminal
- Verify port 11434 is accessible
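If the server runs on a non-default host or port, the library’s `Client` class accepts an explicit address. A sketch; adjust the URL to your setup:

```python
from ollama import Client

# Point the client at an explicit server address
# (the default is http://localhost:11434)
client = Client(host="http://localhost:11434")
print(client.list())
```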
## Getting More Help

If your issue isn’t covered here:

- Check Official Documentation: ollama.com/docs
- GitHub Issues: github.com/ollama/ollama/issues
- Community Discord: Join the Ollama Discord for community support
- During Workshop: Ask questions in Zoom chat or raise your hand
## Diagnostic Information

When seeking help, provide:

```bash
# Ollama version
ollama --version

# System info
# macOS
system_profiler SPSoftwareDataType SPHardwareDataType

# Linux
uname -a && lsb_release -a

# List models
ollama list

# Check logs (macOS/Linux)
cat ~/.ollama/logs/server.log
```

```powershell
# Windows (PowerShell): system info
Get-ComputerInfo | Select-Object OsName, OsVersion, CsProcessors, CsTotalPhysicalMemory

# Check logs (Windows)
Get-Content $env:USERPROFILE\.ollama\logs\server.log
```

This information helps diagnose issues quickly.
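To gather all of this in one step on macOS/Linux, a small script like the following sketch writes everything to a single file you can attach to a bug report (paths assume the default install):

```bash
#!/usr/bin/env bash
# Collect Ollama diagnostics into one file (macOS/Linux)
out=ollama-diagnostics.txt
{
  echo "== Version =="
  ollama --version
  echo "== Models =="
  ollama list
  echo "== System =="
  uname -a
  echo "== Last 50 log lines =="
  tail -50 ~/.ollama/logs/server.log
} > "$out" 2>&1
echo "Wrote $out"
```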