Installing Ollama

This page walks you through downloading and installing Ollama with its native GUI.

Step 1: Download Ollama

Visit the official Ollama download page:

👉 ollama.com/download

Select your operating system and download the installer.

Important: Download only from the official Ollama website (ollama.com) to ensure you get the authentic, safe version.

Step 2: Install on Your Operating System

The installation process varies slightly by platform:

macOS Installation

  1. Download the .dmg file from ollama.com/download

  2. Open the downloaded .dmg file

    Double-click Ollama-[version].dmg in your Downloads folder

  3. Drag Ollama to Applications

    A window will appear showing the Ollama icon and your Applications folder. Drag the Ollama icon to the Applications folder.

  4. Launch Ollama

    • Open your Applications folder
    • Double-click the Ollama icon
    • If you see a security warning, click “Open” to confirm
  5. Grant Permissions (if prompted)

    macOS may ask for permissions to run Ollama. Click “OK” or “Allow” to grant necessary permissions.

First Launch: On first launch, macOS may take a few seconds to verify the application. This is normal.

Windows Installation

  1. Download the .exe installer from ollama.com/download

  2. Run the installer

    Double-click OllamaSetup.exe in your Downloads folder

  3. User Account Control (UAC) prompt

    Windows will ask if you want to allow the app to make changes. Click “Yes”

  4. Follow the installation wizard

    • Click “Next” to begin
    • Choose installation location (default is recommended)
    • Click “Install”
    • Wait for installation to complete
  5. Launch Ollama

    • The installer will offer to launch Ollama automatically
    • Or find Ollama in your Start Menu

Installation Location: By default, Ollama installs to C:\Users\[YourName]\AppData\Local\Programs\Ollama, and models are stored in C:\Users\[YourName]\.ollama\models.
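
If you would rather keep models somewhere other than the default .ollama\models folder (for example, on a larger drive), the Ollama server reads the OLLAMA_MODELS environment variable at startup. A minimal sketch for macOS/Linux shells; the example path is arbitrary, and on Windows you would set the variable under System Properties > Environment Variables instead:

```shell
# Point Ollama at a different model directory (optional).
# The server reads OLLAMA_MODELS at startup, so restart Ollama after changing it.
export OLLAMA_MODELS="$HOME/ollama-models"   # example path; any writable directory works
mkdir -p "$OLLAMA_MODELS"
echo "Models will be stored in: $OLLAMA_MODELS"
```

Models already downloaded to the old location are not moved automatically; re-pull them or copy the directory contents.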

Linux Installation

Note: As of 2025, Ollama’s native GUI is available only for macOS and Windows. Linux users should use the CLI with a web-based interface like Open WebUI or ChatBox.
  1. Install Ollama CLI

    Run the official installation script:

    curl -fsSL https://ollama.com/install.sh | sh
  2. Verify installation

    ollama --version
  3. Start Ollama service

    ollama serve

    This starts the Ollama server on http://localhost:11434. Note that ollama serve runs in the foreground of your terminal; on most distributions, the install script also registers a systemd service so Ollama starts automatically in the background.

  4. (Optional) Install a GUI Alternative

    For a GUI experience on Linux, consider:

    • Open WebUI: Web-based interface

      docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data --name open-webui --restart always \
        ghcr.io/open-webui/open-webui:main
    • ChatBox: Desktop chat application that connects to your local Ollama server
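
Whichever interface you choose, you can confirm that the Ollama server is reachable before connecting a GUI to it. A quick sketch using the server's version endpoint; the command prints a short notice instead of failing if the server is not running:

```shell
# Health check: the Ollama server answers plain HTTP on localhost:11434.
# /api/version returns a small JSON object such as {"version":"0.10.0"}.
curl -fsS --max-time 2 http://localhost:11434/api/version \
  || echo "Ollama is not reachable; start it with: ollama serve"
```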


Step 3: Verify Installation

After installation, verify that Ollama is running:

Using the GUI (macOS/Windows)

  1. Look for the Ollama icon in your system tray (Windows) or menu bar (macOS)

  2. Click the icon to open the Ollama window

  3. Check the interface

    You should see the Ollama chat interface with a welcome message

Using the CLI (All Platforms)

Open Terminal (macOS/Linux) or Command Prompt (Windows) and run:

ollama --version

Expected output:

ollama version is 0.10.0 (or later)

Success! If you see the version number, Ollama is installed correctly.

Step 4: Understanding the Ollama Interface

Once installed, Ollama provides two main interfaces:

GUI Interface (Recommended for Beginners)

The native Ollama GUI includes:

  • Chat Window: Main area for conversations
  • New Chat Button: Start fresh conversations
  • Model Selector: Dropdown at the bottom to choose and download models
  • Message Input: “Send a message” box at the bottom
  • Attachment Support: Add files to your conversations (+ button)

CLI Interface (For Advanced Users)

The command-line interface lets you manage models and chat directly:

ollama list                   # List installed models
ollama pull llama3.2          # Download a model
ollama run llama3.2           # Start chatting with a model
ollama rm llama3.2            # Remove a model
ollama serve                  # Start Ollama server
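
Beyond the interactive session, ollama run also accepts a one-shot prompt as an argument, and the same server exposes a REST API on port 11434 that the GUI itself uses. A sketch assuming the llama3.2 model is already pulled; each command falls back to a short notice if Ollama is unavailable:

```shell
# One-shot prompt: pass the question as an argument instead of opening the chat REPL.
ollama run llama3.2 "Why is the sky blue?" 2>/dev/null \
  || echo "ollama CLI not available or model not pulled"

# The same request over the local REST API; "stream": false returns one JSON object.
curl -fsS --max-time 10 http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}' \
  || echo "Ollama server not reachable on port 11434"
```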

What’s Next?

Now that Ollama is installed, the next step is to download a model and start your first chat.

Having Issues?

If installation didn’t work, check the Troubleshooting guide.