Ollama + Gemma 4 Local AI Setup Guide
Install and run a local LLM on an external drive — step by step
Overview
This guide walks you through installing Ollama and the Gemma 4
language model entirely on an external drive, so your main system drive stays
clean. You will also learn how to expose the model via an API for use in tools
like Postman or your own applications.
| Topic | Details |
| --- | --- |
| Tool | Ollama — a lightweight local LLM runner |
| Model | Gemma 4 by Google (~9 GB RAM required) |
| Storage | External drive (e.g. D:\ or F:\) |
| OS | Windows (PowerShell) |
| API Port | localhost:11434 |
Part 1 — Download Ollama
Visit the official Ollama website to download the Windows
installer.
• Go to: https://ollama.com/download
• Download the OllamaSetup.exe file
> 📝 Note: The installer file is approximately 1.7 GB. Make sure you have a stable internet connection before downloading.
Part 2 — Install Ollama to Your External Drive
Instead of double-clicking the installer (which installs to
C:\ by default), use PowerShell with the /DIR flag to choose your external
drive location.
2.1 Open PowerShell and navigate to the installer
Open PowerShell and change directory to wherever you saved OllamaSetup.exe. Then run the command below, replacing D:\Ollama with your own drive letter and folder:
```powershell
.\OllamaSetup.exe /DIR="D:\Ollama"
```
> 📝 Note: Replace D:\Ollama with your external drive path, e.g. F:\Ollama if your drive is on F:\.
Complete the installation wizard to finish setup. Afterwards, you should see the Ollama folder extracted on your external drive (D:\ in this example).
Part 3 — Store Models on Your External Drive
By default Ollama saves downloaded models to your C:\ drive.
You need to point it to your external drive so models are stored there instead.
3.1 Set the Model Storage Path in Settings
Open Ollama settings and update the "Model location"
to your external drive path:
Part 4 — Add Ollama to Environment Variables
Adding Ollama to your Windows environment variables lets you
run ollama commands from any folder in PowerShell.
1. Press Win + R, type sysdm.cpl, and press Enter
2. Click the Advanced tab, then Environment Variables
3. Under User variables, click New
4. Set Variable name to: OLLAMA_MODELS
5. Set Variable value to: D:\Ollama (or your external drive path)
6. Click OK, then restart Ollama
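If you prefer the command line, the same user-level variable can be set from PowerShell with the built-in setx command (a sketch of the GUI steps above; adjust the path to match your drive):

```powershell
# Create or update the OLLAMA_MODELS user environment variable.
# setx writes to the registry and affects new sessions only.
setx OLLAMA_MODELS "D:\Ollama"
```

As with the GUI method, restart Ollama and open a new PowerShell window afterwards so the change takes effect.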
Part 5 — Verify the Installation
After setting the environment variable, fully close and reopen PowerShell — existing sessions do not pick up environment-variable changes.
> 📝 Note: Always close and reopen PowerShell after changing environment variables. Otherwise the new values will not be active.
Run the following command to confirm Ollama is installed
correctly:
```powershell
ollama --version
```
You should see the Ollama version number printed in the
terminal:
Part 6 — Install Gemma 4 Model
Now that Ollama is set up, you can pull and run the Gemma 4
model. Gemma 4 is a powerful open model made by Google.
6.1 Find the install command
Visit the Ollama library page for Gemma 4 to get the exact
pull command:
• URL: https://ollama.com/library/gemma4
6.2 Run the pull command in PowerShell
Copy the command shown on the page and run it in PowerShell:
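The model tag below assumes the library page lists the model as gemma4 (it matches the cheat sheet at the end of this guide):

```powershell
# Download the Gemma 4 model to the configured model directory
ollama pull gemma4
```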
> 📝 Note: Gemma 4 requires approximately 9 GB of available RAM to run. Close other heavy applications before starting the model.
Once the download completes, you will see the model files
stored in your external drive folder:
Part 7 — Chat with Gemma 4
You can interact with Gemma 4 directly in the terminal using
the ollama run command:
```powershell
ollama run gemma4
```
This opens an interactive chat session in PowerShell:
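Besides the interactive session, `ollama run` also accepts a prompt as an argument, which is handy for scripting (a quick sketch; the prompt text is just an example):

```powershell
# Send a single prompt and exit instead of opening a chat session
ollama run gemma4 "Explain what a local LLM is in one sentence."
```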
Part 8 — Access Gemma 4 via API
Ollama exposes a local REST API on port 11434 by default. You
can use this to integrate Gemma 4 with your own code, Postman, or any HTTP
client.
8.1 Expose the port (VS Code example)
In Visual Studio Code, open the Ports panel and add port
11434. Set visibility to Public to get a remote URL you can share or use
externally:
8.2 Send a request from Postman
Use the following endpoint with a JSON payload to query Gemma
4 from Postman or any API client:
```
POST http://localhost:11434/api/generate
```
Example payload and response in Postman:
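The same request can also be sent straight from PowerShell with Invoke-RestMethod. This is a minimal sketch, assuming the model was pulled under the tag gemma4 and the server is running on the default port:

```powershell
# stream = $false returns a single JSON object instead of
# a stream of partial responses.
$body = @{
    model  = "gemma4"
    prompt = "Why is the sky blue?"
    stream = $false
} | ConvertTo-Json

$reply = Invoke-RestMethod -Uri "http://localhost:11434/api/generate" `
    -Method Post -ContentType "application/json" -Body $body

$reply.response   # the generated text
```

With streaming disabled, the reply arrives as one JSON object whose response field holds the generated text.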
Quick Reference — Command Cheat Sheet
| Action | Command |
| --- | --- |
| Install to external drive | `.\OllamaSetup.exe /DIR="D:\Ollama"` |
| Check Ollama version | `ollama --version` |
| Pull Gemma 4 model | `ollama pull gemma4` |
| Run Gemma 4 in terminal | `ollama run gemma4` |
| List installed models | `ollama list` |
| API endpoint | `http://localhost:11434/api/generate` |