Ollama Setup for AI Caption Generation

Install Ollama and pull a vision model to enable local AI alt-text generation with the caption command.

The localpress caption command uses Ollama to generate alt-text for images entirely on your local machine — no cloud API, no API key, no per-image credits.

Prerequisites

  • macOS, Linux, or Windows (WSL2)
  • ~2 GB free disk space for the recommended model

1. Install Ollama

macOS / Linux:

curl -fsSL https://ollama.com/install.sh | sh

macOS (Homebrew):

brew install ollama

Windows: Download the installer from ollama.com/download, or install inside WSL2 with the Linux script above.
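
Whichever route you take, you can confirm the install succeeded:

ollama --version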

2. Start the Ollama server

ollama serve

Ollama runs at http://localhost:11434 by default. You can verify it's running:

curl http://localhost:11434/api/tags
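
If the server is up, it responds with a JSON list of the models you've pulled; on a fresh install that list is empty (exact fields vary by Ollama version):

{"models":[]}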

On macOS, the Ollama app (from the installer) starts the server automatically in the menu bar.

3. Pull a vision model

localPress works with any Ollama multimodal (vision) model. Recommended options:

Model      Size     Speed   Quality             Command
moondream  ~1.7 GB  Fast    Great for alt-text  ollama pull moondream
llava      ~4.7 GB  Medium  High quality        ollama pull llava
llava:13b  ~8 GB    Slow    Best quality        ollama pull llava:13b
bakllava   ~4.7 GB  Medium  Alternative         ollama pull bakllava

For most use cases, moondream is the best choice — it's fast, small, and specifically tuned for image description tasks.

ollama pull moondream
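
You can confirm the pull with ollama list, which shows every model available locally; moondream should appear in the NAME column:

ollama list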

4. Use the caption command

# Caption a specific attachment
localpress caption 123

# Caption all images missing alt text (dry-run by default)
localpress caption --missing-alt

# Execute the bulk caption run
localpress caption --missing-alt --apply

# Caption all images in the library
localpress caption --all --apply

# Generate alt text in a specific language
localpress caption 123 --language Spanish
localpress caption --missing-alt --language French --apply

# Preview without writing to WordPress
localpress caption --missing-alt --dry-run

# Use a different model
localpress caption 123 --model llava

# List locally available vision models
localpress caption --list-models

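Putting these together, a cautious first bulk run (pairing the audit command from Related commands below) might look like:

# Find attachments missing alt text
localpress audit --missing-alt

# Preview what the model would write
localpress caption --missing-alt --dry-run

# Apply the captions to WordPress
localpress caption --missing-alt --apply
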
Keeping Ollama running

By default, ollama serve runs in the foreground. To keep it running in the background:

macOS (launchd):

# The Ollama.app from ollama.com/download handles this automatically
# Or with Homebrew services:
brew services start ollama

Linux (systemd): The official installer registers a systemd service automatically. If it isn't already enabled, start it with:

sudo systemctl enable --now ollama
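
You can check that the service is active:

systemctl status ollama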

Custom prompt

The default prompt instructs the model to write concise, factual alt-text under 125 characters. You can override it:

localpress caption 123 --prompt "Describe this product image in detail, including colors, materials, and any text visible."

Custom Ollama URL

If Ollama is running on a different port or host (e.g. another machine on your local network):

localpress caption 123 --ollama-url http://192.168.1.100:11434
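
Before a bulk run against a remote host, confirm it's reachable using the same endpoint as in step 2:

curl http://192.168.1.100:11434/api/tags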

Troubleshooting

"Ollama is not running" Start it with ollama serve, or ensure the Ollama menu bar app is running on macOS.

"model not found" Pull the model first: ollama pull moondream

Slow generation

  • moondream is the fastest option
  • Ensure Ollama is using GPU acceleration (check ollama ps while a generation is running; see the example below)
  • On Apple Silicon, Ollama uses the GPU automatically via Metal
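
While a caption run is in progress, ollama ps shows where the model is loaded; the PROCESSOR column should read 100% GPU rather than 100% CPU. Output will look roughly like this (columns abridged, and varying by Ollama version):

ollama ps
NAME                PROCESSOR    UNTIL
moondream:latest    100% GPU     4 minutes from now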

Low-quality captions: try llava or llava:13b for higher quality, or write a custom --prompt tuned to your content type.

Related commands

  • localpress caption --list-models — show locally available vision models
  • localpress audit --missing-alt — find all attachments with missing alt text
  • localpress stats — see caption operation history and counts
