Generate photo metadata with AI. Locally.
Generate keywords, captions, and structured metadata for your photo library with on-device AI — no uploads, no subscriptions, and nothing leaving your Mac.
Requires an Apple Silicon Mac running macOS Tahoe 26 or later
Shoots add up fast — finding images later shouldn’t be the hard part
After every shoot you end up with hundreds or thousands of files — and weeks later you can’t remember which folder contains the one image you need. Manual keywording is slow and inconsistent, and cloud taggers mean uploading client work to third-party servers — a deal-breaker for many photographers.
Auto-tag photos with on-device AI
VisionTagger runs vision models locally on your Mac to analyze images and generate keywords, captions, and structured metadata automatically. Process an entire shoot in one batch, write XMP sidecars for your cataloging workflow, and apply Finder tags — without uploading a single file.
Tag entire shoots in one batch
Drop a folder of exports — JPEG, PNG, RAW, or other common formats — or select images from Photos. VisionTagger generates metadata for every image in one run, so you don’t have to open and tag files one by one.
Create a metadata schema that matches your workflow
Go beyond generic keywords. Define fields like subject, location, mood, lighting, composition, or color palette. Save schemas as presets and reuse them across shoots for consistent, searchable results.
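
A schema is simply a named set of fields. As a purely illustrative sketch (VisionTagger's actual preset format is not shown here), a portrait-shoot schema might look like:

```json
{
  "name": "Portrait shoots",
  "fields": [
    { "key": "subject",   "type": "keywords" },
    { "key": "location",  "type": "text" },
    { "key": "mood",      "type": "text" },
    { "key": "lighting",  "type": "text" },
    { "key": "palette",   "type": "keywords" }
  ]
}
```

Every image in a batch is tagged against the same fields, which is what keeps results consistent and searchable across shoots.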
Export to XMP sidecars, JSON, CSV, Finder tags, and more
Write XMP sidecar files for import into your cataloging workflow. Export JSON, CSV, or TXT for an archive pipeline or portfolio tooling. Apply Finder tags for quick Spotlight search — all from a single run.
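
For context, an XMP sidecar is a small RDF/XML file saved next to the image. The exact fields VisionTagger writes may differ; this sketch uses the standard Dublin Core keyword and caption fields that cataloging apps read:

```xml
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:dc="http://purl.org/dc/elements/1.1/">
      <!-- keywords go in dc:subject as an unordered bag -->
      <dc:subject>
        <rdf:Bag>
          <rdf:li>portrait</rdf:li>
          <rdf:li>golden hour</rdf:li>
        </rdf:Bag>
      </dc:subject>
      <!-- the caption goes in dc:description -->
      <dc:description>
        <rdf:Alt>
          <rdf:li xml:lang="x-default">A portrait at sunset on the beach.</rdf:li>
        </rdf:Alt>
      </dc:description>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
```

Because the sidecar sits beside the original file, the image itself is never modified.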
Examples
System requirements
- macOS Tahoe 26.0 or later
- Apple Silicon required (M1 or later)
- For optimal performance with larger models, 16 GB of RAM or more is recommended
- Model storage: plan for ~4–8 GB per model (downloaded locally)
One-Time Purchase
VAT included (except US & CA)
Secure payment via FastSpring