
gemma-tuner-multimodal
Fine-tune Gemma 4 and 3n with audio, images, and text on Apple Silicon, using PyTorch and Metal Performance Shaders.
The Lens
Gemma Tuner lets you fine-tune Google's Gemma 4 and 3n models on your Mac, no cloud GPU required. Text, images, and audio are all handled via Apple Silicon's MPS backend. You bring a CSV of training data, point the wizard at a HuggingFace checkpoint, and watch the run on your local GPU with a real-time visualizer showing loss curves, attention heatmaps, and memory pressure.
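The core of running training on a Mac GPU is PyTorch's MPS backend. A minimal sketch of the device selection such a tool performs (the `Linear` model is a stand-in for a real Gemma checkpoint, not the tool's actual code):

```python
import torch

def pick_device() -> torch.device:
    # Prefer Apple's Metal Performance Shaders backend when available,
    # falling back to CPU so the same script runs on any machine.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(8, 2).to(device)   # stand-in for a Gemma model
x = torch.randn(4, 8, device=device)       # stand-in for a training batch
out = model(x)
```

The CPU fallback matters: `torch.backends.mps.is_available()` returns `False` under Rosetta or on non-Mac hardware, and explicit fallback keeps the script portable.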
The 2B and 4B parameter models are the sweet spot for consumer hardware. 16GB RAM minimum, 32GB recommended. It streams training data from Google Cloud Storage or BigQuery for datasets larger than your SSD. Exports land in HuggingFace SafeTensors format with guides for Core ML and GGUF conversion if you want to deploy on-device.
Solo ML practitioners get local fine-tuning without paying $2-5/hr for cloud GPUs. Small teams prototyping custom Gemma models can iterate locally before scaling to cloud training. The wizard CLI makes the setup approachable even if you're not a PyTorch expert.
The catch: Gemma only. No Llama, no Mistral, no other model families. Larger Gemma weights (26B+) are not supported. Audio fine-tuning on non-Mac platforms requires CUDA. And you still need a HuggingFace account with Gemma's license accepted before you can download the weights.
Free vs Self-Hosted vs Paid
## Free Tier
Fully free. Everything. Open source, no restrictions. Bring your own HuggingFace account (free, Gemma license acceptance required).
## Self-Hosted
Runs entirely on your Mac. Python 3.10+, native arm64 Python, PyTorch MPS. No server infrastructure needed. Optional GCS/BigQuery streaming requires a GCP account.
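The self-hosted requirements above can be verified from a terminal before installing anything. A quick sanity check (assuming `python3` is on your PATH):

```shell
# The tool requires Python 3.10 or newer.
python3 -c 'import sys; assert sys.version_info >= (3, 10), sys.version'

# "arm64" means a native Apple Silicon interpreter; "x86_64" means the
# interpreter is running under Rosetta, where the MPS backend is unavailable.
python3 -c 'import platform; print(platform.machine())'
```

If the second command prints `x86_64` on an Apple Silicon Mac, reinstall Python from a native arm64 build before installing PyTorch.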
## Paid Alternatives
OpenAI fine-tuning API ($8/1M training tokens), Google Vertex AI custom models ($2-5/hr GPU), AWS Bedrock custom models (per-hour GPU). Gemma Tuner eliminates the cloud GPU cost entirely for supported model sizes.
Free local fine-tuning on Apple Silicon. Saves $2-5/hr vs cloud GPU alternatives.
License: MIT License
Use freely, including commercial. Just keep the license.
Commercial use: ✓ Yes
About
- Owner
- Matt Mireles (User)
- Stars
- 1,321
- Forks
- 90
Explore Further
More tools in the directory
openclaw — 359.0k ★
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

claw-code — 184.9k ★
The repo is finally unlocked. Enjoy the party! The fastest repo in history to surpass 100K stars ⭐. Join Discord: https://discord.gg/5TUQKqFWd Built in Rust using oh-my-codex.

n8n — 184.4k ★
Fair-code workflow automation with native AI capabilities