CRMA Fine-Tuner — Free LLM fine-tuning app with built-in gradient stability (no GPU needed)

Hey community!

I built CRMA Fine-Tuner, a Streamlit-powered app for fine-tuning LLMs on your own datasets — with a stability layer baked in that most fine-tuning tools don’t have.

What it does:

  • Upload your instruction dataset (CSV or JSONL)
  • Fine-tune TinyLlama on your data in under 10 minutes
  • Download your LoRA adapter ready to use
  • No GPU setup, no infrastructure to manage — everything happens through the web UI
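
For the upload step, a minimal sketch of what an instruction dataset can look like in both accepted formats. The column names `instruction`/`response` are an assumption on my part — they follow a common instruction-tuning convention, but the exact headers the app expects may differ:

```python
import json, csv, io

# Hypothetical example rows -- "instruction"/"response" is a common
# convention for instruction datasets; the app's expected column
# names may differ.
rows = [
    {"instruction": "Summarize: The cat sat on the mat.", "response": "A cat sat on a mat."},
    {"instruction": "Translate to French: Hello.", "response": "Bonjour."},
]

# JSONL: one JSON object per line
jsonl = "\n".join(json.dumps(r) for r in rows)

# CSV: a header row plus one record per example
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["instruction", "response"])
writer.writeheader()
writer.writerows(rows)

print(jsonl)
print(buf.getvalue())
```

Either file, saved as `data.jsonl` or `data.csv`, would then be uploaded through the app's file picker.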

The interesting technical part:

During testing on Mistral-7B, I found a reproducible gradient-norm spike at step 44 (grad norm ≈ 15.28 vs. a normal ~1.0) that silently widens the final loss gap by 20.5% with plain LoRA. CRMA's proprietary stability layer all but eliminates it:

  • Peak gradient norm: −52.7% vs plain LoRA
  • Mistral loss gap closed: 20.5% → 1.77%
  • Spectral norm of the mixing matrix held at 1.000000 (to six decimal places) throughout training
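
The stability layer itself is proprietary, but the spectral-norm constraint in the last bullet can be illustrated in isolation. This is my own minimal sketch, not CRMA's implementation: rescaling a (hypothetical) mixing matrix by its largest singular value pins its spectral norm to 1, which bounds how much the layer can amplify gradients passing through it:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))   # stand-in for a mixing matrix

# Largest singular value of W (the spectral norm)
sigma = np.linalg.norm(W, ord=2)

# Rescale so the spectral norm is exactly 1: the layer can no
# longer stretch any input vector, which caps gradient growth.
W_hat = W / sigma

print(np.linalg.norm(W_hat, ord=2))  # 1.0 (up to float error)
```

In a real training loop this rescaling would be reapplied after each optimizer step (or enforced via a power-iteration estimate, as in standard spectral normalization) so the constraint holds throughout training.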

Try it free:

Would love feedback from the community — especially if you’ve hit similar stability issues during fine-tuning. Happy to discuss the approach!

— Kiran N.