Hey everyone! I’m Andrew, the builder behind Monostate. Some of you may know me from AITraining (Open Source CLI Trainer) — Monostate is the next step. I created AITraining because I was tired of trainer boilerplate. But once I had the models, I realized the rest of the workflow was just as fragmented – benchmarking meant a different tool, deployment meant SSH-ing into GPU boxes, and comparing models meant juggling notebooks. Every ML team I spoke to had the same problem: 5+ disconnected tools duct-taped together.
Monostate puts it all in one place:
– No-code fine-tuning – SFT, DPO, RLHF, reward modeling. Configure everything in the UI; no training scripts.
– Multi-model benchmarking – compare accuracy, latency, and cost across commercial and open-source models.
– One-click GPU deployment – A100s to H100s with autoscaling. No SLURM, no cloud provider negotiations.
– Visual pipeline builder – chain multiple models with drag and drop (coming soon).
It supports LoRA, QLoRA, and full-parameter training on Llama, Mistral, Phi, Qwen, and more. A free tier is available to get you started.
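For anyone wondering why LoRA makes fine-tuning so much cheaper than full-parameter training, here’s a minimal numpy sketch of the core idea — purely illustrative, not Monostate’s implementation:

```python
import numpy as np

# LoRA in a nutshell: instead of updating a full d_out x d_in weight
# matrix W, you train a low-rank update delta_W = (alpha / r) * B @ A,
# where A is (r x d_in) and B is (d_out x r). W stays frozen.
d_in, d_out, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01     # trainable, small random init
B = np.zeros((d_out, r))                      # trainable, zero init

def lora_forward(x):
    # Base path plus scaled low-rank path. Because B starts at zero,
    # the adapted model is exactly the base model at initialization.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)    # identical at init

# Trainable parameters: r * (d_in + d_out) for LoRA
# vs d_in * d_out for full fine-tuning.
print(r * (d_in + d_out), d_in * d_out)       # 1024 vs 4096 here
```

At realistic model sizes the gap is far larger — a 4096×4096 attention projection has ~16.8M weights, while a rank-8 adapter on it trains only ~65K. QLoRA pushes the same idea further by keeping the frozen base weights in 4-bit precision.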
Would love feedback — what’s the most painful part of your current ML workflow? We’re actively building on what customers tell us.
This is an open beta, so expect things to break. That’s why my WhatsApp and email are in the app — if something goes wrong, get in touch and I’ll help you personally.