Tahuna

The future is not a single intelligent blob.
It's billions of species of models.

The future of artificial intelligence is a wide field of specialized systems. Training them should be easily accessible.

Try it
Works with your stack
PyTorch · HuggingFace · Unsloth · TRL · Prime Intellect
Install Tahuna

A gentle control plane
for post-training

Post-training is the new frontier, but the tooling hasn't caught up yet.

Tahuna is a gentle control plane for post-training that keeps your code and your loop intact while handling provisioning, sync, dependencies, monitoring, and artifacts.

tahuna train
>$ tahuna init .
config def
entrypoint detected, environment scaffolded
>$ tahuna sync
syncing code, data, env config
>$ tahuna train
materializing
finetuning minimax2.5
streaming metrics
runtime/warden · wandb-compatible metrics

curl -fsSL https://raw.githubusercontent.com/Pazuzzu/tahuna-cli/main/scripts/install-tahuna.sh | bash

The Layers

Explore
the core loop

You keep the training loop. Tahuna handles everything around it in four clear steps.

Init

tahuna init .

Tahuna scans your project, detects your framework, identifies your entrypoint and data, and scaffolds anything missing.

Project-aware · No boilerplate

Align

tahuna sync

Your code and data are synced incrementally: only changed files travel, and every run is pinned to exact snapshots.

Incremental sync · Delta-only uploads

Train

tahuna train

Tahuna provisions the GPU, materializes the workspace, installs dependencies, and runs your training entrypoint.

Logs stream live · Your loop stays yours

Persist

runs + artifacts

Your checkpoints, models, and outputs are persisted. Run history stays queryable, and every experiment can be traced back to exact inputs.

Artifacts persisted · Fully reproducible