# Migrate from Axolotl to Soup CLI

Axolotl is a popular fine-tuning framework. Soup CLI can import Axolotl configs directly with `soup migrate`.
## One-line migration

```bash
soup migrate --from axolotl path/to/axolotl_config.yml
```

## Field mapping
| Axolotl | Soup CLI |
|---|---|
| `base_model` | `base.model` |
| `datasets[0].path` | `data.train` |
| `datasets[0].type` | `data.format` |
| `sequence_len` | `training.max_seq_length` |
| `micro_batch_size` | `training.batch_size` |
| `gradient_accumulation_steps` | `training.gradient_accumulation_steps` |
| `num_epochs` | `training.epochs` |
| `adapter: lora` / `qlora` | `training.lora.enabled: true` (plus `quant: 4bit` for qlora) |
| `lora_r` / `lora_alpha` | `training.lora.r` / `training.lora.alpha` |
| `load_in_4bit: true` | `training.quant: 4bit` |
| `flash_attention: true` | (auto-enabled in Soup) |
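As a rough mental model, the table above can be sketched as a small conversion function. This is an illustrative sketch only, not Soup's actual importer code; it handles just the keys listed in the table:

```python
def axolotl_to_soup(ax: dict) -> dict:
    """Sketch of the field mapping table; the real `soup migrate` may differ."""
    # Soup takes a single train dataset, so only datasets[0] is mapped here.
    ds = (ax.get("datasets") or [{}])[0]
    soup = {
        "base": {"model": ax.get("base_model")},
        "data": {"train": ds.get("path"), "format": ds.get("type")},
        "training": {
            "max_seq_length": ax.get("sequence_len"),
            "batch_size": ax.get("micro_batch_size"),
            "gradient_accumulation_steps": ax.get("gradient_accumulation_steps"),
            "epochs": ax.get("num_epochs"),
        },
    }
    adapter = ax.get("adapter")
    if adapter in ("lora", "qlora"):
        soup["training"]["lora"] = {
            "enabled": True,
            "r": ax.get("lora_r"),
            "alpha": ax.get("lora_alpha"),
        }
    # qlora implies 4-bit quantization, as does an explicit load_in_4bit.
    if adapter == "qlora" or ax.get("load_in_4bit"):
        soup["training"]["quant"] = "4bit"
    # flash_attention is intentionally dropped: Soup auto-enables it.
    return soup
```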
## Example conversion

Axolotl:

```yaml
base_model: mistralai/Mistral-7B-v0.3
load_in_4bit: true
adapter: qlora
datasets:
  - path: tatsu-lab/alpaca
    type: alpaca
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
lora_r: 16
lora_alpha: 32
flash_attention: true
```

Soup CLI (after `soup migrate`):

```yaml
base:
  model: mistralai/Mistral-7B-v0.3
  task: sft
data:
  train: tatsu-lab/alpaca
  format: alpaca
training:
  backend: unsloth
  quant: 4bit
  epochs: 3
  learning_rate: 2.0e-4
  batch_size: 2
  gradient_accumulation_steps: 8
  max_seq_length: 2048
  lora:
    enabled: true
    r: 16
    alpha: 32
```

## What you gain
- One CLI for train / chat / eval / serve / export instead of separate tools
- Native Unsloth backend (automatic 2–5× speedup on supported models)
- GGUF / Ollama / vLLM / SGLang export and serving built-in
- Loss watchdog, curriculum learning, sample packing, freeze training
## Dry-run

```bash
soup migrate --from axolotl axolotl.yml --dry-run
```

## Related
- [Migrate from LLaMA-Factory](/docs/migrate-from-llamafactory)
- [Training methods](/docs/training)