# Getting Started

## Installation

Install Soup CLI from PyPI:
```bash
# From PyPI (recommended)
pip install soup-cli

# Or from GitHub (latest dev)
pip install git+https://github.com/MakazhanAlpamys/Soup.git
```

## Requirements
- Python 3.9 or higher
- CUDA-compatible GPU (recommended)
- 16GB+ RAM for most models
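You can sanity-check these prerequisites before installing. A quick Python sketch (probing CUDA via `torch` is an assumption about Soup's underlying stack; the built-in `soup doctor` command described below does this properly):

```python
import importlib.util
import sys

# The docs require Python 3.9 or higher: fail fast on older interpreters.
assert sys.version_info >= (3, 9), f"Python 3.9+ required, found {sys.version}"

# A CUDA GPU is recommended but optional; probe only if torch is installed.
# (Checking via torch is an assumption about Soup's stack.)
if importlib.util.find_spec("torch") is not None:
    import torch
    print("CUDA available:", torch.cuda.is_available())
else:
    print("torch not installed; skipping GPU check")
```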
## Optional Extras

```bash
pip install 'soup-cli[fast]'        # Unsloth 2-5x training speedup
pip install 'soup-cli[serve]'       # FastAPI inference server
pip install 'soup-cli[serve-fast]'  # vLLM backend
pip install 'soup-cli[sglang]'      # SGLang backend
pip install 'soup-cli[eval]'        # lm-evaluation-harness
pip install 'soup-cli[data]'        # MinHash deduplication
pip install 'soup-cli[vision]'      # Pillow for vision fine-tuning
pip install 'soup-cli[audio]'       # librosa + soundfile
pip install 'soup-cli[qat]'         # Quantization-aware training
pip install 'soup-cli[liger]'       # Liger Kernel fused ops
pip install 'soup-cli[onnx]'        # ONNX export
pip install 'soup-cli[tensorrt]'    # TensorRT-LLM export
pip install 'soup-cli[ui]'          # Web UI dashboard
pip install 'soup-cli[wandb]'       # W&B logging
pip install 'soup-cli[deepspeed]'   # ZeRO distributed training
pip install 'soup-cli[ring-attn]'   # Ring FlashAttention
```

## Quick Start
### 1. Create a config

```bash
# Interactive wizard
soup init

# Or use a template
soup init --template chat
```

### 2. Train
```bash
soup train --config soup.yaml
```

Soup automatically detects your GPU, sets an optimal batch size, configures LoRA, and begins training.
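`soup init` writes the config for you, so you rarely author it by hand. For orientation only, a minimal `soup.yaml` might look roughly like the sketch below; every field name here is an illustrative assumption, not the tool's documented schema — run `soup init` to generate the real thing.

```yaml
# Illustrative sketch only — field names are assumptions.
# Run `soup init` to generate a config with the real schema.
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
dataset: ./data/train.jsonl
output: ./output
```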
### 3. Chat with your model

```bash
soup chat --model ./output
```

### 4. Push to HuggingFace
```bash
soup push --model ./output --repo your-username/my-model
```

### 5. Export and serve
```bash
# Export to GGUF for Ollama / llama.cpp
soup export --model ./output --format gguf --quant q4_k_m

# Start an OpenAI-compatible server
soup serve --model ./output --port 8000
```
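Because the server is OpenAI-compatible, any OpenAI client can talk to it. A dependency-free Python sketch, assuming the default port from the command above; the `model` field value is an assumption, so adjust it to whatever your server reports:

```python
import json
import urllib.request

def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """Send one chat-completion request to an OpenAI-compatible server."""
    payload = {
        # The "model" value is an assumption; check what the server expects.
        "model": "./output",
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Requires a running `soup serve` instance:
# print(chat("Hello!"))
```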
## Health Check

Check your environment for compatibility issues:

```bash
soup doctor
```

Shows your Python version, GPU availability, all dependency versions, and fix suggestions.
## Quick Demo

Run a complete demo in one command:

```bash
soup quickstart            # Creates sample data + config + trains TinyLlama
soup quickstart --dry-run  # Just create files without training
```