Fine-tuning
Multi-GPU training in repeatable environments, just a function call away.
"Training with multiple GPUs"
Fine-tune models on up to 8 GPUs with your preferred sharding strategy. A100 80 GB nodes give you access to 640 GB of VRAM.
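The sharding itself can stay plain PyTorch. Below is a minimal sketch using FSDP (one common sharding strategy), assuming a single 8-GPU node launched with `torchrun --nproc_per_node=8 train.py`; the model and training loop are placeholders for your own.

```python
import torch
import torch.distributed as dist
from torch import nn, optim
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # torchrun sets RANK / WORLD_SIZE / LOCAL_RANK in the environment.
    dist.init_process_group("nccl")
    rank = dist.get_rank()  # equals local rank on a single node
    torch.cuda.set_device(rank)

    # Stand-in model; swap in the model you are fine-tuning.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
    # FSDP shards parameters, gradients, and optimizer state across the 8 GPUs.
    model = FSDP(model.cuda())

    optimizer = optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):  # placeholder training loop
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```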
Code-defined environments
Define environments in code so your fine-tuning runs are reproducible across your entire team, with no cumbersome Jupyter notebooks to wrangle!
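As an illustration, a code-defined environment could look like the sketch below. The `dibtrun` module and the `Image`, `App`, and `gpu=` names are hypothetical, chosen for this example rather than taken from a confirmed API; the point is that the base image, dependencies, and hardware spec live in version-controlled code that every teammate runs identically.

```python
import dibtrun  # hypothetical client library

# Pin the base image and dependencies in code, so every run on the team
# builds the exact same container.
image = dibtrun.Image.from_registry(
    "nvidia/cuda:12.1.0-runtime-ubuntu22.04"
).pip_install("torch==2.3.0", "transformers==4.41.0", "peft==0.11.0")

app = dibtrun.App("finetune", image=image)

@app.function(gpu="A100-80GB:8")  # hypothetical: request an 8x A100 node
def finetune(config: dict) -> dict:
    ...  # training loop runs inside the pinned environment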
Resources as needed
"Initiate fine-tuning runs on-demand from your application or terminal, and incur charges only for GPUs when they are actively utilized. Simplify the definition and execution of hyperparameter sweeps
Storage for models
Store fine-tuned weights or LoRA adapters in a Dibtrun Volume, as easily as writing to local disk. Volumes are optimized for high read throughput, keeping cold-start times low.
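Because a Volume behaves like a local disk inside the container, persisting an adapter is an ordinary file write. A minimal sketch, assuming the Volume is mounted at the hypothetical path `/vol`:

```python
from pathlib import Path

import torch

# Hypothetical mount point of the Volume inside the container.
ADAPTER_DIR = Path("/vol/adapters/run-001")

def save_adapter(state_dict: dict[str, torch.Tensor]) -> None:
    """Persist LoRA adapter weights with a plain filesystem write."""
    ADAPTER_DIR.mkdir(parents=True, exist_ok=True)
    torch.save(state_dict, ADAPTER_DIR / "adapter_model.bin")

# On a later cold start, the high-read-throughput Volume serves the
# weights back quickly:
# state_dict = torch.load(ADAPTER_DIR / "adapter_model.bin")
```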