Pretraining Large Language Models
Researchers or developers can pretrain LLMs from scratch on custom datasets using scalable multi-GPU setups.
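The core of pretraining is next-token prediction with a cross-entropy loss. Below is a minimal toy sketch of that objective, assuming nothing about any particular framework: a bigram logit table trained by plain SGD with numpy stands in for the transformer, and all names are illustrative. A real run would use a transformer, a tokenizer, and multi-GPU data parallelism, but the loss being minimized is the same.

```python
import numpy as np

# Toy pretraining sketch (illustrative, not any specific library's API):
# next-token prediction with a bigram logit table, trained by SGD on
# cross-entropy -- the same objective real LLM pretraining uses.
rng = np.random.default_rng(0)
text = "hello world"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = [stoi[ch] for ch in text]
V = len(vocab)

logits = np.zeros((V, V))  # trainable table: logits[prev] -> scores for next token

def loss_and_grad(logits):
    grad = np.zeros_like(logits)
    loss = 0.0
    n = len(data) - 1
    for prev, nxt in zip(data, data[1:]):
        z = logits[prev]
        p = np.exp(z - z.max())
        p /= p.sum()                 # softmax over next-token scores
        loss += -np.log(p[nxt])      # cross-entropy on the true next token
        g = p.copy()
        g[nxt] -= 1.0                # d(loss)/d(logits row)
        grad[prev] += g
    return loss / n, grad / n

for step in range(200):
    loss, grad = loss_and_grad(logits)
    logits -= 1.0 * grad             # plain SGD step
```

After training, the loss falls well below the uniform-distribution baseline of log(V), which is what "the model is learning the data distribution" means at this toy scale.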
Finetuning with LoRA
Users can finetune existing models with parameter-efficient LoRA techniques to adapt them to specific tasks or domains.
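The idea behind LoRA is that instead of updating a frozen pretrained weight W directly, you train a low-rank product B @ A alongside it, cutting the trainable parameter count from d_out * d_in to r * (d_in + d_out). A minimal numpy sketch, with all shapes and names chosen for illustration:

```python
import numpy as np

# LoRA sketch (illustrative shapes, not any specific library's API).
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4
alpha = 8.0                 # LoRA scaling hyperparameter
scale = alpha / r

W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    # frozen base path plus the low-rank adapter path
    return x @ W.T + (x @ A.T) @ B.T * scale

def merged_weight():
    # for deployment, the adapter can be folded back into W
    return W + scale * (B @ A)
```

With B initialized to zero the adapter starts as a no-op, so finetuning begins exactly at the pretrained model's behavior; only A and B (here 512 parameters instead of 4096) receive gradient updates.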
Deploying Custom LLMs
After training or finetuning, models can be deployed and tested locally via command-line interfaces for inference.
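A local command-line inference entry point can be as small as an argument parser wrapped around a decode loop. The sketch below uses a dummy "model" (it just repeats the last character) purely to show the CLI shape; a real deployment would load trained weights and a tokenizer in its place, and every name here is hypothetical:

```python
import argparse

# Toy local-inference CLI (illustrative; the "model" is a stand-in).
def generate(prompt: str, max_new_tokens: int = 8) -> str:
    # placeholder decode loop: a real model would sample next tokens here
    out = list(prompt)
    for _ in range(max_new_tokens):
        out.append(out[-1])  # dummy "next token": repeat the last character
    return "".join(out)

def main(argv=None):
    parser = argparse.ArgumentParser(description="Toy local inference CLI")
    parser.add_argument("prompt")
    parser.add_argument("--max-new-tokens", type=int, default=8)
    args = parser.parse_args(argv)
    print(generate(args.prompt, args.max_new_tokens))

if __name__ == "__main__":
    main()
```

Run as `python infer.py "hello" --max-new-tokens 4` (filename hypothetical); keeping `generate` separate from `main` lets the same decode loop be tested or served behind an API later.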