Fine-tuning using (Q)LoRA
You can use the following command to train Vicuna-7B with QLoRA using ZeRO2. Note that ZeRO3 is not currently supported with QLoRA, but ZeRO3 does support LoRA, which has a reference configuration under playground/deepspeed_config_s3.json. To use QLoRA, you must have bitsandbytes>=0.39.0 and transformers>=4.30.0 installed.
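A minimal sketch of such a command is shown below. The script path fastchat/train/train_lora.py, the ZeRO2 config playground/deepspeed_config_s2.json, and the model/data paths are assumptions or placeholders; adjust them to your repository layout and data.

```bash
# Sketch: QLoRA fine-tuning of Vicuna-7B with DeepSpeed ZeRO2 (paths are placeholders)
deepspeed fastchat/train/train_lora.py \
    --model_name_or_path ~/model_weights/vicuna-7b \
    --data_path data/dummy_conversation.json \
    --output_dir ./checkpoints \
    --q_lora True \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --fp16 True \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --learning_rate 2e-5 \
    --model_max_length 2048 \
    --deepspeed playground/deepspeed_config_s2.json
```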
Fine-tuning Vicuna-7B with Local NPUs
You can use the following command to train Vicuna-7B with 8 x 910B (60GB). Use --nproc_per_node to specify the number of NPUs.
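A sketch of the launch command is given below; the training script path fastchat/train/train.py and the model/data paths are assumptions or placeholders, and the remaining hyperparameters are illustrative defaults rather than tuned values.

```bash
# Sketch: distributed training on 8 NPUs; --nproc_per_node sets the NPU count
torchrun --nproc_per_node=8 --master_port=20001 fastchat/train/train.py \
    --model_name_or_path ~/model_weights/vicuna-7b \
    --data_path data/dummy_conversation.json \
    --output_dir output_vicuna \
    --fp16 True \
    --num_train_epochs 3 \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 2 \
    --learning_rate 2e-5 \
    --model_max_length 2048 \
    --gradient_checkpointing True
```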