Wednesday, May 1, 2024

Fine-Tune Any 7B LLM on a Single 8GB GPU Locally

This video is a hands-on, step-by-step tutorial showing how to fine-tune any 7B model locally on a single 8GB GPU, using XTuner with QLoRA and DeepSpeed.
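Why does a 7B model fit in 8 GB at all? Some back-of-envelope arithmetic (a rough sketch, not exact figures — real usage also includes activations, LoRA optimizer state, and CUDA overhead) shows the effect of QLoRA's 4-bit quantization:

```python
# Rough memory estimate for a 7B-parameter model.
# Illustrative only: ignores activations, optimizer state, and framework overhead.
PARAMS = 7e9

fp16_gb = PARAMS * 2 / 1e9    # 2 bytes per weight in fp16
nf4_gb = PARAMS * 0.5 / 1e9   # ~0.5 bytes per weight in 4-bit NF4

print(f"fp16 weights: ~{fp16_gb:.1f} GB")   # ~14.0 GB, already beyond an 8 GB card
print(f"4-bit weights: ~{nf4_gb:.1f} GB")   # ~3.5 GB, leaving room for LoRA training
```

With the base weights frozen in 4-bit and only small LoRA adapter matrices trained, the whole job stays inside an 8 GB budget.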


Code:

# Create and activate an isolated Python 3.10 environment
conda create --name xtuner-env python=3.10 -y

conda activate xtuner-env


# Install XTuner with DeepSpeed support
pip install -U 'xtuner[deepspeed]'


# List the built-in fine-tuning configs that ship with XTuner
xtuner list-cfg
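The full config list is long. XTuner's CLI also accepts a pattern filter and can copy a built-in config locally for editing; the commands below are a sketch (flag names per recent XTuner releases — check `xtuner --help` if yours differs):

```shell
# Show only configs whose names match a pattern
xtuner list-cfg -p internlm2

# Copy a built-in config into the current directory so you can
# edit the dataset, batch size, or LoRA settings before training
xtuner copy-cfg internlm2_chat_7b_qlora_oasst1_e3 .
```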


# Fine-tune InternLM2-Chat-7B with QLoRA on the oasst1 dataset,
# using DeepSpeed ZeRO-2 to cut optimizer memory
xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2


# Convert the saved .pth checkpoint into a HuggingFace-format adapter
xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
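As an illustration of how the placeholders get filled in (the checkpoint path and output directory below are hypothetical — yours will depend on your run's `work_dirs` output):

```shell
# Hypothetical example: convert the checkpoint from the training run above
xtuner convert pth_to_hf internlm2_chat_7b_qlora_oasst1_e3 \
    ./work_dirs/internlm2_chat_7b_qlora_oasst1_e3/iter_500.pth \
    ./hf_adapter
```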


# Chat with the fine-tuned model: load the base LLM plus the trained adapter
xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]
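If you want a single standalone model instead of base-plus-adapter, XTuner also provides a merge subcommand that folds the LoRA weights into the base model (a sketch — verify the exact syntax with `xtuner convert merge --help` on your version):

```shell
# Merge the adapter into the base model and save a standalone copy
xtuner convert merge ${NAME_OR_PATH_TO_LLM} ${NAME_OR_PATH_TO_ADAPTER} ${SAVE_PATH}
```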
