Saturday, August 26, 2023

Fine-Tune Any Model Locally or in AWS SageMaker on Your Own Dataset

If your dataset is in PDF or any other format and you want to fine-tune Llama 2 or any other LLM on that custom dataset, this video will help. The commands below cover installation, Hugging Face login, training, and the expected train.csv layout; a sketch for converting a PDF into that layout follows the sample data.




Commands Used:

!pip install transformers

!pip install autotrain-advanced

!pip install huggingface_hub

!autotrain setup --update-torch


# Get a Hugging Face access token from https://huggingface.co/ (Settings -> Access Tokens)

from huggingface_hub import notebook_login

notebook_login()
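
If you are running in a plain script instead of a notebook (for example on a SageMaker instance), the same package also offers a programmatic login; the token below is a placeholder:

from huggingface_hub import login

login(token="hf_xxx")  # placeholder; paste your own token here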


!autotrain llm --train --project_name customllm --model TinyPixel/Llama-2-7B-bf16-sharded --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048
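
This runs supervised fine-tuning (--trainer sft) of a LoRA adapter (--use_peft) on the text column of train.csv in the current directory (--data_path .), loading the base model in 4-bit (--use_int4) with sequences up to 2048 tokens. Once training finishes you can test the adapter. The sketch below is not from the video; it assumes the adapter was written to ./customllm (autotrain's output folder for --project_name customllm) and that the peft and accelerate packages are installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "customllm")  # attach the trained LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")

# Prompt in the same ### Instruction / ### Response layout used in train.csv
prompt = "### Instruction:\nHow to learn AI\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))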


train.csv (a single column named text; each row is one quoted, multi-line cell holding an instruction/response pair):

text
"### Instruction:
How to learn AI

### Response:
Read and practice"
"### Instruction:
How to relax

### Response:
Exercise in the morning"
"### Instruction:
How to sleep well

### Response:
Sleep in a dark and quiet room"
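
Since the post starts from a PDF, here is a minimal sketch for producing train.csv from one. It assumes the pypdf package and a hypothetical notes.pdf; the pairing step is a stub, because turning raw text into good instruction/response pairs is dataset-specific. csv.writer quotes the multi-line cells automatically:

import csv
from pypdf import PdfReader

reader = PdfReader("notes.pdf")  # hypothetical input file
raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Stub: replace with your own logic for building (instruction, response) pairs
pairs = [("Summarize the document", raw_text[:500])]

with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])  # single column, as autotrain expects
    for instruction, response in pairs:
        writer.writerow([f"### Instruction:\n{instruction}\n\n### Response:\n{response}"])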
