From 8b7277644b9ca1ede7bb5d980c206da27800e2d1 Mon Sep 17 00:00:00 2001
From: cjm <490083538@qq.com>
Date: Tue, 21 May 2024 17:58:05 +0800
Subject: [PATCH] update finetune readme

---
 finetune/readme.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/finetune/readme.md b/finetune/readme.md
index bc6c69b..1dd1414 100644
--- a/finetune/readme.md
+++ b/finetune/readme.md
@@ -66,5 +66,8 @@ To launch your training, run the following script:
 sh finetune_ds.sh
 ```
 
+Note that Llama3 uses different chat templates for training and inference. We modified the chat_template for training, so please take care to restore the original chat_template when running inference on a fine-tuned checkpoint.
+
+
 #### Customizing Hyperparameters
 To tailor the training process according to your specific requirements, you can adjust various hyperparameters. For comprehensive documentation on available hyperparameters and their functionalities, you can refer to the [official Transformers documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). Experimentation and fine-tuning of these parameters are essential for achieving optimal model performance tailored to your specific task and dataset.
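The README note added by this patch asks users to restore the original chat_template before running inference on a trained checkpoint. As a minimal sketch of that restore step (assuming the template is stored as the `chat_template` field of the checkpoint's `tokenizer_config.json`, and using placeholder template strings rather than the actual Llama3 templates), it could look like:

```python
import json
import os
import tempfile

def restore_chat_template(ckpt_config_path, original_template):
    """Overwrite the training-time chat_template in a checkpoint's
    tokenizer_config.json with the original inference template."""
    with open(ckpt_config_path) as f:
        cfg = json.load(f)
    cfg["chat_template"] = original_template
    with open(ckpt_config_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg

# Demo on a throwaway config file; the template strings are placeholders.
tmp = os.path.join(tempfile.mkdtemp(), "tokenizer_config.json")
with open(tmp, "w") as f:
    json.dump({"chat_template": "TRAINING_TEMPLATE"}, f)

cfg = restore_chat_template(tmp, "ORIGINAL_INFERENCE_TEMPLATE")
print(cfg["chat_template"])  # the restored template
```

Equivalently, one could reload the base model's tokenizer with `AutoTokenizer.from_pretrained` and copy its `chat_template` attribute onto the fine-tuned tokenizer before saving; the file-level edit above is just the most self-contained illustration.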