update finetuning code

qianyu chen
2024-05-08 09:51:34 +08:00
committed by GitHub
parent 9f345c4020
commit f6cbd4fb25
9 changed files with 0 additions and 37 deletions

finetune/__init__.py

@@ -1,13 +1,3 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright @2024 AI, ZHIHU Inc. (zhihu.com)
#
# @author: wangchongyi <wangchongyi@zhihu.com>
# @author: chenqianyu <cqy1195@zhihu.com>
# @date: 2024/5/06
#
import os
import math
import json

@@ -1,11 +1,3 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright @2024 AI, ZHIHU Inc. (zhihu.com)
#
# @author: chenqianyu <cqy1195@zhihu.com>
# @date: 2024/5/03
#
import os
import glob
import json

@@ -64,6 +64,3 @@ sh finetune_ds.sh
```
#### Customizing Hyperparameters
To tailor training to your specific task, you can adjust various hyperparameters. For comprehensive documentation of the available hyperparameters, refer to the [official Transformers documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). Experimenting with these parameters is essential for reaching optimal performance on your task and dataset.
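For example, a minimal sketch of overriding a few common hyperparameters via `TrainingArguments` (all values below are illustrative placeholders, not the settings shipped in `finetune_ds.sh`):
```python
from transformers import TrainingArguments

# Illustrative values only -- tune them for your own task and hardware.
training_args = TrainingArguments(
    output_dir="output/finetune",       # where checkpoints are written (placeholder path)
    num_train_epochs=1,                 # total passes over the training set
    per_device_train_batch_size=1,      # batch size per GPU
    gradient_accumulation_steps=8,      # effective batch = 1 * 8 * num_gpus
    learning_rate=1e-5,                 # peak learning rate for the scheduler
    warmup_ratio=0.03,                  # fraction of steps spent warming up
    lr_scheduler_type="cosine",         # decay schedule after warmup
    logging_steps=10,                   # log the loss every N optimizer steps
    save_strategy="epoch",              # checkpoint at each epoch boundary
    bf16=True,                          # mixed precision on GPUs that support bfloat16
)
```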
### LoRA finetuning
**This part is still unfinished, and we will complete it as soon as possible.**
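Until then, here is a minimal sketch of how LoRA finetuning is typically wired up with the `peft` library; the base model path and the `target_modules` names below are assumptions and depend on the actual architecture:
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base model path -- substitute the checkpoint you are finetuning.
model = AutoModelForCausalLM.from_pretrained("path/to/base_model")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied as alpha / r
    lora_dropout=0.05,                    # dropout on the LoRA branch
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)  # freeze base weights, inject adapters
model.print_trainable_parameters()          # only the adapter weights remain trainable
```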

@@ -1,11 +1,3 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright @2024 AI, ZHIHU Inc. (zhihu.com)
#
# @author: chenqianyu <cqy1195@zhihu.com>
# @date: 2024/5/03
#
import torch
import torch.nn as nn
from typing import Tuple, Union, Optional, List, Dict, Any

@@ -1,8 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright @2024 AI, ZHIHU Inc. (zhihu.com)
#
# @author: chenqianyu <cqy1195@zhihu.com>
# @date: 2024/5/02
#