Update LoRA finetuning code (#154)

* update lora tuning

* update lora fine-tuning code

* update finetuning lora code

* lora code

* lora finetuning code

* updating lora finetuning code

* update lora finetuning code

* Update Lora finetuning code

* Update LoRA finetuning code

* Update LoRA finetuning code
qianyu chen
2024-05-27 19:02:59 +08:00
committed by GitHub
parent 2b572c9221
commit 7e12387362
7 changed files with 261 additions and 32 deletions


@@ -13,14 +13,10 @@ class CPMTrainer(Trainer):
             labels = inputs.pop("labels")
         else:
             labels = None
-        vllm_embedding, vision_hidden_states = self.model.get_vllm_embedding(
-            inputs)
-        outputs = self.model.llm(
-            inputs_embeds=vllm_embedding,
-            use_cache=False,
-        )
+        if not self.args.use_lora:
+            outputs = self.model(data=inputs, use_cache=False)
+        else:
+            outputs = self.model.base_model(data=inputs, use_cache=False)
         if labels is not None:
             # Flatten the tokens
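The dispatch pattern introduced by this hunk can be sketched in isolation: when LoRA is enabled, the trainer calls through the adapter wrapper's `base_model` attribute instead of the model itself. The toy classes below are hypothetical stand-ins (not the real `CPMTrainer` or PEFT classes), assuming only that a LoRA wrapper exposes the wrapped model as `.base_model`, as PEFT-style wrappers commonly do.

```python
class ToyModel:
    """Hypothetical stand-in for the full model's forward call."""

    def __call__(self, data, use_cache=False):
        # Return a dummy "logits" entry per input item.
        return {"logits": [len(x) for x in data]}


class ToyLoraWrapper:
    """Hypothetical LoRA adapter that exposes the wrapped model as .base_model."""

    def __init__(self, model):
        self.base_model = model


def compute_outputs(model, inputs, use_lora):
    # Mirrors the branch in the diff: the plain model is called directly,
    # while a LoRA-wrapped model is called via its base_model attribute.
    if not use_lora:
        return model(data=inputs, use_cache=False)
    return model.base_model(data=inputs, use_cache=False)
```

For example, `compute_outputs(ToyModel(), ["ab"], use_lora=False)` and `compute_outputs(ToyLoraWrapper(ToyModel()), ["ab"], use_lora=True)` both reach the same underlying forward pass, which is the point of the branch.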