I fine-tuned LoRA adapters for Qwen2-VL in a 5-fold setup. My aim is to load the 5 LoRA models with the following procedure:
```python
from peft import PeftConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLConfig
import torch

path = "/xxx/saves/qwen2_vl-7b/kgroup_fold_0"
config = PeftConfig.from_pretrained(path)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    config.base_model_name_or_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct")

lora_path = "/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"
model = PeftModel.from_pretrained(model, lora_path.format(fold=0), adapter_name=f"fold_{0}")
for i in range(1, 5):
    print(i)
    model.load_adapter(lora_path.format(fold=i), adapter_name=f"fold_{i}")
```
But it reports a "module is not supported" error:
```
File ~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:322, in LoraModel._create_new_module(lora_config, adapter_name, target, **kwargs)
    317     break
    319 if new_module is None:
    320
    321     # no module could be matched
--> 322     raise ValueError(
    323         f"Target module {target} is not supported. Currently, only the following modules are supported: "
    324         "torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D."
    325     )
    327 return new_module

ValueError: Target module ModuleDict(
  (fold_0): Dropout(p=0.05, inplace=False)
  (fold_1): Dropout(p=0.05, inplace=False)
) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.
```
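My guess at what is going on (based only on the traceback, so take it as an assumption): after the first adapter is injected, each wrapped layer contains a `lora_dropout` ModuleDict, and when the second adapter is loaded its target-module matching also hits those containers instead of only the plain Linear layers. A quick way to inspect the wrapped structure:

```python
# hypothetical diagnostic: list the lora_dropout containers the first adapter added
for name, module in model.named_modules():
    if "lora_dropout" in name:
        print(name, type(module).__name__)  # e.g. ...q_proj.lora_dropout ModuleDict
```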
Following this issue, I tried to ignore the dropout module manually.
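By "ignoring the dropout module manually" I mean something like the sketch below: rewriting `target_modules` in each fold's `adapter_config.json` to an explicit list of Linear layer names before loading, so the injected `lora_dropout` containers are never matched. The projection names are my guess for Qwen2-VL, not copied from my saved config:

```python
# assumed workaround sketch: pin target_modules to plain Linear layers on disk
import json
import os

linear_targets = ["q_proj", "k_proj", "v_proj", "o_proj",
                  "gate_proj", "up_proj", "down_proj"]  # guessed layer names

for fold in range(5):
    cfg_file = os.path.join(lora_path.format(fold=fold), "adapter_config.json")
    with open(cfg_file) as f:
        cfg = json.load(f)
    cfg["target_modules"] = linear_targets
    with open(cfg_file, "w") as f:
        json.dump(cfg, f, indent=2)
```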
However, I would like to use a combination of these LoRAs:

```python
model.add_weighted_adapter(
    adapters=["fold_0", "fold_1"],
    weights=[0.5, 0.5],
    adapter_name="combined",
    combination_type="svd",
)
```
But it also failed, reporting:
System Info
Env info:
As my aim is to merge the LoRAs so that each fold contributes equally, I'm not sure whether I can use the from_pretrained method like this and then add a weight (like 0.2) to the merge_and_unload method.
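For context, this is roughly the end state I am after (a sketch under the assumption that loading all five adapters works; the equal 0.2 weights and the combination_type are my choice):

```python
# assumed sketch: combine the five folds with equal weights, then merge into the base model
adapters = [f"fold_{i}" for i in range(5)]
model.add_weighted_adapter(
    adapters=adapters,
    weights=[0.2] * 5,           # equal contribution from each fold
    adapter_name="combined",
    combination_type="svd",
)
model.set_adapter("combined")
merged_model = model.merge_and_unload()  # merges the active adapter into the base weights
```

Is this the intended way to weight the folds equally, or is passing a weight to merge_and_unload somehow supported?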
Who can help?
@ben
Information
Tasks
Reproduction
path="/xxx/saves/qwen2_vl-7b/kgroup_fold_0"
config = PeftConfig.from_pretrained(path)
model = Qwen2VLForConditionalGeneration.from_pretrained(config.base_model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct")
lora_path="/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"
model = PeftModel.from_pretrained(model, lora_path.format(fold=0), adapter_name=f"fold_{0}")
for i in range(1,5):
print(i)
model.load_adapter(lora_path.format(fold=i), adapter_name=f"fold_{i}")`
Expected behavior
I expect load_adapter to succeed for all five adapters.