load_adapter fails with "Target module ... is not supported" when using Qwen2-VL #2296

bigmouthbabyguo-530 opened this issue Dec 24, 2024 · 2 comments
bigmouthbabyguo-530 commented Dec 24, 2024

System Info

Env info:

  • torch 2.4.0
  • peft 0.11.1
  • transformers 4.46.1

I fine-tuned LoRA adapters for Qwen2-VL with 5-fold cross-validation. My aim is to load the 5 LoRA models with the following procedure:

from peft import PeftConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLConfig
import torch

path = "/xxx/saves/qwen2_vl-7b/kgroup_fold_0"
config = PeftConfig.from_pretrained(path)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    config.base_model_name_or_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct")
lora_path = "/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"
model = PeftModel.from_pretrained(model, lora_path.format(fold=0), adapter_name="fold_0")
for i in range(1, 5):
    print(i)
    model.load_adapter(lora_path.format(fold=i), adapter_name=f"fold_{i}")

But it reports a "module is not supported" error:

File ~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:322, in LoraModel._create_new_module(lora_config, adapter_name, target, **kwargs)
    317                 break
    319         if new_module is None:
    320
    321             # no module could be matched
--> 322             raise ValueError(
    323                 f"Target module {target} is not supported. Currently, only the following modules are supported: "
    324                 "torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D."
    325             )
    327         return new_module

ValueError: Target module ModuleDict(
  (fold_0): Dropout(p=0.05, inplace=False)
  (fold_1): Dropout(p=0.05, inplace=False)
) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.

Following this issue, I tried to skip the dropout modules manually. However, I would like to use a combination of these LoRAs:

model.add_weighted_adapter(
    adapters=["fold_0", "fold_1"],
    weights=[0.5, 0.5],
    adapter_name="combined",
    combination_type="svd",
)

But this also fails, reporting:

File ~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:659, in LoraModel.add_weighted_adapter(self, adapters, weights, adapter_name, combination_type, svd_rank, svd_clamp, svd_full_matrices, svd_driver, density, majority_sign_method)
    651                 target_lora_B.data[:, : loras_B.shape[1]] = loras_B
    652         elif combination_type in [
    653             "svd",
    654             "ties_svd",
    (...)
    657             "magnitude_prune_svd",
    658         ]:
--> 659             target_lora_A.data, target_lora_B.data = self._svd_generalized_task_arithmetic_weighted_adapter(
    660                 combination_type,
    661                 adapters,
    662                 weights,
    663                 new_rank,
    664                 target,
    665                 target_lora_A,
    666                 target_lora_B,
    667                 density,
    668                 majority_sign_method,
    669                 svd_clamp,
    670                 full_matrices=svd_full_matrices,
    671                 driver=svd_driver,
    672             )
    673         elif combination_type in ["linear", "ties", "dare_linear", "dare_ties", "magnitude_prune"]:
    674             target_lora_A.data, target_lora_B.data = self._generalized_task_arithmetic_weighted_adapter(
    675                 combination_type, adapters, weights, target, density, majority_sign_method
    676             )

File ~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:703, in LoraModel._svd_generalized_task_arithmetic_weighted_adapter(self, combination_type, adapters, weights, new_rank, target, target_lora_A, target_lora_B, density, majority_sign_method, clamp, full_matrices, driver)
    701     # if no valid adapter, nothing to do
    702     if len(valid_adapters) == 0:
--> 703         raise ValueError("No matching LoRAs found. Please raise an issue on Github.")
    704     delta_weight = [target.get_delta_weight(adapter) for adapter in valid_adapters]
    705     valid_weights = torch.tensor(valid_weights).to(delta_weight[0].device)

ValueError: No matching LoRAs found. Please raise an issue on Github.

Since my aim is to merge the LoRAs with equal contributions, I'm not sure whether I can use the from_pretrained method like this:

for i in range(5):
    model = PeftModel.from_pretrained(model, lora_path.format(fold=i))
    model = model.merge_and_unload()

and add a weight (like 0.2) to the merge_and_unload method.
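As far as I can tell, merge_and_unload does not expose a weight argument, so the following is only a sketch of what I have in mind (not an official PEFT API; it assumes the standard LoRA Linear layout where the B matrices live in a lora_B ModuleDict): scale each adapter's lora_B in place before merging, so every fold contributes with weight 0.2.

# Sketch only: scales each adapter's delta (lora_B @ lora_A * scaling) by 0.2 before merging.
weight = 0.2  # equal contribution for each of the 5 folds
for i in range(5):
    model = PeftModel.from_pretrained(model, lora_path.format(fold=i))
    for module in model.modules():
        if hasattr(module, "lora_B"):  # LoRA Linear layers keep their B matrices here
            for lora_B in module.lora_B.values():
                lora_B.weight.data *= weight
    model = model.merge_and_unload()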

Who can help?

@ben

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

path="/xxx/saves/qwen2_vl-7b/kgroup_fold_0"
config = PeftConfig.from_pretrained(path)
model = Qwen2VLForConditionalGeneration.from_pretrained(config.base_model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct")
lora_path="/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"
model = PeftModel.from_pretrained(model, lora_path.format(fold=0), adapter_name=f"fold_{0}")
for i in range(1,5):
print(i)
model.load_adapter(lora_path.format(fold=i), adapter_name=f"fold_{i}")`

Expected behavior

load_adapter is expected to succeed for all five adapters.


bigmouthbabyguo-530 commented Dec 24, 2024

I added an alpha factor (0.2) to delta_weight in layer.py and used the following code:

for i in range(5):
    model = PeftModel.from_pretrained(model, lora_path.format(fold=i))
    model = model.merge_and_unload()

I'm not sure if this is the correct way.
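Another possibility I'm considering instead of patching layer.py (again just a sketch, assuming the adapters use the default scaling = lora_alpha / r and no rsLoRA): shrink lora_alpha in each adapter's config before loading it, which should scale the merged delta by the same 0.2 factor.

from peft import LoraConfig, PeftModel

for i in range(5):
    cfg = LoraConfig.from_pretrained(lora_path.format(fold=i))
    cfg.lora_alpha = cfg.lora_alpha * 0.2  # scaling = lora_alpha / r, so the delta shrinks by 0.2
    model = PeftModel.from_pretrained(model, lora_path.format(fold=i), config=cfg)
    model = model.merge_and_unload()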

@Huangsz2021

same issue
