
Target modules {'modulation.linear', 'txt_attn_proj', 'fc1', 'txt_attn_qkv', 'fc2', 'txt_mod.linear', 'img_mod.linear', 'linear1', 'linear2', 'img_attn_qkv', 'img_attn_proj'} not found in the base model. #10398

Closed
nitinmukesh opened this issue Dec 27, 2024 · 2 comments
Labels: bug (Something isn't working)

Comments

nitinmukesh commented Dec 27, 2024

Describe the bug

Without LoRA it works fine. I tried the latest version of peft as well as 0.6.0, which gives another error.
Is it that LoRA is not supposed to work with GGUF weights?

Reproduction



import torch
from diffusers import HunyuanVideoPipeline

# `model_id` and the GGUF-quantized `transformer` are set up earlier (not included in the report).
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
)
pipe.load_lora_weights("lora", weight_name="xiangling_ep1_lora.safetensors", adapter_name="hunyuan-lora")
pipe.set_adapters("hunyuan-lora", 0.8)

pipe.vae.enable_tiling()
pipe.enable_model_cpu_offload()
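
For context, the snippet above omits how `transformer` was constructed. Since the question concerns GGUF weights (gguf 0.13.0 is installed), it was presumably loaded from a GGUF checkpoint via diffusers' GGUF quantization support, roughly along these lines. This is a sketch, not the reporter's exact code; the checkpoint filename is a placeholder.

import torch
from diffusers import GGUFQuantizationConfig, HunyuanVideoTransformer3DModel

# Placeholder GGUF file; the actual checkpoint used in the report is not shown.
gguf_path = "hunyuan_video_transformer_Q4_K_M.gguf"

transformer = HunyuanVideoTransformer3DModel.from_single_file(
    gguf_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.float16),
    torch_dtype=torch.float16,
)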

Logs

Loading checkpoint shards: 100%|████████████████████████████████████| 4/4 [00:01<00:00,  3.27it/s]
Loading pipeline components...: 100%|███████████████████████████████| 7/7 [00:04<00:00,  1.55it/s]
Traceback (most recent call last):
  File "C:\ai1\LTX-Video\HunyuanVideo\HunyuanVideo_LORA_gradio.py", line 169, in <module>
    pipe.load_lora_weights("lora", weight_name="xiangling_ep1_lora.safetensors", adapter_name="hunyuan-lora")
  File "C:\ai1\LTX-Video\venv\lib\site-packages\diffusers\loaders\lora_pipeline.py", line 4075, in load_lora_weights
    self.load_lora_into_transformer(
  File "C:\ai1\LTX-Video\venv\lib\site-packages\diffusers\loaders\lora_pipeline.py", line 4112, in load_lora_into_transformer
    transformer.load_lora_adapter(
  File "C:\ai1\LTX-Video\venv\lib\site-packages\diffusers\loaders\peft.py", line 326, in load_lora_adapter
    inject_adapter_in_model(lora_config, self, adapter_name=adapter_name, **peft_kwargs)
  File "C:\ai1\LTX-Video\venv\lib\site-packages\peft\mapping.py", line 260, in inject_adapter_in_model
    peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
  File "C:\ai1\LTX-Video\venv\lib\site-packages\peft\tuners\lora\model.py", line 141, in __init__
    super().__init__(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
  File "C:\ai1\LTX-Video\venv\lib\site-packages\peft\tuners\tuners_utils.py", line 184, in __init__
    self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
  File "C:\ai1\LTX-Video\venv\lib\site-packages\peft\tuners\tuners_utils.py", line 520, in inject_adapter
    raise ValueError(error_msg)
ValueError: Target modules {'modulation.linear', 'txt_attn_proj', 'fc1', 'txt_attn_qkv', 'fc2', 'txt_mod.linear', 'img_mod.linear', 'linear1', 'linear2', 'img_attn_qkv', 'img_attn_proj'} not found in the base model. Please check the target modules and try again.

System Info

python 3.10.11

(venv) C:\ai1\LTX-Video\HunyuanVideo>pip list
Package Version


accelerate 1.2.1
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.7.0
certifi 2024.12.14
charset-normalizer 3.4.0
click 8.1.7
colorama 0.4.6
diffusers 0.32.0.dev0
einops 0.8.0
exceptiongroup 1.2.2
fastapi 0.115.6
ffmpy 0.5.0
filelock 3.16.1
fsspec 2024.12.0
gguf 0.13.0
gradio 5.9.1
gradio_client 1.5.2
h11 0.14.0
httpcore 1.0.7
httpx 0.28.1
huggingface-hub 0.25.2
idna 3.10
imageio 2.36.1
imageio-ffmpeg 0.5.1
importlib_metadata 8.5.0
Jinja2 3.1.4
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
mpmath 1.3.0
networkx 3.4.2
numpy 2.2.0
opencv-python 4.10.0.84
orjson 3.10.12
packaging 24.2
pandas 2.2.3
peft 0.14.0
pillow 11.0.0
pip 23.0.1
psutil 6.1.1
pydantic 2.10.4
pydantic_core 2.27.2
pydub 0.25.1
Pygments 2.18.0
python-dateutil 2.9.0.post0
python-multipart 0.0.20
pytz 2024.2
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
ruff 0.8.4
safehttpx 0.1.6
safetensors 0.4.5
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 65.5.0
shellingham 1.5.4
six 1.17.0
sniffio 1.3.1
starlette 0.41.3
sympy 1.13.1
tokenizers 0.21.0
tomlkit 0.13.2
torch 2.5.1+cu124
torchvision 0.20.1+cu124
tqdm 4.67.1
transformers 4.47.1
typer 0.15.1
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
uvicorn 0.34.0
websockets 14.1
wheel 0.45.1
zipp 3.21.0

Who can help?

@sayakpaul

nitinmukesh added the bug label Dec 27, 2024
nitinmukesh changed the title to fix a typo (arget modules → Target modules) Dec 27, 2024
sayakpaul (Member) commented:

pipe.load_lora_weights("lora", weight_name="xiangling_ep1_lora.safetensors", adapter_name="hunyuan-lora")

We don't know where "lora" here comes from.

nitinmukesh (Author) commented:

My bad. The PR is not merged yet: #10376
