I have noticed that when updating the `target_modules` setting in the LoRA config, the PEFT model params remain unchanged. This might affect other PEFT settings too.
My assumption is that `get_peft_model()` does not re-instantiate/update its settings once it has been initialized.
Your observation is correct. Of course, re-defining a completely new `lora_config` cannot influence the model, as this just binds a new, unrelated object that happens to have the same name. What you probably mean is changing the attribute on the existing `lora_config`:
```python
lora_config = LoraConfig(..., target_modules=["foo"])
model = get_peft_model(base_model, lora_config)
lora_config.target_modules = ["bar"]  # <= you expect this to trigger re-initialization of the PEFT model
```
Although it is technically possible to turn each parameter into a `@property` and define a setter that re-initializes the model each time the config is changed, I'd say this is not worth the effort. I also lean towards the current behavior being the more intuitive one, but that's hard to say.
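The distinction between rebinding the name and mutating the existing config can be illustrated with a plain-Python stand-in (hypothetical `Config`/`Model` classes, not the real peft API). The key point is that the model picks out its target modules once, at wrap time, so neither operation affects an already-built model:

```python
# Illustrative stand-in, not the real peft API: the "model"
# snapshots the config's settings at construction (wrap) time.
class Config:
    def __init__(self, target_modules):
        self.target_modules = target_modules

class Model:
    def __init__(self, config):
        # the selection is fixed here, once
        self.target_modules = list(config.target_modules)

config = Config(target_modules=["foo"])
model = Model(config)

# Rebinding the name creates a new, unrelated object;
# the existing model never sees it:
config = Config(target_modules=["bar"])
print(model.target_modules)  # ['foo']

# Mutating the original config object does not help either,
# because the model already made its selection:
model2_config = Config(target_modules=["foo"])
model2 = Model(model2_config)
model2_config.target_modules = ["bar"]
print(model2.target_modules)  # ['foo']
```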
> Your observation is correct. Of course, re-defining a completely new `lora_config` cannot influence the model, as this just defines a new, unrelated variable that just happens to have the same name.
Sorry, yeah, the model assignment was a bad example. I mean if you save it like that:
In my opinion, if you change the `lora_config`, those changes should be picked up by `get_peft_model()`. Currently, there is not even a warning that the config has changed while the PEFT model doesn't reflect the change. I don't find this very intuitive.
System Info

I have noticed that when updating the `target_modules` setting in the LoRA config, the PEFT model params remain unchanged. This might affect other PEFT settings too. My assumption is that `get_peft_model()` does not re-instantiate/update its settings once it has been initialized.

System: Windows 11
Python: 3.11
peft: 0.14.0
Who can help?
@BenjaminBossan @sayakpaul
Information

Tasks

An officially supported task in the `examples` folder

Reproduction
For reproduction in a Jupyter Notebook:
This outputs
But when changing the above code to the following, without restarting the kernel:
and retrieving the trainable params again:
it outputs again
but after the update it should be
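Since the notebook cells themselves are not reproduced above, the reported behavior can be simulated with a minimal plain-Python stand-in (a hypothetical `get_peft_model_stub`, not the real peft API): the set of adapted modules is decided at wrap time, so re-running only the config cell and then recounting leaves the model in its old state.

```python
# Hypothetical stub, not peft: the adapter selection
# happens when the wrap function is called, not later.
def get_peft_model_stub(base_modules, config):
    return {name: ("lora" if name in config["target_modules"] else "frozen")
            for name in base_modules}

base = ["q_proj", "k_proj", "v_proj"]
config = {"target_modules": ["q_proj"]}
model = get_peft_model_stub(base, config)
print(model["q_proj"])  # lora

# Re-running only the config cell (as in the issue) mutates the config,
# but the already-built model still reflects the old selection:
config["target_modules"] = ["q_proj", "v_proj"]
print(model["v_proj"])  # frozen
```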
Expected behavior
When `lora_config` has been updated, `get_peft_model()` should retrieve the current config.