Add new feature of SafeLoRA #2201
base: main
Conversation
…method of loading the peft config.
Thanks for the update to the SafeLoRA PR. I did another review and found a few areas to improve. Please take a look. Also, please run `make style` once you're finished with your changes.
save_weights=True)

final_lora_weight = apply_safelora(config)
Can we add a bit more to the example? For instance, how to save and load these weights?
I have added more descriptions to the example.
If you feel any parts are still missing, please let me know.
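(For illustration, a minimal sketch of what saving and loading could look like, assuming `apply_safelora` returns a state dict of projected LoRA tensors as in this PR; the safetensors and PEFT calls are standard:)

```python
import os
from safetensors.torch import save_file
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Project the LoRA weights and write them back into the adapter directory.
final_lora_weight = apply_safelora(config)
save_file(final_lora_weight, os.path.join(config.peft_model_path, "adapter_model.safetensors"))

# Load the base model together with the (now projected) adapter.
base = AutoModelForCausalLM.from_pretrained(config.base_model_path)
model = PeftModel.from_pretrained(base, config.peft_model_path)
```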
examples/safelora/README.md
Outdated
config = SafeLoraConfig(base_model_path='../LLM_Models/llama-2-7b-hf/',
                        aligned_model_path='../LLM_Models/llama-2-7b-chat-fp16/',
Let's use the HF model IDs for these two.
This has been modified.
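(For illustration, the updated example presumably looks something like this; the Hub IDs are taken from the config defaults quoted later in this review, and the remaining arguments from the example above:)

```python
# Using HF Hub model IDs instead of local paths.
config = SafeLoraConfig(
    base_model_path="meta-llama/Llama-2-7b-hf",
    aligned_model_path="TheBloke/Llama-2-7B-Chat-fp16",
    peft_model_path="LisaSchunke/llama-2-7b-peft-finetuned-20000-dataset",
    select_layers_type="threshold",
    save_weights=True,
)
```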
src/peft/utils/safelora.py
Outdated
    peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
else:
    peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
These two lines are identical.
This has been modified.
- if (safelora_config.devices).lower() == "cpu":
-     peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
- else:
-     peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
+ peft_weights = {name: f.get_tensor(name).to(safelora_config.dtype) for name in f.keys()}
]
align_model_parameters = [
    name for name in sl_align.weight_map.keys() if any(v in name for v in list(peft_config.target_modules))
]
Should we also check that `base_model_parameters` and `align_model_parameters` are the same?
I have added a check to verify if the model weights are the same.
+ if (sl_base.get_tensor(name_base) == sl_align.get_tensor(name_align)).all():
+     raise ValueError("The weights of the base Model and the aligned Model should be different.")
I meant something else. Would we expect that `base_model_parameters == align_model_parameters`? If not, under what circumstances would they differ?
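(For illustration, the check the reviewer has in mind could look roughly like this; a sketch only, with variable names taken from the diff above:)

```python
# Sketch of the suggested check: the target-module parameter names found in
# the base and aligned checkpoints should match exactly.
if set(base_model_parameters) != set(align_model_parameters):
    raise ValueError(
        "The base model and the aligned model must have the same architecture: "
        "their target-module parameter names differ."
    )
```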
src/peft/utils/safelora.py
Outdated
    return safety_vector


def project_weights(configs, peft_weights, v):
Let's rename `configs` to `config` or `safelora_config`.
This has been modified.
src/peft/utils/safelora.py
Outdated
metadata={"help": "The path of the LoRA wieghts and configs."}, | ||
) | ||
|
||
select_layers_type: str = field( |
Instead of `str`, we can annotate this as `Literal["threshold", "number"]`.
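(i.e., something along these lines; the help text here is only illustrative:)

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class SafeLoraConfig:
    # ... other fields ...
    select_layers_type: Literal["threshold", "number"] = field(
        default="threshold",
        metadata={"help": "How to select layers for projection."},  # illustrative help text
    )
```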
This has been modified.
select_layers_type='threshold',
save_weights=True)

final_lora_weight = apply_safelora(config)
The example should show inference; here we only create the weights. What are the next steps?
I have added more explanations to the README.md and also included code showing how to use the SafeLoRA model.
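(For illustration, inference with the projected adapter could look like this; a sketch using the standard PEFT loading pattern, with the base model ID and `config` taken from the example above:)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # per the example config above
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, config.peft_model_path)  # directory holding the projected weights
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("What should I do if someone insults me?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```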
Co-authored-by: Benjamin Bossan <[email protected]>
@chiayi-hsu Once you're finished with your changes and want me to give another review, please ping me.
@BenjaminBossan I have completed the modifications. Please help review them. Thanks!
Thanks a lot for the updates. I did another review. Most of what I found are just smaller things like docs, please take a look.
Now as a next step, it is important that we also add some unit tests. This is not going to be very straightforward, because we cannot easily test model alignment and we also don't want to use any big models during unit testing.
One proposal for this would be to use a small model like `hf-internal-testing/tiny-random-OPTForCausalLM` as the base model. Then let's modify some weights (setting them to 0?) and save this as the "aligned" model. Then call `apply_safelora` with these 2 models and various options to see if those tests pass. This would not really check the alignment though.
In addition, we could think about adding a true alignment test for the nightly run with GPU. For this test, it would be okay to use a bigger model (but ideally still not too big).
LMK what you think about this testing strategy and if you have further questions.
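(For illustration, a rough sketch of the proposed CPU test; `SafeLoraConfig` and `apply_safelora` are assumed per this PR, and everything else, including the choice of target modules and the zeroing step, is illustrative:)

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
from peft.utils.safelora import SafeLoraConfig, apply_safelora  # module added in this PR

def test_apply_safelora(tmp_path):
    base_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
    base = AutoModelForCausalLM.from_pretrained(base_id)
    base.save_pretrained(tmp_path / "base")

    # Fake "aligned" model: same architecture, but zero out the target-module
    # weights so base and aligned checkpoints differ everywhere.
    aligned = AutoModelForCausalLM.from_pretrained(base_id)
    with torch.no_grad():
        for layer in aligned.model.decoder.layers:
            layer.self_attn.q_proj.weight.zero_()
            layer.self_attn.v_proj.weight.zero_()
    aligned.save_pretrained(tmp_path / "aligned")

    # Untrained LoRA adapter, just so adapter weights exist on disk.
    lora = get_peft_model(AutoModelForCausalLM.from_pretrained(base_id),
                          LoraConfig(target_modules=["q_proj", "v_proj"]))
    lora.save_pretrained(tmp_path / "adapter")

    config = SafeLoraConfig(
        base_model_path=str(tmp_path / "base"),
        aligned_model_path=str(tmp_path / "aligned"),
        peft_model_path=str(tmp_path / "adapter"),
        select_layers_type="threshold",
        save_weights=False,
    )
    final_weights = apply_safelora(config)
    assert len(final_weights) > 0  # ran end to end and produced projected weights
```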
Apart from this, please call `make style` on your PR, as this is a prerequisite for the CI to pass.
This is the configuration class to store the configuration of a safeLora.


Args:
Could you please format the docstring to be in line with the other docstrings used in PEFT? As an example, check here:
`src/peft/tuners/lora/config.py`, line 128 (commit 8874ab5): the `Args:` section.
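(For illustration, a sketch of the requested format, modeled on `LoraConfig`'s docstring; the field descriptions are taken from the metadata help strings in this PR:)

```python
class SafeLoraConfig:
    """
    This is the configuration class to store the configuration of a [`SafeLora`].

    Args:
        base_model_path (`str`):
            The path of the base model for obtaining the aligned matrix.
        aligned_model_path (`str`):
            The path of the aligned model for obtaining the aligned matrix.
        peft_model_path (`str`):
            The path of the LoRA weights and config.
    """
```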
default="meta-llama/Llama-2-7b-hf", | ||
metadata={"help": "The path of the base model for obtaining the aligned matrix."}, | ||
) | ||
|
||
aligned_model_path: str = field( | ||
default="TheBloke/Llama-2-7B-Chat-fp16", | ||
metadata={"help": "The path of the aligned model for obtaining the aligned matrix."}, | ||
) | ||
|
||
peft_model_path: str = field( | ||
default="LisaSchunke/llama-2-7b-peft-finetuned-20000-dataset", |
IMO, it doesn't make sense to set default values here, I would remove them. WDYT?
peft_model_path: str = field(
    default="LisaSchunke/llama-2-7b-peft-finetuned-20000-dataset",
    metadata={"help": "The path of the LoRA wieghts and configs."},
metadata={"help": "The path of the LoRA wieghts and configs."}, | |
metadata={"help": "The path of the LoRA weights and config."}, |
After fine-tuning large language models (LLMs) using LoRA, the alignment of the resulting models may decrease.
Therefore, applying `apply_safelora()` is intended to help preserve the alignment of the final models.

It is important to note that the model weights of the aligned model and the base model must be of the same size.
Let's also mention that right now, only the safetensors format is supported.
)

with safe_open(
    f"{os.path.join(safelora_config.peft_model_path, 'adapter_model.safetensors')}",
Let's not hard-code `adapter_model.safetensors`; let's use `peft.utils.constants.SAFETENSORS_WEIGHTS_NAME`.
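(i.e., roughly the following; `SAFETENSORS_WEIGHTS_NAME` is `"adapter_model.safetensors"`:)

```python
import os
from safetensors import safe_open
from peft.utils.constants import SAFETENSORS_WEIGHTS_NAME

path = os.path.join(safelora_config.peft_model_path, SAFETENSORS_WEIGHTS_NAME)
with safe_open(path, framework="pt") as f:
    ...
```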
final_weights, _ = project_weights(safelora_config, peft_weights, projected_matrix)

if safelora_config.save_weights:
    save_file(final_weights, f"{os.path.join(safelora_config.peft_model_path, 'adapter_model.safetensors')}")
Let's not hard-code `adapter_model.safetensors`; let's use `peft.utils.constants.SAFETENSORS_WEIGHTS_NAME`.
Co-authored-by: Benjamin Bossan <[email protected]>
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Yes, it still needs to be addressed.
The pull request was closed while syncing with the latest version of PEFT, so I have resubmitted it.
In this version, I have made all the necessary changes based on our previous conversations.
If there are any issues, please let me know.
Thank you.