
Flux-dev-fp8 with Hyper-FLUX.1-dev-8steps-lora #10392

Open
lhjlhj11 opened this issue Dec 27, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@lhjlhj11

Describe the bug

It seems that Hyper-FLUX.1-dev-8steps-lora does not work with Flux-dev-fp8: the generated image looks the same whether or not I load Hyper-FLUX.1-dev-8steps-lora.
My code is in the Reproduction section below. Has anyone been able to use Hyper-FLUX.1-dev-8steps-lora with Flux-dev-fp8?

Reproduction

# Assumed imports for this snippet (not shown in the original report):
#   from diffusers import FluxTransformer2DModel, FluxPipeline
#   from transformers import T5EncoderModel
#   from optimum.quanto import quantize, freeze, qfloat8
#   from safetensors.torch import load_file

# Load the transformer from a single-file checkpoint, then quantize it to fp8 and freeze it.
self.transformer = FluxTransformer2DModel.from_single_file(os.path.join(self.model_root, self.config["transformer_path"]), torch_dtype=torch.bfloat16).to(self.device)
quantize(self.transformer, weights=qfloat8)
freeze(self.transformer)

# Load and quantize the T5 text encoder the same way.
self.text_encoder_2 = T5EncoderModel.from_pretrained(os.path.join(self.model_root, self.config["text_encoder_2_repo"]), torch_dtype=torch.bfloat16).to(self.device)
quantize(self.text_encoder_2, weights=qfloat8)
freeze(self.text_encoder_2)

# Build the pipeline without these two components, then attach the quantized versions.
self.pipe = FluxPipeline.from_pretrained(os.path.join(self.model_root, self.config["flux_repo"]), transformer=None, text_encoder_2=None, torch_dtype=torch.bfloat16).to(self.device)
self.pipe.transformer = self.transformer
self.pipe.text_encoder_2 = self.text_encoder_2

# Load the Hyper-FLUX 8-step LoRA from a safetensors state dict and fuse it into the weights.
self.pipe.load_lora_weights(load_file(os.path.join(self.model_root, self.config["8steps_lora"]), device=self.device), adapter_name="8steps")
self.pipe.fuse_lora(lora_scale=1.0)
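
For reference, a minimal sketch of how the fused pipeline might then be invoked; the prompt, num_inference_steps=8, guidance_scale, and seed below are illustrative assumptions, not taken from the report above.

# Hypothetical usage sketch (values are assumptions): the Hyper-FLUX LoRA is an
# 8-step distillation, so inference is run with num_inference_steps=8.
image = self.pipe(
    prompt="a photo of an astronaut riding a horse",
    num_inference_steps=8,
    guidance_scale=3.5,
    generator=torch.Generator(device=self.device).manual_seed(0),
).images[0]
image.save("hyper_flux_8steps.png")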

Logs

No response

System Info


Who can help?


@lhjlhj11 lhjlhj11 added the bug Something isn't working label Dec 27, 2024
@a-r-r-o-w
Member

cc @sayakpaul

@lhjlhj11
Author

cc @sayakpaul

I have found the reason.
If you load two LoRAs, A and B, and set their weights in separate calls:
self.pipe.set_adapters(["A"], adapter_weights=[0.125]) followed by self.pipe.set_adapters(["B"], adapter_weights=[0.85]), the LoRAs do not take effect.
I have to set them together:
self.pipe.set_adapters(["A", "B"], adapter_weights=[0.125, 0.85])
So there must be a bug in the "set_adapters" function.
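
To illustrate the two calling patterns, a minimal sketch, assuming pipe is the FluxPipeline above and adapters named "A" and "B" have already been loaded with load_lora_weights:

# Separate calls: each set_adapters() call replaces the currently active adapter
# set, so after the second call only "B" is active and "A" no longer contributes.
pipe.set_adapters(["A"], adapter_weights=[0.125])
pipe.set_adapters(["B"], adapter_weights=[0.85])
print(pipe.get_active_adapters())  # only "B" is active

# Single call: both adapters are activated together with their respective weights.
pipe.set_adapters(["A", "B"], adapter_weights=[0.125, 0.85])
print(pipe.get_active_adapters())  # "A" and "B" are both active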

@sayakpaul
Member

@lhjlhj11 just to confirm: pipe.set_adapters(["A", "B"], adapter_weights=[0.125, 0.85]) works as expected?

If you do:

from diffusers import DiffusionPipeline
import torch

lora_one = "Purz/choose-your-own-adventure"
lora_two = "ByteDance/Hyper-SD"

pipeline = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")

pipeline.load_lora_weights(lora_one)
print(pipeline.get_active_adapters())  # ['default_0']

pipeline.load_lora_weights(lora_two, weight_name="Hyper-FLUX.1-dev-8steps-lora.safetensors")
print(pipeline.get_active_adapters())  # ['default_1']

pipeline.set_adapters(["default_0", "default_1"])
print(pipeline.get_active_adapters())  # ['default_0', 'default_1']

This is expected and we detail this in this doc:
https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference
