How to customize a forward pass #9577
Replies: 2 comments
-
Interesting. This is more of a PyTorch question than an intended diffusers design use case. Something like the following should work better IMO (have not tested, though):

```python
from diffusers import DiffusionPipeline, SomeTransformerModel

class MyTransformer(SomeTransformerModel):
    def forward(self, ...):
        # My custom implementation
        ...

transformer = MyTransformer.from_pretrained(...)
pipe = DiffusionPipeline.from_pretrained(..., transformer=transformer)
pipe.enable_model_cpu_offload()
```

If you just want to modify the inputs before proceeding with the existing forward implementation, a forward hook would probably be better. I don't see why
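The forward-hook route mentioned above can be sketched like this. A toy `nn.Linear` stands in for the transformer, and the hook and variable names are illustrative, not diffusers API; the only library call relied on is PyTorch's `register_forward_pre_hook`:

```python
# Sketch (untested against diffusers): modify a module's inputs with a
# forward pre-hook instead of replacing its forward method.
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the real transformer module

def scale_inputs(module, args):
    # args is the tuple of positional inputs; returning a new tuple
    # replaces them before forward runs.
    (hidden_states,) = args
    return (hidden_states * 2.0,)

handle = model.register_forward_pre_hook(scale_inputs)
x = torch.ones(1, 4)
out_hooked = model(x)   # forward actually sees x * 2
handle.remove()         # the hook can be detached cleanly afterwards
out_plain = model(x)
```

Because the hook is layered on top of the existing `forward` rather than overwriting it, any wrappers installed by offloading machinery stay intact.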
-
I experienced that weights were still on CPU during inference, which raised a torch cast exception.
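One way to narrow down this kind of device-mismatch error is to check where each submodule's parameters actually live before running inference. A minimal sketch with a toy model; only standard `torch.nn` attributes are used, nothing diffusers-specific:

```python
# Sketch: list parameters that are still on CPU, a common cause of
# device/cast exceptions during inference.
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# Map each parameter name to the device its tensor currently lives on.
param_devices = {name: p.device.type for name, p in model.named_parameters()}
cpu_params = [name for name, dev in param_devices.items() if dev == "cpu"]
```

If `cpu_params` is non-empty right before the failing call, some part of the offloading hook chain is not moving weights as expected.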
-
Hi,
I realized that when I use
`transformer.forward = custom_transformer_forward.__get__(transformer, TransformerModel)`,
I can't use `enable_model_cpu_offload()` any more; I probably broke a hook by doing so. What is the best practice to customize the forward pass?
Thanks