Tasks

One of the scripts in the examples/ folder of Accelerate, or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)

Reproduction

uv run accelerate launch --config_file default.yaml ./dummy_train.py
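The script itself is not reproduced in this report; a minimal sketch of the kind of training loop that hits the failure, assuming default.yaml selects a DeepSpeed fp16 config and using made-up model and data shapes, might look like this:

```python
# Hypothetical stand-in for dummy_train.py (the real script is not shown here).
# A plain float32 DataLoader feeds a model prepared by Accelerate; under a
# DeepSpeed fp16 config the weights end up in half precision while the batches
# stay float32, which triggers the dtype mismatch in the forward pass.
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # mixed precision settings come from default.yaml

model = torch.nn.Sequential(
    torch.nn.Conv1d(4, 8, kernel_size=3),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 14, 2),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(
    torch.randn(32, 4, 16),            # float32 inputs
    torch.randint(0, 2, (32,)),
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    outputs = model(inputs)            # float32 inputs vs. fp16 conv bias -> RuntimeError
    loss = torch.nn.functional.cross_entropy(outputs, labels)
    accelerator.backward(loss)
    optimizer.step()
```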
I end up with this error: [rank0]: RuntimeError: Input type (float) and bias type (c10::Half) should be the same. I can fix it by explicitly casting the input tensor with half(), but Accelerate/DeepSpeed is supposed to handle mixed precision training, right? Am I doing something wrong?
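For reference, the manual workaround mentioned above, reusing the names from the sketch, amounts to casting each batch by hand before the forward pass:

```python
for inputs, labels in dataloader:
    optimizer.zero_grad()
    inputs = inputs.half()             # explicit cast so the input matches the fp16 weights
    outputs = model(inputs)
    loss = torch.nn.functional.cross_entropy(outputs, labels)
    accelerator.backward(loss)
    optimizer.step()
```

A slightly more robust variant is `inputs = inputs.to(next(model.parameters()).dtype)`, which follows whatever dtype the prepared model actually ended up with.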
Expected behavior
Accelerate automatically handles mixed precision training and casts the inputs and model weights to the correct dtype.
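For comparison, this is how the same expectation is usually expressed when mixed precision is requested through the Accelerator constructor rather than a config file (the contents of default.yaml are not shown here, so this is an assumption about the setup):

```python
from accelerate import Accelerator

# Equivalent to setting mixed_precision: fp16 in the accelerate config file.
# With the native AMP backend, Accelerate wraps the forward pass in
# torch.autocast, so float32 inputs are downcast automatically; the report
# suggests the DeepSpeed path does not do the same for the inputs.
accelerator = Accelerator(mixed_precision="fp16")
```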