
MPS Mixed-precision Autocast #20497

Open
laclouis5 opened this issue Dec 13, 2024 · 0 comments
Labels: feature (improvement or enhancement), needs triage (waiting to be triaged by maintainers)

Comments


laclouis5 commented Dec 13, 2024

Description & Motivation

Support for MPS autocasting was recently added in PyTorch 2.5.0, and there is an ongoing effort upstream to implement gradient scaling for MPS.
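
For reference, here is a minimal sketch of what MPS autocast looks like in plain PyTorch (assuming PyTorch >= 2.5.0 running on Apple silicon; the model and tensor shapes are just placeholders):

```python
import torch

device = torch.device("mps")
model = torch.nn.Linear(128, 64).to(device)
x = torch.randn(32, 128, device=device)

# Eligible ops run in float16 inside the autocast region; others stay in float32.
with torch.autocast(device_type="mps", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16
```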

PyTorch Lightning does not currently support mixed precision on the MPS device, but it could be added in the near future once gradient scaling is finalized.

Is this feature being considered? It would reduce memory usage and improve training time for some models.

Pitch

Currently, PyTorch Lightning falls back to FP32 when mixed precision is requested on MPS and issues a warning that mentions CUDA.
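
For example, a configuration like the following currently triggers the FP32 fallback on Apple silicon (sketch only; exact behavior may differ across Lightning versions):

```python
from lightning import Trainer

# Requesting mixed precision on an MPS accelerator; today Lightning falls back
# to FP32 and emits a warning that refers to CUDA instead of using MPS autocast.
trainer = Trainer(accelerator="mps", devices=1, precision="16-mixed")
```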

I think adding a code path for MPS mixed precision would be great.

Alternatives

Stick to FP32 training when using the MPS device.

Additional context

Thanks for your work!

cc @Borda

laclouis5 added the feature and needs triage labels on Dec 13, 2024