
cpu: x64: brgemm, matmul: add f32:f16 configuration support on AVX2 and AVX512_CORE (fixes MFDNN-11992) #2272

Open

dzarukin wants to merge 7 commits into main from dzarukin/f32f16_matmul
Conversation

dzarukin
Contributor

MFDNN-11992

The change adds f32:f16:f32 support on AVX512_CORE and AVX2 through an up-conversion path.
Extended the brgemm kernel (thanks @dmitry-gorokhov) and the brgemm matmul copy routines to support the conversion.
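
For context, here is a minimal sketch of what the new configuration enables at the API level, assuming the oneDNN v3.x C++ API; the shapes and names below are illustrative and not taken from this PR. Source and destination stay in f32 while the weights are stored in f16 and up-converted inside the brgemm-based matmul.

```cpp
#include "oneapi/dnnl/dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    const memory::dim M = 64, K = 128, N = 32; // illustrative shapes

    // f32 activations and destination, f16 weights: the f32:f16:f32 case.
    auto src_md = memory::desc({M, K}, memory::data_type::f32, memory::format_tag::ab);
    auto wei_md = memory::desc({K, N}, memory::data_type::f16, memory::format_tag::ab);
    auto dst_md = memory::desc({M, N}, memory::data_type::f32, memory::format_tag::ab);

    // On AVX2 / AVX512_CORE this should dispatch to the brgemm-based matmul,
    // which up-converts the f16 weights to f32 in the copy routines.
    auto pd = matmul::primitive_desc(eng, src_md, wei_md, dst_md);
    auto mm = matmul(pd);
    // ... create memory objects, then: mm.execute(strm, {{DNNL_ARG_SRC, src}, ...});
    return 0;
}
```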

@dzarukin dzarukin requested review from a team as code owners December 14, 2024 02:08
@github-actions github-actions bot added the platform:cpu-x64 Intel64/AMD64 processors. Codeowner: @oneapi-src/onednn-cpu-x64 label Dec 14, 2024
@dzarukin
Contributor Author

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@vpirogov
Member

Uh-oh, : in comment body breaks commit message checker...

@dmitry-gorokhov
Contributor

Hey @dzarukin, thanks for upstreaming this!
I am wondering if you can include bf16 weights support as well? Should be quite similar from my understanding.

@dzarukin dzarukin force-pushed the dzarukin/f32f16_matmul branch from b9440a2 to 45cf038 Compare December 16, 2024 18:35
@github-actions github-actions bot added the component:tests Codeowner: @oneapi-src/onednn-arch label Dec 16, 2024
@dzarukin
Contributor Author

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@dzarukin dzarukin force-pushed the dzarukin/f32f16_matmul branch from df2d9cd to 9210383 Compare December 19, 2024 22:38
@dzarukin
Contributor Author

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@dzarukin
Contributor Author

> Hey @dzarukin, thanks for upstreaming this! I am wondering if you can include bf16 weights support as well? Should be quite similar from my understanding.

Added, thanks for the reminder.
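
For reference, a hedged sketch of the weight up-conversions involved. The copy routines in this PR are JIT-generated x64 code, so this plain-intrinsics version is only an illustration of the underlying f16 and bf16 to f32 widening, not the actual implementation.

```cpp
#include <immintrin.h>
#include <stdint.h>

// f16 -> f32 via F16C (available alongside AVX2): 8 halves to 8 floats.
static inline __m256 load_f16_as_f32(const uint16_t *src) {
    return _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *)src));
}

// bf16 -> f32: a bf16 value is the upper 16 bits of an f32, so widen each
// element to 32 bits and shift it into the high half of its lane.
static inline __m256 load_bf16_as_f32(const uint16_t *src) {
    __m256i w = _mm256_cvtepu16_epi32(_mm_loadu_si128((const __m128i *)src));
    return _mm256_castsi256_ps(_mm256_slli_epi32(w, 16));
}
```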

@dzarukin dzarukin force-pushed the dzarukin/f32f16_matmul branch from 9210383 to 33a89d9 Compare December 19, 2024 23:00