vulkan: optimize mul_mat for small values of N #10991

Open · wants to merge 1 commit into master

Conversation

jeffbolznv
Collaborator

This is what I have in mind to fix #10966. It's currently a draft because it needs more perf testing, particularly to make sure it doesn't regress perf when N==1.

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS), with the batch strides overloaded to hold the row strides. The loads from the B matrix go in the innermost loop because they should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.
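
A minimal GLSL sketch of the structure, just to illustrate the idea (hypothetical names and bindings, not this PR's actual shader; A is shown as plain float rather than a quantized block format, and the workgroup reduction below stands in for the shared code in mul_mat_vec_base):

```glsl
#version 450
#extension GL_EXT_control_flow_attributes : enable

// Hypothetical sketch of the NUM_COLS approach, not the PR's actual shader.

#define BLOCK_SIZE 32

layout(constant_id = 0) const uint NUM_COLS = 1; // number of B columns handled per workgroup
layout(local_size_x = BLOCK_SIZE) in;

layout(std430, binding = 0) readonly  buffer A { float data_a[]; };
layout(std430, binding = 1) readonly  buffer B { float data_b[]; };
layout(std430, binding = 2) writeonly buffer D { float data_d[]; };

layout(push_constant) uniform PC {
    uint ncols;    // K: elements per row of A / per column of B
    uint b_stride; // stride between B columns (the "batch stride" reused as a row stride)
    uint d_stride; // stride between output columns
} p;

shared float tmpsh[NUM_COLS][BLOCK_SIZE];

void main() {
    const uint row = gl_WorkGroupID.x;       // one workgroup per output row
    const uint tid = gl_LocalInvocationID.x;

    float temp[NUM_COLS];
    [[unroll]] for (uint j = 0; j < NUM_COLS; ++j) temp[j] = 0.0;

    // Each invocation strides over K. The B loads are in the innermost loop, so all
    // NUM_COLS columns are read at the same K offset, which should cache well.
    for (uint i = tid; i < p.ncols; i += BLOCK_SIZE) {
        const float av = data_a[row * p.ncols + i];
        [[unroll]] for (uint j = 0; j < NUM_COLS; ++j) {
            temp[j] += av * data_b[j * p.b_stride + i];
        }
    }

    // Reduce the per-invocation partial sums and store one result per column.
    [[unroll]] for (uint j = 0; j < NUM_COLS; ++j) {
        tmpsh[j][tid] = temp[j];
    }
    barrier();
    [[unroll]] for (uint s = BLOCK_SIZE / 2; s > 0; s >>= 1) {
        if (tid < s) {
            [[unroll]] for (uint j = 0; j < NUM_COLS; ++j) {
                tmpsh[j][tid] += tmpsh[j][tid + s];
            }
        }
        barrier();
    }
    if (tid == 0) {
        [[unroll]] for (uint j = 0; j < NUM_COLS; ++j) {
            data_d[j * p.d_stride + row] = tmpsh[j][0];
        }
    }
}
```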

Results on RTX 4070:
llama-batched-bench -m Phi-3-mini-4k-instruct-q4.gguf -ngl 99 -npp 512 -ntg 128 -npl 1,2,4,8,16 -pps

before:
|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    0.186 |  2752.13 |    1.387 |    92.31 |    1.573 |   406.94 |
|   512 |    128 |    2 |    768 |    0.139 |  3682.69 |    5.796 |    44.17 |    5.935 |   129.40 |
|   512 |    128 |    4 |   1024 |    0.147 |  3476.28 |    5.901 |    86.77 |    6.048 |   169.31 |
|   512 |    128 |    8 |   1536 |    0.142 |  3617.89 |    6.309 |   162.30 |    6.451 |   238.10 |
|   512 |    128 |   16 |   2560 |    0.142 |  3608.86 |    7.470 |   274.17 |    7.612 |   336.32 |

after:
|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    0.211 |  2431.24 |    1.411 |    90.68 |    1.622 |   394.55 |
|   512 |    128 |    2 |    768 |    0.139 |  3686.18 |    1.695 |   151.04 |    1.834 |   418.81 |
|   512 |    128 |    4 |   1024 |    0.140 |  3658.53 |    1.950 |   262.52 |    2.090 |   489.90 |
|   512 |    128 |    8 |   1536 |    0.148 |  3469.54 |    6.253 |   163.76 |    6.401 |   239.98 |
|   512 |    128 |   16 |   2560 |    0.149 |  3433.38 |    7.433 |   275.54 |    7.582 |   337.65 |

I'll put directed perf tests in a separate comment.

@jeffbolznv requested a review from 0cc4m on December 26, 2024.
@github-actions bot added the testing, Vulkan, and ggml labels on December 26, 2024.
@jeffbolznv
Collaborator Author

Results from test-backend-ops perf -o MUL_MAT

before (with coopmat2 enabled):
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2556 runs -   492.03 us/run - 117.44 MFLOP/run - 238.69 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4260 runs -   251.12 us/run - 117.44 MFLOP/run - 467.67 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    45.22 us/run - 117.44 MFLOP/run -   2.60 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    60.39 us/run - 117.44 MFLOP/run -   1.94 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    79.27 us/run - 117.44 MFLOP/run -   1.48 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -    99.25 us/run - 117.44 MFLOP/run -   1.18 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   134.78 us/run - 117.44 MFLOP/run - 871.34 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  18744 runs -    54.35 us/run - 117.44 MFLOP/run -   2.16 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.53 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  20448 runs -    50.65 us/run - 117.44 MFLOP/run -   2.32 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    74.10 us/run - 117.44 MFLOP/run -   1.58 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.20 us/run - 117.44 MFLOP/run -   1.10 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                11928 runs -    85.06 us/run - 117.44 MFLOP/run -   1.38 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   505.11 us/run - 234.88 MFLOP/run - 465.01 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2556 runs -   447.88 us/run - 234.88 MFLOP/run - 524.43 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   435.22 us/run - 234.88 MFLOP/run - 539.68 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   446.16 us/run - 234.88 MFLOP/run - 526.44 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   542.16 us/run - 234.88 MFLOP/run - 433.23 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   510.54 us/run - 234.88 MFLOP/run - 460.07 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   489.83 us/run - 234.88 MFLOP/run - 479.51 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   630.16 us/run - 234.88 MFLOP/run - 372.73 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   567.43 us/run - 234.88 MFLOP/run - 413.94 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   505.23 us/run - 234.88 MFLOP/run - 464.89 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   704.45 us/run - 234.88 MFLOP/run - 333.43 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   485.28 us/run - 234.88 MFLOP/run - 484.01 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1704 runs -   588.47 us/run - 234.88 MFLOP/run - 399.14 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1988 runs -   507.20 us/run - 352.32 MFLOP/run - 694.64 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2272 runs -   446.84 us/run - 352.32 MFLOP/run - 788.48 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   436.91 us/run - 352.32 MFLOP/run - 806.40 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   466.01 us/run - 352.32 MFLOP/run - 756.04 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   542.84 us/run - 352.32 MFLOP/run - 649.03 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   508.58 us/run - 352.32 MFLOP/run - 692.75 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   489.72 us/run - 352.32 MFLOP/run - 719.44 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   623.28 us/run - 352.32 MFLOP/run - 565.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   567.26 us/run - 352.32 MFLOP/run - 621.09 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   658.65 us/run - 352.32 MFLOP/run - 534.92 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   567.61 us/run - 352.32 MFLOP/run - 620.71 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   486.81 us/run - 352.32 MFLOP/run - 723.74 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1704 runs -   595.93 us/run - 352.32 MFLOP/run - 591.21 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   509.75 us/run - 469.76 MFLOP/run - 921.54 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2343 runs -   449.00 us/run - 469.76 MFLOP/run -   1.05 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   437.28 us/run - 469.76 MFLOP/run -   1.07 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   468.96 us/run - 469.76 MFLOP/run -   1.00 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   544.17 us/run - 469.76 MFLOP/run - 863.26 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1491 runs -   686.28 us/run - 469.76 MFLOP/run - 684.51 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   489.94 us/run - 469.76 MFLOP/run - 958.81 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   470.73 us/run - 469.76 MFLOP/run - 997.94 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   568.54 us/run - 469.76 MFLOP/run - 826.26 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   508.29 us/run - 469.76 MFLOP/run - 924.20 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   568.17 us/run - 469.76 MFLOP/run - 826.80 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   487.63 us/run - 469.76 MFLOP/run - 963.35 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1917 runs -   528.13 us/run - 469.76 MFLOP/run - 889.49 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2052 runs -   510.96 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2223 runs -   449.89 us/run - 587.20 MFLOP/run -   1.31 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2394 runs -   438.36 us/run - 587.20 MFLOP/run -   1.34 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2223 runs -   471.64 us/run - 587.20 MFLOP/run -   1.25 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   546.42 us/run - 587.20 MFLOP/run -   1.07 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   510.66 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   494.39 us/run - 587.20 MFLOP/run -   1.19 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2223 runs -   470.94 us/run - 587.20 MFLOP/run -   1.25 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   570.14 us/run - 587.20 MFLOP/run -   1.03 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1710 runs -   622.60 us/run - 587.20 MFLOP/run - 943.14 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   674.18 us/run - 587.20 MFLOP/run - 870.99 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   489.73 us/run - 587.20 MFLOP/run -   1.20 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 2052 runs -   515.57 us/run - 587.20 MFLOP/run -   1.14 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2033 runs -   515.37 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2247 runs -   454.28 us/run - 939.52 MFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2354 runs -   440.24 us/run - 939.52 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   470.06 us/run - 939.52 MFLOP/run -   2.00 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1926 runs -   547.73 us/run - 939.52 MFLOP/run -   1.72 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1498 runs -   668.86 us/run - 939.52 MFLOP/run -   1.40 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   493.21 us/run - 939.52 MFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1712 runs -   607.48 us/run - 939.52 MFLOP/run -   1.55 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1819 runs -   571.28 us/run - 939.52 MFLOP/run -   1.64 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1498 runs -   672.34 us/run - 939.52 MFLOP/run -   1.40 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1391 runs -   745.89 us/run - 939.52 MFLOP/run -   1.26 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   492.09 us/run - 939.52 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1926 runs -   539.10 us/run - 939.52 MFLOP/run -   1.74 TFLOPS

after:
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2556 runs -   492.15 us/run - 117.44 MFLOP/run - 238.63 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4260 runs -   251.55 us/run - 117.44 MFLOP/run - 466.87 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    45.77 us/run - 117.44 MFLOP/run -   2.57 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  16188 runs -    63.76 us/run - 117.44 MFLOP/run -   1.84 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    80.85 us/run - 117.44 MFLOP/run -   1.45 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   117.77 us/run - 117.44 MFLOP/run - 997.23 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   134.44 us/run - 117.44 MFLOP/run - 873.56 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  18744 runs -    53.62 us/run - 117.44 MFLOP/run -   2.19 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.70 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  20448 runs -    50.27 us/run - 117.44 MFLOP/run -   2.34 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    95.54 us/run - 117.44 MFLOP/run -   1.23 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.41 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                11928 runs -    84.43 us/run - 117.44 MFLOP/run -   1.39 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   494.87 us/run - 234.88 MFLOP/run - 474.63 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4260 runs -   253.07 us/run - 234.88 MFLOP/run - 928.13 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  16188 runs -    63.20 us/run - 234.88 MFLOP/run -   3.72 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    75.14 us/run - 234.88 MFLOP/run -   3.13 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    93.91 us/run - 234.88 MFLOP/run -   2.50 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   110.77 us/run - 234.88 MFLOP/run -   2.12 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   137.74 us/run - 234.88 MFLOP/run -   1.71 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    91.60 us/run - 234.88 MFLOP/run -   2.56 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   123.03 us/run - 234.88 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    88.40 us/run - 234.88 MFLOP/run -   2.66 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    87.25 us/run - 234.88 MFLOP/run -   2.69 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   109.90 us/run - 234.88 MFLOP/run -   2.14 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 9798 runs -   103.26 us/run - 234.88 MFLOP/run -   2.27 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2272 runs -   499.86 us/run - 352.32 MFLOP/run - 704.84 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    3976 runs -   258.54 us/run - 352.32 MFLOP/run -   1.36 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13064 runs -    78.11 us/run - 352.32 MFLOP/run -   4.51 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   119.23 us/run - 352.32 MFLOP/run -   2.95 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9088 runs -   111.03 us/run - 352.32 MFLOP/run -   3.17 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6248 runs -   164.92 us/run - 352.32 MFLOP/run -   2.14 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7100 runs -   141.27 us/run - 352.32 MFLOP/run -   2.49 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8236 runs -   123.62 us/run - 352.32 MFLOP/run -   2.85 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.13 us/run - 352.32 MFLOP/run -   2.38 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    91.87 us/run - 352.32 MFLOP/run -   3.84 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   108.56 us/run - 352.32 MFLOP/run -   3.25 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   121.65 us/run - 352.32 MFLOP/run -   2.90 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 7668 runs -   132.48 us/run - 352.32 MFLOP/run -   2.66 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   501.44 us/run - 469.76 MFLOP/run - 936.83 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4047 runs -   259.78 us/run - 469.76 MFLOP/run -   1.81 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -    98.85 us/run - 469.76 MFLOP/run -   4.75 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8094 runs -   125.80 us/run - 469.76 MFLOP/run -   3.73 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7881 runs -   128.16 us/run - 469.76 MFLOP/run -   3.67 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   157.07 us/run - 469.76 MFLOP/run -   2.99 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   172.95 us/run - 469.76 MFLOP/run -   2.72 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8307 runs -   121.43 us/run - 469.76 MFLOP/run -   3.87 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   160.11 us/run - 469.76 MFLOP/run -   2.93 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9159 runs -   111.70 us/run - 469.76 MFLOP/run -   4.21 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   130.82 us/run - 469.76 MFLOP/run -   3.59 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7242 runs -   141.93 us/run - 469.76 MFLOP/run -   3.31 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 6390 runs -   158.66 us/run - 469.76 MFLOP/run -   2.96 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2052 runs -   510.92 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2223 runs -   457.94 us/run - 587.20 MFLOP/run -   1.28 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2394 runs -   438.06 us/run - 587.20 MFLOP/run -   1.34 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   554.66 us/run - 587.20 MFLOP/run -   1.06 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   544.65 us/run - 587.20 MFLOP/run -   1.08 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   511.42 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   492.77 us/run - 587.20 MFLOP/run -   1.19 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2223 runs -   472.01 us/run - 587.20 MFLOP/run -   1.24 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   572.29 us/run - 587.20 MFLOP/run -   1.03 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   671.28 us/run - 587.20 MFLOP/run - 874.75 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   570.02 us/run - 587.20 MFLOP/run -   1.03 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   489.23 us/run - 587.20 MFLOP/run -   1.20 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1881 runs -   556.97 us/run - 587.20 MFLOP/run -   1.05 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2033 runs -   515.25 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2247 runs -   463.40 us/run - 939.52 MFLOP/run -   2.03 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2354 runs -   440.64 us/run - 939.52 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   480.48 us/run - 939.52 MFLOP/run -   1.96 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1926 runs -   547.03 us/run - 939.52 MFLOP/run -   1.72 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   514.89 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   495.21 us/run - 939.52 MFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   475.24 us/run - 939.52 MFLOP/run -   1.98 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1819 runs -   572.38 us/run - 939.52 MFLOP/run -   1.64 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   514.41 us/run - 939.52 MFLOP/run -   1.83 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1391 runs -   751.80 us/run - 939.52 MFLOP/run -   1.25 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   493.02 us/run - 939.52 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1712 runs -   599.61 us/run - 939.52 MFLOP/run -   1.57 TFLOPS

The "before" results with coopmat1 or no coopmat were worse (I can shared if somebody is interested, but probably more useful to benchmark another GPU instead).

Still thinking about where to put the cutoff for switching from mul_mat_vec to mul_mat. It seems like N==8 would still be better with mul_mat_vec, and supporting it doesn't cost anything except a little compile time. Let's collect data on some other systems before finalizing anything.

@jeffbolznv
Collaborator Author

CC @netrunnereve, can you please help with some perf tests?

@jeffbolznv
Collaborator Author

Results with mul_mat_vec_max_cols == 8:

|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    0.184 |  2777.75 |    1.406 |    91.03 |    1.590 |   402.41 |
|   512 |    128 |    2 |    768 |    0.144 |  3554.54 |    1.691 |   151.36 |    1.835 |   418.43 |
|   512 |    128 |    4 |   1024 |    0.140 |  3655.89 |    1.978 |   258.90 |    2.118 |   483.56 |
|   512 |    128 |    8 |   1536 |    0.147 |  3484.46 |    3.163 |   323.70 |    3.310 |   464.00 |
|   512 |    128 |   16 |   2560 |    0.149 |  3427.04 |    7.199 |   284.49 |    7.348 |   348.38 |

  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8208 runs -   122.57 us/run - 587.20 MFLOP/run -   4.79 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5130 runs -   198.44 us/run - 587.20 MFLOP/run -   2.96 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6498 runs -   154.93 us/run - 587.20 MFLOP/run -   3.79 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5301 runs -   192.70 us/run - 587.20 MFLOP/run -   3.05 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   208.25 us/run - 587.20 MFLOP/run -   2.82 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7011 runs -   144.74 us/run - 587.20 MFLOP/run -   4.06 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5130 runs -   201.45 us/run - 587.20 MFLOP/run -   2.91 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   164.43 us/run - 587.20 MFLOP/run -   3.57 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5472 runs -   183.34 us/run - 587.20 MFLOP/run -   3.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6327 runs -   159.89 us/run - 587.20 MFLOP/run -   3.67 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 6327 runs -   160.11 us/run - 587.20 MFLOP/run -   3.67 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2033 runs -   508.04 us/run - 939.52 MFLOP/run -   1.85 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    3531 runs -   284.00 us/run - 939.52 MFLOP/run -   3.31 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5350 runs -   189.77 us/run - 939.52 MFLOP/run -   4.95 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   302.83 us/run - 939.52 MFLOP/run -   3.10 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   221.13 us/run - 939.52 MFLOP/run -   4.25 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3424 runs -   292.64 us/run - 939.52 MFLOP/run -   3.21 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3210 runs -   313.45 us/run - 939.52 MFLOP/run -   3.00 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   219.81 us/run - 939.52 MFLOP/run -   4.27 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   308.18 us/run - 939.52 MFLOP/run -   3.05 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5029 runs -   202.19 us/run - 939.52 MFLOP/run -   4.65 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4708 runs -   216.55 us/run - 939.52 MFLOP/run -   4.34 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   218.30 us/run - 939.52 MFLOP/run -   4.30 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 4173 runs -   242.99 us/run - 939.52 MFLOP/run -   3.87 TFLOPS

@netrunnereve
Collaborator

> CC @netrunnereve, can you please help with some perf tests?

Here are the numbers on my RX 470; it's much faster with small N compared to master. My card prefers a max cols value of 8, or maybe something even larger.

Master:

  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1216.30 us/run - 117.44 MFLOP/run -  96.56 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1704 runs -   648.42 us/run - 117.44 MFLOP/run - 181.12 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   219.79 us/run - 117.44 MFLOP/run - 534.32 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   263.19 us/run - 117.44 MFLOP/run - 446.21 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   315.26 us/run - 117.44 MFLOP/run - 372.52 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   343.19 us/run - 117.44 MFLOP/run - 342.21 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   341.83 us/run - 117.44 MFLOP/run - 343.57 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   240.04 us/run - 117.44 MFLOP/run - 489.24 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   444.23 us/run - 117.44 MFLOP/run - 264.37 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   236.60 us/run - 117.44 MFLOP/run - 496.38 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   311.48 us/run - 117.44 MFLOP/run - 377.04 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   365.34 us/run - 117.44 MFLOP/run - 321.46 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 5112 runs -   225.15 us/run - 117.44 MFLOP/run - 521.60 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     426 runs - 34594.86 us/run - 234.88 MFLOP/run -   6.79 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     426 runs -  6174.27 us/run - 234.88 MFLOP/run -  38.04 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3062.32 us/run - 234.88 MFLOP/run -  76.70 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  2869.80 us/run - 234.88 MFLOP/run -  81.85 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3619.29 us/run - 234.88 MFLOP/run -  64.90 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  2944.36 us/run - 234.88 MFLOP/run -  79.77 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3155.41 us/run - 234.88 MFLOP/run -  74.44 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3600.20 us/run - 234.88 MFLOP/run -  65.24 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  5398.02 us/run - 234.88 MFLOP/run -  43.51 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3558.89 us/run - 234.88 MFLOP/run -  66.00 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3923.29 us/run - 234.88 MFLOP/run -  59.87 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3643.44 us/run - 234.88 MFLOP/run -  64.47 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  426 runs -  3137.46 us/run - 234.88 MFLOP/run -  74.86 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     284 runs - 35506.70 us/run - 352.32 MFLOP/run -   9.92 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     284 runs -  6184.04 us/run - 352.32 MFLOP/run -  56.97 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    568 runs -  3336.13 us/run - 352.32 MFLOP/run - 105.61 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    568 runs -  3206.07 us/run - 352.32 MFLOP/run - 109.89 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4161.13 us/run - 352.32 MFLOP/run -  84.67 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  3721.77 us/run - 352.32 MFLOP/run -  94.67 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    568 runs -  3492.59 us/run - 352.32 MFLOP/run - 100.88 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  3908.29 us/run - 352.32 MFLOP/run -  90.15 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  5905.91 us/run - 352.32 MFLOP/run -  59.66 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4338.64 us/run - 352.32 MFLOP/run -  81.21 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4336.33 us/run - 352.32 MFLOP/run -  81.25 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4010.72 us/run - 352.32 MFLOP/run -  87.84 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  568 runs -  3470.56 us/run - 352.32 MFLOP/run - 101.52 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     213 runs - 36834.47 us/run - 469.76 MFLOP/run -  12.75 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     213 runs -  6144.24 us/run - 469.76 MFLOP/run -  76.46 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3314.64 us/run - 469.76 MFLOP/run - 141.72 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3188.94 us/run - 469.76 MFLOP/run - 147.31 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4113.36 us/run - 469.76 MFLOP/run - 114.20 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3682.00 us/run - 469.76 MFLOP/run - 127.58 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3454.99 us/run - 469.76 MFLOP/run - 135.97 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3915.42 us/run - 469.76 MFLOP/run - 119.98 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    213 runs -  5907.96 us/run - 469.76 MFLOP/run -  79.51 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4337.69 us/run - 469.76 MFLOP/run - 108.30 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4282.27 us/run - 469.76 MFLOP/run - 109.70 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4013.73 us/run - 469.76 MFLOP/run - 117.04 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  426 runs -  3428.75 us/run - 469.76 MFLOP/run - 137.01 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs - 63726.35 us/run - 587.20 MFLOP/run -   9.21 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs -  7152.25 us/run - 587.20 MFLOP/run -  82.10 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4195.32 us/run - 587.20 MFLOP/run - 139.97 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3472.49 us/run - 587.20 MFLOP/run - 169.10 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5101.41 us/run - 587.20 MFLOP/run - 115.11 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5789.15 us/run - 587.20 MFLOP/run - 101.43 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3670.29 us/run - 587.20 MFLOP/run - 159.99 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4199.34 us/run - 587.20 MFLOP/run - 139.83 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    171 runs -  6215.60 us/run - 587.20 MFLOP/run -  94.47 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4672.93 us/run - 587.20 MFLOP/run - 125.66 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5186.44 us/run - 587.20 MFLOP/run - 113.22 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4256.72 us/run - 587.20 MFLOP/run - 137.95 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  342 runs -  4293.20 us/run - 587.20 MFLOP/run - 136.78 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     107 runs - 63861.16 us/run - 939.52 MFLOP/run -  14.71 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     214 runs -  7238.21 us/run - 939.52 MFLOP/run - 129.80 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4469.99 us/run - 939.52 MFLOP/run - 210.18 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3718.92 us/run - 939.52 MFLOP/run - 252.63 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5386.27 us/run - 939.52 MFLOP/run - 174.43 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6098.87 us/run - 939.52 MFLOP/run - 154.05 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3819.89 us/run - 939.52 MFLOP/run - 245.96 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4489.57 us/run - 939.52 MFLOP/run - 209.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6502.73 us/run - 939.52 MFLOP/run - 144.48 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  4957.55 us/run - 939.52 MFLOP/run - 189.51 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5439.68 us/run - 939.52 MFLOP/run - 172.72 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4535.69 us/run - 939.52 MFLOP/run - 207.14 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  321 runs -  4558.73 us/run - 939.52 MFLOP/run - 206.09 GFLOPS

PR:

  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1217.83 us/run - 117.44 MFLOP/run -  96.43 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1704 runs -   648.67 us/run - 117.44 MFLOP/run - 181.05 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   220.52 us/run - 117.44 MFLOP/run - 532.56 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   260.91 us/run - 117.44 MFLOP/run - 450.11 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   317.34 us/run - 117.44 MFLOP/run - 370.07 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   342.40 us/run - 117.44 MFLOP/run - 342.99 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   341.18 us/run - 117.44 MFLOP/run - 344.22 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   239.85 us/run - 117.44 MFLOP/run - 489.63 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   446.84 us/run - 117.44 MFLOP/run - 262.83 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   234.25 us/run - 117.44 MFLOP/run - 501.35 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   313.08 us/run - 117.44 MFLOP/run - 375.12 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   364.63 us/run - 117.44 MFLOP/run - 322.08 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 5112 runs -   225.88 us/run - 117.44 MFLOP/run - 519.93 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1229.47 us/run - 234.88 MFLOP/run - 191.04 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1704 runs -   719.99 us/run - 234.88 MFLOP/run - 326.23 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3834 runs -   286.27 us/run - 234.88 MFLOP/run - 820.49 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   367.11 us/run - 234.88 MFLOP/run - 639.81 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   391.70 us/run - 234.88 MFLOP/run - 599.64 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   447.79 us/run - 234.88 MFLOP/run - 524.54 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   447.22 us/run - 234.88 MFLOP/run - 525.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   335.05 us/run - 234.88 MFLOP/run - 701.02 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   542.40 us/run - 234.88 MFLOP/run - 433.04 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   332.25 us/run - 234.88 MFLOP/run - 706.95 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   411.88 us/run - 234.88 MFLOP/run - 570.26 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   427.64 us/run - 234.88 MFLOP/run - 549.25 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 3408 runs -   295.02 us/run - 234.88 MFLOP/run - 796.15 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1215.49 us/run - 352.32 MFLOP/run - 289.86 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1420 runs -   817.28 us/run - 352.32 MFLOP/run - 431.09 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2840 runs -   378.74 us/run - 352.32 MFLOP/run - 930.24 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   473.56 us/run - 352.32 MFLOP/run - 743.99 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   461.17 us/run - 352.32 MFLOP/run - 763.97 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   554.53 us/run - 352.32 MFLOP/run - 635.36 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   590.02 us/run - 352.32 MFLOP/run - 597.14 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   449.88 us/run - 352.32 MFLOP/run - 783.15 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   631.23 us/run - 352.32 MFLOP/run - 558.15 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   445.85 us/run - 352.32 MFLOP/run - 790.22 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   500.70 us/run - 352.32 MFLOP/run - 703.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   529.47 us/run - 352.32 MFLOP/run - 665.43 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 2840 runs -   388.79 us/run - 352.32 MFLOP/run - 906.19 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1235.53 us/run - 469.76 MFLOP/run - 380.21 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1065 runs -   939.33 us/run - 469.76 MFLOP/run - 500.10 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   459.59 us/run - 469.76 MFLOP/run -   1.02 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   567.11 us/run - 469.76 MFLOP/run - 828.35 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   514.35 us/run - 469.76 MFLOP/run - 913.30 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   664.07 us/run - 469.76 MFLOP/run - 707.40 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1278 runs -   839.34 us/run - 469.76 MFLOP/run - 559.68 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   560.88 us/run - 469.76 MFLOP/run - 837.54 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1278 runs -   882.50 us/run - 469.76 MFLOP/run - 532.31 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   568.51 us/run - 469.76 MFLOP/run - 826.31 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   657.32 us/run - 469.76 MFLOP/run - 714.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   619.77 us/run - 469.76 MFLOP/run - 757.96 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 2343 runs -   470.00 us/run - 469.76 MFLOP/run - 999.50 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs - 63729.99 us/run - 587.20 MFLOP/run -   9.21 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs -  7168.01 us/run - 587.20 MFLOP/run -  81.92 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4195.74 us/run - 587.20 MFLOP/run - 139.95 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3476.33 us/run - 587.20 MFLOP/run - 168.91 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5097.87 us/run - 587.20 MFLOP/run - 115.19 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5788.51 us/run - 587.20 MFLOP/run - 101.44 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3678.56 us/run - 587.20 MFLOP/run - 159.63 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4203.57 us/run - 587.20 MFLOP/run - 139.69 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    171 runs -  6215.48 us/run - 587.20 MFLOP/run -  94.47 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4675.67 us/run - 587.20 MFLOP/run - 125.59 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5188.15 us/run - 587.20 MFLOP/run - 113.18 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4257.13 us/run - 587.20 MFLOP/run - 137.93 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  342 runs -  4294.46 us/run - 587.20 MFLOP/run - 136.73 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     107 runs - 63876.04 us/run - 939.52 MFLOP/run -  14.71 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     214 runs -  7251.70 us/run - 939.52 MFLOP/run - 129.56 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4471.63 us/run - 939.52 MFLOP/run - 210.11 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3718.53 us/run - 939.52 MFLOP/run - 252.66 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5383.71 us/run - 939.52 MFLOP/run - 174.51 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6097.81 us/run - 939.52 MFLOP/run - 154.08 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3819.02 us/run - 939.52 MFLOP/run - 246.01 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4489.68 us/run - 939.52 MFLOP/run - 209.26 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6507.34 us/run - 939.52 MFLOP/run - 144.38 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  4957.38 us/run - 939.52 MFLOP/run - 189.52 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5441.24 us/run - 939.52 MFLOP/run - 172.67 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4533.26 us/run - 939.52 MFLOP/run - 207.25 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  321 runs -  4559.45 us/run - 939.52 MFLOP/run - 206.06 GFLOPS

With the mul_mat_vec column limit (max cols) raised to 8:

  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     855 runs -  1311.54 us/run - 587.20 MFLOP/run - 447.72 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1026 runs -  1087.28 us/run - 587.20 MFLOP/run - 540.07 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   552.73 us/run - 587.20 MFLOP/run -   1.06 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   665.14 us/run - 587.20 MFLOP/run - 882.83 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1710 runs -   601.74 us/run - 587.20 MFLOP/run - 975.85 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   765.90 us/run - 587.20 MFLOP/run - 766.68 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1026 runs -  1153.59 us/run - 587.20 MFLOP/run - 509.02 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   720.47 us/run - 587.20 MFLOP/run - 815.03 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1026 runs -  1006.35 us/run - 587.20 MFLOP/run - 583.50 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   711.60 us/run - 587.20 MFLOP/run - 825.18 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   764.91 us/run - 587.20 MFLOP/run - 767.68 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   765.83 us/run - 587.20 MFLOP/run - 766.76 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1881 runs -   573.36 us/run - 587.20 MFLOP/run -   1.02 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     642 runs -  1672.57 us/run - 939.52 MFLOP/run - 561.73 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     749 runs -  1454.35 us/run - 939.52 MFLOP/run - 646.01 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1284 runs -   834.69 us/run - 939.52 MFLOP/run -   1.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1180.97 us/run - 939.52 MFLOP/run - 795.55 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1177 runs -   862.64 us/run - 939.52 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    749 runs -  1387.07 us/run - 939.52 MFLOP/run - 677.35 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    749 runs -  1511.23 us/run - 939.52 MFLOP/run - 621.69 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1060.27 us/run - 939.52 MFLOP/run - 886.12 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1257.63 us/run - 939.52 MFLOP/run - 747.06 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1067.26 us/run - 939.52 MFLOP/run - 880.32 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1091.97 us/run - 939.52 MFLOP/run - 860.39 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1069.41 us/run - 939.52 MFLOP/run - 878.54 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1177 runs -   866.70 us/run - 939.52 MFLOP/run -   1.08 TFLOPS

@Mushoz

Mushoz commented Dec 28, 2024

Giving my results with a 7900XTX running radv:

This PR:

main: n_kv_max = 4096, n_batch = 2048, n_ubatch = 512, flash_attn = 0, is_pp_shared = 1, n_gpu_layers = 99, n_threads = 12, n_threads_batch = 12

|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    1.590 |   322.06 |    3.899 |    32.83 |    5.489 |   116.60 |
|   512 |    128 |    2 |    768 |    1.567 |   326.75 |    5.118 |    50.02 |    6.684 |   114.89 |
|   512 |    128 |    4 |   1024 |    1.578 |   324.52 |    7.198 |    71.13 |    8.776 |   116.68 |
|   512 |    128 |    8 |   1536 |    1.579 |   324.20 |   37.659 |    27.19 |   39.238 |    39.15 |
|   512 |    128 |   16 |   2560 |    1.584 |   323.15 |   28.294 |    72.38 |   29.879 |    85.68 |

Master:

main: n_kv_max = 4096, n_batch = 2048, n_ubatch = 512, flash_attn = 0, is_pp_shared = 1, n_gpu_layers = 99, n_threads = 12, n_threads_batch = 12

|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    1.578 |   324.39 |    3.838 |    33.35 |    5.416 |   118.17 |
|   512 |    128 |    2 |    768 |    1.555 |   329.33 |   31.047 |     8.25 |   32.602 |    23.56 |
|   512 |    128 |    4 |   1024 |    1.570 |   326.11 |   33.209 |    15.42 |   34.779 |    29.44 |
|   512 |    128 |    8 |   1536 |    1.571 |   325.94 |   37.241 |    27.50 |   38.812 |    39.58 |
|   512 |    128 |   16 |   2560 |    1.575 |   325.05 |   28.106 |    72.87 |   29.681 |    86.25 |

Conclusion:

  1. Very minor regression in the N=1 case, but given the speedup at the other sizes it is probably worth it. Unless we can keep the N=1 case the same as it is right now, perhaps?
  2. Absolutely massive boost at N=2 and N=4. I am actually seeing very good speedups at those batch sizes instead of the massive performance dropoff before.
  3. N=8 and N=16 seem unchanged. Is there any chance we can use the same logic for these batch sizes? Given that N=4 is faster than N=8, it probably makes sense to use this logic at larger batch sizes as well, at least for the 7900XTX.

Let me know if you want any additional tests at different batch sizes. Thanks for making this PR!

@jeffbolznv jeffbolznv changed the title from "draft: vulkan: optimize mul_mat for small values of N" to "vulkan: optimize mul_mat for small values of N" on Dec 28, 2024
@jeffbolznv
Collaborator Author

I didn't see a perf regression for N==1. I've updated the limit to 8, and removed "draft".

@jeffbolznv
Collaborator Author

Thanks @Mushoz. I've updated the limit to 8. Feel free to try 16, but I suspect the mat-mat mul path would work better for 16, at least if we tuned the matrix sizes (the current set of three sizes may be limiting...).
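For context, a minimal, hypothetical sketch of the kind of cutover being discussed: below a small-N limit the multi-column mul_mat_vec shaders handle the multiply, and above it the tiled mat-mat mul path takes over. The function name and constant below are illustrative, not the actual ggml-vulkan identifiers, and the real dispatch logic also considers quantization type and device.

```cpp
#include <cstdint>

// Illustrative only -- not the real ggml-vulkan dispatch code.
static constexpr int64_t MMV_MAX_COLS = 8;  // small-N limit discussed in this PR

static bool should_use_mul_mat_vec(int64_t n_cols_b) {
    // n_cols_b is N, the number of columns of B (the batch size in these tests)
    return n_cols_b <= MMV_MAX_COLS;
}
```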

@Mushoz

Mushoz commented Dec 28, 2024

Token generation is looking good at batch size 8 as well now!

|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    1.593 |   321.35 |    3.899 |    32.83 |    5.493 |   116.52 |
|   512 |    128 |    2 |    768 |    1.569 |   326.24 |    5.125 |    49.96 |    6.694 |   114.73 |
|   512 |    128 |    4 |   1024 |    1.572 |   325.74 |    7.211 |    71.00 |    8.783 |   116.59 |
|   512 |    128 |    8 |   1536 |    1.585 |   323.12 |   11.803 |    86.75 |   13.388 |   114.73 |
|   512 |    128 |   16 |   2560 |    1.582 |   323.66 |   28.380 |    72.16 |   29.962 |    85.44 |

Going to try and see if a limit of 16 makes more sense, as N=8 is now outperforming N=16.

@Mushoz

Mushoz commented Dec 28, 2024

> I didn't see a perf regression for N==1

What did you mean by this, btw? I can clearly see a 0.5 token/sec drop in my N=1 result on this branch vs the master branch. I think that's outside the margin of error?

@jeffbolznv
Collaborator Author

I meant in my own local testing. Is this outside the margin of error for you?

@Mushoz

Mushoz commented Dec 28, 2024

Limit at 16:

|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    1.596 |   320.78 |    3.896 |    32.85 |    5.492 |   116.53 |
|   512 |    128 |    2 |    768 |    1.568 |   326.60 |    5.129 |    49.91 |    6.697 |   114.68 |
|   512 |    128 |    4 |   1024 |    1.575 |   325.00 |    7.209 |    71.02 |    8.785 |   116.57 |
|   512 |    128 |    8 |   1536 |    1.581 |   323.78 |   11.813 |    86.68 |   13.394 |   114.67 |
|   512 |    128 |   16 |   2560 |    1.589 |   322.18 |   71.415 |    28.68 |   73.005 |    35.07 |

So it seems like 8 is indeed the sweet spot.

@jeffbolznv
Collaborator Author

I'm surprised it's worse at 16. Maybe it's using too many registers? You could try changing rm_kq and rm_stdq to 1; it may not make sense to do multiple rows with such a large value of N.
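For anyone who wants to try that experiment, a hedged sketch follows. It assumes rm_stdq and rm_kq are the per-format rows-per-workgroup factors chosen when the mul_mat_vec pipelines are created; the actual selection logic in ggml-vulkan.cpp may differ, so treat this as an illustration of the idea rather than a patch.

```cpp
#include <cstdint>

// Hypothetical sketch of the suggested experiment, not the real code:
// force a single row per workgroup so the shader uses fewer registers
// when NUM_COLS is large (e.g. 16).
static void force_single_row_mmv(uint32_t &rm_stdq, uint32_t &rm_kq) {
    rm_stdq = 1;  // rows per workgroup for the "standard" quant formats
    rm_kq   = 1;  // rows per workgroup for the k-quant formats
}
```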
