Hi,
Thank you for the enormous effort in open-sourcing this amazing project!
I'm testing the model on some in-the-wild images. Most images work fine, but a small number of images trigger an error. Upon further investigation, it appears to be an integer overflow in the underlying spconv library:
/home/ruiningli/spconv/spconv/build/core_cc/src/cumm/conv/main/ConvMainUnitTest/ConvMainUnitTest_matmul_split_Ampere_f16f16f16_0.cu(294)
int64_t(N) * int64_t(C) * tv::bit_size(algo_desp.dtype_a) / 8 < int_max assert faild. your data exceed int32 range. this will be fixed in cumm + nvrtc (spconv 2.2/2.3).
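For reference, the bound that the spconv assert enforces can be sketched in plain Python. This is an illustrative reconstruction of the check quoted above, not spconv API; the default of 16 bits matches the `f16` kernel named in the path.

```python
INT32_MAX = 2**31 - 1  # upper bound enforced by spconv's assert


def exceeds_spconv_int32_limit(n_rows: int, channels: int, dtype_bits: int = 16) -> bool:
    """Mirror the check `int64(N) * int64(C) * bit_size(dtype_a) / 8 < int_max`.

    Returns True when the byte size of the implicit-GEMM operand would
    overflow the int32 range, i.e. when spconv would raise this assert.
    """
    return n_rows * channels * dtype_bits // 8 >= INT32_MAX


# A small input is safely below the limit:
print(exceeds_spconv_int32_limit(100_000, 64))  # → False
```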
I'm wondering if anyone has encountered the same issue during training or inference, and how I might work around it.
Many thanks!
Ruining
OK, I dug a bit further into this. The issue seems to occur only during batched inference, i.e., when num_samples is greater than 1 at inference time.
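Given that observation, one possible workaround (a sketch under assumptions, not a confirmed fix) is to split the batch and run inference one sample at a time, so each spconv call stays below the int32 byte limit. The `model` callable and list-of-samples interface here are hypothetical stand-ins for the project's actual inference entry point.

```python
def infer_per_sample(model, samples):
    """Run inference with an effective batch size of 1.

    `model` is assumed to be a callable that accepts a list of samples and
    returns a result per call; looping keeps each call's tensor sizes small
    enough to avoid the spconv int32 overflow seen with num_samples > 1.
    """
    return [model([sample]) for sample in samples]


# Usage sketch with a dummy model that just reports its batch size:
dummy_model = lambda batch: len(batch)
print(infer_per_sample(dummy_model, ["img_a", "img_b", "img_c"]))  # → [1, 1, 1]
```

This trades throughput for correctness; the error message itself suggests that upgrading to spconv 2.2/2.3 may also resolve the overflow.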