libc++abi: terminating due to uncaught exception of type std::runtime_error: [AddMM::eval_cpu] Currently only supports float32. Abort trap: 6 #1
MLX is an array framework for machine learning on Apple silicon, i.e. the M-series chips (read more here: https://github.com/ml-explore/mlx). It seems you have an Intel chip. First thoughts: you can swap out the Whisper part for a compatible Whisper implementation and change the MLX model portions to 'normal' Hugging Face models.
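A minimal sketch of that swap, assuming the pipeline only needs a transcript string and a summary string. The openai-whisper package, the transformers pipeline, and the bart-large-cnn model name are examples for illustration, not code from this repo:

```python
# Sketch: non-MLX fallback for Intel machines. openai-whisper handles
# transcription and a Hugging Face pipeline handles summarization.
import whisper                      # pip install openai-whisper
from transformers import pipeline   # pip install transformers

def transcribe(audio_file: str) -> str:
    model = whisper.load_model("base")
    # fp16=False keeps everything in float32, which is what CPU-only runs want.
    return model.transcribe(audio_file, fp16=False)["text"]

def summarize(text: str) -> str:
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    return summarizer(text, max_length=200, min_length=40)[0]["summary_text"]

if __name__ == "__main__":
    print(summarize(transcribe("files/audio/the_vision.wav")))
```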
Similar issue for me on an M2 MacBook Air. The last output before the crash is: Transcribing files/audio/the_vision.wav (this may take a while)...
I will try to recreate the issue when I get back on my system. Some things you can try: make sure you are not running out of memory (this happened to me once when I used a bigger model); there are related discussions in conda/conda#9589 and apple/ml-stable-diffusion#8; and, if it is a memory issue, try summarize_with_mlx instead of summarize_in_parallel to see if that helps.
It fails at decode_result = model.decode(segment, options). Printing out the segment shows: … and the error is: … Could it just be a data type mismatch?
Interesting. I have never faced this particular issue, because I just use the already-converted Whisper from the MLX examples and haven't run into issues yet. Can you add this and see what happens: mel_segment = mel_segment.astype(mx.float32)?
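For context, a sketch of where that cast would sit; decode_segment is a hypothetical wrapper used only for illustration, with model, options, and the mel segment coming from the surrounding Whisper transcribe code:

```python
import mlx.core as mx

def decode_segment(model, mel_segment, options):
    # The CPU AddMM kernel only supports float32, so cast the (possibly
    # float16) mel segment up before it reaches any matmul.
    mel_segment = mel_segment.astype(mx.float32)
    return model.decode(mel_segment, options)
```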
Could this be an Apple silicon (M1/M2) only issue?
This didn't work. I changed the transcribe definition to default fp16 to False (def transcribe(audio_file, fp16=False, output_path="files/transcripts"):), and that error is fixed; however, I get a similar error when it runs summarize_with_mlx. I have the 4-bit Mixtral. Audio has been transcribed in 22 seconds. Looking at this, I may be missing something, though; the 4-bit model shouldn't need float32, I think.
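For anyone else hitting this: in the mlx-examples Whisper code the fp16 flag is what selects the compute dtype, roughly as in the sketch below (the exact wiring in this repo may differ):

```python
import mlx.core as mx

def transcribe(audio_file, fp16=False, output_path="files/transcripts"):
    # fp16=False forces float32 throughout, the only dtype the CPU AddMM
    # kernel accepts, which is why the original crash goes away.
    dtype = mx.float16 if fp16 else mx.float32
    ...  # load the model and mel spectrogram with this dtype, then decode
```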
Great that you solved that! I've been busy for a couple of weeks. Have you solved the resource_tracker error? I reproduced the error, but only at the Whisper level; it was always because I ran a big model or a very long input, and it was always solved by using a smaller model or breaking the input into smaller chunks.
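If anyone wants to try the smaller-chunks route, here is one way to split the audio up front; pydub is an assumption for the example, not something this repo uses:

```python
# Split a long recording into ~10-minute chunks before transcription, so each
# Whisper call (and each summarization) works on a smaller input.
from pydub import AudioSegment   # pip install pydub (requires ffmpeg)

def split_audio(path, chunk_minutes=10):
    audio = AudioSegment.from_file(path)
    chunk_ms = chunk_minutes * 60 * 1000
    parts = []
    for i, start in enumerate(range(0, len(audio), chunk_ms)):
        part_path = f"{path}.part{i}.wav"
        audio[start:start + chunk_ms].export(part_path, format="wav")
        parts.append(part_path)
    return parts
```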
On my MacBook Pro (Intel i5 CPU, 16 GB RAM), with the following configuration, I tested like this:

Environment:
- macOS 14.3
- Xcode 15.2
- kMDItemVersion = "2.8.5"
- Apple clang version 15.0.0 (clang-1500.1.0.2.5)
- Python 3.11.7

Steps:
- install Miniconda
- comment out the lines of requirements.txt that cannot be installed by pip
- install the remaining packages in requirements.txt with pip
- install the commented-out packages with conda

Then I ran the code, but got:

libc++abi: terminating due to uncaught exception of type std::runtime_error: [AddMM::eval_cpu] Currently only supports float32. Abort trap: 6
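Following up on the first comment above: MLX targets Apple silicon, so on an Intel machine the usual route is a non-MLX fallback. A small, generic check (not code from this repo) that could be used to branch between the two paths:

```python
import platform

def is_apple_silicon() -> bool:
    # platform.machine() is "arm64" on M-series Macs and "x86_64" on Intel Macs.
    return platform.system() == "Darwin" and platform.machine() == "arm64"

use_mlx = is_apple_silicon()   # route Intel machines to the non-MLX path
```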