
Add Whisper language detection #1097

Open · ae9is wants to merge 7 commits into main from add-whisper-language-detection
Conversation

@ae9is commented Dec 13, 2024

See #302

Adds support for automatically detecting language to Whisper tasks.

The existing Hugging Face and Whisper implementations in Python were used as references:
Hugging Face Transformers
Original Whisper

Also updates the existing Whisper test suites, including adding a string similarity check on actual model output (as opposed to just checking output length). Please note that the "new" development dependency for these tests, fastest-levenshtein, is already used by webpack-cli.
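
For reference, a string similarity check of this kind can be built on fastest-levenshtein's distance function. The snippet below is a generic sketch (the threshold and test strings are arbitrary), not necessarily the exact assertion used in the updated test suites:

```js
import { distance } from 'fastest-levenshtein';

// Normalized similarity in [0, 1]: 1 means the strings are identical.
function similarity(expected, actual) {
  const maxLen = Math.max(expected.length, actual.length);
  if (maxLen === 0) return 1;
  return 1 - distance(expected, actual) / maxLen;
}

// Example: tolerate small transcription differences in model output.
// (The 0.8 threshold is an arbitrary value chosen for illustration.)
console.assert(similarity('hello world', 'hello wordl') > 0.8);
```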

@xenova (Collaborator) commented Dec 13, 2024

Thanks for the PR! This will certainly be a useful feature. Regarding the implementation, I think it can be greatly simplified as follows:

  • Instead of using .generate, perform a single forward pass of the inputs
  • Then, consider all logits which correspond to the language token ids
  • Choose the language with the highest score

Currently, the implementation seems to perform a full generation step (could be hundreds of forward passes).
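
For illustration, a rough sketch of the single-pass approach described above. The names and shapes here are assumptions, not the PR's actual code: it assumes model.forward(...) returns { logits } of shape [1, 1, vocab_size] when the decoder input is just the <|startoftranscript|> token, that the ids of the language tokens (<|en|>, <|zh|>, ...) have already been resolved, and that the decoder input may in practice need to be wrapped in a Tensor:

```js
// Sketch only: single forward pass, then argmax over the language token logits.
async function detectLanguage(model, input_features, startOfTranscriptId, languageTokenIds) {
  const { logits } = await model.forward({
    input_features,
    // Assumption: a plain nested array is accepted here; a Tensor may be required.
    decoder_input_ids: [[startOfTranscriptId]],
  });

  // Pick the language token with the highest logit at the first decoded position
  // (with shape [1, 1, vocab_size], the flat index equals the token id).
  let bestId = languageTokenIds[0];
  for (const id of languageTokenIds) {
    if (logits.data[id] > logits.data[bestId]) {
      bestId = id;
    }
  }
  return bestId;
}
```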

@ae9is (Author) commented Dec 15, 2024

Sorry about that; it was simpler to code, and the performance impact for my app was minimal! I've reworked things to run only one pass for language detection.

Thanks for all the work on this library.

@ae9is force-pushed the add-whisper-language-detection branch from 7bbc92f to db84540 on December 16, 2024 at 11:46
@ZhangPeng4242 commented
Hey there, please approve this feature, it's quite useful :)

Comment on lines +3148 to +3158
```js
const output = await this.generate({
  ...options,
  generation_config: {
    ...generation_config,
    good_words_ids,
    num_beams: 1,
    do_sample: false,
  },
  stopping_criteria,
  decoder_input_ids,
});
```
Collaborator:

We should be able to replace this with a single forward pass (by calling this.forward(...)) instead of using a generation step.

Author:

There are a lot of user options for (and logic in) generate, and I wanted to respect them while running language detection. It was simpler to extend generate to stop after one pass than to duplicate that logic and use forward directly.

For example, suppose a user adds a logits processor that suppresses the first 10 seconds' worth of tokens, and a 15-second audio clip contains two languages with the switch at 10 seconds. Language detection should then detect the second language, not the first.
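
For illustration only, one way such a one-step stop could look as a stopping criterion, so that user-supplied generation options and logits processors still apply. This is a sketch under the assumptions noted in the comments, not the PR's actual implementation:

```js
// Assumptions: the StoppingCriteria base class is importable from the package,
// and _call receives the generated ids per batch item and returns one boolean
// per sequence. The real PR may implement this differently.
import { StoppingCriteria } from '@huggingface/transformers';

class SingleStepStoppingCriteria extends StoppingCriteria {
  constructor(initialLength) {
    super();
    // Number of decoder input ids present before generation starts.
    this.initialLength = initialLength;
  }

  _call(input_ids /*, scores */) {
    // Stop each sequence as soon as a single new token has been generated.
    return input_ids.map((ids) => ids.length > this.initialLength);
  }
}
```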
