Does this work with unseen speech? #10
Hi @sbkim052, yeah, it should work with unseen speech as input. All the examples here are converted from unseen speech. If you want to convert to an unseen speaker, though, you'd have to retrain the model. You could also look into conditioning on x-vectors or other speaker embeddings if you want to do zero-shot conversion.
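In case it's useful, grabbing an x-vector for a new speaker is straightforward with a pretrained encoder. Here's a minimal sketch using SpeechBrain's pretrained spkrec-xvect-voxceleb model (the model name and 512-dim output come from its model card; the file path is just a placeholder, and none of this is part of this repo):

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Pretrained x-vector speaker encoder (trained on VoxCeleb).
encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-xvect-voxceleb",
    savedir="pretrained_models/spkrec-xvect-voxceleb",
)

# A few seconds of reference audio from the target speaker.
signal, sr = torchaudio.load("target_speaker.wav")  # placeholder path

# Utterance-level speaker embedding; shape is (1, 1, 512) for this model.
xvector = encoder.encode_batch(signal)
print(xvector.shape)
```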
Thank you for answering :) I have an additional question about your answer.
@sbkim052, no problem. The basic idea is to train a speaker verification/classification model to learn an embedding space for speaker identity. Then, instead of conditioning the decoder on a fixed speaker id (as I did in this repo), you condition it on the learned embeddings. At test time, you extract the embedding for a new, unseen speaker and condition the decoder on it to generate speech in that voice. For more info, take a look at this paper. They use a text-to-speech model instead of an autoencoder, but the general idea is the same.
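To make "condition the decoder on the learned embeddings" concrete, here's a minimal PyTorch sketch. The GRU decoder, the dimensions, and all the names are illustrative assumptions for this comment, not this repo's actual architecture:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Toy decoder conditioned on a speaker embedding instead of a
    fixed speaker-id lookup table."""

    def __init__(self, content_dim=64, spk_dim=512, hidden_dim=256, mel_dim=80):
        super().__init__()
        self.rnn = nn.GRU(content_dim + spk_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, mel_dim)

    def forward(self, content, spk_emb):
        # content: (batch, time, content_dim) content features from the encoder
        # spk_emb: (batch, spk_dim) utterance-level speaker embedding
        spk = spk_emb.unsqueeze(1).expand(-1, content.size(1), -1)
        h, _ = self.rnn(torch.cat([content, spk], dim=-1))
        return self.proj(h)  # (batch, time, mel_dim) mel-spectrogram frames

decoder = Decoder()
content = torch.randn(1, 100, 64)   # content features for 100 frames
spk_emb = torch.randn(1, 512)       # x-vector of an unseen target speaker
mel = decoder(content, spk_emb)     # (1, 100, 80)
```

The only change from id-based conditioning is that an nn.Embedding(num_speakers, spk_dim) lookup is replaced by an embedding computed from reference audio, which is what makes unseen target speakers possible.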
Thank you for sharing your repo. My question is the same as the one above:
Does this work with unseen speech?