I have converted the "facebook/m2m100_418M" model, and it is now split into an encoder and a decoder. I'm a beginner and not sure what to do next: since "facebook/m2m100_418M" has no tokenizer.json and uses a SentencePiece tokenizer instead, does the converter support vocabulary segmentation trained with SentencePiece?
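For reference, here is a minimal sketch of what I assume the tokenization side would look like: it uses the Hugging Face `M2M100Tokenizer` (which loads the SentencePiece model `sentencepiece.bpe.model` directly, with no tokenizer.json) to go from text to token ids and back, while the converted encoder/decoder would only replace the model's forward pass. The `generated_ids` name at the end is a placeholder for whatever the converted decoder produces.

```python
# A minimal sketch: the SentencePiece tokenizer can be driven from
# transformers independently of the converted encoder/decoder.
from transformers import M2M100Tokenizer

# Loads sentencepiece.bpe.model + vocab.json; no tokenizer.json is needed.
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
tokenizer.src_lang = "en"

# Encode: text -> token ids to feed into the converted encoder.
inputs = tokenizer("Hello, world!", return_tensors="np")
print(inputs["input_ids"])

# Token id of the target language, used to start decoding (here: French).
print(tokenizer.get_lang_id("fr"))

# Decode: token ids produced by the converted decoder -> text.
# `generated_ids` is a placeholder for the decoder's output.
# print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```

Is this roughly the intended workflow, or does the converted model expect its own vocabulary files?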