Hi,

I'm trying to adapt the distilbert example so that it processes multiple sequences at once (the provided example only processes a single prompt).
But I'm having trouble providing the proper attention mask to the `DistilBertModel::forward()` method.
Reading the documentation of the `forward()` method of the equivalent Python class, I noticed that this mask is expected to have the same shape as the `input_ids` parameter.
This seems sound, and it is also consistent with the BERT example in Candle, which does it that way when processing multiple sequences to compute similarities:
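From memory, the bert example does roughly the following (a paraphrase rather than the exact code from the repository; `encode_batch`, `PaddingParams` and `get_attention_mask()` come from the tokenizers crate):

```rust
use anyhow::{Error as E, Result};
use candle_core::{Device, Tensor};
use tokenizers::{PaddingParams, Tokenizer};

fn encode_batch(mut tokenizer: Tokenizer, sentences: &[&str], device: &Device) -> Result<(Tensor, Tensor)> {
    // Pad every sequence in the batch to the longest one.
    tokenizer.with_padding(Some(PaddingParams::default()));
    let encodings = tokenizer
        .encode_batch(sentences.to_vec(), true)
        .map_err(E::msg)?;
    // One (S,) tensor per sequence, then stack into (N, S).
    let ids = encodings
        .iter()
        .map(|e| Ok(Tensor::new(e.get_ids(), device)?))
        .collect::<Result<Vec<_>>>()?;
    let masks = encodings
        .iter()
        .map(|e| Ok(Tensor::new(e.get_attention_mask(), device)?))
        .collect::<Result<Vec<_>>>()?;
    Ok((Tensor::stack(&ids, 0)?, Tensor::stack(&masks, 0)?))
}
```

So the attention mask ends up as an NxS tensor of 0/1 values, with 0 marking the padding positions, and it is passed to the model alongside the NxS token ids.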
BUT: in the distilbert example the tokenizer doesn't add any padding, and there is a rather mysterious function that is supposed to compute the attention mask but returns a different (square) shape. For example, for a sequence of 3 tokens it generates the following NxN mask:
[[0, 1, 1],
[0, 0, 1],
[0, 0, 0]]
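Judging only from that output (this is my sketch, not the code from the example, and `square_mask` is just my name for it), the function must be doing something along these lines, i.e. building an upper-triangular, causal-style mask rather than a padding mask:

```rust
use candle_core::{Device, Result, Tensor};

// Builds the square (size, size) mask shown above: 1 wherever the column index
// is greater than the row index, i.e. an upper-triangular pattern that depends
// only on token positions, not on which positions are padding.
fn square_mask(size: usize, device: &Device) -> Result<Tensor> {
    let mask: Vec<u8> = (0..size)
        .flat_map(|i| (0..size).map(move |j| u8::from(j > i)))
        .collect();
    Tensor::from_slice(&mask, (size, size), device)
}
```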
If I simply replace this with the result of the tokenizer's `get_attention_mask()` function, as a 1xN tensor, it works for a single sequence.
But for several sequences, if I pad all the sequences to the same size S and stack the masks obtained for the N sequences into an NxS tensor (as the bert example mentioned earlier does), I get an error like this:
cannot broadcast [2, 32] to [2, 12, 32, 32]
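For what it's worth, the error reads as if the model tries to broadcast the mask against an attention tensor of shape (batch, heads, seq_len, seq_len): a (2, 32) mask cannot be broadcast to (2, 12, 32, 32) because broadcasting aligns dimensions from the right, whereas a (2, 1, 1, 32) mask could be. Purely as an experiment (I'm not claiming this is what `DistilBertModel::forward()` actually expects, that's precisely my question), the stacked mask can be made broadcast-compatible like this:

```rust
use candle_core::{Result, Tensor};

// attention_mask: the (N, S) tensor stacked as above. Reshaping it to
// (N, 1, 1, S) makes it broadcastable against (N, heads, S, S) attention
// scores; whether that matches what the model wants is the open question.
fn expand_mask(attention_mask: &Tensor) -> Result<Tensor> {
    let (n, s) = attention_mask.dims2()?;
    attention_mask.reshape((n, 1, 1, s))
}
```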
I must admit that I don't understand what `DistilBertModel::forward()` expects for the provided mask. I also don't understand what the `get_mask()` function in the example is supposed to do.
Maybe this is just due to my lack of knowledge on the matter, but given the elements mentioned above (the Python equivalent and the similar Candle example with BERT), I'm wondering whether there isn't something wrong with the distilbert example and/or the model implementation?