Feature request

Hello, first of all, thank you for the great work on this library and for making it open source. I wanted to ask whether it is possible to get the token embeddings directly from the input; I was not able to find that option. If possible, it would be great to imitate the sentence-transformers behaviour:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")
model.encode("This is a test sentence", output_value="token_embeddings")
An output_value option could be added to the API parameters if possible. Moreover, if feasible, other types of pooling could be applied to the token embeddings at runtime (I don't know if this is possible in the library).
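To illustrate the second part of the request: once the raw token embeddings are exposed, any pooling strategy can be applied client-side at runtime. The sketch below is illustrative only (the `pool` function and its parameter names are not part of any existing API); it assumes token embeddings of shape (seq_len, dim) and a 0/1 attention mask.

```python
import numpy as np

def pool(token_embeddings, attention_mask, strategy="mean"):
    # token_embeddings: (seq_len, dim) array of per-token vectors.
    # attention_mask: (seq_len,) array of 0/1; padding tokens are excluded.
    mask = attention_mask.astype(bool)
    if strategy == "mean":
        return token_embeddings[mask].mean(axis=0)
    if strategy == "max":
        return token_embeddings[mask].max(axis=0)
    if strategy == "cls":
        return token_embeddings[0]  # first token, e.g. [CLS]
    raise ValueError(f"unknown pooling strategy: {strategy}")

# Toy example: 4 tokens (the last one is padding), embedding dim 2.
emb = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [9.0, 9.0]])
mask = np.array([1, 1, 1, 0])
mean_vec = pool(emb, mask, "mean")  # -> [3.0, 4.0]
max_vec = pool(emb, mask, "max")    # -> [5.0, 6.0]
```

So the library would only need to return the token embeddings (plus the mask); the pooling choice itself could stay on the caller's side.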
Motivation
I want to use late chunking, and the token embeddings are required for that.
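For context, late chunking embeds the whole document in a single forward pass (so every token embedding already reflects the full document context) and only afterwards pools the token embeddings per chunk. A minimal sketch of that pooling step, assuming token embeddings are available as a (seq_len, dim) array (`late_chunk` and the span format are illustrative, not an existing API):

```python
import numpy as np

def late_chunk(token_embeddings, chunk_spans):
    # token_embeddings: (seq_len, dim) from ONE forward pass over the full
    # document, so each token has attended to the entire context.
    # chunk_spans: list of (start, end) token index pairs, end exclusive.
    # Returns one mean-pooled vector per chunk.
    return [token_embeddings[start:end].mean(axis=0) for start, end in chunk_spans]

# Toy example: 6 tokens, dim 2, split into two chunks of 3 tokens each.
doc_tokens = np.arange(12, dtype=float).reshape(6, 2)
chunk_vecs = late_chunk(doc_tokens, [(0, 3), (3, 6)])
```

This is why pooled sentence vectors alone are not enough: the chunk boundaries are only applied after the model has seen the full input.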
Your contribution
I can try to help.