curl http://192.168.1.157:11434/api/embeddings -d '{
"model": "qwen2.5:7b",
"prompt": "prompt"
}'
How do I call the example above from code? I need to pass three parameters: the URL, the model name, and the prompt. Which method should I use?
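A minimal Python sketch of the same request as the curl command above, using only the standard library. The endpoint, model name, and prompt are taken from the curl example; the function names here are illustrative.

```python
import json
import urllib.request

def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble the POST request for Ollama's /api/embeddings endpoint
    from the three parameters: server URL, model name, and prompt."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def get_embedding(base_url: str, model: str, prompt: str) -> list:
    """Send the request and return the embedding vector from the response."""
    with urllib.request.urlopen(build_request(base_url, model, prompt)) as resp:
        return json.load(resp)["embedding"]

# Example call (assumes the server from the curl command is reachable):
# vec = get_embedding("http://192.168.1.157:11434", "qwen2.5:7b", "prompt")
```

Any HTTP client (requests, httpx, or another language entirely) works the same way: POST a JSON body with `model` and `prompt` to `<base_url>/api/embeddings`.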
Hey! I hope this answer helps resolve your query.
Byaldi, using the colpali/colqwen2 engine, enables you to create vision embeddings directly from your docs.
These embeddings let you retrieve, for a given query, the specific set of pages from a large volume of docs that will fit in the context window of your VLM; you can then pass those pages to any VLM (local or API) for Q/A or any other task.
To create these embeddings, we specifically use modified and fine-tuned versions of the PaliGemma/Qwen models.
You can only use these modified models from Vidore; unfortunately, you can't separately use other models from Hugging Face.
The model you are referring to here does not have multimodal capabilities, so it wouldn't work for multimodal docs anyway.
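To make the workflow concrete, here is a hedged sketch of the index-then-retrieve flow described above. The `RAGMultiModalModel` calls follow Byaldi's documented API as I understand it; the model name, paths, query, and the `pages_that_fit` helper are illustrative assumptions, not part of the library.

```python
# Sketch of the Byaldi retrieval workflow: index docs, search with a
# query, then keep only the top pages that fit the VLM's context window.

def pages_that_fit(results, max_pages=3):
    """Keep the top-scoring (doc_id, page_num) pairs so the selected
    pages fit in the VLM's context window. `results` is a list of
    (doc_id, page_num, score) tuples."""
    ranked = sorted(results, key=lambda r: r[2], reverse=True)
    return [(doc_id, page) for doc_id, page, _score in ranked[:max_pages]]

if __name__ == "__main__":
    # byaldi is a heavy optional dependency, so it is imported lazily here.
    from byaldi import RAGMultiModalModel

    # Model name and paths below are assumptions for illustration.
    model = RAGMultiModalModel.from_pretrained("vidore/colqwen2-v1.0")
    model.index(input_path="docs/", index_name="my_docs", overwrite=True)
    hits = model.search("What does the revenue table show?", k=5)
    pages = pages_that_fit([(h.doc_id, h.page_num, h.score) for h in hits])
    # `pages` can now be rendered to images and passed to any VLM (local or API).
    print(pages)
```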
I hope this answers your question. Feel free to continue this discussion if you have something else to ask. Thank you.
How do I call a remote Ollama instance?
How do I call the Ollama service through a URL?
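If you prefer a client library over raw HTTP, the official `ollama` Python package accepts a `host` argument pointing at a remote server. This is a sketch; the IP and model name are taken from the curl example above, and the `remote_host` helper is illustrative.

```python
# Calling a remote Ollama server via its URL using the `ollama` package
# (assumed installed with `pip install ollama`).

def remote_host(ip: str, port: int = 11434) -> str:
    """Build the base URL for a remote Ollama server (11434 is the default port)."""
    return f"http://{ip}:{port}"

if __name__ == "__main__":
    # The client package is an optional dependency, imported lazily here.
    from ollama import Client

    client = Client(host=remote_host("192.168.1.157"))
    resp = client.embeddings(model="qwen2.5:7b", prompt="prompt")
    print(len(resp["embedding"]))
```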