
How to call remote ollama: how to call the ollama service through a URL #57

Open
smileyboy2019 opened this issue Nov 14, 2024 · 3 comments

Comments

@smileyboy2019

How do I call a remote ollama instance? How do I call the ollama service through a URL?

@DebopamParam
Contributor

Hey @smileyboy2019. Can you elaborate a little on what you are trying to achieve, so that hopefully we can help?

@smileyboy2019
Author

curl http://192.168.1.157:11434/api/embeddings -d '{
"model": "qwen2.5:7b",
"prompt": "prompt"
}'
How do I make a call like the above example? I need to pass in three parameters: the URL address, the model name, and the prompt. Which method should be used?
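For reference, the same request as the curl command above can be made from Python with only the standard library. This is a minimal sketch, not part of byaldi: the host address and model name are copied from the example, and the function names (`build_embeddings_request`, `get_embedding`) are illustrative, not from any library.

```python
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.157:11434"  # the remote ollama host from the example


def build_embeddings_request(base_url, model, prompt):
    """Build the endpoint URL and the JSON body for ollama's /api/embeddings."""
    endpoint = f"{base_url.rstrip('/')}/api/embeddings"
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return endpoint, body


def get_embedding(base_url, model, prompt):
    """POST the prompt to the remote ollama server and return the embedding vector."""
    endpoint, body = build_embeddings_request(base_url, model, prompt)
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]


# Requires a reachable ollama server at OLLAMA_URL:
# vector = get_embedding(OLLAMA_URL, "qwen2.5:7b", "prompt")
```

This mirrors the curl call exactly: same endpoint, same two-field JSON payload, with the base URL kept separate so it can point at any remote host.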

@DebopamParam
Contributor

DebopamParam commented Nov 14, 2024

Hey! I hope this answer might help to solve your query.

Byaldi, with the help of the colpali/colqwen2 engine, enables you to create vision embeddings directly from your docs.

These embeddings let you retrieve, based on your query, the specific pages from a large volume of docs that will fit in the context window of your VLM; you can then pass those pages to any VLM (local or API-based) for Q/A or any other task.

To create these embeddings, we specifically use modified and fine-tuned versions of the PaliGemma/Qwen models.

You can only use these modified models from Vidore. Unfortunately, you can't substitute arbitrary other models from Hugging Face.

The model you are referring to here does not have multimodal capabilities, so it wouldn't work for multimodal docs anyway.

I hope this answers your question. Feel free to continue this discussion if you have something else to ask. Thank you.
