Ollama & Qwen2.5coder:7b #983
Comments
Try to use Chrome, as Brave/Safari and other browsers are known to cause problems. Also make sure you are on the "stable" branch, not main. If you still have problems, I would recommend joining the community and opening a topic there, as more people are watching and it is better for discussion.
Did you verify your Ollama is working properly without bolt? (Not in the terminal, but with curl requests, to simulate requests from outside.)
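For reference, a minimal sketch of such an outside check against the Ollama HTTP API, assuming the default port 11434 and that the model name matches what `ollama list` reports:

```bash
# List the models the Ollama server knows about:
curl http://localhost:11434/api/tags

# Request a short, non-streamed completion to confirm the model actually generates:
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:7b", "prompt": "Say hello", "stream": false}'
```

If bolt.diy runs on a different machine or in a container, replace localhost with the address that machine would use to reach the Ollama host.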
Same issue. I am calling Ollama from a different server hosted locally. Curl is working fine, but I am not able to make it work in the bolt UI. I checked with other models on Ollama as well.
Fix the .env.example file name to just .env.
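In other words, something like this from the bolt.diy checkout (a sketch of the rename the comment above suggests; fill in your own values afterwards):

```bash
# Copy the template to the file the app actually reads, then edit it:
cp .env.example .env
```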
Hi! I have the same thing. It seems to me that those who participate in the development do not want others to use this product. You need to notify the GitHub administration. Earlier versions worked badly, but now they don't work at all.
Hi @Roninos, I also don't know what you want to report to the GitHub administration; they have nothing to do with the project itself. If you are a great developer and think you can help this project, feel free to contribute and implement fixes/PRs, and also support it in the community: https://thinktank.ottomator.ai/
@leex279 It doesn't show an error message, but there is no response either. It keeps thinking and is super slow; I waited an hour but nothing changed. OS: Mac 16-inch, 16 GB RAM.
Also, can you try creating the model from a Modelfile:
ollama create -f Modelfile qwen2.5-coder:7b
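A minimal sketch of the Modelfile that command expects in the current directory; the num_ctx value is an assumption here, raising the context window because bolt's large system prompt can overflow Ollama's default:

```
# Modelfile (sketch): reuse the already-pulled model and raise its context window.
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
```

Running `ollama create -f Modelfile qwen2.5-coder:7b` then registers the tuned model under that name.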
Same issue here. My Ollama works fine, as I use it with Open WebUI all the time. Here's the error on the console: app-dev-1 | INFO LLMManager Found 45 cached models for Ollama
I ended up fixing it by actually setting up a proper .env file, with OLLAMA_API_BASE_URL=http://ollama-ip-address:11434 in my case. That is the IP of my Ollama host on my local network; it's not running on the same host as bolt.diy.
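For anyone following along, a sketch of the relevant .env line; the address is a placeholder and must point at wherever Ollama actually listens:

```bash
# .env (sketch): tell bolt.diy where to reach Ollama. 192.168.1.50 is a placeholder IP.
OLLAMA_API_BASE_URL=http://192.168.1.50:11434
```

Note that when bolt.diy itself runs in Docker, http://localhost:11434 points at the container rather than the host; the host's LAN IP (or http://host.docker.internal:11434 on Docker Desktop) is what usually works.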
Looking at this output, I see you are using the Docker build. Can you try this PR #1006 and see if you are getting a response from Ollama?
Good afternoon. I am not using Docker for Bolt. I have a local installation and another one via Pinokio; neither works with local Ollama or LM Studio models. Open WebUI, which runs in Docker, does work with local models. Bolt does not work with local models.
Pinokio uses Docker to set up the services. When you use Open WebUI, you are not loading a huge system prompt into the Ollama model, so it needs far less memory and gives you faster responses. Bolt's system prompt alone is about 4000 tokens, and whatever conversation history gets added on top of it makes a 7B-parameter model very heavy to run on a 16 GB macOS machine. Try running the model in Open WebUI with the exact system prompt that bolt is using and you will get the whole picture. But I would ask you to test the PR #1006 that I raised, which resolves the streaming issue and lets you see the response immediately instead of waiting for it to finish.
I don't remember what the error was in my case, but it only worked when I put the URL http://127.0.0.1:11434 both in the bolt.diy settings (under Providers) and in the .env file. Including it only under Settings > Providers didn't work for me.
Describe the bug
I have installed and downloaded Qwen2.5-coder:7b, then set the Ollama base URL in Bolt, but it shows an error.
Link to the Bolt URL that caused the error
localhost
Steps to reproduce
Expected behavior
I did it all as the video shows.
Screen Recording / Screenshot
Platform
Provider Used
No response
Model Used
No response
Additional context
I did it all as the video shows.