Ollama & Qwen2.5coder:7b #983

Open
firdavsDev opened this issue Jan 3, 2025 · 18 comments

@firdavsDev

Describe the bug

I installed Ollama and downloaded Qwen2.5-coder:7b, then set the Ollama base URL in Bolt, but it shows this error:

  • There was an error processing your request: No details were returned.

Link to the Bolt URL that caused the error

localhost

Steps to reproduce

  1. Download Ollama and pull Qwen2.5-coder:7b
  2. Set OLLAMA_API_BASE_URL=http://127.0.0.1:11434 in .env (a minimal sketch follows)
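
For reference, a minimal .env sketch for a local setup; the variable name matches the line above, and the optional context-size variable is an assumption — only use it if your .env.example actually lists it:

  # Ollama running on the same machine as Bolt
  OLLAMA_API_BASE_URL=http://127.0.0.1:11434
  # Optional: larger default context for local models (assumption; check .env.example)
  DEFAULT_NUM_CTX=16384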

Expected behavior

I did everything exactly as the setup video shows. :(

Screen Recording / Screenshot

(screenshot attached)

Platform

  • OS: macOS
  • Browser: Brave
  • Current Version Tag: v0.0.5
  • Current Commit Version: 31e03ce

Provider Used

No response

Model Used

No response

Additional context

I did everything exactly as the setup video shows. :(

@leex279

leex279 commented Jan 3, 2025

Try Chrome; Brave, Safari, and some other browsers are known to cause problems.

Also make sure you are on the "stable" branch, not main.

If you still have problems, I would recommend joining the community and opening a topic there, as more people are watching and it is a better place to discuss:
https://thinktank.ottomator.ai/c/bolt-diy/bolt-diy-issues-and-troubleshooting/22

@firdavsDev
Author

Thanks, I downloaded Chrome but it does not help. :(

(screenshot)

app-dev-1  |  INFO   LLMManager  Getting dynamic models for Ollama
app-dev-1  |  ERROR   LLMManager  Error getting dynamic models Ollama : TypeError: fetch failed
app-dev-1  |  ERROR   api.chat  Error: No models found for provider Ollama
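
One possible cause when Bolt runs in Docker (the app-dev-1 prefix suggests the Docker dev setup, which is an assumption on my part): inside the container, 127.0.0.1 refers to the container itself, not the Mac, so the fetch to Ollama on the host fails. A hedged .env sketch for that case:

  # assumption: Bolt in Docker Desktop on macOS, Ollama running on the host machine
  OLLAMA_API_BASE_URL=http://host.docker.internal:11434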

@leex279

leex279 commented Jan 3, 2025

Did you verify that your Ollama is working properly without Bolt? (Not just in the terminal, but with curl requests, to simulate requests coming from outside.)
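
For example, a couple of quick checks from another shell against the standard Ollama REST endpoints (adjust host and port if yours differ):

  # list the models Ollama has installed
  curl http://127.0.0.1:11434/api/tags
  # run a short non-streaming generation to confirm the model actually answers
  curl http://127.0.0.1:11434/api/generate -d '{"model": "qwen2.5-coder:7b", "prompt": "hello", "stream": false}'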

@firdavsDev
Author

Yeah, Ollama works fine via curl.
(screenshot)

@jyotisah00

jyotisah00 commented Jan 3, 2025

Same issue. I am calling Ollama on a different server hosted locally. curl works fine, but I am not able to make it work in the Bolt UI. I checked with other models on Ollama as well.

@jyotisah00

Fix the .env.example file name to just .env.
That takes away the error in the browser, but now I am stuck at:
2025-01-03 19:47:12 app-dev-1 | INFO LLMManager Got 4 dynamic models for Ollama
2025-01-03 19:47:12 app-dev-1 | INFO stream-text Sending llm call to Ollama with model codegemma:latest
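
For anyone following along, a sketch of that rename plus a restart so the new variables are picked up (the compose profile name is an assumption based on the docs and may differ in your setup):

  cp .env.example .env
  # edit OLLAMA_API_BASE_URL in .env, then restart the dev container
  docker compose --profile development up --build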

@Roninos

Roninos commented Jan 3, 2025

Hi! I have the same problem. It seems to me that the people participating in development do not want others to use this product. You need to notify the GitHub administration. Earlier versions worked badly, but now they don't work at all.

@leex279

leex279 commented Jan 3, 2025

Hi @Roninos,
I'm pretty sure they do their best to make this product work, but you also have to consider that this is open source and everyone contributing spends their free/spare time working on it. And mind you, that is many, many hours. So in my opinion what you wrote here is a bit rude.

Also, I don't know what you want to report to the GitHub administration; they have nothing to do with the project itself.

If you are a great developer and think you can help this project, feel free to contribute and implement fixes/PRs, and also support people in the community: https://thinktank.ottomator.ai/

@firdavsDev
Author

@leex279 It doesn't show an error message anymore, but there is no response either. It just keeps "thinking" and is super slow. I waited an hour and nothing changed.

OS: Mac, 16-inch, 16 GB RAM.
My Modelfile:

FROM qwen2.5-coder:7b
PARAMETER num_ctx 16384

ollama create -f Modelfile qwen2.5-coder:7b
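
A hedged aside: you can check that the parameter actually took effect with ollama show, and giving the variant its own tag avoids overwriting the base model's name (the -16k tag below is just an example):

  # inspect the created model and its parameters
  ollama show qwen2.5-coder:7b
  # alternative: keep the base model untouched and create a separate 16k variant
  ollama create qwen2.5-coder-16k -f Modelfile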

(screenshot)

@firdavsDev
Author

(screenshot)

Why is it super slow? Maybe my RAM is not enough?

@thecodacus
Collaborator

thecodacus commented Jan 4, 2025

(screenshot)

Why is it super slow? Maybe my RAM is not enough?

It looks like your Ollama base URL is set correctly.
Can you use the UI to set the Ollama URL instead of the .env file?

@thecodacus
Collaborator

Also, can you use pnpm run dev to start a dev server?
The build version is not streaming properly and sends the response only at the end.
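
In case it helps, the usual dev-mode commands from a checkout of the repo (assuming pnpm is already installed):

  pnpm install
  pnpm run dev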

@scouzi1966

Same issue here. My Ollama works fine; I use it with Open WebUI all the time.

Here's the error on the console:

app-dev-1 | INFO LLMManager Found 45 cached models for Ollama
app-dev-1 | INFO stream-text Sending llm call to Ollama with model llama3.3:latest
app-dev-1 | ERROR api.chat TypeError: Cannot read properties of undefined (reading 'replace')
app-dev-1 | at OllamaProvider.getModelInstance (/app/app/lib/modules/llm/providers/ollama.ts:59:34)
app-dev-1 | at Module.streamText (/app/app/lib/.server/llm/stream-text.ts:156:21)
app-dev-1 | at processTicksAndRejections (node:internal/process/task_queues:95:5)
app-dev-1 | at chatAction (/app/app/routes/api.chat.ts:116:20)
app-dev-1 | at Object.callRouteAction (/app/node_modules/.pnpm/@remix-run[email protected][email protected]/node_modules/@remix-run/server-runtime/dist/data.js:36:16)
app-dev-1 | at /app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:4899:19
app-dev-1 | at callLoaderOrAction (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:4963:16)
app-dev-1 | at async Promise.all (index 0)
app-dev-1 | at defaultDataStrategy (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:4772:17)
app-dev-1 | at callDataStrategyImpl (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:4835:17)
app-dev-1 | at callDataStrategy (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:3992:19)
app-dev-1 | at submit (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:3755:21)
app-dev-1 | at queryImpl (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:3684:22)
app-dev-1 | at Object.queryRoute (/app/node_modules/.pnpm/@remix-run[email protected]/node_modules/@remix-run/router/router.ts:3629:18)
app-dev-1 | at handleResourceRequest (/app/node_modules/.pnpm/@remix-run[email protected][email protected]/node_modules/@remix-run/server-runtime/dist/server.js:402:20)
app-dev-1 | at requestHandler (/app/node_modules/.pnpm/@remix-run[email protected][email protected]/node_modules/@remix-run/server-runtime/dist/server.js:156:18)
app-dev-1 | at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run[email protected][email protected]_react@[email protected]_typ_3djlhh3t6jbfog2cydlrvgreoy/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25
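
For context, that TypeError usually means the provider received an undefined base URL and then called .replace on it. A minimal TypeScript sketch of the kind of guard that avoids the crash; the function and parameter names here are hypothetical, not the actual bolt.diy code:

  // Hypothetical helper: resolve the Ollama base URL defensively before calling string methods on it.
  function getOllamaBaseUrl(settingsBaseUrl: string | undefined, env: Record<string, string | undefined>): string {
    const raw = settingsBaseUrl ?? env.OLLAMA_API_BASE_URL;
    if (!raw) {
      // Fail with a clear message instead of letting `undefined.replace(...)` throw a TypeError.
      throw new Error('Ollama base URL is not set; configure OLLAMA_API_BASE_URL or the provider setting.');
    }
    // Strip a trailing slash so later path concatenation stays predictable.
    return raw.replace(/\/$/, '');
  }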

@scouzi1966

I ended up fixing it by actually setting up a proper .env file with OLLAMA_API_BASE_URL=http://ollama-ip-address:11434 in my case. That is the IP of my Ollama host on my local network; it's not running on the same host as bolt.diy.
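
For the remote-host setup, a hedged sketch of the two pieces usually involved; the IP is a placeholder, and the OLLAMA_HOST step belongs on the Ollama machine (an assumption about a typical setup, not something stated in this thread):

  # on the machine running bolt.diy (.env) -- replace the IP with your Ollama host's address
  OLLAMA_API_BASE_URL=http://192.168.1.50:11434

  # on the Ollama machine, make the server listen on all interfaces
  OLLAMA_HOST=0.0.0.0 ollama serve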

@thecodacus
Collaborator

Looking at this output, I see you are using the Docker build,
and I found that the build version of the app currently does not stream output but sends the final message at the end, which blocks the UI while the LLM is generating.

app-dev-1 | INFO LLMManager Getting dynamic models for Ollama
app-dev-1 | ERROR LLMManager Error getting dynamic models Ollama : TypeError: fetch failed
app-dev-1 | ERROR api.chat Error: No models found for provider Ollama

Can you try PR #1006 and see if you get a response from Ollama?
It may be that Ollama is responding slowly, but since streaming is not happening, the response does not appear in the UI immediately.
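
If it helps, one generic way to try an unmerged PR locally (standard GitHub workflow, not project-specific instructions):

  git fetch origin pull/1006/head:pr-1006
  git checkout pr-1006
  pnpm install && pnpm run dev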

@Roninos

Roninos commented Jan 5, 2025

Good afternoon. I am not using Docker for Bolt. I have a local installation and another one via Pinokio; neither works with local Ollama or LM Studio models. Open WebUI, which I run in Docker, works with local models, but Bolt does not.

@thecodacus
Collaborator

thecodacus commented Jan 5, 2025

Good afternoon. I am not using Docker for Bolt. I have a local installation and another one via Pinokio; neither works with local Ollama or LM Studio models. Open WebUI, which I run in Docker, works with local models, but Bolt does not.

Pinokio uses Docker to set up its services, and when you use Open WebUI you are not loading a huge system prompt into the Ollama model, so it requires much less memory and gives you faster responses.

Bolt's system prompt alone is about 4,000 tokens, and whatever conversation history gets added on top of it makes it very heavy to run a 7B-parameter model on a 16 GB macOS machine.

Try running the model in Open WebUI with the exact system prompt that Bolt is using and you will get the whole picture.
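
To put a rough number on that, a back-of-envelope sketch in TypeScript; the architecture figures (28 layers, 4 KV heads, head dim 128 for Qwen2.5-7B) and the fp16 cache assumption are my own estimates, not values from this thread:

  // Rough KV-cache estimate: K and V tensors per layer, per token.
  const layers = 28;            // assumed for Qwen2.5-7B
  const kvHeads = 4;            // assumed grouped-query attention KV heads
  const headDim = 128;          // assumed head dimension
  const bytesPerValue = 2;      // fp16
  const contextTokens = 16384;  // num_ctx from the Modelfile above

  const kvCacheBytes = 2 * layers * kvHeads * headDim * contextTokens * bytesPerValue;
  console.log(`KV cache ≈ ${(kvCacheBytes / 1024 ** 3).toFixed(2)} GiB`); // ≈ 0.88 GiB
  // On top of ~4-5 GiB for 4-bit 7B weights, plus macOS, Docker, and a browser, 16 GB fills up fast.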

But I would ask you to test PR #1006, which I raised; it resolves the streaming issue and lets you see the response immediately instead of waiting for it to finish.

@kelvinvgomes

kelvinvgomes commented Jan 6, 2025

I don't remember what the error was in my case, but it only worked when I put the URL http://127.0.0.1:11434 both in the bolt.diy settings (under Providers) and in the .env file. Including it only under Configuration > Providers didn't work for me.
