[Bug]: GPTCache server: with OpenAI embedding, the cache seems to not be working properly. #612
Comments
@SimFG, thanks for the response.

@swatataidbuddy
OK, the behaviour seems to be very inconsistent. When I retested, the earlier issue was no longer happening, but I am seeing a different one: even when I ask the same question multiple times, the answer never comes from the cache; the call always goes to OpenAI. Please refer below:

(virtual_env) swathinarayanan@Swathis-MacBook-Air tolka_feedback_sep % /Users/swathinarayanan/virtual_env/bin/python /Users/swathinarayanan/tolka_feedback_sep/testgptcacheAPI.py
(virtual_env) swathinarayanan@Swathis-MacBook-Air tolka_feedback_sep % /Users/swathinarayanan/virtual_env/bin/python /Users/swathinarayanan/tolka_feedback_sep/testgptcacheAPI.py
(virtual_env) swathinarayanan@Swathis-MacBook-Air tolka_feedback_sep % /Users/swathinarayanan/virtual_env/bin/python /Users/swathinarayanan/tolka_feedback_sep/testgptcacheAPI.py

Earlier this was not the case: if my first question was "president of india" and my second question was "what is coral reef", I would get the answer from the cache that was derived for the first question.
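For reference, a minimal client along these lines can be used to check whether a repeated question is served from the cache (a hypothetical sketch, since the contents of testgptcacheAPI.py are not shown; the /put and /get routes and their payload shape are assumptions based on gptcache_server):

```python
import requests

SERVER = "http://0.0.0.0:8000"  # matches the -s/-p flags used to start the server
QUESTION = "president of india"

# Seed the cache once (route name and payload shape are assumptions).
requests.post(f"{SERVER}/put", json={"prompt": QUESTION, "answer": "<seeded answer>"})

# Ask the same question several times; each lookup should now be a cache hit.
for attempt in range(3):
    resp = requests.post(f"{SERVER}/get", json={"prompt": QUESTION})
    print(f"attempt {attempt + 1}: {resp.text}")
```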
Which version of openai are you using?

Version: 0.28.0

@swatataidbuddy

I am also having the same issue as @swatataidbuddy. Are there any solutions to this?
Current Behavior
After I start the gptcache server with the command below:
python server.py -s 0.0.0.0 -p 8000 -of gptcache.yml -o True (server.py is from https://github.com/zilliztech/GPTCache/tree/main/gptcache_server)
the service is up and running.
Now, when I make a request from a client program, for example:

but after this, when I post a new question, let's say:

No matter how many new questions I ask, it's the same answer from the cache.
This happens when I set embedding: "openai" in the gptcache.yml file.
Also, irrespective of the embedding, there is an issue with the semantic cache.
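For context, that yml setting should be roughly equivalent to initializing the cache in Python like this (a minimal sketch, assuming gptcache's init_similar_cache helper and OpenAI embedding class; the data_dir and similarity_threshold values are illustrative, not from this report):

```python
from gptcache import Config
from gptcache.adapter.api import init_similar_cache
from gptcache.embedding import OpenAI

# OpenAI() computes embeddings via the OpenAI API (reads OPENAI_API_KEY
# from the environment), which is what embedding: "openai" selects.
init_similar_cache(
    data_dir="openai_cache",                  # illustrative cache location
    embedding=OpenAI(),                       # semantic (similar) matching
    config=Config(similarity_threshold=0.8),  # illustrative threshold
)
```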
Expected Behavior
When a question is asked, it should check whether there is an exact or similar entry in the cache; if so, the answer should come from the cache. Otherwise, the answer should come from OpenAI, and the response should be stored in the cache.
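In other words, the expected flow looks roughly like this (a sketch using gptcache's put/get API together with the pre-1.0 openai client from this report, not the server's actual code; it assumes a cache was initialized as in the sketch above):

```python
import openai  # pre-1.0 client (openai==0.28), as used in this report
from gptcache.adapter.api import get, put


def answer(question: str) -> str:
    # 1. Look for an exact or semantically similar entry in the cache.
    cached = get(question)
    if cached is not None:
        return cached
    # 2. On a miss, ask OpenAI...
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    text = resp["choices"][0]["message"]["content"]
    # 3. ...and store the answer so the next similar question is a hit.
    put(question, text)
    return text
```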
Steps To Reproduce
Environment
Anything else?
The Docker image you provide is not working; when running the container I get the below error:
successfully installed package: openai
Traceback (most recent call last):
  File "/usr/local/bin/gptcache_server", line 5, in <module>
    from gptcache_server.server import main
  File "/usr/local/lib/python3.8/site-packages/gptcache_server/server.py", line 8, in <module>
    from gptcache.adapter import openai
  File "/usr/local/lib/python3.8/site-packages/gptcache/adapter/openai.py", line 31, in <module>
    class ChatCompletion(openai.ChatCompletion, BaseCacheLLM):
  File "/usr/local/lib/python3.8/site-packages/openai/lib/_old_api.py", line 39, in __call__
    raise APIRemovedInV1(symbol=self._symbol)
openai.lib._old_api.APIRemovedInV1:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run
openai migrate
to automatically upgrade your codebase to use the 1.0.0 interface. Alternatively, you can pin your installation to the old version, e.g.
pip install openai==0.28