I wanted to use OpenAI's text-embedding-3-small model to generate embeddings through GPTCache; however, my test code got me the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 766, in _interpret_response_line
    data = json.loads(rbody)
  File "/root/miniconda3/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/root/miniconda3/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/root/miniconda3/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test1.py", line 45, in <module>
    response = openai.ChatCompletion.create(
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/adapter/openai.py", line 125, in create
    return adapt(
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/adapter/adapter.py", line 78, in adapt
    embedding_data = time_cal(
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/utils/time.py", line 9, in inner
    res = func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/embedding/openai.py", line 60, in to_embeddings
    sentence_embeddings = openai.Embedding.create(model=self.model, input=data, api_base=self.api_base)
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 768, in _interpret_response_line
    raise error.APIError(
openai.error.APIError: HTTP code 405 from API ()
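The APIError at the bottom wraps the JSONDecodeError at the top: the server answered with HTTP 405 and a body that isn't JSON, so json.loads fails on the very first character. The inner error can be reproduced on its own with any non-JSON body (an empty string or an HTML error page are typical for a 405; the sample bodies below are illustrative):

```python
import json

# json.loads rejects a non-JSON body at the first character, which is
# exactly the "Expecting value: line 1 column 1 (char 0)" in the trace.
for body in ("", "<html>405 Method Not Allowed</html>"):
    try:
        json.loads(body)
    except json.JSONDecodeError as err:
        print(err)  # Expecting value: line 1 column 1 (char 0)
```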
I guess it's an issue with the openai package version? I tested the embedding API using the same api_key and api_base in another Python environment with openai 1.10.0, and it returned the embedding correctly. Here is the code:
It looks like we need to modify the OpenAI embedding interface. The embedding part is very simple; you can try to implement an embedding function for GPTCache yourself.
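A minimal sketch of such a custom embedding class, assuming the openai>=1.0 client API (`client.embeddings.create`) and following the convention of GPTCache's built-in embedding classes (a `to_embeddings` method returning a 1-D numpy array and a `dimension` property). The class name and parameters here are illustrative, not GPTCache's own:

```python
import numpy as np

class OpenAIEmbedding:
    """Sketch of a GPTCache-style embedding wrapper for openai>=1.0."""

    def __init__(self, client, model="text-embedding-3-small", dimension=1536):
        self.client = client            # e.g. an openai.OpenAI() instance
        self.model = model
        self._dimension = dimension     # 1536 is the default size for this model

    @property
    def dimension(self):
        return self._dimension

    def to_embeddings(self, data, **_):
        # New-style client call: client.embeddings.create(...) replaces
        # the removed openai.Embedding.create(...) used by gptcache above.
        resp = self.client.embeddings.create(model=self.model, input=data)
        return np.array(resp.data[0].embedding, dtype="float32")
```

With something like this in place, passing `OpenAIEmbedding(OpenAI()).to_embeddings` as the embedding function when initializing the cache should bypass GPTCache's bundled openai embedding code; check the GPTCache docs for the exact init parameters your version expects.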