got "openai.error.APIError: HTTP code 405 from API ()" when using an openai model to generate embeddings #640

Open
canonhui opened this issue Aug 27, 2024 · 2 comments

Comments

@canonhui

canonhui commented Aug 27, 2024

I wanted to use openai's text-embedding-3-small model to generate embeddings. The following is my test code:

import os

from gptcache import cache
from gptcache.adapter import openai
from gptcache.embedding import OpenAI
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation import SearchDistanceEvaluation

os.environ['OPENAI_API_KEY'] = 'my_api_key'
os.environ['OPENAI_API_BASE'] = 'my_api_base'

openai_embed_fnc = OpenAI('text-embedding-3-small', api_key=os.environ['OPENAI_API_KEY'])
vector_base = VectorBase('chromadb', dimension=1536, persist_directory='./.chroma')
data_manager = get_data_manager(CacheBase("sqlite"), vector_base=vector_base)
cache.init(
    embedding_func=openai_embed_fnc.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()

question = 'hi there'
response = openai.ChatCompletion.create(
    model='gpt-4o-mini',
    messages=[
        {
            'role': 'user',
            'content': question
        }
    ],
)
print(f"Answer: {response['choices'][0]['message']['content']}\n")

However, this code raised the following exception:

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 766, in _interpret_response_line
    data = json.loads(rbody)
  File "/root/miniconda3/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/root/miniconda3/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/root/miniconda3/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test1.py", line 45, in <module>
    response = openai.ChatCompletion.create(
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/adapter/openai.py", line 125, in create
    return adapt(
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/adapter/adapter.py", line 78, in adapt
    embedding_data = time_cal(
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/utils/time.py", line 9, in inner
    res = func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/gptcache/embedding/openai.py", line 60, in to_embeddings
    sentence_embeddings = openai.Embedding.create(model=self.model, input=data, api_base=self.api_base)
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/root/miniconda3/lib/python3.8/site-packages/openai/api_requestor.py", line 768, in _interpret_response_line
    raise error.APIError(
openai.error.APIError: HTTP code 405 from API ()

I suspect this is an openai version issue: I tested the embedding API with the same api_key and api_base in another Python environment running openai 1.10.0, and it returned the embedding correctly. Here is the code:

from openai import OpenAI  # openai>=1.0 client

api_key = 'my_api_key'
base_url = 'my_api_base'
model_name = 'text-embedding-3-small'

client = OpenAI(
    api_key = api_key,
    base_url = base_url
)
response = client.embeddings.create(
    input='how are you',
    model=model_name
)
print(response.data[0].embedding)
@SimFG
Collaborator

SimFG commented Aug 27, 2024

It does look like that; it seems we need to update the openai embedding interface in gptcache. That part of the embedding code is very simple, so you could try implementing an embedding function for gptcache yourself.
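For reference, a custom embedding function of the kind suggested above might look like the sketch below. It wraps the openai>=1.0 client (`client.embeddings.create`) instead of the legacy `openai.Embedding.create` call that gptcache's built-in `gptcache.embedding.OpenAI` uses. The class name `OpenAIv1Embedding` and the injectable `client` parameter are illustrative, not part of gptcache; this is an untested sketch, not the library's official interface.

```python
import numpy as np


class OpenAIv1Embedding:
    """Embedding wrapper for gptcache built on the openai>=1.0 client,
    replacing the legacy openai.Embedding.create call that 405s here."""

    def __init__(self, model='text-embedding-3-small', dimension=1536, client=None):
        if client is None:
            # The v1 client reads OPENAI_API_KEY / OPENAI_BASE_URL from the
            # environment when not passed explicitly.
            from openai import OpenAI
            client = OpenAI()
        self.client = client
        self.model = model
        self._dimension = dimension

    def to_embeddings(self, data, **_):
        # gptcache passes the pre-processed question text here and expects
        # a float32 numpy vector back.
        response = self.client.embeddings.create(input=data, model=self.model)
        return np.array(response.data[0].embedding).astype('float32')

    @property
    def dimension(self):
        return self._dimension
```

You would then wire it in the same way as the built-in class, e.g. `cache.init(embedding_func=OpenAIv1Embedding().to_embeddings, ...)`, keeping `dimension` consistent with the VectorBase.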

@canonhui
Author

OK, I'll try. Thanks for your reply.
