
max_completion_tokens (and max_tokens) param in ChatOpenAI() can't be processed by OpenAI() object #28943

Armasse opened this issue Dec 27, 2024 · 1 comment
Labels: 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) · investigate (Flagged for investigation)

Armasse commented Dec 27, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain_openai import ChatOpenAI

chat_model = ChatOpenAI(
    model_name="model",
    max_completion_tokens=800,
    openai_api_base="base_url",
    openai_api_key="your_key"
)

chat_model.invoke("Hello, how are you ?")

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "/home/user/test_python/langchain_bug.py", line 12, in <module>
    chat_model.invoke("Hello, how are you ?")
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 289, in invoke
    self.generate_prompt(
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 800, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 655, in generate
    raise e
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 645, in generate
    self._generate_with_cache(
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 872, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 726, in _generate
    response = self.client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 859, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/test-python-OI3Fy4Nv-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'max_completion_tokens'), 'msg': 'Extra inputs are not permitted', 'input': 800}]", 'type': 'BadRequestError', 'param': None, 'code': 400}

Description

I'm trying to use the LangChain ChatOpenAI() object with the max_completion_tokens parameter set. Since September 2024, the max_tokens parameter has been deprecated in favor of max_completion_tokens. The change was made in LangChain, but so far it has not been made in the OpenAI Python library.

When I pass the max_completion_tokens parameter, an error is raised because extra parameters are forbidden when the request goes through the OpenAI() object (from the OpenAI Python library).

I know this is not, strictly speaking, a bug in the LangChain library. But while waiting for the OpenAI library to make the change, is it possible to mitigate the problem? For now, the feature is unusable.
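A possible client-side stopgap (my own illustration, not a LangChain or OpenAI API): since the backend here rejects the newer parameter name, one could rewrite max_completion_tokens back to the legacy max_tokens key in the request payload before it is sent. The helper name below is hypothetical; only the key renaming is the point.

```python
# Hypothetical workaround sketch: rename `max_completion_tokens` to the
# legacy `max_tokens` key for OpenAI-compatible servers that reject the
# newer name with an 'extra_forbidden' error.

def downgrade_token_param(payload: dict) -> dict:
    """Return a copy of `payload` with `max_completion_tokens` renamed
    to `max_tokens`, leaving an existing `max_tokens` key untouched."""
    out = dict(payload)
    if "max_completion_tokens" in out and "max_tokens" not in out:
        out["max_tokens"] = out.pop("max_completion_tokens")
    return out

payload = {"model": "model", "max_completion_tokens": 800}
print(downgrade_token_param(payload))
# {'model': 'model', 'max_tokens': 800}
```

In practice, the simplest mitigation may be to pass max_tokens=800 to ChatOpenAI() instead of max_completion_tokens, assuming the installed langchain-openai version still forwards that parameter under the legacy name that the backend accepts.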

System Info

System Information

OS: Linux
OS Version: #1 SMP Tue Nov 5
Python Version: 3.12.5

Package Information

langchain_core: 0.3.28
langchain: 0.3.13
langsmith: 0.2.6
langchain_bug: Installed. No version info available.
langchain_openai: 0.2.14
langchain_text_splitters: 0.3.4

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.11.11
async-timeout: Installed. No version info available.
httpx: 0.27.2
httpx-sse: 0.4.0
jsonpatch: 1.33
langsmith-pyo3: Installed. No version info available.
numpy: 2.2.1
openai: 1.58.1
orjson: 3.10.12
packaging: 24.2
pydantic: 2.10.4
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.36
tenacity: 9.0.0
tiktoken: 0.8.0
tokenizers: 0.21.0
typing-extensions: 4.12.2
zstandard: Installed. No version info available.


QuentinFuxa commented Dec 27, 2024

Thank you for reporting this issue.

It seems likely that the problem lies in the version of the OpenAI Python library you have installed. I tested a similar setup using the following versions:
• Python: 3.12.6
• langchain-openai: 0.2.14
• openai: 1.58.1

Here’s the code I used for testing:

from langchain_openai import ChatOpenAI

chat_model = ChatOpenAI(
    model_name="gpt-4o",
    max_completion_tokens=10,
    openai_api_key="your_key"
)

response = chat_model.invoke("Hello, how are you?")
print(response)

The call to the OpenAI client is done here in the LangChain codebase:

response = self.client.create(**payload)

In this test, the payload being sent to OpenAI contained:

{
    "messages": [...],
    "model": "gpt-4o",
    "stream": False,
    "n": 1,
    "temperature": 0.7,
    "max_completion_tokens": 10
}

This payload worked correctly without any errors.

Could you double-check the version of the OpenAI Python library in your environment? Specifically, ensure you are using openai==1.58.1.
If you confirm that you're using the correct version and the issue persists, please share additional details about your setup or any modifications you might have made to the code. Perhaps the OpenAI library uses different parameters depending on the model you use?
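One quick way to check the installed version without shelling out to pip is the standard-library importlib.metadata module (the helper function name below is my own; the lookup itself is standard Python 3.8+):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string for `pkg`, or None if the
    distribution is not present in the current environment."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

print(installed_version("openai"))  # e.g. '1.58.1' if installed
```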
