
find a case that cursor can fix but aider can not #2694

Closed
zuozuo opened this issue Dec 23, 2024 · 0 comments

zuozuo commented Dec 23, 2024

Issue

I have a broken litellm config.yaml:

model_list:
  - model_name: "claude-3"
    litellm_params:
      model: ""
      api_key: ${ANTHROPIC_API_KEY}
    model_info:
      provider: "anthropic"
      display_name: "Claude 3"
      description: "Anthropic Claude 3 model"

I want aider to help me fix this, but it failed.

I have tried multiple models (gpt-4o, gemini-2.0-flash-exp, gemini-2.0-flash-thinking-exp); all of them failed.

But Cursor fixes this easily with every one of the models above.
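
For reference, the change that actually clears this error is giving litellm a provider it can resolve from the model string itself, which is what the BadRequestError in the transcript below asks for. A minimal sketch of a working config, assuming the intended model is Anthropic's Claude 3.5 Sonnet (the exact model ID is my assumption, not something taken from this issue):

model_list:
  - model_name: "claude-3"
    litellm_params:
      # a provider prefix ("anthropic/...") in the model string is what
      # get_llm_provider resolves; the exact model ID below is an assumption
      model: "anthropic/claude-3-5-sonnet-20240620"
      api_key: ${ANTHROPIC_API_KEY}
    model_info:
      provider: "anthropic"
      display_name: "Claude 3"
      description: "Anthropic Claude 3 model"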

(litellm) [mac.new]➜ litellm git:(main) ✗ aider --model gemini/gemini-2.0-flash-exp
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Aider v0.69.1
Model: gemini/gemini-2.0-flash-exp with diff edit format
Git repo: .git with 1 files
Repo-map: using 1024 tokens, auto refresh
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

/add config.yaml

Added config.yaml to the chat
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml

/run litellm -c config.yaml

INFO: Started server process [11559]
INFO: Waiting for application startup.

#------------------------------------------------------------#

'It would help me if you could add...'

https://github.com/BerriAI/litellm/issues/new

#------------------------------------------------------------#

Thank you for using LiteLLM! - Krrish & Ishaan

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new

LiteLLM: Proxy initialized with Config, Set models:
claude-3
ERROR: Traceback (most recent call last):
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 732, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 608, in aenter
await self._router.startup()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 709, in startup
await handler()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 3068, in startup_event
await initialize(**worker_config)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 2500, in initialize
) = await proxy_config.load_config(router=llm_router, config_file_path=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 1936, in load_config
router = litellm.Router(
^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 361, in init
self.set_model_list(model_list)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3910, in set_model_list
self._create_deployment(
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3830, in _create_deployment
deployment = self._add_deployment(deployment=deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3949, in _add_deployment
) = litellm.get_llm_provider(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 353, in get_llm_provider
raise e
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 330, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=claude-3.5-sonnet
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

ERROR: Application startup failed. Exiting.
Add command output to the chat? (Y)es/(N)o [Yes]: y
Added 53 lines of output to the chat.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml

Fix that

The error message indicates that the litellm library requires the provider to be specified in the litellm_params. The config.yaml file is missing the provider key under litellm_params.

To fix this, I will add provider: "anthropic" to the litellm_params section of the config.yaml file.

config.yaml

<<<<<<< SEARCH
      model: "claude-3.5-sonnet"
      api_key: ${ANTHROPIC_API_KEY}
    model_info:
      provider: "anthropic"
=======
      model: "claude-3.5-sonnet"
      api_key: ${ANTHROPIC_API_KEY}
      provider: "anthropic"
    model_info:
      provider: "anthropic"
>>>>>>> REPLACE

After this change, you should be able to run litellm -c config.yaml without the BadRequestError.

Tokens: 3.4k sent, 202 received.
Committing config.yaml before applying edits.
Commit 3cb8f7c fix: Update claude model name to claude-3.5-sonnet
Applied edit to config.yaml

Commit 709d93e fix: Add provider to litellm_params in config
You can use /undo to undo and discard each aider commit.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
config.yaml

/run litellm -c config.yaml

INFO: Started server process [11807]
INFO: Waiting for application startup.

#------------------------------------------------------------#

'The worst thing about this product is...'

https://github.com/BerriAI/litellm/issues/new

#------------------------------------------------------------#

Thank you for using LiteLLM! - Krrish & Ishaan

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new

LiteLLM: Proxy initialized with Config, Set models:
claude-3
ERROR: Traceback (most recent call last):
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 732, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 608, in aenter
await self._router.startup()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/starlette/routing.py", line 709, in startup
await handler()
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 3068, in startup_event
await initialize(**worker_config)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 2500, in initialize
) = await proxy_config.load_config(router=llm_router, config_file_path=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 1936, in load_config
router = litellm.Router(
^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 361, in init
self.set_model_list(model_list)
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3910, in set_model_list
self._create_deployment(
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3830, in _create_deployment
deployment = self._add_deployment(deployment=deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/router.py", line 3949, in _add_deployment
) = litellm.get_llm_provider(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 353, in get_llm_provider
raise e
File "/Users/yonghuizuo/miniconda3/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 330, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=claude-3.5-sonnet
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

ERROR: Application startup failed. Exiting.
Add command output to the chat? (Y)es/(N)o [Yes]:

Version and model info

Aider: v0.69.1
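
A note on why the second run fails identically: the edit aider applied only adds a provider key under litellm_params and model_info, but the traceback shows the failure coming from get_llm_provider, which resolves the provider from the model string itself (e.g. an "anthropic/" prefix). An unprefixed model: "claude-3.5-sonnet" therefore still cannot be resolved, and the same BadRequestError comes back. The change that would normally clear it is the same prefixing shown in the sketch near the top of this issue, roughly:

      model: "anthropic/claude-3-5-sonnet-20240620"   # prefix carries the provider; exact ID is an assumption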

zuozuo closed this as completed Dec 28, 2024