Bug Report for ScrapeGraphAI - Nvidia Model Configuration
Describe the bug:
ScrapeGraphAI encounters an error when attempting to use Nvidia API models with the ChatNVIDIA class. The expected format for specifying the model in the configuration is "modelprovider/modelname", such as "nvidia/nemotron-4-340b-instruct". However, the internal code in abstract_graph.py separates the model provider and model name before passing them to the ChatNVIDIA class, resulting in the model not being found.
To Reproduce:
Install ScrapeGraphAI and its dependencies, ensuring the langchain_nvidia_ai_endpoints package is included for Nvidia model support (pip install scrapegraphai[other-language-models]).
Configure ScrapeGraphAI to use an NVIDIA API model by setting the model key in the llm section of the configuration to the format "modelprovider/modelname". Here's an example:
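(The original configuration snippet was not captured in this report; below is a minimal sketch of what the llm section might look like. Key names other than "llm" and "model" are assumptions and may differ across ScrapeGraphAI versions.)

```python
# Hypothetical graph configuration sketch: only the "llm" section and its
# "model" key are confirmed by this report; "api_key" is a placeholder.
graph_config = {
    "llm": {
        "api_key": "NVIDIA_API_KEY",  # placeholder, not a real key
        "model": "nvidia/nemotron-4-340b-instruct",
    },
}
```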
Run a scraping script that utilizes the configured ScrapeGraphAI instance.
Observe the following error message in the traceback:
ValueError: Model nemotron-4-340b-instruct is unknown, check 'available_models'
Additional context:
Models from Meta and Mistral are also accessible through NVIDIA's APIs, so using the model provider prefix to determine which API to call may not be reliable.
Possible solutions:
Modify the code in abstract_graph.py to preserve the original "modelprovider/modelname" string when passing it to the ChatNVIDIA class, and add an additional parameter such as model_source = 'Nvidia' to indicate that NVIDIA's API should be called via ChatNVIDIA.
Update the documentation to clearly explain the expected format for specifying Nvidia models in the configuration and the potential consequences of using an incorrect format.
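The first proposed fix could be sketched as follows. This is a hypothetical helper, not the actual abstract_graph.py code; the function name resolve_model_name and the model_source key are assumptions used for illustration.

```python
def resolve_model_name(llm_config: dict) -> str:
    """Return the model identifier to hand to the chat client.

    Hypothetical sketch of the proposed fix: when the config marks the
    model source as NVIDIA, keep the full "provider/name" string, since
    NVIDIA's catalog also serves Meta and Mistral models and ChatNVIDIA
    expects the full identifier.
    """
    model = llm_config["model"]
    if llm_config.get("model_source") == "Nvidia":
        # Preserve e.g. "nvidia/nemotron-4-340b-instruct" untouched.
        return model
    # Current behaviour being reported as a bug: strip the provider prefix.
    return model.split("/", 1)[-1]
```

With this check in place, ChatNVIDIA would receive "nvidia/nemotron-4-340b-instruct" rather than the bare "nemotron-4-340b-instruct" that triggers the ValueError.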