JsonProcessor does not work with add_tools #213

Open
jeroenbourgois opened this issue Dec 12, 2024 · 10 comments

@jeroenbourgois

I have a fairly basic example. When I run the chain like below, with a custom function, everything executes as expected.

```elixir
{:ok, updated_chain, response} =
  %{llm: chat_model, custom_context: context, verbose: true}
  |> LLMChain.new!()
  |> LLMChain.add_messages(messages)
  |> LLMChain.add_tools([function])
  # keep running the LLM chain against the LLM if needed to evaluate
  # function calls and provide a response.
  |> LLMChain.run(mode: :while_needs_response)
```

However, if I add LLMChain.message_processors([JsonProcessor.new!(~r/```json(.*?)```/s)]) just before the run, I get an error. If I then remove the add_tools call, it works again. Maybe the result that add_tools puts on the chain does not play nicely with what the JsonProcessor expects?
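
Concretely, the failing variant looks roughly like this (the same chain as above, with the processor added just before the run):

```elixir
{:ok, updated_chain, response} =
  %{llm: chat_model, custom_context: context, verbose: true}
  |> LLMChain.new!()
  |> LLMChain.add_messages(messages)
  |> LLMChain.add_tools([function])
  # adding this processor is what triggers the error
  |> LLMChain.message_processors([JsonProcessor.new!(~r/```json(.*?)```/s)])
  |> LLMChain.run(mode: :while_needs_response)
```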

@vkryukov
Contributor

Hi @jeroenbourgois, could you please add more details (e.g., a stack trace) or a toy reproducible example?

@brainlid
Owner

Hi @jeroenbourgois!

Interesting. It should not cause an error, but the two aren't intended to be used together. If you are working with a model that supports tool calls, you don't need a JSON processor. The JsonProcessor is really intended for models that don't support tools but can give you JSON in a text response. The response may also have text around it, like "Here's your result in JSON:", and the JsonProcessor can help pluck out and parse the JSON parts from the text.

If you're using a model with tools, that is a more reliable way to get structured data extraction.
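
To make that concrete, the processor does roughly this with a text reply (a simplified sketch of the idea, not the library's exact implementation):

```elixir
# Roughly what the JsonProcessor does with a text reply (simplified sketch).
# Jason is the JSON library LangChain already depends on.
text = ~s(Here's your result in JSON: ```json\n{"location": "drawer"}\n``` Hope that helps!)

[_full, json] = Regex.run(~r/```json(.*?)```/s, text)
Jason.decode!(json)
#=> %{"location" => "drawer"}
```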

brainlid reopened this Dec 13, 2024
@brainlid
Owner

If you can provide some additional details or example errors, I think it's worth trying to make the dev experience clearer and smoother. Thanks!

@jeroenbourgois
Author

@brainlid that's more than fair, I will try to set up a demo project this weekend!

But elaborating on your initial response: you say that if you are working with a model that supports tool calls, you don't need a JSON processor? I am having trouble understanding that, being relatively new to the whole LangChain concept.

I use the tool to allow the model to answer a very context-specific question. The chat config itself requests json_object as the output format. But in the end (and even more since using LangChain) the model tends to answer in text, sometimes elaborating on its reasoning (which can be interesting, especially during testing), and then the eventual answer is wrapped in the familiar ```json {...}``` fences, with other text around it.

Should I do something different when using tools then?

@jeroenbourgois
Author

@brainlid I have put a simple example on GitHub: https://github.com/jeroenbourgois/langchain_json_preprocessor/blob/main/lib/langchain_json_processor.ex

So running the module will give an error:

** (CaseClauseError) no case clause matching: {:halted, %LangChain.Message{content: "ERROR: An exception was raised! Exception: %FunctionClauseError{module: Regex, function: :run, arity: 3, kind: nil, args: nil, clauses: nil}", processed_content: nil, index: nil, status: :complete, role: :user, name: nil, tool_calls: [], tool_results: nil}}
    (langchain 0.3.0-rc.0) lib/chains/llm_chain.ex:533: LangChain.Chains.LLMChain.process_message/2
    (langchain 0.3.0-rc.0) lib/chains/llm_chain.ex:349: LangChain.Chains.LLMChain.do_run/1
    (langchain 0.3.0-rc.0) lib/chains/llm_chain.ex:322: LangChain.Chains.LLMChain.run_while_needs_response/1
    (langchain_json_processor 0.1.0) lib/langchain_json_processor.ex:64: LangChainJsonProcessor.run/0
    iex:13: (file)

13:43:39.249 [error] Exception raised in processor #Function<1.24809653/2 in LangChain.MessageProcessors.JsonProcessor.new!/1>

If you comment out the line that adds the processor, it works.

The final output is:

{:ok, "```json\n{\n  \"location\": \"drawer\"\n}\n```"}

So, to circle back to my previous response, I am left with two questions:

  1. Is this a bug?
  2. If this error is to be expected and the two should not be used together, how should I handle the response or build the chain differently?

PS: thank you for the library, working with the custom functions and also getting the verbose output during dev is so cool!

@brainlid
Owner

@jeroenbourgois,

> But elaborating on your initial response: you say that if you are working with a model that supports tool calls, you don't need a JSON processor? I am having trouble understanding that, being relatively new to the whole LangChain concept.
>
> I use the tool to allow the model to answer a very context-specific question. The chat config itself requests json_object as the output format. But in the end (and even more since using LangChain) the model tends to answer in text, sometimes elaborating on its reasoning (which can be interesting, especially during testing), and then the eventual answer is wrapped in the familiar ```json {...}``` fences, with other text around it.
>
> Should I do something different when using tools then?

When you define a tool, you define the Function and FunctionParams. These are turned into a JSON Schema and provided to the LLM. I use them regularly with OpenAI and Anthropic. Your LLM may vary.

The Function has an associated Elixir function (your Elixir code), and it receives the arguments to that function as an already-parsed Elixir map, so no JSON processor is needed there. You can pass those arguments into an Elixir changeset, for instance, and return changeset errors (LangChain.Utils.changeset_error_to_string helps you there).

When the LLM executes the tool, it creates an assistant Message with one or more tool_calls in it. The JSON processor is not needed or used for parsing this data.

This all means that if the LLM natively supports tools and functions, then all the conversion from JSON text into an Elixir map is handled for you and is available for processing in the Function's Elixir code.
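
As a rough sketch (the tool name, parameter, and canned result here are made up, and return conventions may differ slightly by version; see the guide below for the real API), a tool definition looks something like this:

```elixir
alias LangChain.Function
alias LangChain.FunctionParam

# Illustrative tool definition; name, parameter, and result are hypothetical.
function =
  Function.new!(%{
    name: "locate_thing",
    description: "Returns where a thing is located.",
    parameters: [
      FunctionParam.new!(%{name: "thing", type: :string, required: true})
    ],
    function: fn %{"thing" => thing} = _arguments, _context ->
      # The arguments arrive as an already-parsed Elixir map; no JSON handling needed.
      {:ok, "The #{thing} is in the drawer."}
    end
  })
```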

Hopefully this guide is up-to-date: https://hexdocs.pm/langchain/0.3.0-rc.1/custom_functions.html

@brainlid
Owner

brainlid commented Dec 16, 2024

@jeroenbourgois,

Thanks for the example!

> I have put a simple example on GitHub: https://github.com/jeroenbourgois/langchain_json_preprocessor/blob/main/lib/langchain_json_processor.ex
>
> So running the module will give an error: (the CaseClauseError and stack trace shown above)
>
> So, to circle back to my previous response, I am left with two questions:
>
> 1. Is this a bug?
> 2. If this error is to be expected and the two should not be used together, how should I handle the response or build the chain differently?

I understand now what you were seeing. The LLM was executing a Function, and the result of that function was being returned to the LLM, which then "told" you what was returned, partly because you instructed it to return the results as JSON. Then you were trying to parse the returned JSON back out of that text.

There is a more direct way to do what I think you want that should work better for you. I just published v0.3.0-rc.1; you'll want to use that for this next feature.

I created an updated version in this gist:
https://gist.github.com/brainlid/e2831970661092fea1ac7884c3f33e0e

@jeroenbourgois
Author

jeroenbourgois commented Dec 16, 2024

@brainlid OK, thank you for the gist and the clarification about the functions. We are looking at using a JSON schema for all output, so the JsonProcessor might not be needed after all.

However, given that I would still like to use it: although I understand functions will cause the LLM to respond with JSON directly for the answer to that function, what if the final result differs from the called function's result? Then it might still make sense to use the JsonProcessor, no?

In the gist you supplied (again: thank you for taking the time and effort!), I see how you output the result of the tool call. This helps, but, at least in our case, it is not the final result we are looking for, as the LLM does some manipulation after retrieving a result from the function.

> When the LLM executes the tool, it creates an assistant Message with one or more tool_calls in it. The JSON processor is not needed or used for parsing this data.

So in your gist, the output we get is %{the_thing: "hairbrush"}. However, the actual question was 'Where is the hairbrush located?' So the function helps to get 'the thing' but not where it is. Let's assume the main question was slightly more complex and the LLM needed to do some more work, like 'Where is the hairbrush located, translated into French?'. The translation part would involve no custom function, just the LLM at work. Then we would need the final result, and there is a possibility (when not using a JSON schema) that the LLM responds with something like:

You asked me about the hairbrush, which is the_thing. We need to translate it into French, which is "brosse à cheveux". The final result is: ```json\n{"the_thing": "brosse à cheveux"}```

I don't know if what I am explaining is getting across well, but in essence: the tool call might just be a small step for the LLM within the total response. In those cases it might still be useful to use the JsonProcessor?

@brainlid
Owner

brainlid commented Dec 16, 2024

@jeroenbourgois,

Thanks for taking the time to bring me up to speed with what you're doing. I get your point. It is valid for the LLM to execute a function in your application before giving a final JSON formatted response. That makes sense.

Yes, we should support that use case.

As a side-note, I started a ChangesetProcessor that I haven't gotten back to. I didn't get far enough for there to be anything valuable to share. The idea is that it uses the JsonProcessor to get the response into a parsed map, then runs it through an Ecto changeset where you can apply additional validation checks that you care about. The real benefit is getting the data into an Elixir struct and enforcing other validations like length, enum membership, etc.

That approach might be helpful too.
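
In a rough sketch, the idea is something like this (the schema is hypothetical, and I'm assuming the {:cont, message} / {:halt, response} contract that message processors use):

```elixir
defmodule MyApp.ThingResult do
  # Hypothetical embedded schema; only here to shape and validate the parsed JSON.
  use Ecto.Schema
  import Ecto.Changeset

  @primary_key false
  embedded_schema do
    field :location, :string
  end

  def changeset(struct, attrs) do
    struct
    |> cast(attrs, [:location])
    |> validate_required([:location])
  end
end

# Sketch of a changeset-based processor. It assumes the JsonProcessor ran first
# and left a parsed map in message.processed_content. On success, replace the
# map with a validated struct; on failure, halt with an error message the LLM
# can react to.
changeset_processor = fn _chain, message ->
  case MyApp.ThingResult.changeset(%MyApp.ThingResult{}, message.processed_content) do
    %Ecto.Changeset{valid?: true} = changeset ->
      {:cont, %{message | processed_content: Ecto.Changeset.apply_changes(changeset)}}

    changeset ->
      {:halt,
       LangChain.Message.new_user!(
         "ERROR: " <> LangChain.Utils.changeset_error_to_string(changeset)
       )}
  end
end
```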

But to your point, yes, I see the need for the JsonProcessor also working in addition to tools. The fact that they don't both work is probably a bug.

NOTE: I don't know if OpenAI can have tools and a specified response_format and have them both work. 🤔 It depends on how strictly it enforces it. Like it can't call a tool if the response format takes precedence. Interesting. I'd love to hear how your experiments go!
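
If you try it, the experiment would look something like this (untested, and assuming the json_response option on ChatOpenAI maps to OpenAI's response_format json_object):

```elixir
# Untested sketch: tools plus OpenAI's JSON response format together.
chat_model = ChatOpenAI.new!(%{model: "gpt-4o", json_response: true})

{:ok, _chain, _response} =
  %{llm: chat_model}
  |> LLMChain.new!()
  |> LLMChain.add_messages(messages)
  |> LLMChain.add_tools([function])
  |> LLMChain.run(mode: :while_needs_response)
```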

UPDATE: See #215

UPDATE 2: I'll have to think and look into it more for having a JsonProcessor work in a conversation with tool calls. The JsonProcessor will be run on the assistant response that contains the tool call, and there's no JSON in the text content to process. Is that an error? Or does it pass through with no JSON being processed? Unsure.

@jeroenbourgois
Author

@brainlid thank you so much for your elaborate response! For now, we have removed the tool calls in our application by splitting the task into several separate, smaller prompts. We were able to do the logic of the function outside the LLM.

After removing the tool calls I could apply the JsonProcessor again, and there was no CoT anymore. The LLM responded with near-perfect JSON, which was then put on the processed_content of the last message as expected.
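
The working setup now looks roughly like this (a sketch; reading the parsed result off the last message):

```elixir
# No tools anymore; the JsonProcessor parses the model's fenced JSON reply.
{:ok, updated_chain, _response} =
  %{llm: chat_model}
  |> LLMChain.new!()
  |> LLMChain.add_messages(messages)
  |> LLMChain.message_processors([JsonProcessor.new!(~r/```json(.*?)```/s)])
  |> LLMChain.run(mode: :while_needs_response)

updated_chain.last_message.processed_content
#=> %{"location" => "drawer"}
```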

So for now, we are good. Feel free to close the issue, but maybe mention in the docs that what I initially did is not expected?

Note: it was my co-founder who pointed out the possible CoT in the messages when using prompts, which was not that surprising, come to think of it. But this also leads to slightly less predictable output. Without the tool calls the output is much more succinct, being mostly limited to the JSON. We think tool calls can be very useful when the output of the tool result is used directly, just as in your gist earlier.
