Question Classification function in workflow may be abnormal #8430
tigflanker started this conversation in General
Replies: 2 comments 2 replies
-
In addition, I replaced the Question Classification module with a general large model plus a prompt, and it runs quite smoothly with a high accuracy rate. 👍 Another idea I have is to collect bad cases using a knowledge-base approach and build a mapping from questions to classifications. So this issue is not too urgent.
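The workaround described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the category names, `BAD_CASE_MAP`, and `llm_call` callable are all assumptions, not part of Dify's API): first consult a hand-maintained mapping collected from previously misclassified questions, then fall back to a plain classification prompt sent to a general-purpose LLM.

```python
# Hypothetical sketch of the workaround: bad-case lookup first,
# then a plain-prompt LLM classifier as the fallback.

CATEGORIES = ["billing", "technical", "other"]  # example categories

# Mapping built from observed bad cases (lowercased question -> category).
BAD_CASE_MAP = {
    "why was i charged twice": "billing",
}

def build_prompt(question: str) -> str:
    """Build a plain classification prompt for a general-purpose LLM."""
    return (
        "Classify the user question into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ".\nAnswer with the category name only.\n"
        + f"Question: {question}"
    )

def classify(question: str, llm_call) -> str:
    """Check the bad-case map first, then fall back to the LLM.

    `llm_call` is any callable that takes a prompt string and returns
    the model's text response.
    """
    key = question.strip().lower()
    if key in BAD_CASE_MAP:
        return BAD_CASE_MAP[key]
    answer = llm_call(build_prompt(question)).strip().lower()
    # Guard against answers outside the known category set.
    return answer if answer in CATEGORIES else "other"
```

Any model client can be dropped in as `llm_call`; the bad-case map grows over time as misclassifications are observed.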
2 replies
-
Perhaps it's time to enhance this classifier; it currently performs very poorly for me, assigning categories almost at random most of the time.
-
Self Checks
Dify version
v0.7.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Hello experts, I conducted a comparison test in version v0.7.3.
On the left is a regular model invocation, and on the right is the Question Classifier node within the workflow.
The green box marks the identical prompt, the yellow box marks the test question and its response, and the red box on the right highlights the potential issue.
In this code (https://github.com/langgenius/dify/blob/main/api/core/workflow/nodes/question_classifier/template_prompts.py), it appears that some leftover prompt text was never cleaned up and removed, which might be the cause of the inaccurate classification. Please check this issue.
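To illustrate why leftover text in a prompt template can skew classification, here is a hypothetical sketch (the template string and `clean_template` helper are invented for illustration, not taken from Dify's `template_prompts.py`): a hard-coded few-shot example left over from development can anchor the model toward that example's category, and stripping such residue before filling the template avoids the bias.

```python
# Hypothetical template with a leftover hard-coded example line.
# If residue like this ships in a classifier prompt, the model may be
# biased toward echoing the example's classification.
TEMPLATE = (
    "Classify the input into one of: {categories}.\n"
    "Example: 'How do I reset my password?' -> account\n"  # leftover example
    "Input: {question}\n"
)

def clean_template(template: str) -> str:
    """Drop leftover hard-coded example lines before filling the template."""
    lines = [ln for ln in template.splitlines() if not ln.startswith("Example:")]
    return "\n".join(lines) + "\n"
```

This is only a sketch of the failure mode; the actual fix would be removing the stray text from the template file itself.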
✔️ Expected Behavior
No response
❌ Actual Behavior
No response