feat: add Anthropic support #288
Conversation
Thank you @ocss884. This is a very well-prepared PR. Let's sort out what is wrong with the two system messages in the main code. We will also need tests for the code that you added, including examples. Please wrap any code that would crash when no Anthropic key is provided in `skipif`, as here:

@pytest.mark.skipif(not check_server_running(DEFAULT_SERVER_URL),

Please run code coverage as described in CONTRIBUTING.md and make sure all the code you added is covered.
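A minimal sketch of the suggested guard, using the stdlib `unittest.skipIf` as a stand-in for `pytest.mark.skipif` (the `ANTHROPIC_API_KEY` environment-variable check is an assumption about how the key would be configured):

```python
import os
import unittest

# Skip Anthropic-dependent tests when no API key is configured,
# so CI runs without credentials do not crash.
ANTHROPIC_KEY_MISSING = "ANTHROPIC_API_KEY" not in os.environ


class TestAnthropicModel(unittest.TestCase):
    @unittest.skipIf(ANTHROPIC_KEY_MISSING, "ANTHROPIC_API_KEY not set")
    def test_token_count(self):
        # Would call the Anthropic-backed code path here.
        pass
```

The same pattern with `pytest.mark.skipif` works for the project's pytest suite.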
camel/utils/token_counting.py
Outdated
ret = system_prompt
for msg in messages[1:]:
    role, content = msg["role"], msg["content"]
    # Claude does not perform well if the system message RULE OF USER/ASSISTANT is sent twice. (see role_playing/RolePlaying.init_chat())
Thank you for spotting this. Did you discover a bug where the system prompt is sent twice? We never observed it. Could you create a bug ticket to reproduce and fix it? We'd better fix the erroneous behavior in the main code than create a detour here in your code.
I think it is expected behavior, since the docstring of init_chat() says "...sending the system messages again to the agents using chat messages". The full docstring is below:
camel/camel/societies/role_playing.py
Lines 351 to 355 in 3ba1754
def init_chat(self) -> Tuple[BaseMessage, List[BaseMessage]]:
    r"""Initializes the chat by resetting both of the assistant and user
    agents, and sending the system messages again to the agents using
    chat messages. Returns the assistant's introductory message and the
    user's response messages.
I should say this is confusing to me, since in role_play this init_chat function is used to get the first input_assistant_msg, which is often just a modified version of user_sys_msg. Look at the first return value:
camel/camel/societies/role_playing.py
Lines 366 to 379 in 3ba1754
assistant_msg = BaseMessage.make_assistant_message(
    role_name=self.assistant_sys_msg.role_name,
    content=(f"{self.user_sys_msg.content}. "
             "Now start to give me instructions one by one. "
             "Only reply with Instruction and Input."))
user_msg = BaseMessage.make_user_message(
    role_name=self.user_sys_msg.role_name,
    content=f"{self.assistant_sys_msg.content}")
assistant_response = self.assistant_agent.step(user_msg)
if assistant_response.terminated or assistant_response.msgs is None:
    raise ValueError(f"Assistant agent terminated unexpectedly. "
                     f"Error info: {assistant_response.info}")
return assistant_msg, assistant_response.msgs
For example, in the open_source_example it is used as input_assistant_msg:

input_assistant_msg, _ = role_play_session.init_chat()

and then this modified user_sys_msg is sent to the user once again during the first round of chatting:

camel/camel/societies/role_playing.py
Line 433 in 3ba1754

user_response = self.user_agent.step(assistant_msg)
So during the first API call, the assistant_msg is just the whole user_sys_msg with two more instructions appended:

camel/camel/societies/role_playing.py
Lines 366 to 370 in 3ba1754

assistant_msg = BaseMessage.make_assistant_message(
    role_name=self.assistant_sys_msg.role_name,
    content=(f"{self.user_sys_msg.content}. "
             "Now start to give me instructions one by one. "
             "Only reply with Instruction and Input."))
That is what I call "duplicate system messages". I'm happy to create a ticket and solve it if you can confirm this is not desired.
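A tiny self-contained illustration of the duplication described above (the function is a simplified stand-in for what init_chat() builds, not the real camel code):

```python
def first_assistant_msg(user_sys_content: str) -> str:
    # Mirrors the content init_chat() constructs: the full user system
    # message plus two kick-off instructions appended.
    return (f"{user_sys_content}. "
            "Now start to give me instructions one by one. "
            "Only reply with Instruction and Input.")


user_sys_msg = "You are a helpful user. RULE OF USER: ..."
msg = first_assistant_msg(user_sys_msg)

# The user agent already received user_sys_msg as its system message,
# and now receives it again embedded in the first chat message:
assert msg.startswith(user_sys_msg)
```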
For me, I would prefer to send

camel/camel/societies/role_playing.py
Lines 369 to 370 in 3ba1754

"Now start to give me instructions one by one. "
"Only reply with Instruction and Input."))

only as the very first assistant_msg. It is more intuitive to send the rules once and then instruct the user_model to go first using simple instructions. Also, if you want to keep init_chat as it is, I recommend adding more comments to it; it is confusing to have a variable called assistant_msg holding the content of user_sys_msg.
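A rough sketch of the proposed alternative, with a plain function standing in for the real init_chat() (names and signature are illustrative, not the camel API):

```python
def first_assistant_msg_proposed() -> str:
    # Proposed: the very first assistant_msg carries only the short
    # kick-off instructions. The user agent already holds user_sys_msg
    # as its system message, so the rules are not repeated here.
    return ("Now start to give me instructions one by one. "
            "Only reply with Instruction and Input.")


msg = first_assistant_msg_proposed()
# No system-message content is duplicated in the first chat message.
assert "RULE OF USER" not in msg
```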
Great code! Left some comments.

Anthropic now have a more OpenAI-like "Messages" API in beta to replace their current text-completion interface. I will update once it is finalized.
Hey @ocss884, I plan to do the review later this week. Would you like to resolve the conflicts and update this PR with the latest master branch? Thanks a lot!
Hi @Wendong-Fan, I just updated this PR. Due to a recent openai release, I limited the package version in #474. Some test cases failed because this PR was opened when I was not yet a member, which makes the OPENAI_API_KEY secret empty (you can check the log).
Thanks for the contribution! Overall it looks great. I left some comments; after your update we can merge it.
Thanks for the contribution! Please feel free to resolve the conflicts so we can merge it.

Going to add Claude 3 support before merging.
@Wendong-Fan @dandansamax Hi Wendong and Tianqi. Claude 3 support is ready now, but it seems I cannot approve the PR myself. @dandansamax, could you help take a look? Thanks!
This reverts commit 25c8e42.
Hi @ocss884, I found this PR was merged without approval and the pipelines did not pass. That is the wrong approach, and it must be reverted. Hi @Wendong-Fan, you have to see this, please. ❗ I created a PR to revert the last commit on the master branch: #518
Returns:
    int: Number of tokens in the messages.
"""
prompt = messages_to_prompt(messages, self.model_type)
It seems messages_to_prompt does not support Anthropic models yet. How does this work?

camel/camel/utils/token_counting.py
Line 39 in 916ab5e

def messages_to_prompt(messages: List[OpenAIMessage], model: ModelType) -> str:
camel/utils/token_counting.py
Outdated
elif model.is_anthropic:
    # use XML tag to decorate prompt
    # https://docs.anthropic.com/claude/docs/constructing-a-prompt#mark-different-parts-of-the-prompt
    # https://docs.anthropic.com/claude/docs/roleplay-dialogue
    # https://docs.anthropic.com/claude/docs/human-and-assistant-formatting
    system_message = str(system_message)
    ret = f"\n{system_message}\n"
    for msg in messages[1:]:
        role, content = msg["role"], msg["content"]
        # Claude does not perform well if the system message RULE OF
        # USER/ASSISTANT is sent twice (see role_playing/RolePlaying.init_chat()).
        # Here is a special treatment for Claude to remove the redundant
        # system message.
        if not isinstance(content, str):
            raise ValueError("Currently multimodal context is not "
                             "supported by the token counter.")
        if content.startswith(system_message):
            ret = content
            continue
        if role == "user":
            ret += f"{HUMAN_PROMPT} {content}"
        elif role == "assistant":
            ret += f"{AI_PROMPT} {content}"
    return ret
else:
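For reference, a self-contained sketch of the Human/Assistant prompt format this branch builds. The marker values mirror what the anthropic SDK exposes as `HUMAN_PROMPT`/`AI_PROMPT`; the function name and simplified handling (no duplicate-system-message special case) are illustrative:

```python
# Claude's legacy text-completion format alternates these markers.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"


def messages_to_claude_prompt(messages):
    """Convert OpenAI-style message dicts into one Claude prompt string."""
    # The system message is placed once at the top of the prompt.
    system = next(
        (m["content"] for m in messages if m["role"] == "system"), "")
    ret = f"\n{system}\n"
    for msg in messages:
        if msg["role"] == "user":
            ret += f"{HUMAN_PROMPT} {msg['content']}"
        elif msg["role"] == "assistant":
            ret += f"{AI_PROMPT} {msg['content']}"
    # Claude completes the text following a trailing Assistant marker.
    return ret + AI_PROMPT
```

This is only meant to show the turn-marker layout that the token counter has to account for, not the exact camel implementation.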
Why was this part deleted? @ocss884
cc @lightaime
Description
Describe your changes in detail.
Motivation and Context
close #283
Types of changes
What types of changes does your code introduce? Put an x in all the boxes that apply:

Implemented Tasks
Checklist
Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!