This repository has been migrated to chatapi_toolkit; all new development happens there.
A Python wrapper for the OpenAI API, supporting multi-turn dialogue, proxies, and asynchronous data processing.
```shell
pip install openai-api-call --upgrade
```
Method 1, set the key in Python code:

```python
import openai_api_call
openai_api_call.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
openai_api_call.base_url = "https://api.example.com"
```
Method 2, set environment variables in `~/.bashrc` or `~/.zshrc`:

```shell
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export OPENAI_BASE_URL="https://api.example.com"
```
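The two methods typically combine as "explicit assignment wins, environment variable is the fallback". A minimal sketch of such a lookup, assuming that precedence (`load_config` is a hypothetical helper, not part of the package):

```python
import os

def load_config(env=os.environ, api_key=None, base_url=None):
    # Illustrative fallback logic: an explicit in-code setting takes
    # precedence; otherwise the environment variables are consulted.
    return {
        "api_key": api_key or env.get("OPENAI_API_KEY", ""),
        "base_url": base_url or env.get("OPENAI_BASE_URL", "https://api.openai.com"),
    }

cfg = load_config({"OPENAI_API_KEY": "sk-test",
                   "OPENAI_BASE_URL": "https://api.example.com"})
```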
Example 1, simulate a multi-turn dialogue:

```python
from openai_api_call import Chat

# first chat
chat = Chat("Hello, GPT-3.5!")
resp = chat.getresponse()

# continue the chat
chat.user("How are you?")
next_resp = chat.getresponse()

# add a response manually
chat.user("What's your name?")
chat.assistant("My name is GPT-3.5.")

# save the chat history
chat.save("chat.json", mode="w")  # mode defaults to "a" (append)

# print the chat history
chat.print_log()
```
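Under the hood, `user`/`assistant`/`system` calls build up a message list in the standard OpenAI chat format, a list of role/content dicts. A minimal sketch of that structure, with `MiniChat` as a hypothetical stand-in rather than the package's actual `Chat` class:

```python
# Illustrative stand-in: shows the message list that user()/assistant()/
# system() calls accumulate, in the standard OpenAI chat format.
class MiniChat:
    def __init__(self, msg=None):
        self.chat_log = []
        if msg is not None:
            self.user(msg)

    def user(self, content):
        self.chat_log.append({"role": "user", "content": content})

    def assistant(self, content):
        self.chat_log.append({"role": "assistant", "content": content})

    def system(self, content):
        self.chat_log.append({"role": "system", "content": content})

chat = MiniChat("Hello, GPT-3.5!")
chat.assistant("Hi! How can I help?")
```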
Example 2, process data in batch, using a checkpoint file:

```python
from openai_api_call import Chat, process_chats

# write a function that turns one message into a finished Chat
def msg2chat(msg):
    chat = Chat(api_key=api_key)  # api_key defined as above
    chat.system("You are a helpful translator for numbers.")
    chat.user(f"Please translate the digit to Roman numerals: {msg}")
    chat.getresponse()
    return chat

checkpoint = "chat.jsonl"
msgs = ["%d" % i for i in range(1, 10)]

# process the first five messages
chats = process_chats(msgs[:5], msg2chat, checkpoint, clearfile=True)

# process the rest of the data, reusing cached results from the last run
continue_chats = process_chats(msgs, msg2chat, checkpoint)
```
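The checkpoint mechanism can be understood as "append each result to a JSONL file, and skip what is already there on restart". A hedged re-implementation sketch of that idea (`process_with_checkpoint` is hypothetical, not the library's `process_chats`):

```python
import json
import os
import tempfile

def process_with_checkpoint(msgs, process, checkpoint, clearfile=False):
    # Illustrative resume logic: results already written to the JSONL
    # checkpoint are reused; only the remaining messages are processed.
    if clearfile and os.path.exists(checkpoint):
        os.remove(checkpoint)
    done = []
    if os.path.exists(checkpoint):
        with open(checkpoint) as f:
            done = [json.loads(line) for line in f]
    with open(checkpoint, "a") as f:
        for msg in msgs[len(done):]:
            result = process(msg)
            f.write(json.dumps(result) + "\n")
            done.append(result)
    return done

path = os.path.join(tempfile.mkdtemp(), "chat.jsonl")
first = process_with_checkpoint(["1", "2"], lambda m: "roman:" + m, path)
# a second run with more messages only processes the new ones
both = process_with_checkpoint(["1", "2", "3"], lambda m: "roman:" + m, path)
```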
Example 3, process data in batch asynchronously: print "hello" in different languages, using two coroutines:

```python
from openai_api_call import async_chat_completion, load_chats

langs = ["python", "java", "Julia", "C++"]
chatlogs = ["print hello using %s" % lang for lang in langs]

# run the requests with two coroutines, checkpointing to a JSONL file
async_chat_completion(chatlogs, chkpoint="async_chat.jsonl", ncoroutines=2)
chats = load_chats("async_chat.jsonl")
```
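The `ncoroutines` argument bounds how many requests are in flight at once; the underlying idea can be sketched with an `asyncio.Semaphore` (illustrative code with a fake worker, not the library's implementation):

```python
import asyncio

async def run_limited(prompts, worker, ncoroutines=2):
    # Bound concurrency with a semaphore: at most `ncoroutines`
    # worker calls run at any moment; gather preserves input order.
    sem = asyncio.Semaphore(ncoroutines)

    async def guarded(prompt):
        async with sem:
            return await worker(prompt)

    return await asyncio.gather(*(guarded(p) for p in prompts))

async def fake_completion(prompt):
    await asyncio.sleep(0)  # stand-in for a real API round trip
    return "response to: " + prompt

results = asyncio.run(run_limited(["a", "b", "c"], fake_completion))
```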
This package is licensed under the MIT license. See the LICENSE file for more details.
Current version 1.0.0 is a stable release: the redundant function call feature has been removed, and an asynchronous processing tool has been added.
- Since version 0.2.0, the `Chat` type is used to handle data.
- Since version 0.3.0, you can use different API keys to send requests.
- Since version 0.4.0, this package is maintained by cubenlp.
- Since version 0.5.0, one can use `process_chats` to process data, with a customized `msg2chat` function and a checkpoint file.
- Since version 0.6.0, the function call feature was added.