
Commit

daz-williams committed Oct 8, 2023
1 parent c2093ff commit 7df5a38
Showing 4 changed files with 138 additions and 160 deletions.
128 changes: 127 additions & 1 deletion README.md
@@ -84,7 +84,133 @@ You should only use reputable information sources, ideally peer reviewed scienti
I want you to summarize your findings in a document named metformin.md and include links to the references and resources you used to find the information.
Additionally, in the last section of your document you should provide a recommendation for a 43-year-old male, in good health and who exercises regularly, as to whether he would benefit from taking Metformin.
You should explain your recommendation and justify it with sources.
Finally, you should highlight potential risks and tradeoffs from taking the medication.
```
#### Command Line Arguments
The following arguments can be passed on the command line to change how the **BondAI** CLI tool works.
- **--enable-dangerous** - Allows potentially dangerous Tools to be loaded (i.e. ShellTool and PythonREPLTool)
- **--enable-prompt-logging log_dir** - Turns on prompt logging, which will write all prompt inputs into the specified directory. If no directory is provided, **BondAI** will default to *logs* within the current directory.
- **--load-tools my_tools.py** - If this option is specified, no tools will be loaded by default. Instead, **BondAI** will load the specified Python file and look for a function named **get_tools()**, which should return a list of Tools (see the sketch below).
- **--quiet** - Suppresses agent output. Unless specified, the agent will print detailed information about each step it takes.
```bash
bondai --enable-dangerous --enable-prompt-logging logs --load-tools my_tools.py
```
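For reference, a minimal `my_tools.py` might look like the sketch below. The exact import paths for the built-in tools are assumptions and may differ between **BondAI** versions; the only requirement is that **get_tools()** returns a list of Tool instances.
```python
# my_tools.py - a minimal sketch; the import paths below are assumptions
# and may need to be adjusted for your installed version of bondai.
from bondai.tools.search import DuckDuckGoSearchTool
from bondai.tools.file import FileWriteTool

def get_tools():
    # BondAI calls this function and expects a list of Tool instances.
    return [
        DuckDuckGoSearchTool(),
        FileWriteTool(),
    ]
```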
#### Default CLI Tools
By default the **BondAI** CLI command will automatically load the following tools:
- **DuckDuckGoSearchTool** - Allows the model to use DuckDuckGo to search the web.
- **WebsiteQueryTool** - Allows the model to query the content of websites. By default this is delegated to gpt-3.5-16k, but if the content is too large for the model's context, it will automatically use embeddings and semantic search.
- **FileQueryTool** - Allows the model to query the content of files. By default this is delegated to gpt-3.5-16k, but if the content is too large for the model's context, it will automatically use embeddings and semantic search.
- **DownloadFileTool** - Allows the model to download files locally from the web. This is useful for many research tasks.
- **FileWriteTool** - Allows the model to write content to files. This is useful for saving work or exporting the results of a research or generation task to a file.
#### CLI Environment Variables
An OpenAI API Key is required.
```bash
export OPENAI_API_KEY=XXXXXXX
```
If the GOOGLE_API_KEY and GOOGLE_CSE_ID environment variables are provided, the **BondAI** CLI will load the *GoogleSearchTool* instead of the *DuckDuckGoSearchTool*.
```bash
export GOOGLE_API_KEY=XXXXXXX
export GOOGLE_CSE_ID=XXXXXXX
```
If the ALPACA_MARKETS_API_KEY and ALPACA_MARKETS_SECRET_KEY environment variables are provided, the **BondAI** CLI will load the *CreateOrderTool*, *GetAccountTool*, and *ListPositionsTool*.
```bash
export ALPACA_MARKETS_API_KEY=XXXXXXX
export ALPACA_MARKETS_SECRET_KEY=XXXXXXX
```
#### Gmail Integration
[Check here](https://www.geeksforgeeks.org/how-to-read-emails-from-gmail-using-gmail-api-in-python/) for information on generating a **gmail-token.pickle** file with credentials for accessing your Gmail account. If this file is present in the root directory where the **BondAI** CLI is running, it will automatically load the *ListEmailsTool* and *QueryEmailsTool* tools.
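The token-generation step described in the linked tutorial boils down to something like the sketch below. The OAuth scope and the `credentials.json` filename are assumptions that depend on how your Google Cloud project is configured.
```python
# generate_gmail_token.py - a rough sketch of creating gmail-token.pickle.
# Assumes OAuth client credentials have been downloaded as credentials.json
# and google-auth-oauthlib is installed. The read-only scope is an assumption.
import pickle

from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']

# Opens a browser window so you can authorize access to your Gmail account.
flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
creds = flow.run_local_server(port=0)

# Save the credentials where the BondAI CLI expects to find them.
with open('gmail-token.pickle', 'wb') as token_file:
    pickle.dump(creds, token_file)
```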
#### Langchain Tools
When the **BondAI** CLI starts, it will check whether LangChain is installed. If it is, it will automatically load the following LangChain tools:
- **ShellTool** - This allows the model to generate and run arbitrary bash commands.
- **PythonREPLTool** - This allows the model to generate and run arbitrary Python commands.
**Warning: Both of these tools are considered dangerous and require that the --enable-dangerous argument is specified when starting the CLI. It is strongly recommended that these are run within a containerized environment.**
## Docker
It is highly recommended that you run **BondAI** from within a container if you are going to use tools with file system access. There are two options for running Docker:
> Download Docker Desktop for a UI-based experience, which gives you quick access to the log viewer and the terminal for the bondai container: https://www.docker.com/products/docker-desktop/
### Docker CLI
From the command line, follow the steps below to build and run the **BondAI** container. A directory named 'agent-volume' will be created and used as the working directory for the CLI tool running in the container.
```bash
cd docker
./build-container.sh
./run-container.sh OPENAI_API_KEY=XXXXX ENV1=XXXX ENV2=XXXX --arg1 --arg2
```
### Docker Compose
The docker-compose.yml file, located in the `./docker` directory, makes use of a .env file and a pre-configured **volume** mapped to an `./agent-volume` directory.
There are two ways to run Docker Compose. From the command line:
```bash
cd ./docker
docker-compose up
```
Or, if you use VS Code, install the official Docker extension, then right-click on the `./docker/docker-compose.yml` file and select `Compose Up`.
> Don't forget to open sample.env, add your environment keys, and save the file as `.env`.
## APIs
#### Agent
The Agent module provides a flexible interface for agents to interact with different tools and functions. The Agent makes decisions based on given tasks, uses available tools to provide a response, and handles exceptions smoothly.
**init:** Instantiate a new Agent
- *prompt_builder* (default=DefaultPromptBuilder): Responsible for building the prompts at each step.
- *tools* (default=[]): The list of tools available to the Agent.
- *llm* (default=MODEL_GPT4_0613): The primary model used by the Agent.
- *fallback_llm* (default=MODEL_GPT35_TURBO_0613): Secondary model the Agent falls back on when an appropriate response was not received from the primary model.
- *final_answer_tool* (default=DEFAULT_FINAL_ANSWER_TOOL): Tool that provides the final answer/response back to the user.
- *quiet* (default=False): If true, the Agent will suppress most print messages.
**run(task='', task_budget=None):** Continuously runs the agent until the task is completed or the budget is exceeded. The Agent will keep track of the cost of all API calls made to OpenAI. If this budget is exceeded, the Agent will raise a BudgetExceededException.
**run_once(task=''):** Executes a single step of the agent's process. Returns an instance of AgentStep.
**reset_memory():** Clears the agent's memory of previous steps.
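A minimal usage sketch of the interface above is shown below; the import paths are assumptions and may differ in your **BondAI** version.
```python
# A minimal sketch of driving the Agent API described above.
# The import paths below are assumptions.
from bondai import Agent
from bondai.tools.search import DuckDuckGoSearchTool

agent = Agent(tools=[DuckDuckGoSearchTool()])

# Run until the task completes or the OpenAI API spend exceeds the budget;
# exceeding the budget raises a BudgetExceededException.
agent.run(task="Find three recent studies on Metformin and longevity.", task_budget=5.00)

# Alternatively, drive the loop manually one step at a time.
step = agent.run_once(task="Summarize the most relevant result.")

# Clear the Agent's memory of previous steps before starting a new task.
agent.reset_memory()
```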
#### Custom Tools
```python
# Required imports
from pydantic import BaseModel
from bondai.tools import Tool
# Define Tool metadata
TOOL_NAME = 'my_unique_tool_name'
TOOL_DESCRIPTION = "A thorough description of what my Tool implementation does. Better explanations lead to better tool usage."
# Describe the parameters your tool accepts. It is recommended but not required to have a 'thought' parameter.
class Parameters(BaseModel):
    param1: str
    param2: int
    thought: str

Agent(tools=[
    DuckDuckGoSearchTool(),
12 changes: 11 additions & 1 deletion bondai/util/model_logger.py
@@ -18,14 +18,24 @@ def write_file(filename, content):
class ModelLogger:
    def __init__(self, logging_dir='./logs'):
        self.logging_dir = logging_dir
        self.tail_log_file = './logs/bondai.log'  # Dedicated log file to tail

    def log(self, prompt, response, function=None):
        instance_path = get_instance_dir(self.logging_dir)

        # Existing logic to write to individual files
        write_file(f"{instance_path}/prompt.txt", prompt)
        if response:
            write_file(f"{instance_path}/response.txt", response)
        if function:
            f_str = json.dumps(function)
            write_file(f"{instance_path}/function.txt", f_str)

        # New logic to append to dedicated tail log file
        with open(self.tail_log_file, 'a') as f:
            f.write(f"User: {prompt}\n")
            if response:
                f.write(f"AI: {response}\n")
            if function:
                f.write(f"Function: {json.dumps(function)}\n")
152 changes: 0 additions & 152 deletions docs/GETTING_STARTED.md

This file was deleted.

6 changes: 0 additions & 6 deletions docs/TODO.txt

This file was deleted.
