Skeleton-plugin Code structure helper #180
base: master
Conversation
Codecov Report

Patch coverage has no change and project coverage change: -4.61%

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master     #180      +/-   ##
==========================================
- Coverage   58.52%   53.92%   -4.61%
==========================================
  Files          36       38       +2
  Lines        2122     2303     +181
  Branches      222      244      +22
==========================================
  Hits         1242     1242
- Misses        858     1039     +181
  Partials       22       22
```

☔ View full report in Codecov by Sentry.
@@ -78,6 +78,7 @@ You can also see the plugins here:
| Twitter | Auto-GPT is capable of retrieving Twitter posts and other related content by accessing the Twitter platform via the v1.1 API using Tweepy. | [autogpt_plugins/twitter](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/twitter) |
| Wikipedia Search | This allows Auto-GPT to use Wikipedia directly. | [autogpt_plugins/wikipedia_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/wikipedia_search) |
| WolframAlpha Search | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/wolframalpha_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/wolframalpha_search)|
| Skeleton Plugin | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/skeleton](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/skeleton)|
Wolfram Alpha
Put this in the right place alphabetically
```python
messages=[
    {
        "role": "system",
        "content": f"You are an assistant that generates descriptions of Python code files. Please describe the following file: {file}",
```
Extract this to an easily editable file.
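Following the reviewer's suggestion, one way to pull the system prompt out of the code is to load it from a small JSON file next to the plugin module. This is only a sketch; the `prompts.json` filename and the `describe_file` key are assumptions, not part of the PR:

```python
import json
from pathlib import Path

# Hypothetical prompts.json next to the plugin module; filename and key
# are illustrative assumptions, not part of the PR.
PROMPTS_FILE = Path(__file__).parent / "prompts.json"


def load_prompt(name: str, default: str) -> str:
    """Load a named prompt template from an editable JSON file,
    falling back to the hard-coded default if the file is missing
    or malformed."""
    try:
        prompts = json.loads(PROMPTS_FILE.read_text())
        return prompts.get(name, default)
    except (FileNotFoundError, json.JSONDecodeError):
        return default


DESCRIBE_FILE_PROMPT = load_prompt(
    "describe_file",
    "You are an assistant that generates descriptions of Python code files. "
    "Please describe the following file: {file}",
)
```

The `{file}` placeholder stays in the template so the call site can fill it in with `str.format` instead of an f-string.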
```python
files = [file for file in files if file not in code_structure]

model = os.getenv("SKELETON_MODEL", os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo"))
max_tokens = os.getenv("SKELETONM_TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", 1500))
```
`os.getenv("SKELETONM_TOKEN_LIMIT"`
I think you have a typo here in the variable name
Oh, I didn't notice; apparently that's why the results were so short.
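A sketch of the corrected lookup with the typo fixed. Note that `os.getenv` always returns a string when the variable is set, so the limit also needs an `int()` coercion before being passed as a token count; the coercion is my addition, not something the PR discussion covers:

```python
import os

# Corrected variable name: SKELETON_TOKEN_LIMIT, not SKELETONM_TOKEN_LIMIT.
# os.getenv returns strings, so coerce to int before use.
max_tokens = int(
    os.getenv("SKELETON_TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", "1500"))
)
```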
By default, the plugin uses whatever your `FAST_LLM_MODEL` environment variable is set to. If none is set, it falls back to `gpt-3.5-turbo`. You can point the plugin at a different model individually by setting the environment variable `SKELETON_MODEL` (example: `gpt-4`).

Similarly, the token limit defaults to the `FAST_TOKEN_LIMIT` environment variable. If none is set, it falls back to `1500`. You can set a different limit for the plugin individually via `SKELETON_TOKEN_LIMIT` (example: `7500`).
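Putting the two overrides together, a shell or `.env` configuration might look like this (the values are just the examples from the text above):

```shell
# Plugin-specific overrides; fall back to FAST_LLM_MODEL / FAST_TOKEN_LIMIT
# when unset.
export SKELETON_MODEL=gpt-4          # model used only by this plugin
export SKELETON_TOKEN_LIMIT=7500     # token limit used only by this plugin
```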
I'd probably encourage defaulting to the smart model over the fast one here, but it's not clear how much that would change the results.
```python
content: str


class SkeletonPlugin(AutoGPTPluginTemplate):
```
`CodeStructurePlugin` seems better and clearer; let's use that name everywhere if possible.
```python
model = os.getenv("SKELETON_MODEL", os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo"))
max_tokens = os.getenv("SKELETONM_TOKEN_LIMIT", os.getenv("FAST_TOKEN_LIMIT", 1500))
temperature = os.getenv("SKELETON_TEMPERATURE", os.getenv("TEMPERATURE", 0.5))
prompt_prefix = os.getenv("SKELETON_PROMPT_PREFIX", os.getenv("PROMPT_PREFIX", "You are an assistant that generates descriptions of Python code files. Please describe the following file: {file}"))
```
Make the env var names more clearly tied to the function they configure.
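One way to address this is a small helper that namespaces every plugin setting under a single prefix, so each lookup is visibly tied to this plugin. The `CODE_STRUCTURE_` prefix below is an assumption based on the suggested rename to `CodeStructurePlugin`; it is not part of this PR:

```python
import os


def plugin_env(key: str, global_key: str, default: str) -> str:
    """Resolve a plugin-scoped env var, falling back to the global
    Auto-GPT variable, then to a hard-coded default.

    The CODE_STRUCTURE_ prefix is an illustrative assumption following
    the reviewer's suggested plugin rename.
    """
    return os.getenv(f"CODE_STRUCTURE_{key}", os.getenv(global_key, default))


model = plugin_env("MODEL", "FAST_LLM_MODEL", "gpt-3.5-turbo")
max_tokens = int(plugin_env("TOKEN_LIMIT", "FAST_TOKEN_LIMIT", "1500"))
temperature = float(plugin_env("TEMPERATURE", "TEMPERATURE", "0.5"))
```

This keeps the fallback chain from the original code but makes every plugin-specific name share one prefix, so a grep for `CODE_STRUCTURE_` finds the full configuration surface.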
Will continue in a few days; I got flooded with work outside of this plugin.
Note: this also contains the Telegram changes from the other PR; I am working on both.
This plugin is based on the planner and now allows Auto-GPT to write and edit coding projects.