`llm.summarize()` works exactly as expected. `llm.chunked_summarize()` just returns the original input unchanged (the output appears instantly, so I don't think it's actually sending anything to OpenAI).
My code:
```python
import requests
from thinkgpt.llm import ThinkGPT

llm = ThinkGPT(model_name='gpt-3.5-turbo', temperature=0)

URL = 'https://raw.githubusercontent.com/jina-ai/thinkgpt/main/README.md'
input_data = requests.get(URL).text
summary = llm.chunked_summarize(
    input_data,
    instruction_hint='Rewrite the following into an informal, SEO-friendly blog post in markdown format',
    max_tokens=4096,
)
```
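(As a stopgap while `chunked_summarize()` misbehaves, one could split the input manually and call `llm.summarize()` on each piece. Below is a minimal sketch of that idea; the chunking is naive character-based splitting, and `fake_summarize` is a placeholder standing in for the real `llm.summarize(...)` call, which needs an API key. Both helper names are hypothetical, not part of ThinkGPT.)

```python
def chunk_text(text, max_chars=8000):
    """Split text into pieces of at most max_chars, breaking on newlines where possible."""
    chunks = []
    while text:
        if len(text) <= max_chars:
            chunks.append(text)
            break
        # Prefer to break at the last newline before the limit.
        split_at = text.rfind('\n', 0, max_chars)
        if split_at <= 0:
            split_at = max_chars
        chunks.append(text[:split_at])
        text = text[split_at:].lstrip('\n')
    return chunks


def summarize_in_chunks(summarize_fn, text, max_chars=8000):
    # Summarize each chunk independently, then join the partial summaries.
    parts = [summarize_fn(chunk) for chunk in chunk_text(text, max_chars)]
    return '\n\n'.join(parts)


# Placeholder summarizer for illustration only; in practice you would pass
# something like: lambda t: llm.summarize(t, instruction_hint=..., max_tokens=...)
fake_summarize = lambda t: t[:40]
```

This doesn't merge the partial summaries into one final pass the way a real map-reduce summarizer would, but it at least exercises the model on every chunk.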
It returns:
```
# ThinkGPT 🧠🤖
<a href="https://discord.jina.ai"><img src="https://img.shields.io/discord/1106542220112302130?logo=discord&logoColor=white&style=flat-square"></a>
ThinkGPT is a Python library aimed at implementing Chain of Thoughts for Large Language Models (LLMs), prompting the model to think, reason, and to create generative agents.
The library aims to help with the following:
* solve limited context with long memory and compressed knowledge
* enhance LLMs' one-shot reasoning with higher order reasoning primitives
* add intelligent decisions to your code base
```
...and then the rest of the README, verbatim.
However, if I change that to `llm.summarize()`, I get:
```
Introducing ThinkGPT: A Python Library for Large Language Models
ThinkGPT is a Python library that implements Chain of Thoughts for Large Language Models (LLMs), prompting the model to think, reason, and create generative agents. The library aims to solve limited context with long memory and compressed knowledge, enhance LLMs' one-shot reasoning with higher order reasoning primitives, and add intelligent decisions to your code base.
Key Features:
- Thinking building blocks: Memory, Self-refinement, Compress knowledge, Inference, and Natural Language Conditions
- Efficient and Measurable GPT context length
- Extremely easy setup and pythonic API thanks to DocArray
```
...followed by the rest of a short blog post, as expected.