If you just want it to write code, you can put the target code into the first assistant-role element of the prompt's messages array. That gives the model a simple context, and you update the code on every request.
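A minimal sketch of that pattern with the openai Node.js SDK (the chat call is real; currentCode and the file contents are placeholder assumptions):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Placeholder: the latest version of the target code, reloaded before each request.
const currentCode = [
  "function add(a: number, b: number): number {",
  "  return a + b;",
  "}",
].join("\n");

const res = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a coding assistant." },
    // The first assistant turn carries the current code as context.
    { role: "assistant", content: "Current code:\n" + currentCode },
    { role: "user", content: "Add a subtract function in the same style." },
  ],
});

console.log(res.choices[0].message.content);
```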
But the base gpt-3.5-turbo API currently only supports 2K tokens, which works out to roughly 1,500 words, or, depending on the content, only about 100-400 lines of code. That is very cramped.
The Codex API can go up to 4K, but its results in the Playground are somewhat poor.
Once the GPT-4 API is released it will support 8K and 32K contexts, but it will presumably cost quite a bit more.
When considering session and conversation history I’ve taken to prefixing each new Prompt thus:
‘On the topic of: ‘&<Natural Language Processing> &’ analyse the Prompt using NLP and return topic, context, named entities, keywords and sentiment and then respond to the Follow Up question :’& <Last_Response> &<FollowUp>
In this case the previous Prompt was: what does named entities and sentiment mean?
Note: Emboldened terms above are variables inserted under script control.
The italicised blocks below are the Last Response.
In this case the returned Response was:
START
Topic: Natural Language Processing
Context: Analyzing the prompt using NLP
Named entities: None mentioned in the prompt
Keywords: Natural Language Processing, NLP, analyze, prompt, named entities, sentiment
Sentiment: None mentioned in the prompt
----- Named entities refer to specific entities that are mentioned in a piece of text, such as names of people, organizations, locations, or products. In NLP, named entity recognition is the process of identifying and classifying these named entities in text.
----- Sentiment analysis is a technique used in NLP to determine the emotional tone or attitude of a piece of text. It involves analyzing the words and phrases used in the text to determine whether the overall sentiment is positive, negative, or neutral.
Follow up: What is the purpose of sentiment analysis in NLP?
The purpose of sentiment analysis in NLP is to determine the emotional tone or attitude of a piece of text. This information can be used for a variety of purposes, such as understanding customer feedback, monitoring brand reputation, or analyzing public opinion on a particular topic. Sentiment analysis can also be used to improve the accuracy of other NLP tasks, such as text classification or information retrieval.
(Source: gpt-3.5-turbo Temperature: 0.7)
END
Each Prompt and Response is absorbed into a Pivot Table. With the idea of building a history of NLP terms, in each interaction both Prompt and Response are subjected to analysis by Google Cloud Natural Language via its API. The idea, clearly, is to be able to pick up where the chat left off when returning to a topic, by prefixing each new prompt with an NLP synopsis of the chat so far. The Pivot Table presently holds 516 interactions across 24 topics.
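A rough TypeScript rendering of how that prefix is assembled (topic, lastResponse, and followUp stand in for the script-controlled variables above; the values are illustrative):

```ts
// Script-controlled variables from the scheme above (illustrative values).
const topic = "Natural Language Processing";
const lastResponse = "...the previous completion text...";
const followUp = "What is the purpose of sentiment analysis in NLP?";

// Each new prompt is prefixed with the topic plus an instruction to run an
// NLP pass (topic, context, named entities, keywords, sentiment) before
// answering the follow-up question.
const prompt =
  `On the topic of: ${topic} analyse the Prompt using NLP and return ` +
  `topic, context, named entities, keywords and sentiment and then ` +
  `respond to the Follow Up question: ${lastResponse} ${followUp}`;
```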
If you are using Node.js, get the latest chatgpt package from npm: chatgpt - npm.
It now supports a parentMessageId param, so you should be able to track the conversation.
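Based on that package's README, threading a conversation looks roughly like this (sendMessage and parentMessageId belong to the package's own surface, not the official OpenAI API):

```ts
import { ChatGPTAPI } from "chatgpt";

const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY! });

// The first message starts a fresh conversation.
const first = await api.sendMessage("Recommend three tourist attractions in Kyoto.");

// Passing the previous response's id threads the follow-up onto that conversation.
const second = await api.sendMessage("Tell me more about the second one.", {
  parentMessageId: first.id,
});

console.log(second.text);
```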
The parameter parentMessageId does not exist, so the API is not working for me.
Does this actually work?
It’s weird that it has this parameter, but it cannot be found in the official OpenAI API documentation.
It’s not an official package; I think they wrapped the functionality themselves and added the field. See the issue in the repo: Persist conversation state · Issue #296 · transitive-bullshit/chatgpt-api · GitHub. My friend and I tested it and the conversation works: asking for tourist attractions returns an ordered list of items, and a follow-up prompt referring to a specific item by its number alone gets a sensible response. It’s not a thorough test, I admit.
So I have found numerous threads and websites explaining that no sessions are remembered and that you have to pass the entire conversation each time. OK, sure, except the knowledge I want GPT to remember is a lot of information that, even when optimised/reduced/compressed, still comes to several thousand tokens.
When using the chat interface, I can just teach the knowledge over several messages and it works great; the AI can reason about that knowledge. This is currently impossible to do using the OpenAI API.
Is there any information on whether a conversation_id or similar will be added in the near future? Are there any devs on here who give answers? It just seems to be the same ‘send history on each prompt’ solution, which doesn’t work for our case. Not to mention that it seems very inefficient and a waste of tokens.
This is, in my opinion, an extreme limitation, and it is making me actively look for an alternative to OpenAI.
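For reference, the ‘send history on each prompt’ pattern being discussed looks roughly like this with the openai Node.js SDK (a sketch, not an endorsement of its token cost):

```ts
import OpenAI from "openai";

const client = new OpenAI();

type Msg = { role: "system" | "user" | "assistant"; content: string };

const history: Msg[] = [
  { role: "system", content: "You are a helpful assistant." },
];

async function ask(question: string): Promise<string> {
  history.push({ role: "user", content: question });
  const res = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: history, // the entire conversation is resent, and billed, on every call
  });
  const answer = res.choices[0].message.content ?? "";
  history.push({ role: "assistant", content: answer });
  return answer;
}
```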
If we implement it in this suggested way, then on the next create call the previously used tokens get appended as well, and the earlier prompts are all processed (and billed) again.
Do we have any flexibility to pay tokens only for the latest message?
GitHub - dustinandrews/gptFlaskByGpt-3.5-turbo (a flask app for chatting with gpt-3-turbo written primarily by gpt-3-turbo) has a simple example. See the summary branch for a simple summarizer that keeps up the context.
API means application programming interface. In other words, you are expected to write your own app.
I wanted to use ChatGPT for a game. I thought I could tie requests to a certain sessionId and feed it a text algorithm with descriptions of game objects, location data, and a bunch of other parameters, but it turns out I have to fit all of this into 4096 tokens, which is unrealistic. It’s a pity.
I DO this, and it is exactly the right approach: use another model to summarize prompts & completions, then send a pack of short summaries in the right format.
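A minimal sketch of that summarization idea, reusing the same model as the summarizer (the threshold, prompt wording, and 150-word cap are arbitrary choices, not anything prescribed in the thread):

```ts
import OpenAI from "openai";

const client = new OpenAI();

type Msg = { role: "system" | "user" | "assistant"; content: string };

// Once the history grows past a threshold, fold the oldest turns into one
// short summary and keep only the most recent turns verbatim.
async function compactHistory(history: Msg[], keepRecent = 4): Promise<Msg[]> {
  if (history.length <= keepRecent) return history;

  const old = history.slice(0, history.length - keepRecent);
  const transcript = old.map((m) => `${m.role}: ${m.content}`).join("\n");

  const res = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{
      role: "user",
      content:
        "Summarize this conversation in under 150 words, keeping any facts " +
        "the assistant must remember:\n" + transcript,
    }],
  });

  const summary = res.choices[0].message.content ?? "";
  return [
    { role: "system", content: "Summary of the conversation so far: " + summary },
    ...history.slice(-keepRecent),
  ];
}
```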
Do all the people asking for an API that works like ChatGPT assume the ChatGPT developers just built a web page on top of it?