Hi,
I’m trying to use the Completions API to extract action items from a conversation/discussion, which I’m submitting via the OpenAI JavaScript library’s createCompletion():
let transcript = "..."; // Transcript content here
let myPrompt = "Generate all action items from below discussion: \n";
myPrompt += transcript;

const completion = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: myPrompt,
  temperature: 0,
  max_tokens: 3200,
});
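For reference, here is the same request construction factored into a small function so the exact prompt string can be inspected before it is sent (the transcript literal below is a shortened placeholder, not the full text):

```javascript
// Same request object I'm passing to openai.createCompletion(), factored out
// so the assembled prompt string can be logged/inspected before the call.
function buildRequest(transcript) {
  return {
    model: "text-davinci-003",
    prompt: "Generate all action items from below discussion: \n" + transcript,
    temperature: 0,
    max_tokens: 3200,
  };
}

// Inspecting the assembled prompt shows the transcript is appended in full,
// so the fragment doesn't appear to come from client-side truncation.
const req = buildRequest("SPEAKER A\nCan record and we don't have a ton of items to get to...");
console.log(req.prompt.endsWith("items to get to...")); // → true
```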
The response I’m getting always includes exactly one choice in the choices array, and that choice’s text property always begins with a sentence fragment, which is the problem I need to resolve:
blurbs that we have around that, unless we have a good resource that we’ve all pulled together
The rest of the completion consists of full sentences and is coherent, but the first sentence is always a nonsensical fragment, and I’m wondering why. The full transcript text I’m submitting is attached at the end of this question.
Below is the contents of the completion.data member of the response I’m getting for this API call:
{
  id: 'cmpl-7ahtGf9aE2V1JWV9N2CvpAXs2qAiR',
  object: 'text_completion',
  created: 1688982878,
  model: 'text-davinci-003',
  choices: [
    {
      text: " blurbs that we have around that, unless we have a good resource that we've all pulled together.\n" +
        'Action Items:\n' +
        '1. Work with GTM teams to support corporate events. \n' +
        '2. One person to pick the one that GTM teams needs to back. \n' +
        '3. Connect with GTM teams on Slack and Github issues. \n' +
        '4. Keep track of people signing up for KubeCon and for other events. \n' +
        '5. Add link to Rolls product announcement on Github. \n' +
        '6. Create a resource that list all major improvements since 130.',
      index: 0,
      logprobs: null,
      finish_reason: 'stop'
    }
  ],
  usage: { prompt_tokens: 783, completion_tokens: 118, total_tokens: 901 }
}
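As a sanity check, I did rule out one parameter-related cause from the usage numbers above: text-davinci-003 has a 4097-token context window, and a prompt plus max_tokens that ran over it could plausibly cause truncation. Based on the reported usage, my request fits:

```javascript
// Sanity check: does prompt + max_tokens fit in text-davinci-003's
// 4097-token context window? (numbers taken from the response above)
const contextLimit = 4097;  // documented context length for text-davinci-003
const promptTokens = 783;   // usage.prompt_tokens from the response
const maxTokens = 3200;     // max_tokens I'm sending in the request
const fits = promptTokens + maxTokens <= contextLimit;
console.log(fits); // → true, so the request isn't overflowing the context window
```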
I’ve seen that for other prompts, too, the choices[0].text of the response I’m getting consistently begins with a sentence fragment. For example:
- when the prompt is:
Prepare a summary of below discussion: \n<long_text_of_conversation>
choices[0].text begins with:
use cases or things that we should put in there that are kind of consistent. Speaker A and the team are discussing initiatives for corporate events…
- when the prompt is:
Identify the full list of dependencies or prerequisites for any action items mentioned in following discussion: \n<long_text_of_conversation>
choices[0].text begins with:
projects, so maybe there’s people who remember Hahha" Dependencies or prerequisites: - Restructuring - Work with GTM teams - Regular sync - CICD on Google Next - GitOps on KubeCon - Commitment from campaign managers - Big improvements since 13.0
It gives the impression that the completion text is truncated at the front. Could this be due to the values of any parameters I’m sending (or not sending)? Do you have any ideas on how to resolve this? Thanks in advance.
Appendix:
Text of conversation/discussion:
SPEAKER A\nCan record and we don’t have a ton of items to get to, and I might be able to do one that might be fun if we have a little bit of time. So corporate events, I think I saw a little I put this in Slack and I saw a little bit of kind of noise around it, which was good. The nutshell here is, as we’ve kind of restructured and tried different things, the event support that we need isn’t as nailed down as it needs to be. So the current tactic that we’re going with is go to market, team signs up and kind of sponsors that event. So you support as a PMM, your campaign manager does the campaigns for that event, et cetera, et cetera, et cetera. I don’t see anyone in the maybe there are comments in the issue. I don’t see the header updated yet.\nSPEAKER B\nI thought we had in Slack sort of farmed each one of them out.\nSPEAKER A\nIt looks like Ty put in some folks. It looks like this looks good. Let’s see. Need support from GTM teams. So I guess the ask would be to work with your GTM team. Let me ask. I saw some Slack, I think, in our Slack, but were you all able to connect with your GTM teams on.\nSPEAKER C\nSlack only and on the issue? Actually, yeah.\nSPEAKER D\nSame not in real time, but source feedback. I got one person to respond so far, so it may end up Cindy being you, and I just picking the one that we want to do and then they can back us up.\nSPEAKER A\nIf.\nSPEAKER D\nWe don’t get any more feedback.\nSPEAKER A\nYeah. Does anybody have a regular sync still? Are those all been canceled or is there.\nSPEAKER E\nYeah, we’re on like the two week cadence.\nSPEAKER A\nOkay.\nSPEAKER C\nGithubs has been canceled after the enablement.\nSPEAKER A\nCool. So I’m just trying to catch up with the thread. So it looks like maybe platform on Reinvent, CICD on Google Next and GitOps on KubeCon. Does that sound right?\nSPEAKER D\nYeah, that’s where we were last I heard.\nSPEAKER A\nCool. So then I think we can help the Corp events team. 
They do a lot of cat herding and keep on tracking people down. So I think if this team can take the mission to try to help track that down so if you get the commitment specifically from your campaign managers, hey, we’re signing up for KubeCon, can you comment on the issue that, yes, I can commit to this, et cetera, et cetera, et cetera, just so that they can get that event support? But that looks good and I appreciate thanks for the link too, sonya on the Rolls product announcements. So I appreciate Brian for adding this. I probably should have added it, but.\nSPEAKER B\nI had two questions about that one. One is over what time frame are we looking at?\nSPEAKER A\nIn theory, this could be the same as the GitLab 14 launch, where we’re saying basically since 130, what kind of big improvements have we made? Candidly, this is always a little bit tough.