Text completion task with text-ada-001

Hi, I am trying to use the text-ada-001 model for a text completion task. I give some content on a specified topic in the prompt to the OpenAI API, and my requirement is to get a unique rewritten paragraph on that topic. But the received content is not relevant to the content I gave in the prompt; sometimes it is random content, and sometimes it is repeated content. How do I meet my requirement with text-ada-001? Is it about the prompt for this model, or is this model not suitable for text completion? If the model is the correct selection for my task, how should I write the prompt? Or do I need to use another model for the text completion task? It's confusing.
Please guide me. Thanks in advance.

Welcome to the forum!

ada-001 is quite an old model; have you tried gpt-3.5-turbo? It is a chat completion model trained on a much larger dataset with a much larger parameter count.

Here is a small demo code segment in Python:

import openai

openai.api_key = 'your-api-key'

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
)
print(response["choices"][0]["message"]["content"])


You can also use the Playground to trial various models.


Since gpt-3.5-turbo is a chat completion model, shall I use it for text completion? My requirement is to regenerate unique content based on the content given in the prompt. Can you please specify a model under GPT-3.5 for text completion?

Sure, you can try changing the prompt to “How would you complete this text? (please only give the completion) ###{$input_text}###”

See what sort of results you get.
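If it helps, here is a minimal sketch in Python of wrapping that completion-style prompt into a chat request. The `input_text` argument is just a placeholder for your own content; the API call shown in the comment is the same ChatCompletion pattern from the demo above.

```python
def build_completion_messages(input_text):
    """Wrap a completion-style task into a chat-format messages array."""
    prompt = (
        "How would you complete this text? (please only give the completion) "
        f"###{input_text}###"
    )
    return [{"role": "user", "content": prompt}]

# These messages can then be passed to the chat completions endpoint, e.g.:
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=build_completion_messages("The quick brown fox"),
# )
msgs = build_completion_messages("The quick brown fox")
```

The `###` delimiters just make it unambiguous to the model where your text starts and ends.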

Geez, ada feels like a few generations ago at this point. It’s now far below anyone’s expectations, especially those exposed to AI via ChatGPT.

Actually the text- models aren’t really text completion either. They’re instruct models, as in you give it instructions. If you want completion, it would simply be ada.

Ada is quite outdated, though there are uses. At the minimum, I would recommend curie. davinci is still the gold standard, better than gpt-3.5-turbo but more expensive.

text-davinci-003 is an instruct model, more similar to gpt-3.5-turbo. I’d recommend the plain `davinci`.

Hi, I am using the chat completion model as mentioned earlier in this discussion, but the response is empty. When I check the account usage dashboard for the daily usage breakdown, no request is found on the specified date. I have used

$apiEndpoint = '/v1/chat/completions';
$requestData = [
    'model' => 'gpt-3.5-turbo',
    'prompt' => $messages,
    'max_tokens' => $cnt,
    'temperature' => 0.7
];

in my API request code. But in the Playground, I received the desired result after selecting chat mode and the gpt-3.5-turbo model. My sample prompt message given in the Playground is below:
“From below content, write unique content about the topic How rich is Bachchan family?
The total net worth of Mr. Amitabh Bachchan is estimated to be around $410 Million, which in Indian Currency is approximately 3390 Crore INR.”
I am confused about the discrepancy between the prompt used in the API request and the Playground with the same model selection. Please help me clear this up.

I believe the chat/completions endpoint doesn’t have a prompt field. It should be using the messages array field instead: OpenAI Platform

You can check the errors in the response to see what went wrong
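To make the distinction concrete, here is a rough sketch (in Python for brevity; the same shape applies to the PHP array) of what the chat/completions request body should look like. Legacy completion endpoints take a `prompt` string, while chat/completions takes a `messages` array; sending `prompt` to chat/completions is what produces the empty result described above.

```python
def build_chat_request(messages, max_tokens, temperature=0.7):
    """Request body for /v1/chat/completions: note 'messages', not 'prompt'."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": messages,  # list of {"role": ..., "content": ...} dicts
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_chat_request(
    [{"role": "user", "content": "Rewrite this paragraph uniquely: ..."}],
    max_tokens=256,
)
```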


Thanks a lot for pointing out the mistake I had made. Yes, you're right; I was using the prompt key instead of messages. That resolved the issue. Thanks again.

Hi, I need one more clarification on using the gpt-3.5-turbo model. I have written the model in code as below:

$requestData = [
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages,
    'temperature' => 0.7,
    'max_tokens' => $cnt
];
but in the response JSON, the model appears as

response {
  "id": "xxxxxx",
  "object": "chat.completion",
  "created": 1690359331,
  "model": "gpt-3.5-turbo-0613",
  "choices": [

Why is the model in the response gpt-3.5-turbo-0613 though I am using gpt-3.5-turbo in the request? Are gpt-3.5-turbo and gpt-3.5-turbo-0613 the same? Kindly clarify. Thanks in advance.

The name gpt-3.5-turbo is just an alias to the currently-preferred version of the model.

For example, it still pointed to (or was) the prior active gpt model for a few weeks after the introduction of 0613.
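A practical consequence, sketched below: if your application needs reproducible behavior, you can request the dated snapshot name explicitly instead of the floating alias, so your model does not change underneath you when the alias is repointed. (The specific names here are the ones mentioned in this thread.)

```python
# Requesting the alias lets OpenAI move you to the newest snapshot;
# requesting the dated name pins the exact version.
ALIAS = "gpt-3.5-turbo"         # floating alias, currently -> 0613 per this thread
PINNED = "gpt-3.5-turbo-0613"   # fixed snapshot

def pick_model(pin_version=True):
    """Choose the pinned snapshot for reproducibility, or the alias to track updates."""
    return PINNED if pin_version else ALIAS
```

Note that dated snapshots are eventually deprecated, so pinning trades stability now for a forced migration later.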


Hi, how do I get plagiarism-free content with the gpt-3.5-turbo model? How do I frame the messages field for this?

The AI can’t really understand “plagiarism”. To it, the very language that it writes is a form of plagiarism, copying the style of language that is most appropriate for a question.

If you ask it to tell you the first lines of an Edgar Allan Poe poem, it plagiarizes.

If you ask it to write a piece of code, that may be directly informed by a little piece that was on github in its training data.

Go to a very specific and obscure topic on Wikipedia and ask about it, and your answer will likely be sourced from Wikipedia's content.

So knowledge and sources are intertwined. If I write an economics paper after checking five books out of the library and reading, I’m only plagiarizing if I copy the text without effort, which is generally not the behavior of the AI models.

Hi, I am using the gpt-3.5-turbo chat completion model for my text completion task. Generally, I have a form for taking inputs from users, and I build the prompt from those inputs to make the API request.

With the chat completion model, the messages array is formed from the input value given by the user.
For example, if the user wants to regenerate a given paragraph on a given topic, the messages array is formed based on that. My problem is this: when I combine multiple input values into the messages array to get a response with a single API call, it does not work. How do I build the messages array for the chat completion model with more than one input from the form page? That way, I believe I can avoid a separate API call for every input given by the user. Please guide me on this. Thanks in advance.

Note: in the Playground also, I got only one regenerated paragraph even though I combined two paragraphs with two titles; it does not give a regenerated paragraph for both titles.

The messages parameter is an array so you can send conversation history, not multiple prompts to be completed. Doing these as separate requests is likely the best route (and probably fastest if you can call the API asynchronously). Remember you pay by tokens, not by request, so you’d only be saving tokens on whatever instructions/system message you send.

Otherwise you can experiment with combining all of the inputs in one prompt. Things may go off the rails though if there are a large number of inputs, more likely the model could get confused and lose track of the goal.
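The "separate requests, called asynchronously" route could be sketched like this in Python. The `regenerate` coroutine here is a stand-in for the real API call (so the fan-out pattern is visible without needing an API key); in practice its body would be an async chat completion request.

```python
import asyncio

async def regenerate(title, paragraph):
    """Stand-in for one chat completion request; replace the body
    with a real async API call in practice."""
    await asyncio.sleep(0)  # simulate network latency
    return f"rewritten:{title}"

async def regenerate_all(inputs):
    # One request per form input, issued concurrently;
    # gather() returns results in the same order as the inputs.
    tasks = [regenerate(title, para) for title, para in inputs]
    return await asyncio.gather(*tasks)

results = asyncio.run(regenerate_all([("A", "para one"), ("B", "para two")]))
```

Because the requests run concurrently, total wall-clock time is roughly that of the slowest single request rather than the sum of all of them.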

Many thanks. I am clear now. One more doubt: the input token count is mismatched between counting it with the gpt_encode method before making the API request and checking it manually via the API tokenizer platform. I am using PHP.
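One likely cause of the mismatch, assuming you are counting only the raw message text: the chat format adds hidden framing tokens around every message. A rough sketch of the accounting, using the per-message constants OpenAI's cookbook documents for the -0613 chat models; the `encode` argument stands in for your tokenizer (tiktoken's cl100k_base in Python, or gpt_encode in PHP), and the toy whitespace tokenizer below is only there to make the overhead visible.

```python
def count_chat_tokens(messages, encode):
    """Approximate prompt token count for gpt-3.5-turbo-0613 style chat models.

    encode: callable mapping a string to a list of token ids.
    Constants follow OpenAI's cookbook: 3 tokens of framing per message,
    1 extra if a 'name' field is present, plus 3 tokens priming the reply.
    """
    tokens_per_message = 3
    tokens_per_name = 1
    total = 0
    for message in messages:
        total += tokens_per_message
        for key, value in message.items():
            total += len(encode(value))
            if key == "name":
                total += tokens_per_name
    return total + 3  # every reply is primed with <|start|>assistant<|message|>

# Toy whitespace tokenizer, just to show the framing overhead:
toy_encode = lambda s: s.split()
n = count_chat_tokens([{"role": "user", "content": "hello there"}], toy_encode)
```

So even a two-token message is billed as several more once the role and framing are counted, which is why a plain text tokenizer count comes out lower than the API's number.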