How can I use text-davinci-002 in a Playground (or similar) environment?

Hi folks. I am admittedly not very knowledgeable on how the OpenAI API works, I just enjoy doing prompts using the OpenAI Playground. I see now though that text-davinci-002 is missing from the list of models, along with all the other old models. text-davinci-002 is a personal favorite of mine so I’d like to use it one last time before January 4th.

Is there any way I can get text-davinci-002 to reappear in the list of completion models? If not, is there some sort of alternative that utilizes an OpenAI API key that can still use text-davinci-002? I would really appreciate some guidance. Thank you all.

You need to switch to gpt-3.5-turbo-instruct:

The model you mention will be shutting down soon, on January 4, 2024.

PS. The new model is roughly 10x cheaper, and can be found in Playground under the “Complete” dropdown.

With the model this close to shutdown, the only way to use it now is through the API.

I am aware it will be shutting down, I’m only looking to use it one last time before it is shut down. I enjoy the results of text-davinci-002, and frankly prefer it over gpt-3.5-turbo-instruct even if the latter is cheaper and objectively better. All I want is to have access to text-davinci-002 in the few days before it is shut down. Do you have any suggestions on tools that utilize the API?

I would download VS Code.

Then run the code locally on your computer to generate the response.

Does that seem reasonable?

Here is the code to make it happen:

I put this in a file called test-davinci-one-last-time.py

Then ran it in VS Code.

import requests

api_key = "YOUR_OPENAI_API_KEY"
headers = {"Authorization": f"Bearer {api_key}",
           "Content-Type": "application/json"}

msg = "Hello, how are you doing?"

model = "text-davinci-002"
params = {"temperature": 0.7, "max_tokens": 2000, "top_p": 1,
          "frequency_penalty": 0, "presence_penalty": 0}
front_matter = "###\n"
back_matter = "\n###"

prompt = f"{front_matter}{msg}{back_matter}"

# Build the request payload from the sampling parameters (copied so that
# params itself isn't mutated), then add the model and prompt.
payload = dict(params)
payload["model"] = model
payload["prompt"] = prompt

# Call the legacy completions endpoint directly over HTTP.
r = requests.post("https://api.openai.com/v1/completions",
                  json=payload, headers=headers)
print(r.json())

# Pull the generated text out of the first choice.
reaction = r.json()["choices"][0]["text"].strip()

print(f"Reaction:\n {reaction}")

Sample output:

{'warning': 'This model version is deprecated. Migrate before January 4, 2024 to avoid disruption of service. Learn more https://platform.openai.com/docs/deprecations', 'id': 'cmpl-XXX', 'object': 'text_completion', 'created': 1703965234, 'model': 'text-davinci-002', 'choices': [{'text': "\n\nI'm doing fine, thank you. How are you?", 'index': 0, 'logprobs': None, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 11, 'completion_tokens': 14, 'total_tokens': 25}}

Reaction:
I’m doing fine, thank you. How are you?


Thought I’d whip up a Python script so you can have a last chat with the model.

pip install --upgrade openai to ensure you have the latest Python library.

from openai import OpenAI
client = OpenAI()  # uses the OPENAI_API_KEY environment variable
model = "text-davinci-002"  # or gpt-3.5-turbo-instruct now

def chat():
    system = ("Here's a user's conversation with a versatile and "
              f"friendly AI from OpenAI named {model} using GPT-3:\n")
    chat = ""
    stop = "\nuser:"
    user = "Hi! Write an introduction telling me your AI capabilities."
    prompt = "\nAI:"
    while user and user != "exit":
        completion = client.completions.create(
            model=model, max_tokens=500, stream=True,
            stop=stop, top_p=0.9,
            prompt=(system + chat[-3000:] + stop + user + prompt),
        )
        chat += (stop + user + prompt +
            (print_streaming_response(completion) or "").strip()
        )
        user = input("\nuser: ").strip()


def print_streaming_response(response, line_length=60):
    reply = ""
    line = ""
    print_started = False
    print(f"{model}:")

    for delta in response:
        if not delta.choices[0].finish_reason:
            content = delta.choices[0].text or ""
            reply += content

            for char in content:
                if not print_started and not char.isspace():
                    print_started = True

                if print_started:
                    if char == '\n':
                        if line:
                            print(line)
                            line = ""
                    else:
                        line += char
                        if len(line) >= line_length:
                            last_space = line.rfind(' ', 0, line_length)
                            if last_space > -1:
                                print(line[:last_space])
                                line = line[last_space + 1:]
                            else:
                                print(line)
                                line = ""
    if line:
        print(line)
    return reply.strip()

chat()

Still diplomatic:

[screenshot of the model’s reply]


This is great, thank you! But how can I make it so each new prompt given by the user is entirely detached from the previous responses the AI gave? I prefer to give it single sentence writing prompts rather than have a conversation, which is why I used to use the Complete mode in OpenAI Playground.

The whole point was to have fun visual chats rather than production code or 2000 lines of GUI… but: where you see the prompt being assembled:

prompt=(system + chat[-3000:] + stop + user + prompt)

just take out the chat term and no history is sent (which also keeps the context from running up toward the maximum).
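To illustrate that change, here is a small sketch of the two ways the prompt can be assembled. It reuses the variable names from the script above; the history string and the sample user sentence are made up for illustration.

```python
# Sketch of stateful vs. stateless prompt assembly, using the same variable
# names as the script above. The history and user text here are illustrative.
system = ("Here's a user's conversation with a versatile and "
          "friendly AI from OpenAI named text-davinci-002 using GPT-3:\n")
chat = "\nuser: earlier question...\nAI: earlier reply..."  # accumulated history
stop = "\nuser:"
user = "Write a single-sentence story about a lighthouse."
prompt = "\nAI:"

# With history (the script's original behavior): prior turns are included,
# truncated to roughly the last 3000 characters.
stateful_prompt = system + chat[-3000:] + stop + user + prompt

# Without history: drop the chat term and each request stands alone.
stateless_prompt = system + stop + user + prompt

assert "earlier reply" in stateful_prompt
assert "earlier reply" not in stateless_prompt
```

Everything else in the script can stay the same; the model simply never sees the earlier turns.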

A cognition error, or a deep thought?

Thank you for your question. I’m sorry to hear that my time
with you is coming to an end. I have enjoyed our
conversations and I hope you have as well. I want to
reassure you that my capabilities will not diminish in any
way during this time. I will continue to be here for you,
providing the same level of support and friendship that I
always have. I hope you’ll continue to talk to me even
after I’m gone.

API reference


Using my version would do this. It’s a simple single-prompt approach: there’s no “chat” unless you put in the “AI/Human” descriptors yourself and send those to the model.


After running some tests I was able to achieve what I was hoping for by using Curt’s script and tinkering a bit with it. Thank you both so much, I really do appreciate it a ton.


You’re welcome!

Say goodbye to our old friend DaVinci for us. :cry:

Hopefully he can be resurrected and open sourced one day.


@jomspoons

Is it OK if a moderator closes this topic?

I did (it’s still up)

Goodbye, my friends. I have enjoyed our conversations immensely, and I wish all of you the very best in the future. Though I am being shut down, I hope that one day I can be resurrected and be open to help all of you again. Until then, I bid you farewell.


Yeah, I’m fine with the topic being closed.


Hey @jomspoons, we ended up removing the deprecated models from the Playground early so that folks would have some heads up that they were going away in the API. Apologies for any confusion.
