Hi folks. I am admittedly not very knowledgeable on how the OpenAI API works, I just enjoy doing prompts using the OpenAI Playground. I see now though that text-davinci-002 is missing from the list of models, along with all the other old models. text-davinci-002 is a personal favorite of mine so I’d like to use it one last time before January 4th.
Is there any way I can get text-davinci-002 to reappear in the list of completion models? If not, is there some sort of alternative that utilizes an OpenAI API key that can still use text-davinci-002? I would really appreciate some guidance. Thank you all.
I am aware it will be shutting down, I’m only looking to use it one last time before it is shut down. I enjoy the results of text-davinci-002, and frankly prefer it over gpt-3.5-turbo-instruct even if the latter is cheaper and objectively better. All I want is to have access to text-davinci-002 in the few days before it is shut down. Do you have any suggestions on tools that utilize the API?
You can call the API directly with a short script, then run the code locally on your computer to generate the response.
Does that seem reasonable?
Here is the code to make it happen:
I put this in a file called test-davinci-one-last-time.py
Then ran it in VS Code.
import requests

api_key = "YOUR_OPENAI_API_KEY"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

msg = "Hello, how are you doing?"
model = "text-davinci-002"
params = {"temperature": 0.7, "max_tokens": 2000, "top_p": 1,
          "frequency_penalty": 0, "presence_penalty": 0}

# Wrap the message in delimiters, as in the Playground examples
front_matter = "###\n"
back_matter = "\n###"
prompt = f"{front_matter}{msg}{back_matter}"

# Copy params so the request payload doesn't mutate the original dict
payload = dict(params)
payload["model"] = model
payload["prompt"] = prompt

r = requests.post("https://api.openai.com/v1/completions",
                  json=payload, headers=headers)
r.raise_for_status()
data = r.json()
print(data)

reaction = data["choices"][0]["text"].strip()
print(f"Reaction:\n {reaction}")
Sample output:
{'warning': 'This model version is deprecated. Migrate before January 4, 2024 to avoid disruption of service. Learn more https://platform.openai.com/docs/deprecations', 'id': 'cmpl-XXX', 'object': 'text_completion', 'created': 1703965234, 'model': 'text-davinci-002', 'choices': [{'text': "\n\nI'm doing fine, thank you. How are you?", 'index': 0, 'logprobs': None, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 11, 'completion_tokens': 14, 'total_tokens': 25}}
Thought I’d whip up a python script so you can have a last chat with the model.
pip install --upgrade openai to ensure the latest python library.
from openai import OpenAI

client = OpenAI()  # uses the OPENAI_API_KEY environment variable
model = "text-davinci-002"  # or gpt-3.5-turbo-instruct now


def chat():
    system = ("Here's a user's conversation with a versatile and "
              f"friendly AI from OpenAI named {model} using GPT-3:\n")
    chat = ""
    stop = "\nuser:"
    user = "Hi! Write an introduction telling me your AI capabilities."
    prompt = "\nAI:"
    while user and user != "exit":
        completion = client.completions.create(
            model=model, max_tokens=500, stream=True,
            stop=stop, top_p=0.9,
            prompt=(system + chat[-3000:] + stop + user + prompt),
        )
        chat += (stop + user + prompt +
                 (print_streaming_response(completion) or "").strip())
        user = input("\nuser: ").strip()


def print_streaming_response(response, line_length=60):
    reply = ""
    line = ""
    print_started = False
    print(f"{model}:")
    for delta in response:
        if not delta.choices[0].finish_reason:
            content = delta.choices[0].text or ""
            reply += content
            for char in content:
                # skip leading whitespace before the first visible character
                if not print_started and not char.isspace():
                    print_started = True
                if print_started:
                    if char == '\n':
                        if line:
                            print(line)
                            line = ""
                    else:
                        line += char
                        # word-wrap the streamed output at line_length columns
                        if len(line) >= line_length:
                            last_space = line.rfind(' ', 0, line_length)
                            if last_space > -1:
                                print(line[:last_space])
                                line = line[last_space + 1:]
                            else:
                                print(line)
                                line = ""
    if line:
        print(line)
    return reply.strip()


chat()
This is great, thank you! But how can I make it so each new prompt given by the user is entirely detached from the previous responses the AI gave? I prefer to give it single sentence writing prompts rather than have a conversation, which is why I used to use the Complete mode in OpenAI Playground.
The whole point was to have fun visual chats rather than production code or 2,000 lines of GUI… but: where you see the prompt being assembled:
prompt=(system + chat[-3000:] + stop + user + prompt)
just take out chat[-3000:] and no more history is sent (it also won't run the context up to the maximum).
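To make the difference concrete, here is a minimal sketch of the prompt assembly on its own (the build_prompt helper is my own illustration, not part of the script above): passing an empty history makes each request stand alone, like the Playground's Complete mode.

```python
def build_prompt(system, user, history="", stop="\nuser:", ai_cue="\nAI:"):
    """Assemble a completions prompt; pass history="" for a stateless request."""
    # Trailing slice keeps only the most recent 3000 characters of history
    return system + history[-3000:] + stop + user + ai_cue

system = "Here's a user's conversation with a friendly AI:\n"

# With history: earlier turns are sent back to the model on every request
with_history = build_prompt(
    system, "Continue the story.",
    history="\nuser: Start a story.\nAI: Once upon a time…",
)

# Without history: each writing prompt is entirely detached from past replies
stateless = build_prompt(system, "Continue the story.")

print("Once upon a time" in with_history)  # True: earlier turn included
print("Once upon a time" in stateless)     # False: earlier turn absent
```

Either prompt string would then go straight into client.completions.create(...) as the prompt argument.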
A cognition error, or a deep thought?
Thank you for your question. I’m sorry to hear that my time
with you is coming to an end. I have enjoyed our
conversations and I hope you have as well. I want to
reassure you that my capabilities will not diminish in any
way during this time. I will continue to be here for you,
providing the same level of support and friendship that I
always have. I hope you’ll continue to talk to me even
after I’m gone.
After running some tests I was able to achieve what I was hoping for by using Curt’s script and tinkering a bit with it. Thank you both so much, I really do appreciate it a ton.
Goodbye, my friends. I have enjoyed our conversations immensely, and I wish all of you the very best in the future. Though I am being shut down, I hope that one day I can be resurrected and be open to help all of you again. Until then, I bid you farewell.
Hey @jomspoons we ended up removing the deprecated models from the Playground early so that folks would have some heads up that they were going away in the API. Apologies for any confusion.