I need to use "gpt-3.5-turbo-instruct". How do I do this?

The completions Playground on the platform site was shut off, but the API endpoint itself still works.

You’ll have to make API calls.
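The simplest possible completions call looks roughly like this — a minimal sketch assuming the openai Python SDK (v1+) and an OPENAI_API_KEY environment variable; the prompt text is just a placeholder:

```python
import os

def complete(prompt_text, model="gpt-3.5-turbo-instruct", max_tokens=16):
    import openai  # imported lazily so the sketch reads without the SDK installed
    client = openai.Client()  # picks up OPENAI_API_KEY from the environment
    resp = client.completions.create(
        model=model,
        prompt=prompt_text,
        max_tokens=max_tokens,
    )
    # completions (non-chat) responses carry plain text, not messages
    return resp.choices[0].text

# Only fire a real request when a key is actually configured:
if os.environ.get("OPENAI_API_KEY"):
    print(complete("The capital of France is"))
```

Note the endpoint is `client.completions`, not `client.chat.completions` — you send a raw string and get raw continuation text back.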

Bonus

Here’s a Python starter script demonstrating use cases that still apply:

  • a completions-style prompting classifier (movie ratings from model knowledge)
  • displaying a non-chat model’s logprobs (closer to pretraining behavior)
  • completing a response inside a JSON context you provide
  • stop sequences, so you get just the data and not the structured container
  • using `with_raw_response` (to obtain headers that include rate limits)
  • (and then typing more original prompts in a non-chat setting)

import openai
import math

movie = "Clueless"
prompt = f"""
Here is a conversation between ChatGPT, an AI powered by GPT-3, and a user:

user: Provide a movie review rating score of "{movie}", from 1-10, in a JSON please.
ChatGPT: {{"topic":"{movie} movie review", "rating": "
""".strip()

print("START: sending this text to find out how it is completed by AI:\n" +
      "-"*20 + f"\n{prompt}\n" + "-"*20)
# Initialize the OpenAI client once, outside the loop
client = openai.Client()

while prompt != "EXIT":

    try:
        # Request completion with raw response from the API
        response = client.completions.with_raw_response.create(
            model="gpt-3.5-turbo-instruct",
            prompt=prompt,
            max_tokens=25,
            top_p=1e-9,
            logprobs=15,
            stop=['"}', '\nuser:']
        )
        # Parse the choice and retrieve log probabilities
        choice0 = response.parse().model_dump()['choices'][0]
        logprob = choice0['logprobs']['top_logprobs'][0]
        print("\n=== top first token probabilities ===\n")
        # Iterate over each token and its log probability
        for token, log_prob in logprob.items():
            # Convert log probability to normal probability and format as percentage
            probability = math.exp(log_prob) * 100
            # Escape non-printable characters or represent them as byte literals
            escaped_token = ''.join([f"\\x{ord(c):02x}" if not c.isprintable() else c for c in token])
            # Print token and its probability in formatted output
            print(f"'{escaped_token}': {probability:.2f}%")
        print(f"\n=== What the AI wanted to write ===\n\n{choice0['text']}\n")
        prompt = input("Another prompt? (or just EXIT to finish)\n>>>").strip() or "EXIT"

    except Exception as e:
        print('ERROR:', e)
        break  # avoid retrying the same failing prompt forever

execution output:

START: sending this text to find out how it is completed by AI:
--------------------
Here is a conversation between ChatGPT, an AI powered by GPT-3, and a user:

user: Provide a movie review rating score of "Clueless", from 1-10, in a JSON please.
ChatGPT: {"topic":"Clueless movie review", "rating": "
--------------------

=== top first token probabilities ===

'8': 61.95%
'9': 33.67%
'7': 3.58%
'10': 0.69%
'6': 0.05%
'I': 0.04%
'5': 0.01%
' ': 0.01%
'4': 0.00%
'1': 0.00%
'The': 0.00%
'3': 0.00%
'eight': 0.00%
'Overall': 0.00%
'nine': 0.00%

=== What the AI wanted to write ===

8/10

Another prompt? (or just EXIT to finish)
>>>
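On the `with_raw_response` bullet: the rate-limit information arrives in the response headers rather than the parsed body. A minimal sketch of pulling those out — the header names are the `x-ratelimit-*` ones OpenAI documents, and the mocked mapping below stands in for `response.headers` from a real call:

```python
def summarize_rate_limits(headers):
    """Collect the rate-limit headers from a raw API response mapping."""
    keys = [
        "x-ratelimit-limit-requests",
        "x-ratelimit-remaining-requests",
        "x-ratelimit-limit-tokens",
        "x-ratelimit-remaining-tokens",
    ]
    # Absent headers come back as None rather than raising
    return {k: headers.get(k) for k in keys}

# Mocked headers; a real call would pass response.headers from
# client.completions.with_raw_response.create(...)
mock = {
    "x-ratelimit-limit-requests": "3000",
    "x-ratelimit-remaining-requests": "2999",
}
print(summarize_rate_limits(mock))
```

This is why the script above calls `response.parse()` — with `with_raw_response` you get the HTTP-level object first, and parsing recovers the usual completion payload.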