Availability of OpenAI API doc dump

In the recent deep-dive video, an “OpenAI API Wizard” GPT is created using a markdown dump of the entire OpenAI API documentation.

How might one come by such an OpenAI API documentation dump in a nice format?

Any insights appreciated!

3 Likes

+1, especially to work with the new Assistants API. The API reference in document form would be a massive accelerator, since the model has no inherent knowledge of it, and the docs are rendered with JavaScript, so a GPT cannot scrape them properly.

It would be a bit of a challenge to parse effectively. For example, the various language examples are script-driven.

It’s also a megabyte of HTML on one page; markdown is a sixth of the size, around 47,000 tokens, but it still can’t even be posted here:

Body is limited to 32000 characters; you entered 169864.
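For what it’s worth, the HTML-to-markdown step itself is simple enough. A minimal sketch using the html2text package; the file names and options here are a guess at something sensible, not exactly what was run:

```python
import html2text

converter = html2text.HTML2Text()
converter.ignore_links = False  # keep the /docs/api-reference/... anchors
converter.body_width = 0        # don't hard-wrap lines

with open("api-reference.html", encoding="utf-8") as f:
    markdown = converter.handle(f.read())

with open("api-reference.md", "w", encoding="utf-8") as f:
    f.write(markdown)
```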

30,000 characters of HTML-to-markdown (click to destroy the forum view):

Getting started

Introduction · Authentication · Making requests

Endpoints

Audio · Chat · Embeddings · Fine-tuning · Files · Images · Models · Moderations

Beta

Assistants · Threads · Messages · Runs

Legacy

Completions · Edits · Fine-tunes (the fine-tune object, create fine-tune, list fine-tunes, retrieve fine-tune, cancel fine-tune, the fine-tune event object, list fine-tune events)

[Introduction](/docs/api-reference/introduction)

You can interact with the API through HTTP requests from any language, via our official Python bindings, our official Node.js library, or a community-maintained library.

To install the official Python bindings, run the following command:

pip install openai

To install the official Node.js library, run the following command in your Node.js project directory:

npm install openai@^4.0.0

[Authentication](/docs/api-reference/authentication)

The OpenAI API uses API keys for authentication. Visit your API Keys page to retrieve the API key you’ll use in your requests.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.

All API requests should include your API key in an Authorization HTTP header as follows:

Authorization: Bearer OPENAI_API_KEY
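For example, a minimal sketch of sending that header in a raw HTTP request, with the key loaded from an environment variable:

```python
import os

import requests

resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
)
print(resp.json())
```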

[Organization (optional)](/docs/api-reference/organization-optional)

For users who belong to multiple organizations, you can pass a header to specify which organization is used for an API request. Usage from these API requests will count as usage for the specified organization.

Example curl command:

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Organization: org-ZHb7VWRPmJiJA45Tdxu3TeT8"

Example with the openai Python package:

from openai import OpenAI

client = OpenAI(
  organization='org-ZHb7VWRPmJiJA45Tdxu3TeT8',
)
client.models.list()

Example with the openai Node.js package:

import OpenAI from "openai";

const openai = new OpenAI({
    organization: "org-ZHb7VWRPmJiJA45Tdxu3TeT8",
    apiKey: process.env.OPENAI_API_KEY,
});
const models = await openai.models.list();

Organization IDs can be found on your Organization settings page.

[Making requests](/docs/api-reference/making-requests)

You can paste the command below into your terminal to run your first API request. Make sure to replace $OPENAI_API_KEY with your secret API key.

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

This request queries the gpt-3.5-turbo model (which under the hood points to the latest gpt-3.5-turbo model variant) to complete the text starting with a prompt of “Say this is a test”. You should get a response back that resembles the following:

{
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-3.5-turbo-1106",
    "usage": {
        "prompt_tokens": 13,
        "completion_tokens": 7,
        "total_tokens": 20
    },
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "\n\nThis is a test!"
            },
            "finish_reason": "stop",
            "index": 0
        }
    ]
}

Now that you’ve generated your first chat completion, let’s break down the response object. We can see the finish_reason is stop, which means the API returned the full chat completion generated by the model without running into any limits. In the choices list, we only generated a single message, but you can set the n parameter to generate multiple message choices, as sketched below.
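For instance, a minimal sketch of asking for several alternatives with n, using the Python client:

```python
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    n=3,  # ask for three alternative completions
)
for choice in completion.choices:
    print(choice.index, choice.message.content)
```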

[Audio](/docs/api-reference/audio)

Learn how to turn audio into text or text into audio.

Related guide: Speech to text

[Create speech](/docs/api-reference/audio/createSpeech)

post https://api.openai.com/v1/audio/speech

Generates audio from the input text.

Request body

model (string, Required): One of the available TTS models: tts-1 or tts-1-hd.

input (string, Required): The text to generate audio for. The maximum length is 4096 characters.

voice (string, Required): The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the Text to speech guide.

response_format (string, Optional, defaults to mp3): The format of the audio. Supported formats are mp3, opus, aac, and flac.

speed (number, Optional, defaults to 1): The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default.

Returns

The audio file content.

Example request (python)

from pathlib import Path
import openai

speech_file_path = Path(__file__).parent / "speech.mp3"
response = openai.audio.speech.create(
  model="tts-1",
  voice="alloy",
  input="The quick brown fox jumped over the lazy dog."
)
response.stream_to_file(speech_file_path)
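And a variation exercising the optional response_format and speed parameters described above; the particular values are arbitrary:

```python
import openai

response = openai.audio.speech.create(
    model="tts-1",
    voice="nova",
    input="The quick brown fox jumped over the lazy dog.",
    response_format="opus",  # mp3 (default), opus, aac, or flac
    speed=1.25,              # anywhere from 0.25 to 4.0
)
response.stream_to_file("speech.opus")
```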

[Create transcription](/docs/api-reference/audio/createTranscription)

post https://api.openai.com/v1/audio/transcriptions

Transcribes audio into the input language.

Request body

file (file, Required): The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model (string, Required): ID of the model to use. Only whisper-1 is currently available.

language (string, Optional): The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.

prompt (string, Optional): An optional text to guide the model’s style or continue a previous audio segment. The prompt should match the audio language.

response_format (string, Optional, defaults to json): The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

temperature (number, Optional, defaults to 0): The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

Returns

The transcribed text.

Example request (python)

from openai import OpenAI
client = OpenAI()

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
  model="whisper-1", 
  file=audio_file
)

Response


{
  "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}

[Create translation](/docs/api-reference/audio/createTranslation)

post https://api.openai.com/v1/audio/translations

Translates audio into English.

Request body

file (file, Required): The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model (string, Required): ID of the model to use. Only whisper-1 is currently available.

prompt (string, Optional): An optional text to guide the model’s style or continue a previous audio segment. The prompt should be in English.

response_format (string, Optional, defaults to json): The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

temperature (number, Optional, defaults to 0): The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

Returns

The translated text.

Example request (python)

from openai import OpenAI
client = OpenAI()

audio_file = open("speech.mp3", "rb")
transcript = client.audio.translations.create(
  model="whisper-1", 
  file=audio_file
)

Response


{
  "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
}

[Chat](/docs/api-reference/chat)

Given a list of messages comprising a conversation, the model will return a response.

Related guide: Chat Completions

[The chat completion object](/docs/api-reference/chat/object)

Represents a chat completion response returned by the model, based on the provided input.

id (string): A unique identifier for the chat completion.

choices (array): A list of chat completion choices. Can be more than one if n is greater than 1.

created (integer): The Unix timestamp (in seconds) of when the chat completion was created.

model (string): The model used for the chat completion.

system_fingerprint (string): This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

object (string): The object type, which is always chat.completion.

usage (object): Usage statistics for the completion request.

The chat completion object


{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo-0613",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?",
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

[The chat completion chunk object](/docs/api-reference/chat/streaming)

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

id (string): A unique identifier for the chat completion. Each chunk has the same ID.

choices (array): A list of chat completion choices. Can be more than one if n is greater than 1.

created (integer): The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

model (string): The model used to generate the completion.

system_fingerprint (string): This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

object (string): The object type, which is always chat.completion.chunk.

The chat completion chunk object


{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

....

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":" today"},"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"?"},"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

[Create chat completion](/docs/api-reference/chat/create)

post https://api.openai.com/v1/chat/completions

Creates a model response for the given chat conversation.

Request body

messages (array, Required): A list of messages comprising the conversation so far. Example Python code.

model (string, Required): ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.

frequency_penalty (number or null, Optional, defaults to 0): Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties.

logit_bias (map, Optional, defaults to null): Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
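A sketch of banning a single token, using tiktoken to look up the token ID; the choice of token is illustrative:

```python
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
banned_id = enc.encode(" test")[0]  # assuming " test" encodes to one token

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    logit_bias={str(banned_id): -100},  # -100 effectively bans the token
)
print(completion.choices[0].message.content)
```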

max_tokens (integer or null, Optional, defaults to inf): The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. Example Python code for counting tokens.

n (integer or null, Optional, defaults to 1): How many chat completion choices to generate for each input message.

presence_penalty (number or null, Optional, defaults to 0): Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties.

response_format (object, Optional): An object specifying the format that the model must output. Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
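A quick sketch of JSON mode as described, with the required “produce JSON” instruction in the system message:

```python
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        # JSON mode requires instructing the model to produce JSON yourself:
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
)
print(completion.choices[0].message.content)  # a valid JSON string
```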

seed (integer or null, Optional): This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
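A sketch of best-effort reproducibility with seed, checking system_fingerprint across calls; the specific seed value is arbitrary:

```python
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    seed=12345,  # reuse the same value to reproduce results
    messages=[{"role": "user", "content": "Tell me a joke."}],
)
# If system_fingerprint differs between two calls, the backend changed and
# identical seeds may no longer yield identical outputs.
print(completion.system_fingerprint, completion.choices[0].message.content)
```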

stop (string / array / null, Optional, defaults to null): Up to 4 sequences where the API will stop generating further tokens.

stream (boolean or null, Optional, defaults to false): If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Example Python code.
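A minimal sketch of consuming that stream with the Python client:

```python
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    stream=True,
)
for chunk in stream:
    # delta.content is None on the role-only first chunk and the final stop chunk
    print(chunk.choices[0].delta.content or "", end="")
```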

temperature (number or null, Optional, defaults to 1): What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

top_p (number or null, Optional, defaults to 1): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

tools (array, Optional): A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

tool_choice (string or object, Optional): Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present.
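A sketch of a single function tool with a forced call; the get_weather function and its schema are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # force the call rather than letting the model decide ("auto")
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(completion.choices[0].message.tool_calls[0].function.arguments)
```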

user (string, Optional): A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

function_call (string or object, Optional, Deprecated): Deprecated in favor of tool_choice. Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present.

functions (array, Optional, Deprecated): Deprecated in favor of tools. A list of functions the model may generate JSON inputs for.

Returns

Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.

Example request (python, gpt-3.5-turbo)

from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)

Response


{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-3.5-turbo-0613",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?",
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

[Embeddings](/docs/api-reference/embeddings)

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Related guide: Embeddings

[The embedding object](/docs/api-reference/embeddings/object)

Represents an embedding vector returned by the embeddings endpoint.

index (integer): The index of the embedding in the list of embeddings.

embedding (array): The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the embedding guide.

object (string): The object type, which is always “embedding”.

The embedding object


{
  "object": "embedding",
  "embedding": [
    0.0023064255,
    -0.009327292,
    .... (1536 floats total for ada-002)
    -0.0028842222,
  ],
  "index": 0
}

[Create embeddings](/docs/api-reference/embeddings/create)

post https://api.openai.com/v1/embeddings

Creates an embedding vector representing the input text.

Request body

input (string or array, Required): Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. Example Python code for counting tokens.

model (string, Required): ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

encoding_format (string, Optional, defaults to float): The format to return the embeddings in. Can be either float or base64.

user (string, Optional): A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.

Returns

A list of embedding objects.

Example request (python)

from openai import OpenAI
client = OpenAI()

client.embeddings.create(
  model="text-embedding-ada-002",
  input="The food was delicious and the waiter...",
  encoding_format="float"
)

Response


{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0023064255,
        -0.009327292,
        .... (1536 floats total for ada-002)
        -0.0028842222,
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-ada-002",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
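Since the vectors are plain lists of floats, comparing two of them is just a dot product away. A minimal sketch with numpy; the embed helper is my own, not part of the API:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    # hypothetical helper: one embedding per call
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

a = embed("The food was delicious and the waiter...")
b = embed("The meal tasted great and the service was friendly.")
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)  # closer to 1.0 means more semantically similar
```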


However: destined for failure. I couldn’t make a GPT-4 code writer just by pasting in the relevant section.

I dare you to try to emulate the March GPT-4 developer chat line-by-line and see what the model outputs now…

1 Like

To make the dump of the OpenAI API docs, I wrote this code:

from selenium import webdriver
from bs4 import BeautifulSoup
import time

def scrape_openai_docs(url):
    # Initialize a browser instance using Selenium
    driver = webdriver.Chrome()  # or webdriver.Firefox() if using Firefox

    # Open the URL
    driver.get(url)

    # Wait for some time to allow the JS-rendered page to load and bypass any checks
    time.sleep(10)  # Adjust the sleep time as necessary

    # Get the page source and parse it with BeautifulSoup
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    driver.quit()

    # Example: grab all paragraphs (headings, code blocks, etc. are dropped)
    paragraphs = soup.find_all('p')

    # Wrap each paragraph's text in <p> tags for the MDX output
    mdx_content = ""
    for p in paragraphs:
        mdx_content += f"<p>{p.get_text()}</p>\n"

    return mdx_content

# URL of the OpenAI documentation
url = 'https://platform.openai.com/docs/api-reference'
mdx_content = scrape_openai_docs(url)

# Output the MDX content to a file
with open('openai_docs.mdx', 'w', encoding='utf-8') as file:
    file.write(mdx_content)

Feel free to use and modify it.
Pavel from Roboticated.com

1 Like

Another thing that works against you in making a “Code with GPT”: it would also have to understand the full library you’re coding with.

Documentation doesn’t tell you that:

apiresponse = client.chat.completions.with_raw_response.create()

returns an APIresponse object; from there you get into methods like contents, http_headers, and http_response, and you might find apiresponse.parse() useful, which gives you a stream object if streaming. But it’s an httpx-based raw stream that needs read() called on it first; then you can make a generator, with some checking that hasattr(apiresponse.parse().response, 'iter_lines') holds.

It’s not JSON output though, but raw chunk lines, half of which are empty, the others strings prefixed with “data: ”. Parse out the JSON, then handle the empty-content chunks. Then work on functions, tools emitted, finish_reasons, recording headers for future rate-limit idling, etc. Way beyond GPT-4.
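Roughly the kind of plumbing that means, as a sketch; the attribute chain below (parse().response, read(), iter_lines) follows the description above rather than any documented API, so treat it as an assumption:

```python
import json

from openai import OpenAI

client = OpenAI()
apiresponse = client.chat.completions.with_raw_response.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
raw = apiresponse.parse().response  # underlying httpx.Response, per the above
raw.read()  # must be read before the lines can be iterated
for line in raw.iter_lines():
    if not line.startswith("data: "):
        continue  # skip the empty keep-alive lines
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    print(chunk["choices"][0]["delta"].get("content", ""), end="")
```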

1 Like