What is the correct URL endpoint for making a POST request to the OpenAI API for code interpretation?

Codex was an old model. GPT isn’t very accurate at providing knowledge on its own API, since the API has changed a lot since training. Take a look at the GPT 3.5 guide, it’s pretty straightforward.

I am able to make requests to /v1/engines/text-curie-001/completions, but now I have one of two problems: either I get back an empty string for the text property (in other words, no interpretation of the code I sent), or the API tells me it does not accept an object with req.body.text and req.body.language, which I am pretty sure it can or should.

I have seen the data inside the response from text-curie-001: it is in data.choices[0].text. The explanation of the code I send is in there, but I cannot seem to get it sent back to my application, for the reasons above.

So here is an example of the data object I send to text-curie-001:

{
  "language": "javascript",
  "text": "const kilobyteFormatter = new Intl.NumberFormat('en', {\n  style: 'unit',\n  unit: 'kilobyte',\n  minimumFractionDigits: 2,\n  maximumFractionDigits: 2,\n});"
}

And here is what I believe I should be getting back as a response:

{
    "warning": "This model version is deprecated. Migrate before January 4, 2024 to avoid disruption of service. Learn more https://platform.openai.com/docs/deprecations",
    "id": "cmpl-7sFtACNWHnfg1dQbVwxnxBgNccD3O",
    "object": "text_completion",
    "created": 1693165144,
    "model": "text-curie-001",
    "choices": [
        {
            "text": "\n\nThis code creates a custom number format for use with the Intl.NumberFormat constructor. The style property sets the style of the format, and the unit property sets the unit of the format. The minimumFractionDigits and maximumFractionDigits properties set the minimum and maximum number of digits that the format will use, respectively.",
            "index": 0,
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 75,
        "completion_tokens": 70,
        "total_tokens": 145
    }
}

The standard completions endpoint and models are being shut down in mere months. You should adapt your application to the new completions endpoint if you want a base untrained model, or to gpt-3.5-turbo for prompts that need instruction following.
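To make the gpt-3.5-turbo suggestion concrete, here is a minimal sketch of building a chat-completions request body; the system/user prompt wording and the max_tokens value are just illustrative choices, not anything from the docs:

```javascript
// A request body for POST https://api.openai.com/v1/chat/completions.
// gpt-3.5-turbo takes a `messages` array instead of a single `prompt` string.
function buildChatCompletionBody(codeSnippet, language) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You explain code clearly and concisely." },
      { role: "user", content: `Explain this ${language} code:\n\n${codeSnippet}` },
    ],
    max_tokens: 256,
    temperature: 0.7,
  };
}

// The generated explanation comes back in choices[0].message.content.
const body = buildChatCompletionBody("const x = 1;", "javascript");
```

POST this JSON (with the usual Authorization: Bearer header) and read the reply from choices[0].message.content rather than choices[0].text.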

A legacy example, while you can still play with the smaller curie base model and its instruction-following version, text-curie-001:

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-curie-001",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
  }'

For talking about and writing code, your best option is gpt-4, although its quality has been reduced to the point where gpt-3.5-turbo is not far behind. Both require the chat completions endpoint and special message formatting.
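One concrete difference between the two endpoints is where the generated text lives in the response. A small sketch that handles either response shape (the sample objects here are invented for illustration):

```javascript
// The generated text sits in different places depending on the endpoint:
//   legacy /v1/completions  -> data.choices[0].text
//   /v1/chat/completions    -> data.choices[0].message.content
function extractText(data) {
  const choice = data.choices[0];
  return choice.message ? choice.message.content : choice.text;
}

// Works on both response shapes:
const legacy = { choices: [{ text: "legacy answer" }] };
const chat = { choices: [{ message: { role: "assistant", content: "chat answer" } }] };
```

This lets an Express handler stay the same while you switch between the old and new endpoints.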

Curie is not going to be good at interpretation. I’d suggest updating your code to use GPT-3.5.

Okay, so can you confirm that you get something back when you make a POST request to v1/engines/davinci/completions? I now get a 200 OK, but no interpretation comes back. My API request looks like this:

app.post("/api/explain", async (req, res) => {
  try {
    const { data } = await axios.post(
      process.env.OPEN_API_URL + "/v1/engines/davinci/completions",
      {
        prompt: req.body.text,
        max_tokens: 100,
        temperature: 0.7,
        top_p: 1.0,
        n: 1,
        stop: "\n",
      },
      {
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPEN_API_TOKEN}`,
        },
      }
    );

    res.send(data);
  } catch (error) {
    res.status(500).send(error.message);
  }
});

Let me ask a simpler question: which engine or model in the OpenAI API takes the following in a POST request:

{
  text: req.body.text,
  language: req.body.language,
}

I need the model or engine of the OpenAI API that can take a POST request with the above object and return an interpretation of req.body.text based on the language given in req.body.language.
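No model accepts { text, language } directly; the server has to fold both fields into a single prompt string (or chat message) before calling the API. A sketch of that translation, with the prompt wording being my own invention:

```javascript
// Turn the client's { text, language } object into the one `prompt`
// string the completions endpoint understands.
function toPrompt({ text, language }) {
  return `Explain the following ${language} code:\n\n${text}\n\nExplanation:`;
}

const prompt = toPrompt({ text: "fn main() {}", language: "rust" });
```

The language field only ever reaches the model as part of the prompt text, never as a separate request argument.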

Take a look at the API Reference; it’s really not that big and documents everything. Where’d you get that example structure?

Is the cited URL for interpreting a block of code such as Rust or JavaScript? That’s the URL I am looking for.

There is no Code Interpreter endpoint, you must write your own.


I have written a working API endpoint for text-curie-001, but the API seems not to like receiving an object with text and language, and I don’t think that’s right: it should be able to receive { req.body.text, req.body.language } and come back with the result in data.choices[0].text, but that does not seem to be happening for me. Anyone out there good at writing Express APIs?

The other confusing thing is that my Express API endpoint makes a request to an actual URL, but some documentation out there talks about doing const configuration = new Configuration(), and that just does not work for me; I get errors saying that Configuration is not a constructor.
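For context, the “Configuration is not a constructor” error typically means an openai@4.x package is installed while the sample code targets openai@3.x (where the client was built with new OpenAIApi(new Configuration({ apiKey }))). One way to sidestep the library version entirely is a plain HTTP call; this sketch assumes Node 18+ for the built-in fetch, and reuses the legacy completions request shape discussed in this thread:

```javascript
// Plain-HTTP fallback that does not depend on any openai package version.
// Assumes Node 18+ (global fetch). The prompt wording is illustrative.
async function explainCode(text, language, apiKey) {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "text-curie-001",
      prompt: `Explain this ${language} code:\n\n${text}`,
      max_tokens: 100,
    }),
  });
  const data = await res.json();
  return data.choices[0].text;
}
```

An Express handler can then just await explainCode(req.body.text, req.body.language, process.env.OPEN_API_TOKEN) and res.send the result.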

I don’t know what an “Express API” is, so no.

I can write API code that shows curie is not your ideal choice, though (even with lots of unseen prompt making it a coding assistant).



Did you get an answer to that question you posted back in March? Obviously not here, but somewhere else, I mean? I have the same issue. If you ask ChatGPT about that endpoint URL, it will tell you there is no davinci-codex. Let me know, as I am having the same issue.

There is no Code Interpreter endpoint, you must write your own if you want to run code.

I believe I have the correct endpoint for the engine text-curie-001 (/v1/engines/text-curie-001/completions), but I get nothing back to my application when I send it this object:

{
  text: req.body.text,
  language: req.body.language
}

In the terminal I just get this:

{
  object: 'text_completion',
  created: 1693160769,
  model: 'text-curie-001',
  choices: [ { text: '', index: 0, logprobs: null, finish_reason: 'stop' } ],
  usage: { prompt_tokens: 55, total_tokens: 55 }
}

When I use Postman, I get this:

{
  "error": {
    "message": "Unrecognized request arguments supplied: language, text",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

So I am thinking the issue has to be in how my POST request body is written, not the URL or the headers, but maybe this:

{
  prompt: req.body.text,
  max_tokens: 100,
  temperature: 0.7,
  top_p: 1.0,
  n: 1,
  stop: "\n",
},
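The “Unrecognized request arguments supplied: language, text” error in Postman suggests the client’s raw body was forwarded verbatim: only documented fields (model, prompt, max_tokens, …) may appear at the top level of a completions request. A sketch of the mapping, with my own prompt wording; it also leaves out stop: "\n", since stopping at the first newline can yield an empty choices[0].text when the model leads its answer with newlines:

```javascript
// Map the incoming { text, language } body onto fields the completions
// endpoint actually accepts; any extra top-level key triggers
// an invalid_request_error.
function toCompletionsBody(reqBody) {
  return {
    model: "text-curie-001",
    prompt: `Explain this ${reqBody.language} code:\n\n${reqBody.text}`,
    max_tokens: 100,
    temperature: 0.7,
    top_p: 1.0,
    n: 1,
    // Deliberately no `stop: "\n"`: curie often starts its answer with
    // newlines, so stopping on the first one returns an empty string.
  };
}

const payload = toCompletionsBody({ text: "let a = 1;", language: "javascript" });
```

The Express handler would POST this payload instead of req.body itself.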

Okay, I think this is a good topic, because I am very confused about which API to use, more specifically which URL to use if I want to make a POST request to OpenAI asking it to interpret a block of JavaScript code or whatever. Can anyone help? I asked ChatGPT itself and it gave me a URL endpoint with davinci-codex as the name of the engine; then, when that did not work, it later told me that davinci-codex does not exist. So why did it give that to me in the first place?

Then there is another article out there saying we are no longer using /v1/engines/{engine}/completions but instead just /v1/completions. Can someone help me identify the correct URL for a POST request for interpreting code?

Everyone is telling you and you’re ignoring them.

Stop asking ChatGPT. Consider it a reasoning engine, not a hand-holder for blind code creation, especially when you’re using constantly changing libraries.


Greetings, I was not ignoring anyone; I am just now catching up on these messages. With ChatGPT I was checking how best to put this together, since I was not getting the results I expected with the routing logic I had originally developed, and the documentation on API endpoints was not entirely clear to me. For example, I was expecting something like GET /v1/repositories, POST /v1/repositories/name/content, and so on, but of course documentation may and does differ from project to project.

I gathered from another answer that text-curie-001 will be deprecated in a few months, and I also gathered from a lot of reading I did today that it is indeed an ever-changing library. Thanks for the reprimand, sir. Anyway, with all this information I was able to make an informed decision about which direction to take the project, and I thank many of you for your input.


You’re asking the same questions. There is no API to interpret your code unless you mean to ask for a model that can help you with your coding.

The API reference that was linked is your source of truth. You can be confident that whatever you read there is true… mostly. More than anywhere else, and especially more than ChatGPT (not slamming it; it’s just not useful in this situation).

I hope you continue to build and ask questions. I just wanted to clarify that the message I responded to was redundant; all your questions had already been answered.


Just want to point out that I rustled up three or four posts from the OP in old/dead threads and tacked them all on here. Discourse should note it under the post, but it’s easy to miss.
