Who is still using Codex?

Yes they are discontinuing support in about 3 days

They recommend you switch to GPT 3.5 Turbo. You will have to swap to the new chat format of sending prompts when you use the new endpoint

1 Like

Ohh, the max_token for code-davinci is 8012 but for gpt 3.5 its 4096. Plus fine-tuning is also not an option.

WIll have to think because my prompt size is > 5000

oh wowā€¦ so, was I right or was I right? :upside_down_face:

Iā€™m really pissed off at OpenAI right nowā€¦

6 Likes

Yeah, you have an amazing crystal ball, @nunodonato You nailed it!

Do you think gpt-4 will be the model for GitHub Copilot now?

:slight_smile:

1 Like

Iā€™m honestly totally lost in understanding this move. I donā€™t see how forcing us to use ā€œchatā€ will work well for code completions.
AFAIK the ā€œinsertā€ mode was used by Copilot, it was great for code completions. Chat doesnā€™t have thatā€¦

Answering the question from the topicā€¦

Yes, I still use Codex because:

  1. Itā€™s free.
  2. When I was developing my product, Iā€™ve made it work for Codex and switching to ChatGPT is not a simple change like just changing the model because Iā€™ve put effort into finding out what works with Codex. I tested it with ChatGPT and it doesnā€™t work out of the box.
  3. Text-davinci-003 just doesnā€™t work for my use case. Gpt-turbo doesnā€™t work well for my use case too because itā€™s too much fine-tuned for the specific purpose / use case.

GPT-4 is the only hope now, but itā€™s expensive.

remember, there is no ā€œgpt-4ā€, only ā€œchatgpt-4ā€. that sweet sweet conversation when all you want is a few lines of code

2 Likes

In our early testing, it wasnā€™t too bad to get ChatGPT to just return code: Use ChatGPT instead of Codex for Code Generation | The Inner Join

I think the chat format can actually be really good for the kind of one-shot prompt you showed above, too, because you can send a user message/assistant message pair with the response and format youā€™d like before sending further user messages.

I do wish we had more notice, though, as weā€™re still using Codex in production and wouldnā€™t mind a longer timeline to get switched over!

1 Like

After rereading the full email on this from OpenAI, they may only be deprecating codex models via the public dev API.

However, it may still be possible a different codex model will continue for GitHub Copilot? @logankilpatrick will GitHub Copilot still use a codex version and not migrate to gpt-4 ?

:slight_smile:

3 Likes

Strange, I havenā€™t received this email yet given how this isnā€™t even a waitlist - which would make sense.

Here ya go @sps

On March 23rd, we will discontinue support for the Codex API. All customers will have to transition to a different model. Codex was initially introduced as a free limited beta in 2021, and has maintained that status to date. Given the advancements of our newest GPT-3.5 models for coding tasks, we will no longer be supporting Codex and encourage all customers to transition to GPT-3.5-Turbo.

About GPT-3.5-Turbo

GPT-3.5-Turbo is the most cost effective and performant model in the GPT-3.5 family. It can both do coding tasks while also being complemented with flexible natural language capabilities.

You can learn more through:

Models affected

The following models will be discontinued:

  • code-cushman:001
  • code-cushman:002
  • code-davinci:001
  • code-davinci:002

We understand this transition may be temporarily inconvenient, but we are confident it will allow us to increase our investment in our latest and most capable models.

ā€”The OpenAI team

3 Likes

Ask yourself why their latest models are all locked into the chat api when the only difference between that and the normal text completion endpoint is that we have less control over how it behaves.

I suspect their ultimate goal is to phase out text completion entirely so they can put more guardrails around what we can do with it in their never-ending pursuit of providing a safe and unusable product.

1 Like

Iā€™m slightly frustrated in not having the freedom to prompt without following a strict structure as well.

However, ChatML so far has done everything that Iā€™m looking for, and it also prevents prompt injections - which is huge. It also helps with formatting which is nice. I havenā€™t needed a stop sequence since using it.

I imagine theyā€™re dropping it simply because they donā€™t want to be supporting it alongside ChatML when ChatML ideally does it all better, and safer. Of course, being a part of the ā€œin-betweenā€ of something thatā€™s actively being developed will have some issues. All part of the ride Iā€™d say.

@semlar I fear the sameā€¦ and the fact that they are not even clear about their roadmaps regarding these models makes me very uncomfortable with developing with them (same with codex). Right now I have been fine-tuning curie models for a project, if they pull the plug on this ability in a few months, what happens?

Today, for the first time, I started to seriously look into alternatives and signed up for the Claude waiting list.

1 Like

Email notice today: "On March 23rd, we will discontinue support for the Codex API. All customers will have to transition to a different model. Codex was initially introduced as a free limited beta in 2021, and has maintained that status to date. Given the advancements of our newest GPT-3.5 models for coding tasks, we will no longer be supporting Codex and encourage all customers to transition to GPT-3.5-Turbo.

About GPT-3.5-Turbo

GPT-3.5-Turbo is the most cost effective and performant model in the GPT-3.5 family. It can both do coding tasks while also being complemented with flexible natural language capabilities."

I think youā€™re looking at this the wrong way.

This is similar to when text-embedding-ada-002 replaced 5 separate models because it performed as well, or better. I also use lesser models such as Ada for my own work. I would be blown away if they decided to completely remove them without some sort of way to transfer our training.

1 Like

Itā€™s best to follow what openai.com is compelled to do with whatever its investment partner, Microsoft does regarding your post.
Microsoft ā€”>github newsā€”>co-pilotā€”>Codex.

Personally Iā€™m appalled at the entire monopoly and am looking at NVidea.

Iā€™ve been using GPT-4 for coding since Friday, and I think I may revisit Codex.

GPT-4 is like a ā€œCoder Buddy,ā€ but you DO need to know what youā€™re doing.

It tried to get me to drop EVAL statements in my code last night until I pointed out security heh

That said, GPT-4 has been a super useful tool for me. You just have to know what information to give it and how to ask questions. Itā€™s really sped up my dev time, and Iā€™m not a real coder.

All that said, Iā€™m thinking of firing up Codex again because Iā€™ve been hearing good things. I donā€™t code a lot, but when I do, I like to get in the zone and get it done. GPT-4 has helped immensely with that.

ETAā€¦ and I just read more of the thread and realized Codex is going away? hahaā€¦

3 Likes

funny, I just hit the exact same roadblock last night. Tried to get it to write some code and it just refused. Even after I said it was just a local tool and there were no security issues.
Some HAL9000 sh*tā€¦

1 Like

Sam Altman tweeted an hour or so ago, that based on feedback, they have decided to keep supporting the Codex endpoint for ā€œresearchersā€.

There is no indication what ā€œresearchersā€ means at this time.

1 Like