Who is still using Codex?

Yeah, you have an amazing crystal ball, @nunodonato You nailed it!

Do you think gpt-4 will be the model for GitHub Copilot now?

:slight_smile:

I’m honestly lost trying to understand this move. I don’t see how forcing us to use “chat” will work well for code completions.
AFAIK Copilot used the “insert” mode, which was great for code completions. Chat doesn’t have that…
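For what it’s worth, you can roughly emulate insert mode through chat by sending the code before and after the gap and asking for only the missing piece. This is a hedged workaround, not an official replacement; the prompt wording and function name below are my own, illustrative choices.

```python
# Sketch: emulating "insert" (fill-in-the-middle) with a chat-style prompt.
# The system/user wording is illustrative, not an official recipe.

def fill_in_the_middle_prompt(prefix, suffix):
    """Build a chat message list asking the model to complete the code between two fragments."""
    return [
        {"role": "system",
         "content": "Complete the missing code between the two fragments. "
                    "Reply with the inserted code only, no explanation."},
        {"role": "user",
         "content": f"Code before the gap:\n{prefix}\n\nCode after the gap:\n{suffix}"},
    ]

msgs = fill_in_the_middle_prompt("def area(r):\n", "    return result\n")
```

The resulting list would then be passed as the `messages` argument of a chat completion request.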

Answering the question from the topic…

Yes, I still use Codex because:

  1. It’s free.
  2. When I was developing my product, I built it around Codex, and switching to ChatGPT is not as simple as swapping the model name, because I put real effort into figuring out what works with Codex. I tested it with ChatGPT and it doesn’t work out of the box.
  3. Text-davinci-003 just doesn’t work for my use case. Gpt-3.5-turbo doesn’t work well either, because it’s too heavily fine-tuned for one specific purpose / use case.

GPT-4 is the only hope now, but it’s expensive.

Remember, there is no “gpt-4”, only “chatgpt-4”: that sweet, sweet conversation when all you want is a few lines of code.

2 Likes

In our early testing, it wasn’t too bad to get ChatGPT to just return code: Use ChatGPT instead of Codex for Code Generation | The Inner Join

I think the chat format can actually be really good for the kind of one-shot prompt you showed above, too, because you can send a user message/assistant message pair with the response and format you’d like before sending further user messages.
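The seeded-example trick described above can be sketched like this: put one worked user/assistant pair in front of the real request so the model sees the desired output format before answering. The prompts and model name below are illustrative, and the final call is shown only as a comment since it needs an API key.

```python
# Sketch of the few-shot trick: seed the conversation with one worked
# user/assistant example pair, then append the real request.
# All prompt strings here are illustrative.

def build_messages(example_prompt, example_completion, real_prompt):
    """Build a chat-completion message list with one worked example in front."""
    return [
        {"role": "system",
         "content": "You are a code generator. Reply with code only, no prose."},
        {"role": "user", "content": example_prompt},           # the example request...
        {"role": "assistant", "content": example_completion},  # ...and the exact reply format we want
        {"role": "user", "content": real_prompt},              # the actual request
    ]

messages = build_messages(
    "Write a Python function that doubles a number.",
    "def double(n):\n    return n * 2",
    "Write a Python function that squares a number.",
)

# The list would then be sent with something like:
# client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
```

Because the assistant turn is supplied by you, the model tends to mirror its shape (code only, no surrounding chatter) in its real answer.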

I do wish we had more notice, though, as we’re still using Codex in production and wouldn’t mind a longer timeline to get switched over!

1 Like

After rereading the full email on this from OpenAI, they may only be deprecating codex models via the public dev API.

However, it may still be possible a different codex model will continue for GitHub Copilot? @logankilpatrick will GitHub Copilot still use a codex version and not migrate to gpt-4 ?

:slight_smile:

2 Likes

Strange, I haven’t received this email yet, even though this isn’t a waitlisted feature, so you’d expect everyone to get it.

Here ya go @sps

On March 23rd, we will discontinue support for the Codex API. All customers will have to transition to a different model. Codex was initially introduced as a free limited beta in 2021, and has maintained that status to date. Given the advancements of our newest GPT-3.5 models for coding tasks, we will no longer be supporting Codex and encourage all customers to transition to GPT-3.5-Turbo.

About GPT-3.5-Turbo

GPT-3.5-Turbo is the most cost effective and performant model in the GPT-3.5 family. It can both do coding tasks while also being complemented with flexible natural language capabilities.

You can learn more through:

Models affected

The following models will be discontinued:

  • code-cushman-001
  • code-cushman-002
  • code-davinci-001
  • code-davinci-002

We understand this transition may be temporarily inconvenient, but we are confident it will allow us to increase our investment in our latest and most capable models.

—The OpenAI team

2 Likes

Ask yourself why their latest models are all locked into the chat api when the only difference between that and the normal text completion endpoint is that we have less control over how it behaves.

I suspect their ultimate goal is to phase out text completion entirely so they can put more guardrails around what we can do with it in their never-ending pursuit of providing a safe and unusable product.

1 Like

I’m slightly frustrated in not having the freedom to prompt without following a strict structure as well.

However, ChatML has so far done everything I’m looking for, and it also prevents prompt injections, which is huge. It helps with formatting too, which is nice. I haven’t needed a stop sequence since switching to it.

I imagine they’re dropping it simply because they don’t want to support it alongside ChatML when ChatML ideally does it all better and more safely. Of course, being part of the “in-between” of something that’s actively being developed will have some issues. All part of the ride, I’d say.

@semlar I fear the same… and the fact that they are not even clear about their roadmap for these models makes me very uncomfortable building on them (same as with Codex). Right now I have been fine-tuning curie models for a project; if they pull the plug on that ability in a few months, what happens?

Today, for the first time, I started to seriously look into alternatives and signed up for the Claude waiting list.

1 Like

Email notice today, quoting the same deprecation announcement shared earlier in the thread.

I think you’re looking at this the wrong way.

This is similar to when text-embedding-ada-002 replaced 5 separate models because it performed as well, or better. I also use lesser models such as Ada for my own work. I would be blown away if they decided to completely remove them without some sort of way to transfer our training.

1 Like

It’s best to follow whatever OpenAI’s investment partner, Microsoft, does regarding your question:
Microsoft → GitHub news → Copilot → Codex.

Personally I’m appalled at the entire monopoly and am looking at NVIDIA.

I’ve been using GPT-4 for coding since Friday, and I think I may revisit Codex.

GPT-4 is like a “Coder Buddy,” but you DO need to know what you’re doing.

It tried to get me to drop eval statements into my code last night, until I pointed out the security implications heh

That said, GPT-4 has been a super useful tool for me. You just have to know what information to give it and how to ask questions. It’s really sped up my dev time, and I’m not a real coder.

All that said, I’m thinking of firing up Codex again because I’ve been hearing good things. I don’t code a lot, but when I do, I like to get in the zone and get it done. GPT-4 has helped immensely with that.

ETA… and I just read more of the thread and realized Codex is going away? haha…

3 Likes

Funny, I hit the exact same roadblock last night. I tried to get it to write some code and it just refused, even after I said it was just a local tool and there were no security issues.
Some HAL 9000 sh*t…

1 Like

Sam Altman tweeted an hour or so ago that, based on feedback, they have decided to keep supporting the Codex endpoint for “researchers”.

There is no indication what “researchers” means at this time.

1 Like

Sorry, but I don’t understand the FUD in this thread.

For those who are interested in transitioning from Codex to GPT-3.5, it did not take me much effort to recreate my simple code-completion attempts, as indicated in this thread.

Did OpenAI shut down Codex on short notice? Yes. Could they have done a better job on the transition docs before notifying users? IMO, yes. Does the change justify bitter FUD posting in this forum? IMO, no.

1 Like

Now we know: GitHub Copilot is going with chat as well :slight_smile:

2 Likes

Apparently, as of this morning, the plug has been pulled. :frowning: