...
-- Table = "store", columns = [store_id integer, manager_staff_id smallint, address_id smallint, last_update timestamp without time zone]
-- be sure to properly format and quote identifiers.
###
-- Instruction: list all Spanish cities
-- Query: select all cities that are from the country spain
SELECT city FROM city WHERE country_id = (SELECT country_id FROM country WHERE country = 'Spain');
###
-- Instruction: What are the names of all the action films?
-- Query: select all films that are in the action category
SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_category WHERE category_id = (SELECT category_id FROM category WHERE name = 'Action'));
My prompt ended after the second “Query:”, so I let the model work out the better English description before the actual query.
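To make the technique concrete, here is a minimal sketch (not from the original post) of how such a completion-style prompt might be assembled and sent. The schema header and examples are abbreviated, and the `openai.Completion.create` call (legacy v0.x completions API, model name illustrative) is commented out:

```python
# Abbreviated version of the few-shot prompt shown above; "###" separates examples.
FEW_SHOT = """\
-- Table = "store", columns = [store_id integer, manager_staff_id smallint, address_id smallint, last_update timestamp without time zone]
-- be sure to properly format and quote identifiers.
###
-- Instruction: list all Spanish cities
-- Query: select all cities that are from the country spain
SELECT city FROM city WHERE country_id = (SELECT country_id FROM country WHERE country = 'Spain');
###
"""

def build_prompt(instruction: str) -> str:
    # End the prompt right after "-- Query:" so the model first writes the
    # refined English description, then the SQL itself.
    return FEW_SHOT + f"-- Instruction: {instruction}\n-- Query:"

# Illustrative call (requires an API key; stop at the next example separator):
# import openai
# completion = openai.Completion.create(
#     model="code-davinci-002",
#     prompt=build_prompt("What are the names of all the action films?"),
#     stop=["###"],
#     max_tokens=200,
# )
```

The `stop=["###"]` sequence keeps the model from hallucinating further examples past its own answer.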
I’m honestly totally lost trying to understand this move. I don’t see how forcing us to use “chat” will work well for code completions.
AFAIK the “insert” mode was what Copilot used, and it was great for code completions. Chat doesn’t have that…
When I was developing my product, I made it work for Codex, and switching to ChatGPT is not as simple as just swapping the model name, because I put real effort into finding out what works with Codex. I tested it with ChatGPT and it doesn’t work out of the box.
text-davinci-003 just doesn’t work for my use case. gpt-3.5-turbo doesn’t work well for my use case either, because it’s too heavily fine-tuned for one specific purpose / use case.
I think the chat format can actually be really good for the kind of one-shot prompt you showed above, too, because you can send a user message/assistant message pair with the response and format you’d like before sending further user messages.
I do wish we had more notice, though, as we’re still using Codex in production and wouldn’t mind a longer timeline to get switched over!
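The pattern described above can be sketched as follows (my illustration, not code from the post): seed the conversation with one user/assistant pair that demonstrates the desired response format, then append the real request. The system prompt wording and model name are assumptions.

```python
def few_shot_messages(instruction: str) -> list:
    """Build a chat-format few-shot message list for the SQL task above."""
    return [
        {"role": "system",
         "content": "You translate instructions into SQL for the film-rental schema."},
        # One worked example pair establishes the output format.
        {"role": "user", "content": "-- Instruction: list all Spanish cities"},
        {"role": "assistant", "content": (
            "-- Query: select all cities that are from the country spain\n"
            "SELECT city FROM city WHERE country_id = "
            "(SELECT country_id FROM country WHERE country = 'Spain');"
        )},
        # The actual request goes last.
        {"role": "user", "content": f"-- Instruction: {instruction}"},
    ]

# Illustrative call (legacy v0.x client):
# import openai
# reply = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=few_shot_messages("What are the names of all the action films?"),
# )
```

Because the example answer arrives as an assistant turn, the model tends to mirror its exact shape in subsequent replies.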
After rereading the full email from OpenAI on this, they may only be deprecating the Codex models on the public dev API.
However, it may still be possible that a different Codex model will continue to power GitHub Copilot? @logankilpatrick will GitHub Copilot still use a Codex version and not migrate to GPT-4?
On March 23rd, we will discontinue support for the Codex API. All customers will have to transition to a different model. Codex was initially introduced as a free limited beta in 2021, and has maintained that status to date. Given the advancements of our newest GPT-3.5 models for coding tasks, we will no longer be supporting Codex and encourage all customers to transition to GPT-3.5-Turbo.
About GPT-3.5-Turbo
GPT-3.5-Turbo is the most cost effective and performant model in the GPT-3.5 family. It can both do coding tasks while also being complemented with flexible natural language capabilities.
We understand this transition may be temporarily inconvenient, but we are confident it will allow us to increase our investment in our latest and most capable models.
Ask yourself why their latest models are all locked into the chat API, when the only difference between it and the normal text completion endpoint is that we have less control over how the model behaves.
I suspect their ultimate goal is to phase out text completion entirely so they can put more guardrails around what we can do with it, in their never-ending pursuit of providing a safe and unusable product.
I’m slightly frustrated at not having the freedom to prompt without following a strict structure as well.
However, ChatML has so far done everything that I’m looking for, and it also prevents prompt injections, which is huge. It also helps with formatting, which is nice. I haven’t needed a stop sequence since using it.
I imagine they’re dropping Codex simply because they don’t want to support it alongside ChatML when ChatML ideally does it all better, and more safely. Of course, being in the “in-between” of something that’s actively being developed will have some issues. All part of the ride, I’d say.
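For readers wondering why ChatML removes the need for stop sequences and resists injection, here is an illustrative sketch of the raw framing the chat endpoint applies behind the scenes (token names follow OpenAI’s published ChatML examples; the rendering function is my own simplification):

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts into raw ChatML text."""
    parts = []
    for m in messages:
        # <|im_start|> and <|im_end|> are special tokens, so text a user
        # pastes into their message cannot open or close a turn -- this is
        # what makes the format resistant to prompt injection.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # The assistant turn is left open; the model stops at <|im_end|> on its
    # own, so no custom stop sequence is needed.
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)
```

This is only a mental model of the server-side behavior, not something you send yourself when using the chat endpoint.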
@semlar I fear the same… and the fact that they are not even clear about their roadmap for these models makes me very uncomfortable developing with them (same with Codex). Right now I have been fine-tuning curie models for a project; if they pull the plug on that ability in a few months, what happens?
Today, for the first time, I started to seriously look into alternatives and signed up for the Claude waiting list.