Here we gooooooo

More words

Some extra words more and extra good one try lol

Next more words

This is getting silly

2 Likes

Aaand there you go! :innocent: ($500 API credits for all attendees, gifted by an AI)

Does “JSON Mode” limit which tokens the language model can output (during nucleus sampling), similar to the LMQL programming language (lmql.ai)?
Or does it just validate the output? Is it possible that the model generates an answer (e.g. some gibberish) that cannot successfully be converted into JSON?
Or how does it work? Thanks!

1 Like

That was a great event! I wish I could have been there! :slight_smile: I haven’t seen gpt-4-turbo in my Playground yet; I think it will take a while…

So did anyone who runs a chat-based bot just get wiped out?

1 Like

Do we know if the 128K context is available in ChatGPT, or only through GPT-4 Turbo?

Also, it was said to ask in the forum, so I’m catching up here: do we need to apply for the “OpenAI-hosted tools API” or the Code Interpreter API beta, or is it ready to beta test out of the box?

Whoa what a day! Sam wasn’t kidding when he said it feels like Christmas Eve :slight_smile:

What’s the timeline for the rollout of custom GPTs? I’m dying to try it and explore this.

OOoh pretty damn cool. Anyone interested in working on a sustainability/travel startup? We’re early stage, and have a travel API agreement in place. Will be for European travel initially. And, of course, we’re using ChatGPT’s API! :slight_smile: You will be in the first 10 people, and joining me in the UK/Berlin/Málaga, Spain. Remote is possible. Sign up for the wait list, and we’ll go from there - see username :wink:

I think, if I recall correctly, Sam said that Turbo will be rolled out to ChatGPT (Plus).

1 Like

Omg, I’m so hyped about all the new stuff. Christmas came early this year! :sparkles:

Wow, amazing keynote!

How do we, as developers, get access to start building and testing GPTs?

Best,

1 Like

It is rolling out to users of Plus or Enterprise.
https://chat.openai.com/create

2 Likes


Hopefully soon, but exciting!

2 Likes

What exactly was the thing about “if you want to train a custom model (and have deep pockets), call us and we’ll work something out”?

The game should be fair for all customers, right? It would be nice to hear clarification from OpenAI on this. Specifically, that they’re not going to sell bigBoys :tm: special custom models :tm:

So, he said they would work with some clients (with deep, deep pockets) to get their data into every layer of the architecture, basically. This will tie up a lot of resources at OpenAI, so until they have a streamlined process for it, it will be expensive and they won’t have enough people to handle many clients at once.

Cookbook has been updated: https://cookbook.openai.com/

1 Like

It seems like “GPTs” is simply a system prompt wizard. It asks you questions about how the GPT should interpret a typical input (complete with some extra data). I could see it just generating a huge system prompt from this information and stapling it to the beginning of any input from the user. Based on its responses, I can’t seem to get much higher quality out of it than I could get from a good system prompt. Is there more to it?

1 Like

These new tools are amazing; they will take the vast majority of the hard work out of making things work smoothly.

  • But as someone who’s spent the past 3 months building systems to do these very same things… I find myself sorta reeling from this afternoon’s livestream.

Is this the new state of things, do you think?

  • Spend hundreds of hours working on something…
  • To then abandon all your work for the newest thing?
  • Then again next year?
  • And again… 6 months from then?

I guess this is something that I’ll just have to come to grips with. The new normal will be to spend all one’s time and effort on something, only to have it nullified at some point in the very near future.

  • Good time to learn detachment I guess :rofl:

I find myself looking forward, with my developer hat on, and I have no idea how to operate like that.

  • If there were some “openness” here, and I had known that in 3 months OpenAI was going to pull the rug… maybe I wouldn’t have wasted my time?

Makes me wonder if I should spend my time working on anything with these new tools - because it seems like it’s just going to go to waste, like it has for so many others (i.e. PDF chatting).


Today has been quite the whirlwind… super exciting and amazing to hear about all the new advancements, only to have the reality of things sink in once I started looking at the documentation. I’m super excited, don’t get me wrong; things look like they’ll progress much faster.

  • I guess I just wish I would have known not to waste my time, is all. I’m both excited and depressed.

#CognitiveDissonance lol

Anyone else in the same boat?

1 Like

We’re excited to share major new features and updates that were announced at our first conference, OpenAI DevDay. You can read the full details on our blog, watch the keynote recording, or check out the new @OpenAIDevs Twitter, but here’s a brief summary:

New GPT-4 Turbo:

  • We announced GPT-4 Turbo, our most advanced model. It offers a 128K context window and knowledge of world events up to April 2023.
  • We’ve reduced pricing for GPT-4 Turbo considerably: input tokens are now priced at $0.01/1K and output tokens at $0.03/1K, making it 3x and 2x cheaper respectively compared to the previous GPT-4 pricing.
  • We’ve improved function calling, including the ability to call multiple functions in a single message, to always return valid JSON with JSON mode, and improved accuracy in returning the right function parameters.
  • Model outputs are more deterministic with our new reproducible outputs beta feature.
  • You can access GPT-4 Turbo by passing gpt-4-1106-preview in the API, with a stable production-ready model release planned later this year.
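The JSON mode and reproducible-outputs bullets above can be combined in one Chat Completions request. This is a minimal sketch, assuming the openai>=1.0 Python SDK; the prompt text and seed value are illustrative, not from the announcement:

```python
import json
import os

def build_request(user_prompt: str) -> dict:
    """Build an illustrative gpt-4-1106-preview request using JSON mode and a seed."""
    return {
        "model": "gpt-4-1106-preview",
        "response_format": {"type": "json_object"},  # constrain output to valid JSON
        "seed": 42,                                  # reproducible outputs (beta)
        "messages": [
            # JSON mode expects the word "JSON" to appear somewhere in the prompt
            {"role": "system", "content": "Reply in JSON with keys 'answer' and 'confidence'."},
            {"role": "user", "content": user_prompt},
        ],
    }

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # requires the openai>=1.0 Python SDK

    client = OpenAI()
    resp = client.chat.completions.create(**build_request("What is 2 + 2?"))
    print(json.loads(resp.choices[0].message.content))
```

With JSON mode on, the response body should always parse with `json.loads`; with the same `seed` and parameters, outputs should be mostly deterministic (the feature is in beta, so it is best-effort).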

Updated GPT-3.5 Turbo:

  • The new gpt-3.5-turbo-1106 supports a 16K context window by default, and that 4x longer context is available at lower prices: $0.001/1K input, $0.002/1K output. Fine-tuning of this 16K model is available.
  • Fine-tuned GPT-3.5 is much cheaper to use, with input token prices decreasing by 75% to $0.003/1K and output token prices by 62% to $0.006/1K.
  • gpt-3.5-turbo-1106 joins GPT-4 Turbo with improved function calling and reproducible outputs.
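As a back-of-the-envelope check on the fine-tuned GPT-3.5 prices quoted above ($0.003/1K input, $0.006/1K output), the per-call cost works out like this; the token counts in the example are made-up numbers:

```python
# New fine-tuned GPT-3.5 Turbo prices from the announcement, in USD per 1K tokens.
FT_INPUT_PER_1K = 0.003
FT_OUTPUT_PER_1K = 0.006

def fine_tuned_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one fine-tuned GPT-3.5 call at the new prices."""
    return (input_tokens / 1000) * FT_INPUT_PER_1K + (output_tokens / 1000) * FT_OUTPUT_PER_1K

# e.g. a 2,000-token prompt with a 500-token reply:
print(round(fine_tuned_cost(2000, 500), 4))  # → 0.009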

Assistants API:

  • We’re excited to introduce the beta of our new Assistants API, designed to help you build agent-like experiences in your applications effortlessly. Use cases include a natural-language data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, a smart visual canvas… the list goes on.
  • This API enables the creation of purpose-built AI assistants that can follow specific instructions, leverage additional knowledge, and interact with models and tools to perform various tasks.
  • Assistants have persistent Threads for developers to hand off thread state management to OpenAI and work around context window constraints. They can also use new tools like Code Interpreter, Retrieval, and Function Calling.
  • Our platform Playground allows you to play with this new API without writing code.
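The assistant/thread/run flow described above looks roughly like the sketch below, assuming the beta endpoints in the openai Python SDK; the assistant name, instructions, and question are illustrative:

```python
import os
import time

def run_assistant_demo() -> str:
    """Create an assistant with Code Interpreter, run one thread, return the reply."""
    from openai import OpenAI  # requires an openai SDK version with the beta Assistants API

    client = OpenAI()
    assistant = client.beta.assistants.create(
        name="Math Tutor",                     # illustrative name
        instructions="You are a personal math tutor. Use code to check your answers.",
        tools=[{"type": "code_interpreter"}],  # one of the new OpenAI-hosted tools
        model="gpt-4-1106-preview",
    )
    thread = client.beta.threads.create()      # OpenAI persists thread state for you
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content="Solve 3x + 11 = 14."
    )
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
    while run.status in ("queued", "in_progress"):  # poll until the run finishes
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value   # newest message first

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(run_assistant_demo())
```

Note the division of labor: the thread holds the conversation state server-side, so the client never re-sends history or manages the context window itself.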

Multimodal capabilities:

  • GPT-4 Turbo now supports visual inputs in the Chat Completions API, enabling use cases like caption generation and visual analysis. You can access the vision features by using the gpt-4-vision-preview model. This vision capability will be integrated into the production-ready version of GPT-4 Turbo when it comes out of preview later this year.
  • You can also integrate DALL·E 3 for image generation into your applications via the Image generation API.
  • We released text-to-speech capabilities through the newly introduced TTS model, which reads text aloud in one of six natural-sounding voices.
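For the vision bullet above, the Chat Completions message `content` becomes a list mixing text parts and image parts. A minimal sketch, assuming the openai>=1.0 Python SDK; the image URL is a placeholder:

```python
import os

def build_vision_request(image_url: str) -> dict:
    """Build an illustrative gpt-4-vision-preview request mixing text and an image."""
    return {
        "model": "gpt-4-vision-preview",
        "max_tokens": 300,  # vision preview benefits from an explicit output budget
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        **build_vision_request("https://example.com/photo.jpg")  # placeholder URL
    )
    print(resp.choices[0].message.content)
```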

Customizable GPTs in ChatGPT:

  • We launched a new feature called GPTs. GPTs combine instructions, data, and capabilities into a customized version of ChatGPT.
  • In addition to the capabilities built by OpenAI, such as DALL·E or Advanced Data Analysis, GPTs can call developer-defined actions as well. GPTs let developers control a larger portion of the experience. We purposefully architected plugins and actions very similarly, and it takes only a few minutes to turn an existing plugin into an action. Read the docs for details.

We’re excited to see how these updates help open up new avenues for leveraging AI in your projects.

—The OpenAI team

2 Likes