OpenAI DevDay 2023: DevDay Discussion!

Whoa what a day! Sam wasn’t kidding when he said it feels like Christmas Eve :slight_smile:

What’s the timeline for the rollout of custom GPTs? I’m dying to try it and explore this.

OOoh, pretty damn cool. Anyone interested in working on a sustainability/travel startup? We’re early stage and have a travel API agreement in place. It will be for European travel initially. And, of course, we’re using ChatGPT’s API! :slight_smile: You’ll be among the first 10 people, joining me in the UK/Berlin/Málaga, Spain. Remote is possible. Sign up for the waitlist, and we’ll go from there - see username :wink:

I think, if I recall correctly, Sam said that Turbo will be rolled out to ChatGPT (Plus).

1 Like

Omg, I’m so hyped about all the new stuff. Christmas came early this year! :sparkles:

Wow, amazing keynote!

How do we, as developers, get access to start building and testing GPTs?

Best,

2 Likes

It is rolling out to users of Plus or Enterprise.
https://chat.openai.com/create

3 Likes


Hopefully soon, but exciting!

3 Likes

What exactly was the thing about “if you want to train a custom model (and have deep pockets), call us and we’ll work something out”?

The game should be fair for all customers, right? It would be nice to hear a clarification from OpenAI on this - specifically, that they’re not going to sell bigBoys :tm: special custom models :tm:

1 Like

So, he said they would work with some clients (with deep, deep pockets) to get their data into every layer of the architecture, basically. This will tie up a lot of resources at OpenAI, so until they have a streamlined process for this, it will be expensive and they won’t have enough people to handle many clients at once.

1 Like

Cookbook has been updated: https://cookbook.openai.com/

2 Likes

It seems like “GPTs” is simply a system prompt wizard. It asks you questions about how the GPT should interpret a typical input (complete with some extra data). I could see it just generating a huge system prompt from this information and stapling it to the beginning of any input from the user. Based on its responses, I can’t seem to get much higher quality out of it than I could get from a good system prompt. Is there more to it?
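For reference, here’s roughly what that equivalence looks like via the API - a minimal sketch (openai Python library, v1.x) that staples a builder-style system prompt onto the user’s message. The instructions string is a made-up example:

```python
# A minimal sketch of the "big system prompt" theory above, using the
# openai Python library (v1.x). The instructions text is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

builder_style_instructions = (
    "You are 'Trip Sage', a travel-planning assistant. "
    "Always ask for dates and budget before suggesting itineraries, "
    "and answer in a friendly, concise tone."
)

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": builder_style_instructions},
        {"role": "user", "content": "Plan me a weekend in Lisbon."},
    ],
)
print(response.choices[0].message.content)
```

If there is more to it, it’s presumably the built-in tools (retrieval over uploaded files, actions) layered on top of that prompt.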

1 Like

These new tools are amazing; they will take the vast majority of the hard work out of making things work smoothly.

  • But as someone who’s spent the past 3 months building systems to do these very same things… I find myself sorta reeling from this afternoon’s livestream.

Is this the new state of things you think?

  • Spend hundreds of hours working on something…
  • To then abandon all your work for the newest thing?
  • Then again next year?
  • And again… 6 months from then?

I guess this is something that I’ll just have to come to grips with. The new normal will be to spend all one’s time and effort on something, only to have it nullified at some point in the very near future.

  • Good time to learn detachment I guess :rofl:

I find myself looking forward, with my developer hat on, and I have no idea how to operate like that.

  • If there were some “openness” here, and I knew that in 3 months OpenAI was going to pull the rug… maybe I wouldn’t have wasted my time?

Makes me wonder if I should spend my time working on anything with these new tools - because it seems like it’s just going to go to waste, like it has for so many others (i.e. PDF chatting).


Today has been quite the whirlwind… super exciting, and amazing to hear about all the new advancements, only to have the reality of things sink in once I started looking at the documentation. I’m super excited, don’t get me wrong - things look like they’ll progress much faster.

  • I guess I just wish I would have known not to waste my time, is all. I’m both excited and depressed.

#CognitiveDissonance lol

Anyone else in the same boat?

1 Like

We’re excited to share major new features and updates that were announced at our first conference, OpenAI DevDay. You can read the full details on our blog, watch the keynote recording, or check out the new @OpenAIDevs Twitter, but here’s a brief summary:

New GPT-4 Turbo:

  • We announced GPT-4 Turbo, our most advanced model. It offers a 128K context window and knowledge of world events up to April 2023.
  • We’ve reduced pricing for GPT-4 Turbo considerably: input tokens are now priced at $0.01/1K and output tokens at $0.03/1K, making it 3x and 2x cheaper respectively compared to the previous GPT-4 pricing.
  • We’ve improved function calling, including the ability to call multiple functions in a single message, to always return valid functions with JSON mode, and improved accuracy on returning the right function parameters (see the sketch after this list).
  • Model outputs are more deterministic with our new reproducible outputs beta feature.
  • You can access GPT-4 Turbo by passing gpt-4-1106-preview in the API, with a stable production-ready model release planned later this year.
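For the curious, here’s a minimal sketch of parallel function calling plus the new seed parameter, using the openai Python library (v1.x); the get_weather schema is a hypothetical example:

```python
# Parallel function calling and reproducible outputs (beta) with
# gpt-4-1106-preview. The get_weather tool is hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Compare the weather in Paris and Tokyo."}],
    tools=tools,
    seed=42,  # reproducible-outputs beta: same seed + params => (mostly) same output
)

# One assistant message can now request several tool calls at once.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

JSON mode is a separate switch: pass response_format={"type": "json_object"} (and mention JSON somewhere in your prompt) to guarantee syntactically valid JSON output.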

Updated GPT-3.5 Turbo:

  • The new gpt-3.5-turbo-1106 supports 16K context by default, and that 4x-longer context is available at lower prices: $0.001/1K input, $0.002/1K output. Fine-tuning of this 16K model is available (see the sketch after this list).
  • Fine-tuned GPT-3.5 is much cheaper to use: input token prices have dropped by 75% to $0.003/1K and output token prices by 62% to $0.006/1K.
  • gpt-3.5-turbo-1106 joins GPT-4 Turbo with improved function calling and reproducible outputs.
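A minimal sketch of kicking off a fine-tune of the new 16K model (openai Python library, v1.x); train.jsonl is a hypothetical file of chat-formatted examples:

```python
# Upload training data and start a fine-tuning job on the 16K model.
# The file name is hypothetical.
from openai import OpenAI

client = OpenAI()

training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",
)
print(job.id, job.status)
```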

Assistants API:

  • We’re excited to introduce the beta of our new Assistants API, designed to help you build agent-like experiences in your applications effortlessly. Use cases range from a natural-language data analysis app to a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, or a smart visual canvas—the list goes on.
  • This API enables the creation of purpose-built AI assistants that can follow specific instructions, leverage additional knowledge, and interact with models and tools to perform various tasks.
  • Assistants have persistent Threads, letting developers hand off thread state management to OpenAI and work around context window constraints. They can also use new tools like Code Interpreter, Retrieval, and Function Calling (see the sketch after this list).
  • Our platform Playground allows you to play with this new API without writing code.
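A minimal sketch of the beta flow (openai Python library, v1.x) - create an assistant, start a persistent Thread, add a message, run it, and poll for completion. All names and instructions are illustrative:

```python
# Assistants API beta: assistant -> thread -> message -> run -> poll.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Data Helper",  # hypothetical assistant
    instructions="You analyze data and explain the results simply.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()  # OpenAI manages the thread state
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is 137 * 24? Show your work.",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages come back newest-first.
for msg in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(msg.role, msg.content[0].text.value)
```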

Multimodal capabilities:

  • GPT-4 Turbo now supports visual inputs in the Chat Completions API, enabling use cases like caption generation and visual analysis. You can access the vision features by using the gpt-4-vision-preview model. This vision capability will be integrated into the production-ready version of GPT-4 Turbo when it comes out of preview later this year.
  • You can also integrate DALL·E 3 for image generation into your applications via the Image generation API.
  • We released text-to-speech capabilities through the newly introduced TTS model, which will read text for you using one of six natural-sounding voices (a combined sketch of all three capabilities follows).
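A combined minimal sketch of all three endpoints (openai Python library, v1.x); the image URL and file names are hypothetical:

```python
# Vision, DALL·E 3, and text-to-speech in one place.
from openai import OpenAI

client = OpenAI()

# 1. Vision: pass an image alongside text in Chat Completions.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Write a caption for this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=200,  # the vision preview defaults to a low output limit
)
print(vision.choices[0].message.content)

# 2. DALL·E 3 image generation.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor skyline of Lisbon at dusk",
    size="1024x1024",
)
print(image.data[0].url)

# 3. Text-to-speech with one of the six voices.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Christmas came early this year.",
)
speech.stream_to_file("greeting.mp3")
```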

Customizable GPTs in ChatGPT:

  • We launched a new feature called GPTs. GPTs combine instructions, data, and capabilities into a customized version of ChatGPT.
  • In addition to the capabilities built by OpenAI such as DALL·E or Advanced Data Analysis, GPTs can call developer-defined actions as well. GPTs let developers control a larger portion of the experience. We purposefully architected plugins and actions very similarly, and it takes only a few minutes to turn an existing plugin into an action. Read the docs for details.

We’re excited to see how these updates help open up new avenues for leveraging AI in your projects.

—The OpenAI team

2 Likes

I got the email too. I guess it was sent out to everyone with an account.

3 Likes

Y’all, I seriously can’t contain my excitement - this is so COOOL!
Question though: will there be integration between the GPT Store’s bots and bots made directly with the API? Will we get to place bot projects we build via the API on the GPT Store? Furthermore, will it be possible to build something on the GPT Store with functionality equal to what can be built via the Python API? I’m sure libraries might make that an issue, but considering the implications (and the lines beginning to blur here), I’m deeply curious to know if that’s something OpenAI is looking at.

1 Like

I’m hearing some devs were being walked through a process to convert from plugin to GPT … I don’t have a lot of details, but I’ve also heard plugins will be around for a while longer… There are a lot of problems with plugins, though, so I imagine we’re headed toward all “GPTs”…

1 Like

Honest to god, I forgot about plugins anyway (because I haven’t found any that are outstanding apart from the tools OpenAI already built), so that makes sense. I was thinking more about what limitations a GPT made through this new Canva thing would have compared to building an agent-style tool/app via Python using the API.
I guess what I would love to see is something like our own app store, where both could integrate well. Have no coding experience but still want to build a cool agent? Use the Canva process. Have something that utilizes agents in a unique way beyond simple bot interfacing? You could put that up in the same space too! I’m hoping that’s where this is leading, at least! That way everyone gets to showcase their work (and more easily adapt to changes in what they’re building when OpenAI throws us a curveball like this, reducing friction to build new tools and capabilities at an exponential pace).

1 Like

When are the 128K context and thread IDs going to be released? How can we be eligible to test them first?
We are building an interesting chatbot and related products.

Btw, exciting news.


I’m posting again because there is more to be said from attendees (hint) about the other experiences they had at the event!

Were the breakout discussions and round tables just a guy at an “API” booth answering questions while attendees milled about?