First OpenAI DevDay Summarised

What are you rooting for out of all the announcements? What features are you jumping on to build with?


We’re excited to share major new features and updates that were announced at our first conference, OpenAI DevDay. You can read the full details on our blog, watch the keynote recording, or check out the new @OpenAIDevs Twitter, but here’s a brief summary:

New GPT-4 Turbo:

  • We announced GPT-4 Turbo, our most advanced model. It offers a 128K context window and knowledge of world events up to April 2023.
  • We’ve reduced pricing for GPT-4 Turbo considerably: input tokens are now priced at $0.01/1K and output tokens at $0.03/1K, making it 3x and 2x cheaper respectively compared to the previous GPT-4 pricing.
  • We’ve improved function calling, including the ability to call multiple functions in a single message, a new JSON mode that ensures the model returns valid JSON, and improved accuracy in returning the right function parameters.
  • Model outputs are more deterministic with our new reproducible outputs beta feature.
  • You can access GPT-4 Turbo by passing gpt-4-1106-preview in the API, with a stable production-ready model release planned later this year; a brief usage sketch follows after this list.
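
As a rough illustration, here’s how the new model name, JSON mode, and the reproducible-outputs seed parameter might fit together using the official openai Python SDK; the prompt and seed value are placeholders, not part of the announcement:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask GPT-4 Turbo for a JSON answer: response_format enforces valid JSON output,
# and the seed parameter opts into the reproducible-outputs beta.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    seed=1234,  # illustrative value; any fixed integer works
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three DevDay announcements under a JSON key named 'items'."},
    ],
)

print(response.choices[0].message.content)  # a valid JSON string
print(response.system_fingerprint)          # changes when the backend configuration changes
```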

Updated GPT-3.5 Turbo:

  • The new gpt-3.5-turbo-1106 supports a 16K context window by default, and that 4x longer context is available at lower prices: $0.001/1K input tokens and $0.002/1K output tokens. Fine-tuning of this 16K model is available.
  • Fine-tuned GPT-3.5 is much cheaper to use: input token prices have decreased by 75% to $0.003/1K and output token prices by 62% to $0.006/1K.
  • gpt-3.5-turbo-1106 joins GPT-4 Turbo with improved function calling and reproducible outputs; see the example after this list.
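
Here is a minimal sketch of parallel function calling with the updated model; the get_weather tool, its schema, and the city names are hypothetical, purely for illustration:

```python
import json
from openai import OpenAI

client = OpenAI()

# A single (hypothetical) tool the model may call once per city.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Compare the weather in Paris and Tokyo."}],
    tools=tools,
)

# With parallel function calling, the model can return several tool_calls
# in one message instead of one call per round trip.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```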

Assistants API:

  • We’re excited to introduce the beta of our new Assistants API, designed to help you build agent-like experiences in your applications effortlessly. Use cases include a natural-language data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, a smart visual canvas, and more.
  • This API enables the creation of purpose-built AI assistants that can follow specific instructions, leverage additional knowledge, and interact with models and tools to perform various tasks.
  • Assistants have persistent Threads for developers to hand off thread state management to OpenAI and work around context window constraints. They can also use new tools like Code Interpreter, Retrieval, and Function Calling.
  • Our platform Playground allows you to try this new API without writing code; a minimal code sketch follows after this list.
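
To give a feel for the flow (assistant, thread, message, run), here is a rough sketch using the Python SDK’s beta namespace; the assistant name, instructions, question, and the simple polling loop are illustrative assumptions rather than required patterns:

```python
import time
from openai import OpenAI

client = OpenAI()

# An assistant with the built-in Code Interpreter tool.
assistant = client.beta.assistants.create(
    name="Data helper",  # illustrative name
    instructions="Answer questions by running Python when useful.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# Threads hold conversation state on OpenAI's side.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the mean of 3, 7, and 20?",
)

# A run asks the assistant to process the thread; poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages are returned newest-first; print the text parts.
for msg in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(msg.role, msg.content[0].text.value)
```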

Multimodal capabilities:

  • GPT-4 Turbo now supports visual inputs in the Chat Completions API, enabling use cases like caption generation and visual analysis. You can access the vision features by using the gpt-4-vision-preview model. This vision capability will be integrated into the production-ready version of GPT-4 Turbo when it comes out of preview later this year.
  • You can also integrate DALL·E 3 for image generation into your applications via the Image generation API.
  • We released text-to-speech capabilities through the newly introduced TTS model, which will read text aloud using one of six natural-sounding voices; a short sketch of these multimodal endpoints follows after this list.
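
As a quick sketch, assuming the v1 openai Python SDK, the three multimodal endpoints can be called roughly like this; the image URL, prompts, voice choice, and output filename are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# 1. Vision: pass an image URL alongside text in a Chat Completions message.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
    max_tokens=100,
)
print(vision.choices[0].message.content)

# 2. DALL·E 3: generate an image and get back a hosted URL.
image = client.images.generate(model="dall-e-3", prompt="a watercolor robot", size="1024x1024", n=1)
print(image.data[0].url)

# 3. Text-to-speech: synthesize audio with one of the six voices (here "alloy").
speech = client.audio.speech.create(model="tts-1", voice="alloy", input="Hello from DevDay!")
speech.stream_to_file("devday.mp3")  # write the MP3 bytes to disk
```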

Customizable GPTs in ChatGPT:

  • We launched a new feature called GPTs. GPTs combine instructions, data, and capabilities into a customized version of ChatGPT.
  • In addition to the capabilities built by OpenAI such as DALL·E or Advanced Data Analysis, GPTs can call developer-defined actions as well. GPTs let developers control a larger portion of the experience. We purposefully architected plugins and actions very similarly, and it takes only a few minutes to turn an existing plugin into an action. Read the docs for details.

We’re excited to see how these updates help open up new avenues for leveraging AI in your projects.

—The OpenAI team
