Sam Altman is asking, "what would you like openai to build/fix in 2024?"

My wishlist

  1. Fine-tuning gpt-3.5-turbo-instruct (a sketch of what I mean is below the list)
  2. Cross-organization sharing of fine-tuned models
  3. Even cheaper inference
  4. Some new models
    • Something small. Everyone else in the world seems to be doing some amazing things in the ≤13-billion-parameter space. It would be great to have OpenAI chime in with their own take on a SOTA model in this space, ideally with open weights.
    • A new Codex model. The generalist models are great(-ish) at some basic coding tasks, but I would love to see OpenAI iterate on a fine-tuned Codex model with everything that’s been learned about generative code in the last year.
    • Better embeddings. It would be nice to have an embedding model that’s more competitive with the current SOTA (basic usage sketched below).
    • A model with configurable beam-search parameters (or some other mechanism for multiple pathing, backtracking, etc.); see the sketch after this list.
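
To make #1 concrete, here is roughly what I'd want to be able to run, sketched with the current openai-python client. The flow below is the fine-tuning API as it works today for gpt-3.5-turbo; the only wishful part is passing "gpt-3.5-turbo-instruct" as the base model, which the endpoint doesn't currently accept.

```python
# Sketch of wish #1: today's fine-tuning flow, but with the instruct model.
from openai import OpenAI

client = OpenAI()

# Upload training data (JSONL), same as for gpt-3.5-turbo fine-tuning today.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Wishful part: "gpt-3.5-turbo-instruct" is not an accepted base model yet.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-instruct",
)
print(job.id, job.status)
```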
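
On the embeddings point, the use case is just the standard similarity/retrieval flow, where a stronger model improves results with no code changes. A minimal sketch against the current API and model:

```python
# Standard embedding-similarity flow; retrieval quality is bounded by
# how good the embedding model is, which is why a SOTA model matters.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

a = embed("How do I reset my password?")
b = embed("Steps for recovering account access")

# Cosine similarity between the two embeddings.
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```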
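
And for the last item, here is the kind of control I mean, shown with an open-weights model via Hugging Face transformers (the model choice is just an illustration), since the OpenAI API exposes nothing like it:

```python
# Beam search with user-configurable width, the "multiple pathing" idea:
# several candidate continuations are explored in parallel and all
# surviving beams are returned, not just the single greedy path.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    num_beams=4,              # explore 4 candidate paths in parallel
    num_return_sequences=4,   # return every surviving beam
    early_stopping=True,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```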

Edit: 25 days into 2024 and Samta Claus has already ticked two of my boxes. I must have been a very good little boy!
