Sam Altman is asking: "What would you like OpenAI to build/fix in 2024?"

And the Twitterverse is going wild:


My wishlist

  1. Fine-tuning gpt-3.5-turbo-instruct
  2. Cross-organization sharing of fine-tuned models
  3. Even cheaper inference
  4. Some new models
    • Something small. Everyone else in the world seems to be doing some amazing things in the ≤ 13-billion-parameter space. It would be great to have OpenAI chime in with their own take on a SOTA model in this space, ideally with open weights.
    • A new codex model. The generalist models are great(-ish) at some basic coding tasks, but I would love to see OpenAI iterate on a fine-tuned codex model with everything that’s been learned about generative code in the last year.
    • Better embeddings. It would be nice to have an embedding model more competitive with the current SOTA models.
    • A version of a model with configurable beam search parameters (or some other method of multiple pathing/backtracking, etc.).
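The configurable-beam-search wish can be illustrated with a toy decoder. This is a minimal sketch, not anything the OpenAI API exposes today: the vocabulary and the `fake_scores` "model" are invented stand-ins, and a real implementation would score tokens with an actual language model.

```python
import math

def beam_search(score_next, start, beam_width=3, max_len=4):
    """Toy beam search: keeps the `beam_width` highest-scoring partial
    sequences at every step instead of greedily committing to one.
    `score_next(seq)` returns {token: log_prob} for the next position."""
    beams = [(0.0, [start])]  # (cumulative log-prob, token sequence)
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            for tok, tok_logp in score_next(seq).items():
                candidates.append((logp + tok_logp, seq + [tok]))
        # keep only the top `beam_width` hypotheses
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    return beams

# A fake "model": it slightly prefers alternating tokens, so greedy
# decoding and beam search can disagree on the best full sequence.
def fake_scores(seq):
    last = seq[-1]
    return {"a": math.log(0.6 if last != "a" else 0.3),
            "b": math.log(0.4 if last != "a" else 0.7)}

best = beam_search(fake_scores, "a", beam_width=2, max_len=3)
print(best[0][1])  # highest-scoring sequence: ['a', 'b', 'a', 'b']
```

Widening `beam_width` explores more alternative paths at higher compute cost, which is exactly the trade-off the wish asks to make configurable.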

Edit: 25 days into 2024 and Samta Claus has already ticked two of my boxes. I must have been a very good little boy!


This is the current state of the survey run on Twitter.


Open the model weights (at least GPT 3.5) and be transparent about the training data used. Alternatively rename the company to ClosedAI.


You do realize that this would put GPT into the hands of the worst people you can possibly imagine, right?

Imagine scammers, hackers and dictators, all having full unchecked and unfettered access to some of the most powerful technology in the world. :thinking:


7 posts were split to a new topic: Discussion about releasing GPT-3.5 model weights

I would like:

  1. A large context (≥ 8k) embedding model with SOTA performance (> 67 on MTEB) and < 1k dimensions
  2. Some open weights or open source goodies.
  3. Ability to fine-tune moderations. There are many things allowed by the model that shouldn’t be allowed in a business setting.
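On the sub-1k-dimensions wish in item 1: for models trained with Matryoshka-style representation learning, shortening an embedding amounts to truncating and renormalizing. A minimal sketch of that operation, with a random vector standing in for a real 1536-dimensional embedding:

```python
import math
import random

def shorten(embedding, dims):
    """Truncate an embedding to `dims` dimensions and renormalize to
    unit length so cosine similarities stay meaningful."""
    v = embedding[:dims]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

random.seed(0)
full = [random.gauss(0, 1) for _ in range(1536)]  # stand-in, not a real embedding
short = shorten(full, 512)

print(len(short))                           # 512
print(round(sum(x * x for x in short), 6))  # 1.0 (unit length)
```

Note that naive truncation only preserves quality if the model was trained so that the leading dimensions carry the most information; for an arbitrary embedding model this degrades retrieval accuracy.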

gpt-4-turbo is already outstanding… but…

My dreams…

  1. Improved prompt comprehension: prompts that involve more intricate commands and incorporate multiple documents and variables, with the AI responding with greater precision. This includes understanding directives well enough that, ideally, non-technical individuals could instruct the AI through system prompts.

  2. Enhanced image reading API or video reading API availability.
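The multi-document, variable-driven prompting described in point 1 can already be approximated on the caller's side. A minimal stdlib sketch; the role, field names, and documents here are invented for illustration:

```python
from string import Template

PROMPT = Template(
    "You are a $role. Answer using ONLY the documents below.\n\n"
    "$documents\n"
    "Question: $question"
)

def build_prompt(role, docs, question):
    """Assemble one prompt from several labeled source documents."""
    documents = "\n".join(
        f"[Document {i}] {text}" for i, text in enumerate(docs, start=1)
    )
    return PROMPT.substitute(role=role, documents=documents, question=question)

prompt = build_prompt(
    role="contract analyst",
    docs=["Clause 4: payment due in 30 days.", "Clause 9: late fee is 2%."],
    question="When is payment due?",
)
print(prompt)
```

Labeling each document explicitly gives the model something concrete to cite, which tends to help precision even without any model-side improvements.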


For me it would be handy if metadata (such as the seed ID) automatically came with every image that DALL·E creates. Just embed a tag in the image carrying all the data.
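Embedding that kind of tag is already possible client-side, since PNG supports `tEXt` metadata chunks. A stdlib-only sketch; the `dalle:seed` keyword and seed value are hypothetical, since the API does not actually return a seed:

```python
import struct
import zlib

def png_chunk(ctype, data):
    """Serialize one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def add_text_metadata(png_bytes, keyword, text):
    """Insert a tEXt chunk immediately after the IHDR chunk."""
    sig_end = 8                                  # PNG signature is 8 bytes
    ihdr_len = struct.unpack(">I", png_bytes[sig_end:sig_end + 4])[0]
    ihdr_end = sig_end + 4 + 4 + ihdr_len + 4    # length + type + data + crc
    payload = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return png_bytes[:ihdr_end] + png_chunk(b"tEXt", payload) + png_bytes[ihdr_end:]

def read_text_metadata(png_bytes):
    """Return {keyword: text} for every tEXt chunk in the file."""
    out, pos = {}, 8
    while pos < len(png_bytes):
        length = struct.unpack(">I", png_bytes[pos:pos + 4])[0]
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length                       # header (8) + data + crc (4)
    return out

# Build a minimal valid 1x1 grayscale PNG as a stand-in for a DALL-E image.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one gray pixel
png = (b"\x89PNG\r\n\x1a\n" + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"IDAT", idat) + png_chunk(b"IEND", b""))

tagged = add_text_metadata(png, "dalle:seed", "123456789")
print(read_text_metadata(tagged))  # {'dalle:seed': '123456789'}
```

The same idea works for prompt, model, and revision info; viewers that honor PNG text chunks (and tools like `exiftool`) would display it automatically.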

Thank you Sam!

  1. Improvements to assistants would also be great. I see some great potential there but costs are currently just incredibly hard to manage. Some of it is due to their design. Changes could also include making it simpler to access specialized assistants through API calls. If I could draw on multiple custom assistants throughout a broader workflow that would open up a lot of interesting opportunities.

  2. A straightforward mechanism to inject domain-specific knowledge permanently would be quite a game changer as well.

  3. Perhaps the creation of user focus groups ahead of major new releases. I am not questioning whether features are tested enough prior to release. But drawing on the feedback of an interdisciplinary group of active OpenAI users could lead to improved, more user-oriented guidance at the point of release and quicker remediation of bugs that would otherwise go unnoticed. Importantly, it would also give OpenAI a sense check on whether solutions cater to a diversified set of use cases and whether there are perhaps “hidden” limitations.

  1. I’d like for errors and such to not count against my premium message count.

  2. I’d like DALL·E, when it finally fixes something in an image, not to immediately break something else it was getting right.

  3. I’d love it if GPT had inside knowledge of prompting.

  4. I’d be ecstatic if DALL·E could change something in an image without changing the entire image.



  • video maker and interpreter model
  • movement model
  • robots


ChatGPT GPT Store:

  • facilitate the creation of plugins, providing a domain and monetization options out of the box (such as paid features for the plugin)
  • add a user interface to run code in an OpenAI-managed safe container or a secured virtual machine straight from the GPT/plugin dev endpoint (so no more “do you trust this website?”, and approved code runs natively once accepted)
  • add a DALL·E 3 drawing board to ChatGPT and update it with video
  • add text-to-speech and speech-to-text to plugins and/or GPTs

Fix the stupid digital blur effect at the top of the chat on iPhone. It’s maddening. Thank you.

Word-count adherence. Instruction following in GPTs.