Your vote for the Dev Day '24 London question for Sam Altman

It’s time to decide which question we will forward to Sam Altman on behalf of the community.
Each person can cast up to three votes.
The poll will close approximately an hour before the event.
In order for us to reach consensus, no new questions will be added to the poll.

A heartfelt thank you to everyone who submitted their questions! I think we could easily run the entire AMA ourselves!
@turbolucius, @afg, @michael.m.dowling, @supershaneski, @platypus, @aintertainment, @PotatoTown, @Herrado, @aleksmilanov

Special shout-out to @platypus for the summaries! Keep an eye out for the next edition of AI Pulse, where we’ll publish the answer. It will be linked here.

Community Question for Sama’s AMA
  • AI-generated low-quality content: Concerns about the influx of low-quality AI content online (e.g., AI images, articles, videos) and whether OpenAI plans to address this issue, or if it’s outside their control.
  • Family Plan for ChatGPT: A request for a family subscription plan for ChatGPT Plus, with individual logins and shared prompt quotas, as current costs aren’t viable for families.
  • Pricing and Plan Options: Suggestions to introduce more affordable subscription options, such as paying for a limited number of prompts ($5/month for 500 prompts) instead of the current $20 flat fee.
  • Context length and memory: Questions about progress beyond the 128k context length and whether memory functionality (smart memory retrieval) will be integrated into APIs.
  • AI version of Sam Altman: Curiosity about whether Sam Altman is considering creating an AI bot version of himself with his voice and mannerisms.
  • Universal Basic Income (UBI): Asking if UBI is viable due to AI-induced job displacement and how it could be implemented globally, especially considering U.S.-based AI companies profiting from international customers.
  • New industries and economic models: What new industries or models Sam Altman foresees emerging as AI reshapes traditional sectors.
  • Custom GPTs: Concerns about the lack of updates for Custom GPTs, and whether they are still a priority for OpenAI.
  • Duties to original creators: Questioning whether OpenAI has a responsibility to the original creators (writers, musicians, etc.) whose work was used to train its models, often without consent.
  • Path to AGI: Requesting Altman’s thoughts on whether scaling up language models and improving algorithms will lead to AGI, or if entirely new technologies will be required to surpass LLM limitations.
0 voters
original question

Greetings, Community!

This year’s Dev Day will feature a virtual AMA with Sam Altman.

What questions would you like Sam Altman to answer?

We’ll do our best to include the most interesting ones on the list.

We’re interested to learn what interests you!

16 Likes

I have kind of a “controversial” question, but it’s something I would genuinely love to hear Sam Altman’s opinion on:

The democratisation of AI tools has caused a massive influx of low-quality “AI slop” content on the internet. Bizarre DALL-E 3 images gathering millions of likes on Facebook, Wikipedia having to remove AI-generated articles, YouTube channels posting auto-generated content full of inaccuracies, AI-generated paintings popping up at the top of Google Images results when searching for the painter’s name, etc.

The accessibility of AI tools has profoundly altered the online landscape, and as time goes on, AI-generated content is going to keep infiltrating our shared database of human knowledge in ways that will become more difficult to detect. Does OpenAI plan to do anything to address this, or do you believe it’s outside of your company’s control or responsibility?

13 Likes

Hi Sam,

I’d like to suggest introducing a Family Plan for ChatGPT Plus, similar to what platforms like Netflix or Spotify offer. Many families, including mine, would benefit from sharing a plan with individual logins, usage histories, and a shared prompt quota, but without needing to buy multiple separate memberships.

Currently, we stick with the free version because subscribing to multiple Plus accounts isn’t cost-effective for us; we don’t use enough prompts to justify three separate memberships. A family plan would be a good solution.

This model would attract more subscribers who want to share the benefits within their households, and it could significantly boost adoption and loyalty. I have family plans for Prime, Netflix, Spotify, Office 365, etc. - where is my family plan for ChatGPT? :sweat_smile:

On the other hand, even the Plus plan seems way too expensive for normal users, who just need some sort of Google 2.0 and maybe want to try the advanced voice mode from time to time. I want to encourage you to really think about introducing Plus and Family plans that actually reflect the number of prompts used. Customers should be able to choose how many prompts to buy, e.g. 500 prompts for $5. $5 per month seems reasonable to me - $20 unfortunately does not, at least not for my usage profile.

Thank you for considering this idea!

11 Likes

Hi Sam, looking forward to next week.

Just two highly practical questions from me:

Context length

  • when will we see progress with context length beyond 128k, and more generally, what’s holding it back - is testing simply showing it’s unreliable beyond that context limit?

Memory

  • I love the endless possibilities of memory for enterprise development. It’s now in ChatGPT, but is there a roadmap to get memory into APIs (as in, smart selection of relevant memories from conversations with users to inform future answers)?
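
To make the API ask concrete, here is a minimal sketch of the kind of retrieval I have in mind, stitched together from today’s embeddings and chat endpoints. The memory store and helper names are purely illustrative - this is not an actual OpenAI memory API:

```python
# Minimal sketch: keep past conversation snippets as embeddings, then pull the
# most relevant ones into the prompt for a new request. The "memories" list and
# helper functions are illustrative only.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Memories accumulated from earlier conversations (made-up examples).
memories = [
    "User prefers answers in British English.",
    "User is building an internal HR chatbot.",
    "User's company standardises on TypeScript.",
]
memory_vectors = [embed(m) for m in memories]

def relevant_memories(query: str, k: int = 2) -> list[str]:
    """Return the k stored memories most similar to the incoming query."""
    q = embed(query)
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in memory_vectors]
    top = sorted(range(len(memories)), key=lambda i: scores[i], reverse=True)[:k]
    return [memories[i] for i in top]

query = "Which language should the new chatbot backend use?"
context = "\n".join(relevant_memories(query))
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Relevant user memories:\n{context}"},
        {"role": "user", "content": query},
    ],
)
print(reply.choices[0].message.content)
```

A built-in version would presumably handle the storing and scoring server-side, so developers only pass a conversation or user identifier.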
6 Likes

I want to ask, with all the advances that OpenAI is making, whether he has ever been tempted to create an AI version of himself, like a Sam AI bot with his own voice, mannerisms, etc.

4 Likes

I have two economics-related questions:

  • With the rise of AI and automation, do you think Universal Basic Income (UBI) is a viable solution for mass job displacement? If so, how should it be implemented?

  • What kinds of new industries or economic models do you predict will emerge as AI reshapes traditional industries?

6 Likes

Expanding on the necessary UBI question:

How could UBI work globally when most of the cloud-based models are provided by US companies, meaning everyone paying for access is paying a US company? Should there be a special status for large AI providers, e.g. registering with the UN or a similar international entity, so that instead of paying US taxes on all income, taxes on income from non-US customers would be distributed through that entity to the customers’ home countries (e.g. if 5% of income was generated by German customers, 5% of [standardized] taxes go to Germany)?
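
To make the proportional split concrete, here is a toy calculation - all figures are invented for the example:

```python
# Toy illustration of the proposed split: a standardized tax pool is distributed
# to customers' home countries in proportion to the revenue they generated.
# All numbers are made up.
revenue_by_country = {"US": 60.0, "DE": 5.0, "FR": 10.0, "IN": 25.0}
tax_pool = 100.0  # standardized taxes collected on that revenue

total = sum(revenue_by_country.values())
allocation = {country: tax_pool * revenue / total
              for country, revenue in revenue_by_country.items()}
print(allocation)  # DE receives 5.0, i.e. 5% of the pool, because it generated 5% of revenue
```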

5 Likes

What are your plans for Custom GPTs, especially when they’ve felt largely ‘deprioritized’, having received zero updates since they were introduced at last year’s DevDay?

There’s far too much potential in them.

5 Likes

Dev Day in London is happening this Wednesday.

If you have any intriguing or straightforward questions for the CEO of OpenAI, please post them here.

We’ll do our best to ask on behalf of the community at the event.

4 Likes

I’d like to ask Sam if he feels OpenAI has a duty of care toward the writers, musicians, artists, journalists and creatives who made the original work that OpenAI models have been trained on, often without their consent?

5 Likes

I continue to read about the ongoing debate on how to achieve Artificial General Intelligence (AGI).

One camp advocates that scaling up large language models (LLMs) and enhancing their algorithms is the key. They argue that LLMs’ powerful pattern-matching abilities, enriched with emergent abilities linked to higher-order cognitive functions, will lead to truly powerful AIs, aka AGIs.

Another camp contends that LLMs are fundamentally incapable of reaching true intelligence due to critical shortcomings; i.e., the same strengths are cast as weaknesses - they are essentially limited to sophisticated pattern-matching based on their training data, and they often fail when asked nuanced questions outside of those patterns.

Much of the focus centres on defining what “intelligence” truly means. Setting aside that complex topic, and the fact that LLMs could be highly transformative without ever achieving full “intelligence”: what are your thoughts on relying on scaling and algorithmic improvements within the domain of LLMs, versus the absolute need for entirely new technologies to develop powerful AI capable of creating novel outputs beyond the argued limitations of LLMs?

5 Likes

Back in April you sidestepped directly answering a question about how OpenAI would have enough compute in future, saying: “That one I probably won’t answer in front of a camera but I am optimistic by treating that as a whole system problem we will really surprise the world on the upside.” This, together with the development of the desktop app, suggests a shift to a devolved architecture, where each user has their own local LLM supervised by the model in the cloud. This would reduce the computational load on servers, enable the system to develop expertise as well as intelligence (by shifting personalised memory locally at the same time), exploit OpenAI’s huge customer base, and (if delayed until training data from NEO becomes available) make it possible for the local LLMs to have superior spatial orientation skills and operate a user’s software with a high success rate. Is this the direction of travel?
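
To illustrate the routing being imagined, here is a rough sketch of the local-first, cloud-supervised flow - every function in it is hypothetical, not a real OpenAI API:

```python
# Hypothetical local-first routing: a small on-device model answers when it is
# confident and escalates to the cloud model otherwise. Nothing here is a real
# OpenAI API; the functions only illustrate the idea.

def local_generate(prompt: str, personal_memory: list[str]) -> tuple[str, float]:
    """Pretend on-device model: returns an answer plus a self-reported confidence."""
    return "draft answer using local personalised memory", 0.62

def cloud_generate(prompt: str) -> str:
    """Pretend cloud call, used only when the local model is unsure."""
    return "higher-quality answer from the supervising cloud model"

def answer(prompt: str, personal_memory: list[str], threshold: float = 0.8) -> str:
    draft, confidence = local_generate(prompt, personal_memory)
    if confidence >= threshold:
        return draft               # handled entirely on-device, no server load
    return cloud_generate(prompt)  # escalate the hard cases to the cloud

print(answer("Summarise yesterday's notes", ["note: project review at 10am"]))
```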

It is a developer conference, so the questions should be relevant to application developers.

I would expand on the “context” question in the poll, to something like: “when will a developer model be released again with the qualities of the original gpt-4, where elaborate system-instruction procedures are not ignored, and where its attention cannot be degraded out of a closed domain by long input context into basically being an overfitted ChatGPT entity - OpenAI’s product?”

Off-topic to developer concerns: “do employees quit bad companies, or bad management?”

That’s never going to happen. Then businesses would just switch to the family plan. Maybe if it didn’t include API access it could be a lower-level membership.