Proposal: Replace Message Limits with Model-Based Compute Units

I’d like to propose a feature to enhance ChatGPT’s user experience: replacing the current message limits with a compute unit system tied to each model.

How It Could Work:

  1. Compute Units by Subscription Tier:
  • Each subscription tier (Free, Plus, Pro) receives a fixed allocation of compute units.
  • Pro users would get the highest allocation, followed by Plus, then Free.
  2. Model-Specific Costs:
  • Models consume compute units per interaction based on their complexity.
  • For instance, o1 might use more units per message than 4o.
  3. Flexibility Until Units Are Exhausted:
  • Users can interact with any model as long as they have sufficient compute units.
  • Once units are depleted, users wait for the next cycle or upgrade their plan.
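To make the mechanics concrete, here is a minimal sketch of how per-tier compute-unit accounting could work. All names, tier allocations, and per-model costs below are hypothetical placeholders I made up for illustration, not actual OpenAI numbers:

```python
# Hypothetical compute-unit accounting sketch.
# Allocations and per-model costs are invented illustrative values.
TIER_ALLOCATION = {"free": 100, "plus": 1000, "pro": 10000}
MODEL_COST = {"4o": 1, "o1": 15}  # assumed units per message


class UsageAccount:
    def __init__(self, tier: str):
        self.tier = tier
        self.units = TIER_ALLOCATION[tier]

    def can_send(self, model: str) -> bool:
        # A user may pick any model while they can afford its cost.
        return self.units >= MODEL_COST[model]

    def send(self, model: str) -> None:
        cost = MODEL_COST[model]
        if self.units < cost:
            raise RuntimeError("Units exhausted: wait for next cycle or upgrade.")
        self.units -= cost

    def reset(self) -> None:
        # Restore the full allocation at the start of each billing cycle.
        self.units = TIER_ALLOCATION[self.tier]


account = UsageAccount("plus")
account.send("o1")    # deducts 15 units
account.send("4o")    # deducts 1 unit
print(account.units)  # 984 remaining
```

The point of the sketch is that the user, not a per-model message cap, decides how to spend the budget: one heavy o1 message costs the same as fifteen light 4o messages.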

Why This Matters:

  • Transparent Resource Management: Users can better plan their usage by knowing exactly how many interactions remain.
  • Increased Flexibility: Users can choose models based on their needs, rather than being constrained by message limits.
  • Enhanced Value Across Tiers: This system scales with user needs, offering more granular control for Pro and Plus users, while maintaining accessibility for Free users.

Who Would Benefit:

  • Plus & Pro Users: Gain the freedom to allocate resources across models based on task complexity.
  • Casual Users: Clear, manageable limits make the free-tier experience smoother and more predictable.

Feedback Request:
Would this improve your experience with ChatGPT? I’d love to hear your thoughts and suggestions!

Best,
Dave
