Removing spaces from prompts to maximize character limits (e.g. in GPT config)

GPT instructions have a character limit of 8,000, which includes spaces.

To pack more information within that limit, I am wondering what the prevailing best practices are on the necessity of having spaces within a prompt, such as if a random prompt like “What is the best pizza recipe and list of ingredients for New York style pizza?” is consolidated to “WhatisthebestpizzarecipeandlistofingridientsforNewYorkstylepizza?”

The above example is rudimentary for brevity’s sake, as it only saves 14 characters, reducing the final count from 79 to 65, for a savings of roughly 18%. On a larger scale this could be substantial, leading to the question:

Question: What would be the pros/cons of such an approach, in terms of performance and results, such as being able to squeeze in another 18% worth of additional prompt text after removing spaces?


Is it really 8000 characters? Indeed it is :thinking:

Then your best option would probably be to write your prompt out in Chinese :laughing:

Intuition would say no, you shouldn’t do that. You want your prompt to be as clean, structured, and straightforward as possible.

But in reality, it’s quite possible that this doesn’t really matter, or won’t make that much of a difference, in this particular case.

If you’re familiar with the API, these are the tokenizations of these strings:

[screenshot: tokenizer output for the spaced and compacted strings]
Since you’re paying by token (when using the API), you’d want to use as few as possible. And if you run your own model, tokens are what eat your VRAM.

As you can see, the compacted version is ~25% more expensive than the spaced version.
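For anyone who wants to reproduce this comparison, here’s a minimal sketch using OpenAI’s tiktoken library (assuming the cl100k_base encoding; exact counts vary by model):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding; counts vary by model

spaced = "What is the best pizza recipe and list of ingredients for New York style pizza?"
packed = spaced.replace(" ", "")  # same text with all spaces stripped

for label, s in [("spaced", spaced), ("packed", packed)]:
    print(f"{label}: {len(s)} chars, {len(enc.encode(s))} tokens")
```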

But since you’re not charged by token when using ChatGPT, I’d say go for it :laughing:

It’s kinda funny how OpenAI is creating a situation where they’re actively motivating you to waste their money :rofl:

One thing to maybe keep in mind is that attention windows are limited. It’s possible (I’m hypothesizing) that with your approach it would be easier to accidentally exhaust an attention window; you’ll experience this when you have a ton of text trying to capture a particular concept. But I think for most use cases this shouldn’t be a concern.


Hey Diet, thanks for chiming in here. You raise some great points about token usage, and about comparing free tokens on ChatGPT to chargeable tokens via the API. Even though my focus was character-count reduction, this opens an interesting discussion/dilemma.

First, I need to mention there were two small typos in my strings, which resulted in inaccurate conclusions (see below). As you noted, the trade-offs differ depending on whether you are paying per token via the API or using free tokens on GPTs.

The unintended increase in token count in the provided strings seems to be the result of two things:

  1. there were two unintended typos in the strings (apologies for that), and more importantly
  2. longer words such as “pizza” and “ingredients” seem to split into more tokens when concatenated (re-introducing some whitespace around them resolves this).

70 characters with spaces:

[screenshot: tokenizer output, baseline token count]

65 characters with no spaces, but an increase of 2 tokens:

[screenshot: tokenizer output]
And then adding back the space after “pizza” reduces the token count back to the baseline, while adding only the one character for the restored space.

  • I was able to replicate this on a larger scale, reducing the character count without increasing the token count. For example, a prompt of 6,010 characters and 1,415 tokens came down to 5,803 characters and 1,393 tokens with no information loss, a savings of 3.4% in characters and 1.56% in prompt tokens.
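A quick way to check this word-boundary effect yourself (again assuming tiktoken and the cl100k_base encoding; the exact counts depend on the tokenizer):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding

variants = [
    "What is the best pizza recipe and list of ingredients for New York style pizza?",
    "WhatisthebestpizzarecipeandlistofingredientsforNewYorkstylepizza?",
    "Whatisthebestpizza recipeandlistofingredientsforNewYorkstylepizza?",  # one space restored
]
for s in variants:
    print(f"{len(s)} chars, {len(enc.encode(s))} tokens")
```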

Trade-offs between reduced character count and increased token count:

In token-sensitive cases, such as via the API, strategically removing spaces can reduce both the token count and the character count, but caution is needed around larger/complex words, as token usage can actually increase depending on how aggressively this is done (despite saving on characters); see the sketch below.
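To make “strategically” concrete, here is a hypothetical greedy helper (my own sketch, not an established technique, again assuming tiktoken and the cl100k_base encoding) that drops a space only when doing so does not increase the token count:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding

def strip_spaces_safely(text: str) -> str:
    """Greedily remove each space unless removal would increase the token count."""
    out = text
    i = out.find(" ")
    while i != -1:
        candidate = out[:i] + out[i + 1:]
        if len(enc.encode(candidate)) <= len(enc.encode(out)):
            out = candidate  # space removed; re-scan from the same position
        else:
            i += 1           # keep this space; continue after it
        i = out.find(" ", i)
    return out

# Hypothetical usage:
prompt = "What is the best pizza recipe and list of ingredients for New York style pizza?"
packed = strip_spaces_safely(prompt)
print(len(prompt), len(enc.encode(prompt)), "->", len(packed), len(enc.encode(packed)))
```

Re-encoding after every candidate removal is slow for long prompts, but it sidesteps exactly the long-word pitfall described above.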

Conversely, in character-sensitive contexts such as a GPT configuration instruction, removing all whitespace seems not to affect performance, despite any negligible increase in token count that may occur (and if anything you get an extra 1-2k characters to squeeze in to improve the GPT, which is the initial use case that inspired this approach).

These are my initial findings, but there could be errors in some of the assumptions. I would be interested to hear about any similar findings. Thanks!
