How can I get short answers from gpt-3.5 turbo?

I am looking to generate short answers good enough to post as Twitter replies… but some answers are just too long…

How can I set the answers to be shorter?

This for example doesn’t seem to work:

input = `In less than 280 characters, can you please explain this tweet? ${theTexts[textIndex++]}`;

Any feedback would help… thank you!

If you tell the model to write a tweet, it will keep its replies short and to the point. It does tend to shove emotes all over the place, though, so you can also ask for less of that; personal choice, obviously.


Instead of specifying a character count, use a token count.

e.g. “Answer the following in 10 tokens”
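As a sketch of this approach (the helper name and prompt wording here are just illustrative, not from the thread), you can pair a token-count instruction like the one above with the API's `max_tokens` parameter, which truncates for real even when the model ignores the instruction:

```python
def build_short_request(question: str, token_budget: int = 10) -> dict:
    """Hypothetical helper: build a chat-completion payload that both asks
    for a short answer and enforces a hard cap via max_tokens."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": f"Answer the following in {token_budget} tokens: {question}",
            },
        ],
        # The instruction alone is only a hint; max_tokens truncates for real.
        "max_tokens": token_budget,
    }

payload = build_short_request("Why does the moon have craters?")
```

Note that `max_tokens` cuts the reply mid-sentence if the model overruns, so the prompt instruction is still worth keeping as a soft target.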


I usually ask for a single-word or single-sentence answer if I want it short. To me that makes more sense than tokens.


The AI can’t count those either…

answer this query, with output length of exactly 15 of your internal AI BPE tokens: Why does the moon have craters?


It doesn’t lose the plan at short word counts (but at longer lengths it has problems arriving at a particular destination):

answer this query, with output length of exactly 7 English words: Why does the moon have craters?

Due to meteoroid impacts during its formation.
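Whether a reply actually hit such a word budget is easy to check after the fact by counting whitespace-separated words; a trivial sketch:

```python
def word_count(reply: str) -> int:
    """Count whitespace-separated words in a model reply."""
    return len(reply.split())

# The 7-word reply quoted above satisfies the budget:
word_count("Due to meteoroid impacts during its formation.")  # → 7
```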


Task-specific advice is best. Use OpenAI’s pre-training to your advantage.

    "role": "system",
    "name": "xBot",
    "content": """"
You are xBot, and write Twitter tweets restricted in size by the platform.
Output one sentence stating the topic and the position assertion.
Output one more sentence stating the evidence and explanation.
Output a few hashtags for topic search.
Do not label the output sections, just write the contents.
    "role": "user",
    "content": "Subject: Talk to OpenAI's new car that can drive where you tell it."


OpenAI’s new car revolutionizes transportation by allowing users to communicate their desired destinations, providing a seamless and efficient driving experience. #OpenAICar #TransportationInnovation

Introducing OpenAI’s revolutionary self-driving car! :red_car::dash: #autonomousvehicles ai #technology

Imagine being able to simply talk to your car and have it take you wherever you want to go. :speaking_head::round_pushpin:

#selfdrivingcar #futureoftransportation #innovation

OpenAI’s new car can revolutionize transportation by driving to any destination you command. ai #autonomousvehicles

Infuriating non-compliance with instructions is what you get now with gpt-3.5-turbo. Go back to gpt-3.5-turbo-0301? You get quality:

OpenAI’s new self-driving car is a game-changer for the future of transportation.

Equipped with a sophisticated language processing system, the car can understand and execute commands from passengers, allowing for a more personalized and efficient driving experience. openai #SelfDrivingCar #FutureOfTransportation

OpenAI’s new car is a game changer for the autonomous driving industry.

Equipped with state-of-the-art hardware and software, OpenAI’s car can successfully navigate complex environments and safely reach its destination. This advanced technology will revolutionize the way we travel and make our roads much safer.

openai #selfdrivingcar #autonomousdriving

OpenAI’s new self-driving car is a game-changer in the field of autonomous vehicles.

The car can navigate and drive to any location by simply telling it where to go, thanks to its advanced natural language processing and machine learning capabilities. openai #SelfDrivingCars #AutonomousVehicles


That’s the point. The token count didn’t go above 15. Thanks.


Specifying a word count usually works better, e.g. “fewer than 20 words”. I’ve tried characters, sentences, and paragraphs; none of them are understood very well by the model.

It can’t actually count its output, but it can gauge popular measurements: less than 100 words, less than 500 words, a sentence, two sentences, a few words.

Use a measurement that might be in training data. For example, it could be trained on essays written in less than 500 words or a single tweet/SMS. It doesn’t count, but it has an intuitive understanding of how long these are.

It has trouble with numbers; for example, it might have to compute that 215 words is less than 500 and more than 100, so numbers like this could throw off the intuition.
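Since the model can’t reliably count characters, a pragmatic safeguard for the original Twitter-reply use case is to enforce the 280-character limit in code before posting. This is my own suggested sketch (the helper name is hypothetical), truncating at a word boundary so the cut looks intentional:

```python
def fit_to_tweet(text: str, limit: int = 280) -> str:
    """Hypothetical safeguard: hard-truncate an over-long model reply
    at a word boundary so it fits Twitter's character limit."""
    if len(text) <= limit:
        return text
    ellipsis = "…"  # one character
    cut = text[: limit - len(ellipsis)]
    # Drop the trailing partial word before appending the ellipsis.
    cut = cut.rsplit(" ", 1)[0]
    return cut + ellipsis
```

A belt-and-braces alternative is to retry the request when the reply overruns and only truncate as a last resort.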


Thank you all for your feedback; indeed there is more than one solution in this response chain… what a wonderful community to learn from!