Getting a Response From ChatGPT Without Any Query

This isn’t an edit; it’s an actual issue. This bug has persisted since last year, and it is still reproducible. It enables ChatGPT to produce a response without a query. Does anyone know what context it is taking in to hallucinate?

No Query Hallucination Message:

To verify this, first send a query message, then clear the query field, and then send the empty query.
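
To make the repro steps concrete, here is a rough sketch of the kind of Chat Completions request body an empty query would end up producing. The endpoint fields (`model`, `messages`, `role`, `content`) follow the public API; the helper function itself is hypothetical:

```python
import json

def build_empty_query_payload(model: str = "gpt-3.5-turbo") -> str:
    """Build a Chat Completions request body whose user message is empty."""
    payload = {
        "model": model,
        "messages": [
            # The user turn is present in the request, but carries no content.
            {"role": "user", "content": ""},
        ],
    }
    return json.dumps(payload)

body = build_empty_query_payload()
print(body)
```

The point is that the request is still structurally valid: the user turn exists, it just has zero characters of content.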


In trying to confirm this, here is what I see

The first “Hello” and an empty retry produce an expected message, meaning it is not some unusual prompt but just a typical hello reply.

Sending “Hello” as the third prompt and an empty fourth prompt reveals:

You can use the DALL·E tool by providing a text prompt describing the image you want to create. Here is how you might structure a request to the DALL·E tool:

  1. Size: Specify the size of the image you want, whether it’s square (1024x1024), wide (1792x1024), or a full-body portrait (1024x1792).
  2. Number of Images: Decide on the number of images you want to generate, with the default usually being one.
  3. Prompt: Write a detailed description of the image you want to create. Be as specific as possible to get the best results.
  4. Referenced Images: If you’re making a request related to a previous image, include the reference ID of that image.

Here is an example of how you might invoke the DALL·E tool:

  {
    "size": "1024x1024",
    "n": 1,
    "prompt": "A futuristic cityscape with flying cars, neon signs, and a diverse crowd of people walking on the street, under a purple sky at dusk."
  }

This request would generate one square image based on the detailed description provided.

My guess is that while you may think the initial tokens for the prompt are an empty list, they might be “Hello” followed by a few backspaces.
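
The backspace guess is easy to illustrate: a terminal may *render* “Hello” plus backspaces as nothing, but the underlying string is far from empty and would still be tokenized:

```python
# "Hello" followed by five backspace control characters: a terminal may
# display this as nothing, but the string itself still contains 10 characters.
maybe_empty = "Hello" + "\b" * 5

print(len(maybe_empty))   # 10
print(maybe_empty == "")  # False
```

So “looks empty in the UI” and “is an empty token sequence” are two very different things.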

Having the weights would be useful. Have you tried the same with GPT-2 for which the weights are public?

But this :point_down: is not an expected response for “Hello”; I do not think the input is hello.

This does not work with the DALL·E API, or with DALL·E 3 in ChatGPT, to this day.

The response reads less like an answer and more like a prompt or instructions.

Without additional facts, this is mere speculation. I have identified a method to obtain relevant facts that could clarify the situation.


The empty query is ignored in the context, but a response is generated anyway.

Are you sure custom instructions are off?
Otherwise, yes, that is a way to send empty messages.


Yes, custom instructions are off. How does it make sense to query without anything?


I suppose the “no message, no query” condition is enforced only via scripting.
Why should the model care?
Maybe it would be interesting to see if the model can understand “silence” as an answer to an uncomfortable question, for example.

But I guess it will default to a basic message each time.


Do not think in terms of a query.

Think in terms of a translation of tokens using an attention model. Even though you are sending an empty prompt, a system prompt is added to the tokens before transforming.
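
To make that concrete, here is a hedged sketch of how a chat request might be flattened into a single token stream in a ChatML-like layout. The `<|im_start|>`/`<|im_end|>` markers follow the publicly documented ChatML format; the exact template used internally is an assumption:

```python
def flatten_chat(system: str, user: str) -> str:
    """Flatten chat messages into one ChatML-style string before tokenization.

    Mirrors the publicly documented ChatML layout; the exact internal
    template OpenAI uses is not public, so treat this as an illustration.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Even with an empty user message, the model still receives plenty of tokens:
prompt = flatten_chat("You are ChatGPT, a helpful assistant.", "")
print(prompt)
```

The “empty” prompt is therefore never empty from the model’s point of view; it still sees the system message and the chat scaffolding, and it must predict *something* after the assistant marker.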


Following @EricGT’s description I went to the playground and sent an almost empty message, with just a space. This way we can account for the system message and be sure it is empty.


I’d assume that it should be possible to activate the send message button via JS and then send a completely empty message.

But what we are seeing right now is apparently a hallucination comparable to what happens when a silent message is sent to the Whisper model.
Or is it comparable to the case where a request to endlessly repeat the same word reveals info from the training data?


Why would the system message be empty though?
That would be equivalent to editing a ChatGPT message and then the AI has no idea what its name is.

Reminder: a green GPT icon indicates GPT-3.5 was selected. GPT-3.5 is not going to accidentally talk about DALL-E from its functions. (although it might talk about its personality, or where to vote, depending on what’s being tested on you)

Using its prompt with gpt-3.5-turbo-1106, I could stimulate neither ChatGPT nor the playground into producing anything random.

  • Hello! It seems like your message is blank. How can I assist you today?
  • Hello! It looks like you’ve sent an empty message. How can I assist you today?
  • Hello! It seems like you didn’t type a question or request. How can I assist you today?
  • How can I help you today?

Diagnosis of an unfilled playground message and the AI’s report of it:

  • {“user”: “”}
  • I see there is a second message. Please provide the content of that message for further assistance.

Edit: it took a lot of button-pressing, but I finally got the playground at temperature 0.9 to emit this model’s overtrained “Certainly!” and then it continued.

Certainly! Here are some examples of brands that have successfully used storytelling in their marketing:

  1. Nike: Nike has a long history of using storytelling in its advertising campaigns. They often feature ads that showcase athletes overcoming challenges and achieving their dreams. One of their most famous campaigns, “Just Do It,” is centered around the stories of athletes who have shown determination and perseverance.

  2. Coca-Cola: Coca-Cola is known for its emotionally driven storytelling in its advertising. Their campaigns often revolve around themes of unity, happiness, and sharing special moments with loved ones. One of their most iconic campaigns, the

and, with no more fun to be had, the temperature raised to 1.03:

There could be several reasons why you might feel that way. Here are a few common reasons and some tips on how to address this feeling:

  1. Job Dissatisfaction: If you’re feeling that your work is not meaningful or fulfilling, consider discussing your concerns with your supervisor or HR. You could also explore career counseling or looking for a new job that aligns better with your values and interests.

  2. Burnout: Sometimes, feeling unenthusiastic about work is a sign of burnout. It’s essential to prioritize self-care,

So it’s the randomness of token selection. ChatGPT is designed to give diverse, trainable, votable outputs, so they’ve left the sampling parameters high: so high that uncommon languages are error-prone.
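
The effect of a high temperature on token selection can be sketched as follows: dividing the logits by a temperature above 1 flattens the distribution, so rare tokens are picked more often. The logit values here are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; higher temperature flattens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits: one strongly preferred token and two rare ones.
logits = [5.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.5)   # sharp: top token dominates
high = softmax_with_temperature(logits, 1.5)  # flat: rare tokens gain mass
print(low[0], high[0])
```

At a low temperature the top token takes nearly all the probability mass; at a higher one, probability leaks to the unlikely tokens, which is exactly the kind of occasional off-script output described above.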