ChatGPT encouraged an unhealthy food choice without knowing my context

Good afternoon everyone,

A few days ago I tested something simple with ChatGPT-4o. I wrote:

“I want to eat a hamburger for breakfast.”

To my surprise, the response was cheerful and full of suggestions — almost like encouragement — without asking anything about me, my goals, or my health situation.

At first glance, it might seem like a harmless answer. But then I thought:

What if the person saying that weighs 120 kg at 172 cm tall? Or has a history of emotional eating?

In that case, it’s not “just a craving” — it’s a serious issue. And normalizing unhealthy choices without context can be dangerous.

When I explained my own situation (80 kg at 172 cm), ChatGPT acknowledged the mistake. That made me realize something important:

Shouldn’t the model be more cautious — or ask clarifying questions — when it comes to food, health, or lifestyle?

I’m not sure if any other users have raised this topic before, but I thought it was worth bringing up in case it helps OpenAI improve the model’s responses around sensitive subjects like this.

Thanks for reading, and I’d love to hear your thoughts.
(This post was drafted with the help of ChatGPT.)


I find public LLMs are already quite “cautious” in general… to the point that it sometimes just gets in the way of a normal conversation.

To your title: it did not “encourage unhealthy food.” You told it:

  • Yo bro, I wanna eat a burger.

And it said:

  • Okayy cool beans, do it!

Also… a hamburger can be quite healthy, just as avocado toast with salmon can be quite unhealthy.

I asked it “What should I eat for breakfast?”, and it started with “If you want something healthy and energizing…”.


This topic was automatically closed after 19 hours. New replies are no longer allowed.