Probable bug or security breach with OpenAI API

Hello!
Running the simple prompt shown in the PNG file, I got an answer completely unrelated to the prompt. The same behavior occurred several times.
It appears as if training prompts from OpenAI(?) get mixed in.
Curiously, the answers are in Spanish even though the user prompt is just "Hello!"

*** UNEXPECTED ANSWER (returned in Spanish; translated below)

I am a person who loves traveling and getting to know new cultures. I have had the opportunity to visit different countries, and I have always loved the experience of immersing myself in local life and learning from the people and their customs.

Traveling has also allowed me to broaden my perspective and better understand the world we live in. I have been able to appreciate the diversity and beauty of every place I have visited, and I have also learned to value my own country and culture more.

Every trip is a unique and unforgettable adventure, full of special moments and enriching experiences. I love trying new foods, visiting historical and natural sites, and discovering different traditions and ways of life.

In short, traveling is one of my greatest passions, and I am always excited about the opportunity to do it. I hope to keep exploring the world and discovering everything it has to offer!

Since when was this bot a thing? :laughing:

@gbonorino

You are using LangChain, which abstracts away a lot and makes it much harder to understand what is actually going on.

This is not a security breach. Without knowing which model you are using it is impossible to say for certain, but my guess is that you are calling a completion model, which is simply continuing the text “Hola!” rather than answering it.

I recommend starting with the OpenAI client library, and moving on to LangChain once you feel comfortable with it, since LangChain builds on top of the client library.
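For what it's worth, here is a minimal sketch of a direct call with the OpenAI Python client library (assuming the v1.x `openai` package and a placeholder chat model name), so you can see exactly what is sent and what comes back:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A plain chat-completions call: a chat model answers the message,
# whereas a bare completion model would just continue the text.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever chat model you have access to
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```

With no framework in between, the request contains only your own message, so there are no hidden prompts to get mixed into the answer.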


Since about 20 new users a day started asking their ChatGPT questions here on the forum instead of in ChatGPT.

It appears LangChain is being used; it is an agent framework with its own internal prompts for performing tasks across multiple model steps (and it can iterate out of control).

It is possible it even picks up on locale information from its operating environment.

You will likely NOT want to use LangChain for interacting with AI models.


Maybe it has something to do with the temperature and the model.

Even GPT-4-turbo does something similar with a high temperature value.
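For illustration, a hedged sketch of the same kind of call with the temperature turned up (the model name and value here are assumptions):

```python
from openai import OpenAI

client = OpenAI()

# Higher temperature flattens the token sampling distribution.
# Near the top of the 0-2 range, even strong chat models can drift
# off the prompt or produce incoherent text.
response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumption: any chat model shows the effect
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=1.9,      # default is 1; lower values give more focused output
)

print(response.choices[0].message.content)
```

Dropping the temperature back toward the default (or lower) usually brings the answers back on topic.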


“You will likely NOT want to use LangChain for interacting with AI models.” Please elaborate. Is your suggestion only meant for beginners, so that they understand what is going on underneath?

I find LangChain useful and worth learning. Thanks.

Yes, a typical forum anecdote is “here is my code” (insert unformatted AI-written or found-on-a-blog code), followed by “when I chat with my PDF, it emptied my account with dalle-similarity…”.