How to remove the question inside the answer

I use the openai-php library, and when I ask a question like:
Could you translate this to English and give me the best 7 SEO words for this category: iPhone 14 max
I get this answer:
iPhone 14, Pro, Max, product, translation, English, keywords
The result is not good.

Another example:
Could you translate this to French and give me the best SEO title for this category: iPhone 14 pro max
I get this answer:
Titre SEO pour le produit iPhone 14 Pro Max : ‘Découvrez l’iPhone 14 Pro Max, le dernier bijou technologique d’Apple avec des fonctionnalités avancées et une performance exceptionnelle.’

As you can see, GPT writes this: "Titre SEO pour le produit iPhone 14 Pro Max :" and I want to remove everything except the actual result of the answer.
I would also like to know what the best approach is for multi-language answers.

Thank you for your help

The AI doesn’t have an internal memory where it can translate but not show it to you.

You can either

  1. accept that you must tell it to produce a translation and then have a second instruction telling it to give you the results you need from that translation, or
  2. have it translate the SEO words.

What AI model are you using?

The best results will be with a chat model such as gpt-3.5-turbo, which requires the chat completion endpoint and specially-formatted messages. Then, you can put the processing instructions in a “system” role, and put the text to be processed in a user role, separating the instructions from the data.
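As a concrete illustration, here is a minimal sketch of a chat completion call with the openai-php/client package, putting the processing instructions in the system role and the text to process in the user role. The exact instruction wording and model settings here are just examples; adjust them to your own wrapper if your library's API differs.

```php
<?php
// composer require openai-php/client
$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'temperature' => 0.2,
    'messages' => [
        // Processing instructions go in the system role…
        ['role' => 'system', 'content' => 'Translate the user text to English and return only the 7 best SEO keywords, comma-separated. No preamble, no explanations.'],
        // …and the data to be processed goes in the user role.
        ['role' => 'user', 'content' => 'iPhone 14 max'],
    ],
]);

echo $response->choices[0]->message->content;
```

Because the instructions and the data are separated, changing the task only means changing the system message.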

Otherwise, it will take a better prompt.

With Davinci I do not have this problem, but with gpt-3.5-turbo I do.
Below is my code. I use the openai-php library.

"which requires the chat completion endpoint and specially-formatted messages" ==> could you give me an example?

I do not understand why the library does not have a specific function to tell GPT: give me the raw answer only, without a preamble like "Sure, your SEO title is: …"

 $parameters = [
        'model' => $engine,  // model to use
        'temperature' => (float)CLICSHOPPING_APP_CHATGPT_CH_TEMPERATURE, // controls the model's creativity
        'top_p' => (float)CLICSHOPPING_APP_CHATGPT_CH_TOP_P, // nucleus sampling probability mass
        'frequency_penalty' => (float)CLICSHOPPING_APP_CHATGPT_CH_FREQUENCY_PENALITY, // frequency penalty to encourage the model to generate more varied responses
        'presence_penalty' => (float)CLICSHOPPING_APP_CHATGPT_CH_PRESENCE_PENALITY, // presence penalty to encourage the model to use words not already present in the prompt
        'max_tokens' => (int)CLICSHOPPING_APP_CHATGPT_CH_MAX_TOKEN, // maximum number of tokens to generate in the response
        'stop' => $top, // sequences that stop the response
        'n' => (int)CLICSHOPPING_APP_CHATGPT_CH_MAX_RESPONSE, // number of responses to generate
        'messages' => [
            // each message must be its own array; otherwise the second
            // 'role'/'content' pair silently overwrites the first
            ['role' => 'system', 'content' => 'You are the assistant.'],
            ['role' => 'user', 'content' => $prompt],
        ],
    ];
Thank you

Try specifying that in your system prompt. “Respond only with the translated value”, or something similar.

This is just the nature of the GPT which is fairly chatty. You can also try it in the Playground to get your prompt working. This isn’t specific to the PHP library.


OK, thank you, but isn't it possible to develop a function to remove that?
I opened an issue / improvement on the openai-php GitHub ==> issues/166
because on my side I am trying to automate product creation with GPT, and I always get this kind of answer; it's not great.

You can either tweak your prompts to get the response you are looking for, or write your own post-processing after getting the response back. It’s not really the responsibility of the API library to post-process your response, because everyone’s requirements are different.
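If you go the post-processing route, a small helper is usually enough. This is a hypothetical sketch (`strip_preamble` is not part of openai-php): it assumes the unwanted preamble is a short label ending in a colon, and strips it along with any surrounding quotes. It will misfire if the real answer itself starts with a colon-terminated phrase, so tune the pattern to your own outputs.

```php
<?php
// Hypothetical post-processing helper, not part of any library.
function strip_preamble(string $reply): string
{
    // Remove a short leading label such as
    // "Titre SEO pour le produit iPhone 14 Pro Max : " (up to 80 chars).
    $cleaned = preg_replace('/^[^:]{0,80}:\s*/u', '', $reply, 1);

    // Trim quotes the model sometimes wraps around the answer.
    return trim($cleaned, " \t\n\r\"'");
}

echo strip_preamble('Titre SEO pour le produit X : "Un super titre"');
```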


I think you’ll have better luck providing specific instructions and a single task in your requests, and try lowering the temperature:

You can also try few-shot prompting, where you include examples of how you want GPT to respond in the prompt.
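In message form, few-shot prompting just means adding one or more example user/assistant turns before the real input, so the model sees the exact output shape you want. The example content below is illustrative only:

```php
<?php
$messages = [
    ['role' => 'system', 'content' => 'Return only the French SEO title, nothing else.'],
    // One worked example (the few-shot part):
    ['role' => 'user', 'content' => 'Samsung Galaxy S23'],
    ['role' => 'assistant', 'content' => 'Découvrez le Samsung Galaxy S23, le smartphone haut de gamme de Samsung.'],
    // The real request:
    ['role' => 'user', 'content' => 'iPhone 14 Pro Max'],
];
```

Pass this array as the `messages` parameter of your chat completion request.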


You’ve shown you know how to use the roles in code. Now use the roles in practice:


A french system message should also get it in the mood:

Vous êtes une IA qui fournit un total de sept mots-clés ou expressions courtes pour les métadonnées optimisées pour le référencement en français, pour l’article ou le titre Web de l’utilisateur. Les articles peuvent être dans n’importe quelle langue mais les mots-clés doivent être traduits en français.

Enveloppez la sortie séparée par des virgules dans un dictionnaire Python, avec l’élément “seo-fr”.

Exemple de sortie :
“seo-fr”: “Apple iPhone 15, batteries plus grandes, durée de vie de la batterie, meilleure performance, smartphones Apple, nouvelle génération, technologie mobile”

This has also made your language processor easier to code, because now the user message is just the text to be processed.
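On the consuming side, a sketch like the following (a hypothetical helper, assuming the model follows the `"seo-fr": "…"` format suggested above) pulls the keywords back out, with a fallback to the raw text when the model ignores the format:

```php
<?php
// Hypothetical parser for replies shaped like: "seo-fr": "mot1, mot2, …"
function extract_seo_fr(string $reply): array
{
    if (preg_match('/"seo-fr"\s*:\s*"([^"]*)"/u', $reply, $m)) {
        // Split the comma-separated keyword list and trim each entry.
        return array_map('trim', explode(',', $m[1]));
    }

    // Model did not follow the format: return the raw reply as one entry.
    return [trim($reply)];
}
```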

If you want to see what's happening, I can give you the link to the website to test; just contact me by email. I think it is better to see what's happening.

Like I said, I'm pretty sure you can get the response you're looking for by adjusting the wording of your prompt. If you'd like additional help, please copy an example request of yours into the OpenAI Playground and share a screenshot along with details of your ideal AI response.


I decided to go back to Davinci, which is more pertinent than gpt-3.5-turbo. For the same approach, Davinci gives me exactly what I want. gpt-3.5-turbo cannot do that because it is optimized for chat.