GPT-4-turbo doesn't respond to questions about AI models

I noticed that GPT-3.5 and GPT-4 models called via the API don't respond to questions about AI models, whereas with chat.openai.com I get answers.

A simple question like "What are popular AI models?" or "What is machine learning?" gets no response.

I use the following query:
const response = await openai.chat.completions.create({
  model: "gpt-4-turbo",
  messages: messages,
  presence_penalty: 0.5,
  frequency_penalty: 0.3,
  max_tokens: 300,
  tools: tools
});

For general topics, I get answers without any issue.

Hi @arun.rebala ,

There must be something wrong with your API call or model parameters. I tried your prompt and the models do respond to it: IngestAI.io, which obviously uses the API, answers the query you provided.

One more thing: I don't think it's a good idea to set presence_penalty: 0.5 and frequency_penalty: 0.3 unless penalization is something you actually need, because you're pushing the model toward tokens that are less appropriate (less probable).
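As a minimal sketch of this advice: both penalties default to 0, so simply leaving them out avoids skewing token probabilities. This assumes an initialized `openai` client, as in the original post; the question text here is just the one from the thread.

```javascript
// Assumed user question from the thread
const messages = [
  { role: "user", content: "What are popular AI models?" }
];

// Same request as the OP's, with presence_penalty and
// frequency_penalty omitted so both fall back to their default of 0
const request = {
  model: "gpt-4-turbo",
  messages: messages,
  max_tokens: 300
};

// const response = await openai.chat.completions.create(request);
```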

Rather than a plug where you can't see the parameters or prompt, here's a screenshot from OpenAI's Playground showing the API parameters, the OP's input alone, and the AI's response.

The system message was just a space character, to clear the placeholder text. In practice, you would "program" the AI to act a particular way, with a strong new identity, because gpt-4-turbo has been made particularly unsuitable for custom products. I could demonstrate this again, just as I did hours after "DevDay".
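To illustrate what "programming" the AI with a system message might look like: a hypothetical identity prompt (the persona name and wording below are my own invention, not from the thread) placed ahead of the user's question.

```javascript
// Hypothetical system message giving the model a strong identity,
// so it answers domain questions directly instead of deflecting
const messages = [
  {
    role: "system",
    content:
      "You are TechTutor, an expert assistant that answers questions " +
      "about AI and machine learning clearly and directly."
  },
  { role: "user", content: "What is machine learning?" }
];
```

The system message goes first in the array; the API reads it as standing instructions that shape every subsequent reply.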


I agree with both of you. As I tune the parameters in the Playground, I'm able to get responses. I haven't reached the answers-everything stage yet, but I'm sure the issue is in the parameter settings. Thanks for your help. Cheers!
