Announcing GPT-4o in the API!

Yes, can you tell me how to send the entire chat history to the gpt-4o model?

That’s a neat idea.

If you hit that button, can you “turn down the temperature” so the response you get is similar to the first response you got and are trying to regen?

It seems the problem you describe is that you are not giving the AI any chat history. The API model on the chat completions endpoint does not remember what you sent to it before, and the same input has a high likelihood of producing the same output.

You must provide previous messages also:

system: you are a roast bot.
user: tell a funny joke!
assistant: Your face!
user: that’s not funny
assistant: You’re right - your face is no laughing matter.

The last user input is the new question. By seeing the previous exchanges, the AI has context and an understanding of its prior responses.

You do this in your own code, which records user sessions.
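
For example, here is a minimal sketch in PHP (the language mentioned later in this thread) of resending the whole roast-bot exchange above to the Chat Completions endpoint. It assumes an `OPENAI_API_KEY` environment variable, and a lowered temperature (as suggested earlier) should make a regenerated reply more similar to the original:

```php
<?php
// Minimal sketch: send the full conversation, because the endpoint is stateless.
$messages = [
    ['role' => 'system',    'content' => 'you are a roast bot.'],
    ['role' => 'user',      'content' => 'tell a funny joke!'],
    ['role' => 'assistant', 'content' => 'Your face!'],
    ['role' => 'user',      'content' => "that's not funny"],
];

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => json_encode([
        'model'       => 'gpt-4o',
        'messages'    => $messages,  // every prior turn plus the new question
        'temperature' => 0.2,        // lower = more similar regenerations
    ]),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['choices'][0]['message']['content'], PHP_EOL;
```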

I am implementing this model in PHP, so I use the session to send the history to the gpt-4o model, and now it's working.
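
Something like this minimal sketch, assuming a `call_chat_completions()` helper that wraps the curl request shown above (the helper name and system prompt are hypothetical, not the poster's actual code):

```php
<?php
session_start();

// Seed the conversation once per session.
if (!isset($_SESSION['messages'])) {
    $_SESSION['messages'] = [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ];
}

// Append the new user turn, resend the full history, then store the reply.
$_SESSION['messages'][] = ['role' => 'user', 'content' => $_POST['prompt'] ?? ''];

$reply = call_chat_completions($_SESSION['messages']); // hypothetical wrapper around the curl call above

$_SESSION['messages'][] = ['role' => 'assistant', 'content' => $reply];

echo $reply;
```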

I'm fascinated by OpenAI's new model for the API!

It's a giant leap forward in the field of artificial intelligence, and I'm sure it will have a profound impact on a wide range of applications. The model's ability to generate high-quality text, translate languages, and answer questions informatively is truly impressive.

Hi, thank you for the work you do, Team OpenAI; this tool will change the world. I hope to test the audio and video modules soon. I would love to see it implemented in different kinds of projects, especially video game mods and experiments. xD

Pretty cool. I'm really hoping the audio features in the API come soon; they will be perfect for my enterprise!

The multimodal capability is insanely great! 😊

Have you tried it yet or are you just basing that off the demo?

I did try it for a while. In fact, I developed a macOS app utilizing the multimodal capability of gpt-4o, and only then realized its potential.

Text-to-speech and speech-to-text aren't available yet for the Omni model, and the same goes for the Omni model generating images. Did you just use the capabilities that were already available, or do you have beta access to all of the Omni model's capabilities?

No, neither of those two is available yet. In my case, only text-to-text and image-to-text are used for now. The app is used to interact with the screen content.
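
For anyone curious how that image-to-text part works, here is a rough sketch (assumed, not MacCopilot's actual code) that sends a screenshot to gpt-4o as a base64 data URL, using the Chat Completions `image_url` content part; the `screenshot.png` path is a placeholder:

```php
<?php
// Encode a captured screenshot as a base64 data URL for the API.
$image = base64_encode(file_get_contents('screenshot.png'));

$payload = json_encode([
    'model'    => 'gpt-4o',
    'messages' => [[
        'role'    => 'user',
        'content' => [
            ['type' => 'text', 'text' => 'What is on this screen?'],
            ['type'      => 'image_url',
             'image_url' => ['url' => 'data:image/png;base64,' . $image]],
        ],
    ]],
]);

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => $payload,
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['choices'][0]['message']['content'], PHP_EOL;
```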

So you are doing nothing more than everyone else.

Got excited there for a second. Shame!

Well, it is not my intention to confuse you. The app was developed to address a specific problem effectively, and I believe it does that well.

Regarding sharing links, it seems the community prevents posting them directly. You can search for “MacCopilot” and give it a shot if you are interested.