I meant to pull the stream from the ChatGPT response where you encountered the hallucination.
I likewise turned off all the tools in ChatGPT under “Customize”, so that it could not use DALL-E; the result was the model’s internal imaginings of producing an image while it was generating language tokens.
I second “gpt-4-sparse-4bit-20b.” Sublime tokens, indeed.
I’m counting the days until the new voice function with real-time video feed arrives; I have so many ideas for use cases! I’m also waiting for memory in the EU… it’s really annoying that it’s available elsewhere and I’m missing out because of EU regulations -_- I hope the GDPR issue will be solved soon.
I’m not sure if this is an account issue. Can anyone please tell me why I’m receiving incoherent output from the OpenAI Assistant? This has been happening for the past two weeks. I have encountered the same issue on two different PCs, which may point to an account issue. It doesn’t seem to matter which model I’m using.
That looks like a temperature issue - what you’d get at the maximum. Try temperature:0.5
as an API parameter.
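As a sketch of what that suggestion looks like in practice, here is a minimal Chat Completions request body with the temperature lowered. This is an illustration only: the model name and prompt are placeholders, and the body would still need to be sent with your own client and API key.

```python
# Minimal sketch of a Chat Completions request with a lowered temperature.
# temperature ranges 0-2; lower values make output more deterministic,
# which helps when you are getting incoherent, max-temperature-style text.
request_body = {
    "model": "gpt-4o",          # placeholder model name
    "temperature": 0.5,
    "messages": [
        {"role": "user", "content": "Say hello."},  # placeholder prompt
    ],
}
```

The same body shape works for any Chat Completions model; only the `temperature` value is the point here.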
OMG! I think you’re absolutely right. Thank you!
That’s a little off topic? This announcement is specific to the API.
I want the model to respond with the context of the previous prompt when I send another prompt, using the gpt-4o model API.
[image]
An API clearly does not have a UI, but you can quite easily do that with memory.
This is using Chat Completions and 4o
(aka “Cat Completions” )
You could provide a UI element too, instead of having to spell it out … but the UI is up to you … right? With sufficient temperature I doubt you’d get the same response twice and you could check that anyway?
I want a similar answer from gpt-4o API.
[image]
that is from the gpt-4o model via the Chat Completions API!
(note that gpt-4o
is a model not an API)
The UI to achieve that is up to you!
Yes, but the response to the second prompt doesn’t take the first prompt into account in the gpt-4o API.
[image]
Yes it is.
In my case the entire history is being sent, so GPT 4o knows the context.
Alternatively I could add a “regen” button and it would send the first two things again with a hidden prompt.
You can achieve what you want to achieve with the current API, you just have to add things to your UI and send an appropriate prompt combining what it has already output and your prior request.
Yes. Can you tell me how to send the entire history to the gpt-4o model?
see:
That’s a neat idea.
If you hit that button, can you “turn down the temperature” so the response you get is similar to the first response you got and are trying to regen?
It seems the problem you describe is that you are not giving the AI chat history. A model on the chat completions endpoint does not remember what you sent to it before, and the same input has a high likelihood of producing the same output.
You must provide previous messages also:
system: you are a roast bot.
user: tell a funny joke!
assistant: Your face!
user: that’s not funny
assistant: You’re right - your face is no laughing matter.
The last user input is the new question. By seeing previous responses, the AI has context and understanding of prior responses.
You do this in your code that records user sessions.
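The conversation above can be sketched in code. This is a minimal illustration, assuming the standard Chat Completions message format; `build_request` and the sample strings are hypothetical names for this example, not part of the API itself.

```python
# Sketch of sending the full conversation history on each Chat Completions call.
# The endpoint is stateless: the only "memory" is the messages list you resend.
history = [
    {"role": "system", "content": "you are a roast bot."},
    {"role": "user", "content": "tell a funny joke!"},
    {"role": "assistant", "content": "Your face!"},
    {"role": "user", "content": "that's not funny"},
]

def build_request(history, new_user_message, model="gpt-4o"):
    """Append the new user question to the stored history and build the request body."""
    messages = history + [{"role": "user", "content": new_user_message}]
    return {"model": model, "messages": messages}

body = build_request(history, "okay, tell a better one")
# After each API response, append the assistant's reply to `history`
# so the next request carries the full context.
```

Your session-recording code just has to keep `history` around between requests and append each new user message and assistant reply to it.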
I am implementing this in PHP, so I use sessions to send the history to the gpt-4o model, and now it’s working.
I’m fascinated by OpenAI’s new model for the API!
It’s a giant leap forward in the field of artificial intelligence, and I’m sure it will have a profound impact on a wide range of applications. The model’s ability to generate high-quality text, translate languages, and answer questions informatively is truly impressive.