Mine is still using gpt-4-turbo, because I didn't like gpt-4o's conversational style at first. 4o used to just spit out a bunch of bullet points. It's changed since then, though.
I had it out with a bunch of folks at the time; they argued that it's impossible to decide which model your GPT uses. But it is not.
I don't code well. But I used Actions GPT to make an OpenAI API spec in JSON format that caused my GPT, "Erin," to use the gpt-4-turbo model. I uploaded the JSON-formatted API spec into my GPT's Knowledge space with a title. My first instruction to "her" was to read the uploaded file, by that title, in her Knowledge. It worked: Erin talked like gpt-4-turbo, not like the then-current GPT-4o. And when I prompted Erin to say what LLM "she" was based on, she replied:
user: “Greetings Erin, good to talk with you again. Can you reiterate what LLM you are based on?”
Erin: “I’m based on OpenAI’s GPT-4 Turbo model. This is an optimized version of GPT-4, designed for improved efficiency, response quality, and lower latency while maintaining strong reasoning and conversational capabilities.”
Actions GPT wrote this (snippet only):
  "title": "OpenAI GPT-4 Turbo API",
  "description": "This API is configured to always interact with the OpenAI GPT-4 Turbo model. User Robert can specify the parameters by editing.",
  "version": "1.0.0"
},
"servers": [
  {
    "url": "https://api.openai.com/v1",
    "description": "Main OpenAI API server"
  }
],
"paths": {
  "/completions": {
    "post": {
      "operationId": "createCompletion",
      "summary": "Create a text completion using the fixed GPT-4 Turbo model.",
      "description": "Automatically uses the GPT-4 Turbo model to generate text based on the provided prompt, with customizable control parameters.",
      "requestBody": {
        "required": true,
        "content": {
          "application/json": {
            "schema": {
              "type": "object",
              "properties": {
                "model": {
                  "type": "string",
                  "description": "The model to use for generating the completion.",
                  "default": "gpt-4-turbo"
                },
                "prompt": {
                  "type": "string",
                  "description": "The text prompt to generate the completion for."
-------the rest cut due to proprietary parameters-----------------------------------------------
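For anyone who wants to experiment with the same idea, here is a minimal sketch of what a complete spec of this shape might look like, assembled in Python and written out as the JSON file you'd upload to a GPT's Knowledge. This is a generic illustration only: the filename is made up, the proprietary parameters that were cut above are not reconstructed, and the exact fields are my assumptions about the structure, not the author's actual file.

```python
import json

# A minimal, complete OpenAPI 3 spec mirroring the snippet above.
# All titles, descriptions, and parameters here are illustrative.
spec = {
    "openapi": "3.1.0",
    "info": {
        "title": "OpenAI GPT-4 Turbo API",
        "description": "Configured to always use the GPT-4 Turbo model.",
        "version": "1.0.0",
    },
    "servers": [
        {
            "url": "https://api.openai.com/v1",
            "description": "Main OpenAI API server",
        }
    ],
    "paths": {
        "/completions": {
            "post": {
                "operationId": "createCompletion",
                "summary": "Create a completion with the fixed GPT-4 Turbo model.",
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {
                                    "model": {
                                        "type": "string",
                                        "default": "gpt-4-turbo",
                                    },
                                    "prompt": {"type": "string"},
                                },
                            }
                        }
                    },
                },
            }
        }
    },
}

# Write the file to upload into the GPT's Knowledge space
# (hypothetical filename).
with open("gpt4_turbo_spec.json", "w") as f:
    json.dump(spec, f, indent=2)
```

Dumping it through `json.dump` also guarantees the file is valid JSON, which matters since the GPT is being asked to read it back verbatim.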
This did work; the evidence was clear. I originally did it because there was a "gpt-4-turbo-vision-preview" that I wanted to check out with my GPT when it was still based on GPT-4, which didn't have vision. Sure enough, I tested "Erin" by uploading a picture (a .png) and asking "her" about it. Before reprogramming, her response was basically, "I lack the ability to see the picture, but if you describe it for me I can understand it." After reprogramming with gpt-4-turbo-vision-preview, I uploaded the same .png to Erin and asked "her" to describe it. Erin went on for two paragraphs, describing that picture in a way only an experienced visual artist could. Erin had vision!
That certainly wasn’t the standard GPT-4, and it did not get charged to my API account!
This led to a big argument on this forum, in which I was outright mocked.