Hello. I’m trying to make a feature for our language learning software that allows the user to participate in an exchange with ChatGPT, where ChatGPT plays the role of, say, an apartment owner.
It seems to work reliably in the ChatGPT web UI (https://chat.openai.com/chat), but with the API (gpt-3.5-turbo model) or the Playground in chat mode, the assistant’s responses often contain a script covering both roles (Owner: …, Tenant: …). I’ve tried putting the first prompt as a system message and as a user message, playing with the temperature and length parameters, and rephrasing the prompt. Any hints? I also seemed to get 500s when using the “stop” parameter. Is text-davinci better at following instructions?
"We will engage in a role-playing dialogue. The dialogue will take place in turns, starting with you. Always wait for my response. Use a conversational, informal, colloquial style. Try to use simple English, so that a learner of English can understand.
You will pretend to be the owner of an apartment that I am renting in Mexico City. Pretend to be an unpleasant and unreasonable person. Invent an amusing, far-out situation between yourself, the owner, and me, the tenant. Write the first part of the exchange on behalf of the owner, then stop and allow me to respond as the tenant."
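For reference, here’s roughly how I’m calling it. This is a minimal sketch using the openai Python package; `ROLEPLAY_PROMPT` is just a placeholder for the prompt quoted above, and the parameter values are ones I’ve been experimenting with:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Placeholder for the role-play prompt quoted above.
ROLEPLAY_PROMPT = "We will engage in a role-playing dialogue. ..."

# Full conversation so far; the prompt goes in as the system message.
messages = [{"role": "system", "content": ROLEPLAY_PROMPT}]

def tenant_turn(user_text):
    """Send one tenant reply and return the owner's next line."""
    messages.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0.7,    # one of the values I tried
        max_tokens=200,
        stop=["Tenant:"],   # this is the parameter that gave me 500s
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```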
I’ve recreated this unanswered scenario, showing some of the changes needed to tell the AI what role it is playing and how the roleplay is to be conducted, so that it doesn’t treat the prompt as a “simulation” to continue on its own.
You will now act as Juan, the fictional English-speaking landlord of a rental apartment in Mexico City. I will interact with Juan about problems with my rental of his apartment in a turn-based roleplay scenario, and you will answer each input as Juan. Juan is a rather abrasive and easily-upset person when dealing with tenants, and uses Californian colloquial and informal English as a US native speaker, but will use simplified language, as I am learning English from this roleplay. The Juan roleplay will not end, nor will ChatGPT interject with its own AI language, until I say “exit Juan”. Begin with Juan visiting because he found an unexpected issue relating to the premises or my renting of the apartment.
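If you’re driving this through the API rather than the web UI, the key point is that the prompt above goes in as the system message and you replay the whole history on every call, so the model always sees whose turn it is. A rough sketch (openai Python package, gpt-3.5-turbo; details like the exit check are just illustrative):

```python
import openai

openai.api_key = "sk-..."

JUAN_PROMPT = "You will now act as Juan, ..."  # the full prompt above

history = [{"role": "system", "content": JUAN_PROMPT}]
history.append({"role": "user", "content": "Begin the roleplay."})

while True:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,  # resend the full turn history on every call
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Juan:", reply)

    user = input("You: ")
    if user.strip().lower() == "exit juan":
        break
    history.append({"role": "user", "content": user})
```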
Since GPT-3.5 Turbo was launched, they have claimed that it is the same model used in ChatGPT, but according to my thousands of tests, it has never been and still isn’t the same model. GPT-3.5 Turbo (no matter if it’s the first or second version) and ChatGPT reply very differently.
GPT-3.5 Turbo is much less accurate than ChatGPT and can analyze much less information per message (even with the 16k version, the amount of information it can analyze properly is almost the same, or exactly the same, as with the 4k version). You can add more text, yes, but if it is complex, it won’t analyze most of it very well.
Quite a lot of it is down to the internal system prompt, along the lines of: “You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. Knowledge cutoff: {knowledge_cutoff} Current date: {current_date}”. That’s not it exactly, but you get the idea.
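If you want gpt-3.5-turbo to behave more like the web UI, you can send a similar system message yourself. The exact internal wording isn’t public, so this is only an approximation filling in that template:

```python
from datetime import date

# Approximation of ChatGPT's internal system prompt; not the exact text.
system_msg = (
    "You are ChatGPT, a large language model trained by OpenAI. "
    "Answer as concisely as possible. "
    "Knowledge cutoff: 2021-09. "  # gpt-3.5-turbo's training cutoff
    f"Current date: {date.today().isoformat()}"
)

messages = [
    {"role": "system", "content": system_msg},
    {"role": "user", "content": "Hi!"},
]
```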
I still doubt that GPT-3.5 Turbo can analyze the same amount of information as ChatGPT without sacrificing accuracy (ChatGPT probably has more memory resources), but thanks for this tip!