Problem using the Codex model to generate code

These are my Playground settings:
[screenshot of Playground settings]

And this is my Playground input: a simple Unity C# component.

I tried submitting different instructions, such as:

- Add a function that can copy the text.text to systemCopyBuffer
- Add a separate function named Cpoy(), it can cache text.text to systemCopyBuffer
- Add a function named Cpoy(), it cache _text.text to systemCopyBuffer, put it after awake function

They all produce the same result.

I want to understand the difference between a base API request and a Codex API request for C#. Based on these results, it seems that the performance of Codex API requests on C# is not very good. Could it be that my instructions all lead to the same keyword? (grammar corrected by chat.openai haha)

Codex performs better in some languages than in others; this might be the case with C#.
However, keep in mind that Codex has been discontinued, and that's because you can get better code from gpt-3.5 or 4, although you have to adapt your prompts to chat mode.


Well, I take your point, and thanks for your reply.

Actually, the reason I used Codex requests is that there were several inconveniences with chat requests before the GPT-4 API release. First, GPT-3.5 only accepts a maximum of 2K tokens in one request. Second, it requires a manually constructed "chat history" to provide context, which means it can only handle a very limited length of code. In the common scenario of a chat request, the code generated by GPT often has to be modified multiple times to meet customized needs, and that iteration depends on context support.
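To illustrate the bookkeeping involved: with the chat endpoint, the full conversation has to be resent on every request. A minimal Python sketch of building such a payload (the model name and system prompt here are illustrative assumptions, not from the thread):

```python
# Sketch of manually maintained chat history for a chat-completions request.
# The payload shape follows the /v1/chat/completions request body; the
# model name and system prompt are illustrative placeholders.

history = []  # must be resent in full with every request to keep context


def build_chat_payload(user_message, model="gpt-3.5-turbo"):
    """Append the new message and return the full request payload."""
    history.append({"role": "user", "content": user_message})
    return {
        "model": model,
        # The ENTIRE history rides along on every request, consuming
        # token budget that could otherwise hold code.
        "messages": [
            {"role": "system", "content": "You are a C# coding assistant."}
        ] + history,
    }


payload = build_chat_payload(
    "Add a Copy() function that caches _text.text to "
    "GUIUtility.systemCopyBuffer."
)
```

Every follow-up edit appends to `history`, so the effective room for code shrinks with each round trip.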

Another issue is that, for developers who already have some coding skills, many of the comments and explanations are unnecessary, but chat requests do not seem to support a parameter to suppress them. In contrast, the advantages of Codex requests are clear: a maximum of 4K tokens, concise code edits, and no need to construct a complex historical context.
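By comparison, a Codex edit request carried only the code plus a single instruction, with no history to manage. A sketch of the request body for the (now-deprecated) edits endpoint; the Unity snippet is a stand-in for the component from the Playground input, not the original:

```python
# Sketch of a request body for the deprecated /v1/edits endpoint.
# "code-davinci-edit-001" was the Codex edit model; the Unity component
# below is a placeholder standing in for the Playground input.

unity_source = """
using UnityEngine;
using UnityEngine.UI;

public class Example : MonoBehaviour
{
    [SerializeField] private Text _text;

    private void Awake() { }
}
""".strip()


def build_edit_payload(code, instruction):
    """Return an edits-style payload: input code plus one instruction."""
    return {
        "model": "code-davinci-edit-001",
        "input": code,               # the code to edit, sent verbatim
        "instruction": instruction,  # what to change -- no chat history
    }


payload = build_edit_payload(
    unity_source,
    "Add a function named Copy() that caches _text.text to "
    "GUIUtility.systemCopyBuffer, after the Awake function.",
)
```

The payload stays the same size no matter how many edit rounds you run, which is exactly the convenience being described above.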

Although I believe the 8K/32K token maximum of GPT-4 can meet the vast majority of requirements, price is also an aspect that must be considered. For building secondary development applications, increasing the maximum number of tokens does not completely solve the issues with API requests.