Good morning.
I am testing OpenAI's API to generate unit-test suggestions for C# classes. I have written a Python script that reads a .cs file and sends it with the following prompt:
"role": "system", "content": "You are an expert in .NET 6 and C#"
"role": "user", "content": "Create a unit test with xUnit, Shouldly, and Moq for the following class: "
I am using the OpenAI Python library (openai.ChatCompletion.create) with the gpt-3.5-turbo model.
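For reference, the relevant part of the script looks roughly like this (the function name and the way the API key is loaded are illustrative, not my exact code):

```python
import openai

openai.api_key = "..."  # placeholder; loaded from config in the real script

def suggest_tests(cs_path: str) -> str:
    # Read the C# class source that gets appended to the user prompt.
    with open(cs_path, encoding="utf-8") as f:
        cs_source = f.read()

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are an expert in .NET 6 and C#"},
            {"role": "user",
             "content": "Create a unit test with xUnit, Shouldly, and Moq "
                        "for the following class: " + cs_source},
        ],
    )
    # Return the text of the first (and only) completion.
    return response.choices[0].message.content
```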
The issue is that the results are inconsistent. Sometimes it generates a test class, but the generated classes differ from one request to the next. Other times it returns responses that are completely unrelated to the prompt. Sometimes it says it needs more information, or that we haven't provided the class (which is not true, because the prompt is exactly the same every time). Occasionally it even cites ethics and refuses to help me "cheat".
Is there any way to improve this? I understand that the model can generate different responses and will not always produce identical output, but we need the results to be consistent with what we asked for: if we ask for tests, it should generate tests, even if they differ between requests.
Best regards.