GPT-3 model for spelling correction is not working perfectly

I have used the GPT-3 model to correct spelling mistakes in the questions we provide to the bot. I used the code below to connect to the model:

var requestData = new
{
    prompt = $"Correct the spelling in the following text: '{question}'",
    max_tokens = 50,
    temperature = 0.7,
    stop = "\n"
};

string apiKey = "open ai key"; // placeholder for the real OpenAI API key
string endpoint = "https://api.openai.com/v1/engines/davinci/completions";

using (HttpClient client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiKey}");

    // Serialize the request object defined above
    // (requires Newtonsoft.Json, System.Net.Http and System.Text)
    var content = new StringContent(
        Newtonsoft.Json.JsonConvert.SerializeObject(requestData),
        Encoding.UTF8,
        "application/json");

    var response = await client.PostAsync(endpoint, content);
    string result = await response.Content.ReadAsStringAsync();
}

GPT-3 returns corrected text for a few words, but it does not return a correction every time.

Hi and welcome to the Developer Forum!

Can you provide some example inputs and outputs showing where the problem occurs?


You are using a deprecated base engine on a deprecated endpoint. That’s the kind of thing an AI from 2021 might suggest.

You will get far better instruction-following and output by using gpt-3.5-turbo and asking it to “improve quality”, if you actually want the power of AI and not just a spell check.

Consult the API reference link in the left bar of the forum, go to “chat”, and see how to use this model and its new “messages” format.

Also, you probably want a temperature of 0 or nearly so. Randomness => Misspeelings
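
Here is a minimal sketch of that call, reusing the HttpClient/Newtonsoft approach from your code. The body shape follows the Chat Completions reference; "question" is assumed to be your existing input variable and the instruction text is just an illustration:

string apiKey = "open ai key"; // placeholder
string endpoint = "https://api.openai.com/v1/chat/completions";

var requestData = new
{
    model = "gpt-3.5-turbo",
    temperature = 0,   // deterministic output for spelling correction
    max_tokens = 50,
    messages = new object[]
    {
        new { role = "system", content = "Correct the spelling in the user's text and return only the corrected text." },
        new { role = "user", content = question }
    }
};

using (HttpClient client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiKey}");
    var content = new StringContent(
        Newtonsoft.Json.JsonConvert.SerializeObject(requestData),
        Encoding.UTF8,
        "application/json");
    var response = await client.PostAsync(endpoint, content);
    string json = await response.Content.ReadAsStringAsync();
}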

I have given ‘smalest animel’ as the text and there is no text in the API result.

Can you please show the API calling screen?

You may have better luck with the chat api - the extra chat envelope tends to ensure a response, especially for very small prompts
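
If you go that route, a quick way to confirm whether any text actually came back is to read choices[0].message.content from the response body. This is only a sketch: it assumes the "json" string from the chat example above and uses Newtonsoft because your code already does:

var parsed = Newtonsoft.Json.Linq.JObject.Parse(json);
string corrected = (string)parsed["choices"]?[0]?["message"]?["content"];

if (string.IsNullOrWhiteSpace(corrected))
{
    // No text came back - log the raw body to inspect finish_reason or any error
    Console.WriteLine(json);
}
else
{
    Console.WriteLine(corrected.Trim());
}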
