Custom Spell Checker API problems

I am trying to build spell checking into After Effects to check the spelling of the content I am creating. I have the script running and I am able to make API requests, but I don't get consistent responses from gpt-3.5-turbo-16k-0613. Here is the functionality:

  1. The script scrapes all text from the text layers in After Effects (a rough sketch of this step follows the API code below)
  2. The script separates the contents of the different layers with a semicolon
  3. The API request gets made. Here is the code:

import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo-16k-0613",
  messages: [
    {
      role: "system",
      content:
        "As an expert in grammar and spelling, your task is to analyze a series of text snippets. The snippets may contain grammatical or spelling errors, and your role is to identify them and provide correction recommendations if possible. The snippets are separated by a semicolon. Don't split the snippets; analyze from semicolon to semicolon. Respond with an array for every snippet. Separate multiple arrays with a comma. Every array contains two elements: first the original text snippet and second your analysis. For your analysis, there are three options. 1) The snippet I provided contains no error. In this case, respond with 0 as your analysis. Example: ['Correct snippet', 0]. 2) The snippet I provided contains an error and you know how to correct it. In this case, respond with the correction as your analysis. Example: ['Wrong snippet', 'Correction of wrong snippet']. 3) The snippet I provided contains an error and you don't know how to correct it. In this case, respond with 1. Example: ['Wrong snippet', 1]. What you don't have to correct: incomplete sentences, product names, excess blanks, em dashes. Always use 'ss' instead of the German 'ß'.",
    },
    {
      role: "user",
      content: textFromAfterEffects, // placeholder: the semicolon-separated text scraped from After Effects
    },
  ],
  temperature: 0.1,
  max_tokens: 500,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
});
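
For completeness, here is roughly what steps 1 and 2 look like on the After Effects side. This is only a sketch, assuming an ExtendScript environment and that all text layers live in the currently active composition; the function name is mine:

// Sketch: collect the text of every text layer in the active comp
// and join the snippets with semicolons.
function collectTextSnippets() {
  var comp = app.project.activeItem; // assumes a composition is active
  var snippets = [];
  for (var i = 1; i <= comp.numLayers; i++) { // AE layers are 1-indexed
    var layer = comp.layer(i);
    if (layer instanceof TextLayer) {
      // "Source Text" returns a TextDocument; .text is the raw string
      snippets.push(layer.property("Source Text").value.text);
    }
  }
  return snippets.join(";");
}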

This doesn't get me consistent responses. Sometimes it misses obvious grammar mistakes, or it corrects the text snippet in the first element of the array (which should stay the original). I tried giving it more examples in the prompt, but that didn't make it better.

Is this a case for fine-tuning? Is the prompt too complex?

Thanks for the help!

One of the first problems is that the AI can't do much evaluation work until it produces its own text. You are asking it to decide on its analysis value before it has actually generated the possible corrections.

Let’s try a new output format specification and toss the rest.

Improve writing quality if needed.

// output format
{"original_text": "<repeat input>", "corrected_text": "<AI rewrite>", "return_value": <number enum>}

return_value enum table:
0: corrections unneeded/insignificant
1: processing or understanding error
2: corrections made

// input text
blah, blah, and blah.
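
Putting that together, a revised request might look roughly like this. This is only a sketch: the exact prompt wording and the one-snippet-per-request framing are my suggestions, not something I have benchmarked.

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo-16k-0613",
  messages: [
    {
      role: "system",
      content:
        "You are a grammar and spelling checker. Improve writing quality if needed. " +
        "For each input text, respond with exactly one JSON object in this format: " +
        '{"original_text": "<repeat input>", "corrected_text": "<AI rewrite>", "return_value": <number>} ' +
        "where return_value is 0 (corrections unneeded/insignificant), " +
        "1 (processing or understanding error), or 2 (corrections made).",
    },
    { role: "user", content: snippet }, // assumption: one snippet per request keeps parsing simple
  ],
  temperature: 0.1,
  max_tokens: 500,
});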


With an unvarying output format, you can then transform the responses into whatever format you need in code.
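
For example, a minimal sketch of that post-processing, assuming the model returns one JSON object per snippet as above:

// Turn the model's JSON reply back into the [original, analysis]
// pairs the original script expected.
const result = JSON.parse(response.choices[0].message.content);
const pair =
  result.return_value === 2
    ? [result.original_text, result.corrected_text] // correction made
    : [result.original_text, result.return_value];  // 0 = fine, 1 = error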


Example output:

{
  "original_text": "Honestly, it really depends on what your goals are and what you want to accomplish with the embeddings.\n\nWhen you're averaging the parts of a larger embedding, the idea is to get a single embedding vector to more-or-less accurately reflect the semantic meaning of the whole.\n\nA natural question to ask might be, “if the whole document is larger than 8k tokens, why would we care about retrieving the whole if we cannot fit it into context anyway?” And that's a great question!\n\nOne potential case might be if the goal of the retrieval is simply to identify relevant documents as a whole in order to reference them. The theory might go something like, you embed the complete texts of a bunch of novels, then you might be able to retrieve the book based on a loose description of it.",
  "corrected_text": "The effectiveness of using embeddings really hinges on your goals and what you intend to achieve with them.\n\nWhen you average the components of a larger embedding, the main objective is to derive a singular embedding vector that roughly yet accurately captures the entire semantic meaning.\n\nA pertinent question to consider could be, ‘If a document exceeds 8,000 tokens, what is the significance of retrieving the entire document if it cannot be effectively accommodated within the context?’ This is indeed an excellent question!\n\nOne plausible scenario arises if the aim of retrieval is simply to identify relevant complete documents for referencing. The underlying theory might suggest that by embedding the complete texts of numerous novels, you could potentially retrieve a specific book based on a loosely defined description of it.",
  "return_value": 2
}