How to make use of GPT to rewrite text based on custom rules

I have text written by end users in English. The text is quite technical in nature, containing design specs, system specs, etc. We need to verify whether this text follows a set of guidelines. These guidelines span hundreds of pages (they won’t fit in the context window). How can we make use of GPT to rewrite the original text as per the guidelines?

Example:
Original text: The nut and bolt should be at the corner
As per the guidelines, there’s ambiguity regarding the position of the nut and bolt.
Rewritten text: The nut and bolt should be at the corner of the frame (for example)

Thanks in advance!

It’s an interesting question, and I don’t think the answer is trivial; it may not even be possible at this point in time.

You might be able to vector-embed the rules and then query the input text against them. That would pull back “similar” content from the corpus, and you could run the GPT model with that as context and a prompt instructing it to check one against the other…
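For illustration, a minimal sketch of that retrieval step might look like the following (the rule strings, the embedding model, and the plain cosine-similarity lookup are all assumptions; a real setup would likely use a proper vector database):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    # Embed a list of strings with OpenAI's embeddings endpoint.
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# Illustrative rules; in practice these come from chunking the guideline documents.
rules = [
    "Position statements must name the reference component (e.g. 'corner of the frame').",
    "All dimensions must include units.",
]
rule_vectors = embed(rules)

def top_rules(text, k=3):
    # Return the k rules most semantically similar to the input text.
    v = embed([text])[0]
    sims = rule_vectors @ v / (np.linalg.norm(rule_vectors, axis=1) * np.linalg.norm(v))
    return [rules[i] for i in np.argsort(sims)[::-1][:k]]

context = "\n".join(top_rules("The nut and bolt should be at the corner"))
# `context` can now go into the system prompt, with an instruction to check
# the user's text against these rules.
```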

It would certainly need careful consideration and a fair bit of research to see if the idea is viable, unless someone else has already done it… I haven’t seen an example myself.

Would make for a very interesting research proposal.


Foxabilo is on the right track; the small context window will require you to find a way around this limitation. I don’t know if embedding alone will work, because the embeddings are going to capture the semantics of the text you are checking and may not correctly match that to the violation of a particular rule when trying to pull from a vector database.

Your best bet might be to use LangChain to chunk the rules individually, then put those into a CSV, one rule per line. Then take a reasonable amount of the text you are examining (fewer than 8,000 tokens for the GPT-3.5-Turbo-16k model), have your code feed those tokens in, and, rule by rule, ask ChatGPT whether any of the rules mentioned were violated in the text presented.
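A rough sketch of that rule-by-rule loop, assuming the OpenAI Python client and a placeholder rules.csv with one rule per line:

```python
import csv
from openai import OpenAI

client = OpenAI()

# One rule per CSV line, as described above; "rules.csv" is a placeholder name.
with open("rules.csv", newline="") as f:
    rules = [row[0] for row in csv.reader(f) if row]

# A chunk of the text under review, kept well under the model's context limit.
document_chunk = "The nut and bolt should be at the corner."

violations = []
for rule in rules:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-16k",
        messages=[
            {"role": "system", "content": "You check technical text against one guideline. "
                                          "Answer YES or NO first, then explain briefly."},
            {"role": "user", "content": f"Guideline: {rule}\n\nText:\n{document_chunk}\n\n"
                                        "Does the text violate this guideline?"},
        ],
        temperature=0,
    )
    answer = resp.choices[0].message.content
    if answer.strip().upper().startswith("YES"):
        violations.append((rule, answer))
```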

To see if this is even viable, paste, say, three of the rules into the Playground or ChatGPT proper, followed by a short segment of text that violates one of them, and ask whether any of the rules are not followed in the text presented. If you get pretty good results, then looping through the entire rule set a few rules at a time (which would be expensive-ish) might do the trick.

You can even ask ChatGPT to respond in JSON to make the individual responses easier to parse later.
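For example, something along these lines (the JSON shape and the prompt wording are just illustrative, and the model can occasionally stray from pure JSON, so parse defensively):

```python
import json
from openai import OpenAI

client = OpenAI()

rule = "Position statements must name the reference component."
text = "The nut and bolt should be at the corner."

resp = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": (
            "You check technical text against one guideline. Respond with JSON only, "
            'in the form {"violated": true or false, "reason": "<one sentence>"}.'
        )},
        {"role": "user", "content": f"Guideline: {rule}\n\nText:\n{text}"},
    ],
    temperature=0,
)

# May raise a JSONDecodeError if the model strays from pure JSON; re-prompt in that case.
verdict = json.loads(resp.choices[0].message.content)
if verdict["violated"]:
    print(verdict["reason"])
```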

It’s not a task for the faint of heart.

Hello Ivan!

A more ChatGPT-esque way to solve this issue would just be to speak to it like a human.

One thing I tried today (I’m currently working with a client to design an assistant that can reference all relevant code sections in a multitude of books on building standards based on a query) was appending an additional question after the prompt.

I’d be very interested to see what happens if you embed those rules, then append a question to the end of the user prompt, something along the lines of: “What guidelines should be used to correct the above text? Show me what the above text should look like after the guidelines are applied.”

The idea here is that the question will adjust the semantic meaning of the prompt to encourage it to pull better embeddings, and the following sentence is a directive telling the model what to do with the text and embeddings.
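A sketch of what that two-step prompt could look like in code; the top_rules helper here is just a stand-in for the embedding lookup sketched earlier in the thread, and the model name is an assumption:

```python
from openai import OpenAI

client = OpenAI()

def top_rules(query, k=3):
    # Stand-in for the embedding lookup sketched earlier in the thread.
    return ["Position statements must name the reference component."]

user_text = "The nut and bolt should be at the corner"

# The appended question shifts the semantics of the embedding query toward
# "which guidelines apply", which should pull back more relevant rules.
guidelines = top_rules(
    user_text + "\n\nWhat guidelines should be used to correct the above text?"
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Relevant guidelines:\n" + "\n".join(guidelines)},
        {"role": "user", "content": (
            user_text
            + "\n\nWhat guidelines should be used to correct the above text? "
              "Show me what the above text should look like after the guidelines are applied."
        )},
    ],
)
print(resp.choices[0].message.content)
```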

Rather than worrying about whether or not it’s referencing ALL the possible rules that could apply, you can adjust your prompting until the vast majority of your test examples come back acceptable. That tells you the model is referencing the rules the way a human with knowledge of the entire set would: not by thinking about every single rule, but only about the rules that apply to the prompt.


I’d add onto @Foxalabs’s and @amazingjoe’s responses that you should rely on the structure of language rather than the structure of documents. Paragraphs, for example, are usually written to contain one whole contextual piece of information. Sure, a paragraph is related to the previous and next paragraphs, but it is usually a piece of information that can stand on its own.

If you break your document up by paragraphs, you can use a rolling window with pretty good contextual results (see the sketch below). I’d start with that method first and only get more complicated as needed; maybe you’ll have to scale up to subsections and pair them with chunk summaries, but I would experiment with the simple paragraph approach first.
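Something like this, where "spec.txt" is a placeholder for the document under review:

```python
def paragraph_windows(document, size=3, step=1):
    # Paragraphs are split on blank lines; each is treated as a self-contained unit.
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    # Slide a window of `size` paragraphs, advancing `step` at a time, so every
    # paragraph is seen alongside some of its neighbours for context.
    for i in range(0, max(len(paragraphs) - size + 1, 1), step):
        yield "\n\n".join(paragraphs[i:i + size])

# Feed each window into the rule check sketched earlier in the thread.
for chunk in paragraph_windows(open("spec.txt").read()):
    print(chunk[:60])
```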

Hello Ivan, I want to add onto this particular comment because it’s been so successful for me.

The pattern of working with the LLM to refine your prompt is very valuable. The first few comments here will set your infrastructure up in a way that allows for efficient prompting, but Zachary’s point about asking the LLM for a better question will be very useful for the prompt engineering itself.

I started with a version of the prompt, the rule I wanted to test, and the text I wanted to test against. Similar to what’s been mentioned, the text was chunks from a document. However, my prompt and the rule itself could have been better formed.
I asked GPT-4 to tell me how to rewrite the rule so that a specific example would break it. This really helped me work through the process of refining my rules.
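The exact wording I used isn’t important, but the meta-prompt was along these lines (the rule and the example text here are purely illustrative):

```python
rule = "The position of fasteners should be stated clearly."
example = "The nut and bolt should be at the corner."

# Meta-prompt: ask the model to tighten the rule against a known bad example.
prompt = (
    f"Here is a guideline: {rule}\n"
    f"Here is a piece of text: {example}\n"
    "Rewrite the guideline so that this text clearly violates it, "
    "while keeping the guideline general enough to apply to other text."
)
print(prompt)
```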