Here’s a post I did on a new prompting technique I’m calling INSTRUCT. A programming language for getting prompts to do anything you want:
This is an evolution of the Better Chain-of-Thought ideas I’ve been exploring.
Here’s a bonus INSTRUCT prompt that implements a generic Intent & Entity Recognizer. Features of the prompt:
Here’s the prompt for GPT-4 and a sample positive test run:
Intents Supported:
- BookFlight
- BookCar
- BookHotel
- None
Entities Supported:
- Origin - validation(must be a city)
- Destination - validation(must be a city)
- StartDate - validation(must be a date)
- EndDate - validation(must be a date)
Response Schema:
{"type":"recognized","intent": "<intent>", "entities":[{"<name>":"<value>"}]}
query:
I'd like to book a flight for 4 to seattle
program:
1: is the query related to the list of supported intents? If not use the None intent and jump to line 7.
2: recognize the query's intent and extract any entities.
3: compare the recognized intent and entities with the list of supported intents and entities.
4: were any additional intents or entities used?
5: run each recognized entity's validation rule.
6: update the recognition results to only use available intents and entities if needed. Remove any invalid entities.
7: return the results as JSON. Return the tag <response> just before the JSON.
State each line of the program and show your work for performing that line.
1: is the query related to the list of supported intents? If not use the None intent and jump to line 7.
- The query is related to the BookFlight intent, so we do not use the None intent.
2: recognize the query's intent and extract any entities.
- The intent is BookFlight, and the entities are:
- Origin: none
- Destination: seattle
- StartDate: none
- EndDate: none
3: compare the recognized intent and entities with the list of supported intents and entities.
- The recognized intent and entities match the list of supported intents and entities.
4: were any additional intents or entities used?
- No, only the supported intent and entities were used.
5: run each recognized entity's validation rule.
- Destination: seattle is a valid city, so it passes the validation rule.
6: update the recognition results to only use available intents and entities if needed. Remove any invalid entities.
- No update is needed, as the recognition results are valid and complete.
7: return the results as JSON. Return the tag <response> just before the JSON.
- <response>{"type":"recognized","intent": "BookFlight", "entities":[{"Destination":"seattle"}]}
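For anyone wiring this into an app: the <response> tag exists so the final JSON can be pulled out of the model's reasoning transcript. Here's a minimal Python sketch of that parsing step (the `parse_instruct_response` helper name is mine, not part of INSTRUCT):

```python
import json

def parse_instruct_response(completion: str) -> dict:
    """Extract the JSON payload that follows the <response> tag.

    The INSTRUCT program asks the model to emit its reasoning first,
    then the tag, then the JSON, so everything before the tag is
    chain-of-thought we can discard.
    """
    tag = "<response>"
    index = completion.rfind(tag)
    if index == -1:
        raise ValueError("no <response> tag found in completion")
    return json.loads(completion[index + len(tag):].strip())

# The sample completion from the run above:
completion = (
    "7: return the results as JSON.\n"
    '- <response>{"type":"recognized","intent": "BookFlight", '
    '"entities":[{"Destination":"seattle"}]}'
)
result = parse_instruct_response(completion)
# result["intent"] == "BookFlight"
```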
And here’s an off topic example. Notice that it jumps straight from step 1 to step 7:
Intents Supported:
- BookFlight
- BookCar
- BookHotel
- None
Entities Supported:
- Origin - validation(must be a city)
- Destination - validation(must be a city)
- StartDate - validation(must be a date)
- EndDate - validation(must be a date)
Response Schema:
{"type":"recognized","intent": "<intent>", "entities":[{"<name>":"<value>"}]}
query:
where can I get a taxi?
program:
1: is the query related to the list of supported intents? If not use the None intent and jump to line 7.
2: recognize the query's intent and extract any entities.
3: compare the recognized intent and entities with the list of supported intents and entities.
4: were any additional intents or entities used?
5: run each recognized entity's validation rule.
6: update the recognition results to only use available intents and entities if needed. Remove any invalid entities.
7: return the results as JSON. Return the tag <response> just before the JSON.
State each line of the program and show your work for performing that line.
1: is the query related to the list of supported intents? If not use the None intent and jump to line 7.
- The query is not related to any of the supported intents, so we use the None intent and skip to line 7.
7: return the results as JSON. Return the tag <response> just before the JSON.
- The results are:
<response>
{"type":"recognized","intent": "None", "entities":[]}
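The None intent is what makes the jump useful downstream: the calling code gets a guaranteed fallback path for off-topic queries. A hypothetical dispatcher in Python (the handler names and replies are mine, not from the prompt):

```python
def dict_entities(result: dict) -> dict:
    """Flatten the [{"name": "value"}] entity list into a single dict."""
    merged = {}
    for entity in result.get("entities", []):
        merged.update(entity)
    return merged

def handle_recognition(result: dict) -> str:
    """Route a parsed INSTRUCT recognition result to a handler.

    The None intent is the guardrail: off-topic queries never reach a
    booking handler and fall through to a canned reply instead.
    """
    handlers = {
        "BookFlight": lambda r: "Booking flight to " + dict_entities(r).get("Destination", "?"),
        "BookCar": lambda r: "Booking a car",
        "BookHotel": lambda r: "Booking a hotel",
    }
    handler = handlers.get(result["intent"])
    if handler is None:  # covers the None intent and anything unexpected
        return "Sorry, I can only help with flights, cars, and hotels."
    return handler(result)

off_topic = {"type": "recognized", "intent": "None", "entities": []}
print(handle_recognition(off_topic))
# prints the canned fallback reply
```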
Here’s a version that’s been optimized for text-davinci-003:
Intents Supported:
- BookFlight
- BookCar
- BookHotel
- None
Entities Supported:
- Origin - validation(must be a city)
- Destination - validation(must be a city)
- StartDate - validation(must be a date)
- EndDate - validation(must be a date)
Response Schema:
{"type":"recognized","intent": "<intent>", "entities":[{"<name>":"<value>"}]}
query:
I'd like to book a flight for 4 to seattle
program:
- is the query related to the list of supported intents? If not use the None intent.
- recognize the query's intent and extract any entities.
- compare the recognized intent and entities with the list of supported intents and entities.
- were any additional intents or entities used?
- update the recognition results to only use available intents and entities if needed.
- run each recognized entity's validation rule.
- update the recognition results to remove any invalid entities.
- return the results as JSON. Return the tag <response> just before the JSON.
State each step of the program and show your work for performing that step.
1: is the query related to the list of supported intents? If not use the None intent.
Yes, the query is related to the list of supported intents. The intent is "BookFlight".
2: recognize the query's intent and extract any entities.
The intent is "BookFlight" and the entities are "Origin" (4), "Destination" (Seattle), "StartDate" (null), and "EndDate" (null).
3: compare the recognized intent and entities with the list of supported intents and entities.
The intent and entities are supported.
4: were any additional intents or entities used?
No, no additional intents or entities were used.
5: update the recognition results to only use available intents and entities if needed.
No updates are needed.
6: run each recognized entity's validation rule.
The "Origin" entity must be a city, and the "Destination" entity must be a city. The "StartDate" and "EndDate" entities are not required.
7: update the recognition results to remove any invalid entities.
No updates are needed.
8: return the results as JSON. Return the tag <response> just before the JSON.
<response>
{"type":"recognized","intent": "BookFlight", "entities":[{"Origin":"4"},{"Destination":"Seattle"},{"StartDate":null},{"EndDate":null}]}
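Note that this run let Origin "4" slip through even though the prompt declares it must be a city, which is exactly what steps 6 and 7 are meant to catch. Even with the prompt-side check, a client-side scrub is cheap insurance. A toy sketch in Python (the city list and helper names are my own stand-ins for real validators):

```python
from datetime import date

CITIES = {"seattle", "london", "paris"}  # toy stand-in for a real city gazetteer

def is_city(value) -> bool:
    return isinstance(value, str) and value.lower() in CITIES

def is_date(value) -> bool:
    try:
        date.fromisoformat(value)
        return True
    except (TypeError, ValueError):
        return False

# Mirrors the validation() rules declared in the prompt.
VALIDATORS = {"Origin": is_city, "Destination": is_city,
              "StartDate": is_date, "EndDate": is_date}

def scrub_entities(entities: list) -> list:
    """Drop any entity whose value fails its validation rule."""
    kept = []
    for entity in entities:
        (name, value), = entity.items()
        rule = VALIDATORS.get(name)
        if rule and rule(value):
            kept.append(entity)
    return kept

# The entity list from the davinci run above:
raw = [{"Origin": "4"}, {"Destination": "Seattle"},
       {"StartDate": None}, {"EndDate": None}]
print(scrub_entities(raw))
# [{'Destination': 'Seattle'}]
```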
Here’s a GitHub placeholder for my new Self-INSTRUCT project. Self-INSTRUCT will be similar to Auto-GPT but with better planning and reasoning abilities. GPT (even 4) is mediocre at best when it comes to planning. Humans can easily author better plans than GPT-4.
I’m making pretty good progress with getting GPT-4 to write INSTRUCT prompts by looking at fragments of human authored INSTRUCT sequences and then combining them to build a new prompt. For example, you can ask it to “create a prompt that collects flight details from a user and then calls a bookFlight action.” My prompt builder will spit out a new INSTRUCT based prompt that performs that task flawlessly, complete with hallucination guardrails. The same basic approach can be used in Self-INSTRUCT to generate much better plans than what Auto-GPT does today. I’ve watched some of its runs and its planning is generally horrible…
I’ll post back to the forums once I get something running but it may be a week or so. Tuesday I’m giving a talk at the Microsoft MVP Summit and I want to see if I can get my prompt builder working with text-davinci-003 as it’s cheaper, faster, and actually available to the public.
Have you tried ChatGPT-turbo? Even cheaper! Hah.
Looking forward to your progress.
I have. It’s horrible. It couldn’t follow instructions for how to get out of a paper bag…
This is really brilliant! I often deal with very complex queries, so I’ve been desperate for some kind of a format or framework like this. Good luck with the self instruct project.
Any chance I could see the current version of your metaprompt? That would be the final piece of the puzzle for me.
Alex, what kind of meta prompt are you looking for? Happy to help…
I will say that I’m pretty close to backing off my use of the word “program:” in INSTRUCT and reverting to my original “steps:” word… seems subtle, doesn’t it? I’m finding that the word “program:” leans the model towards trying to write Python programs, which it’s horrible at mock-executing. One goal with INSTRUCT is to steer the model away from writing code.
This is brilliant, and I could probably use this technique to organically build the NLP Query solution. However, how would you deal with the use case where the number of intents and entities is too large to list in the prompt? TIA
You can use a vector database and semantic search (cosine similarity) to dynamically identify the intents & entities most likely relevant to the user’s message. You would need to describe in detail when an intent or entity would be used, generate an embedding over that description, and store the resultant vector in a vector DB. Then when a user message comes in, you generate an embedding for it and use that to query for the most similar intents and entities to include in the prompt. Set a token budget and shove as many into the prompt as will fit.
Likely you’ll just need to generate embeddings for your intents as the included intents will tell you which entities you also need to include.
For the embeddings you can use text-embedding-ada-002, and for the vector DB you could use Pinecone. I also just started building a local vector DB called Vectra just for scenarios like this. Vectra uses files on the local machine to implement most of Pinecone’s functionality, but it’s fast and free. My implementation is in TypeScript/JavaScript, but maybe some industrious person will get GPT to create a Python port of Vectra. I’m about 50% finished and hope to have the code checked in this week.
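To make the dynamic intent selection concrete, here's a toy Python sketch of the cosine-similarity ranking plus token budget. The three-dimensional vectors are hand-rolled stand-ins; in practice the embeddings would come from the embedding model and live in Pinecone or Vectra:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (higher = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors stand in for real embeddings of each intent description;
# in practice these come from the embedding model and a vector DB query.
intent_index = {
    "BookFlight": [0.9, 0.1, 0.0],
    "BookCar":    [0.1, 0.9, 0.0],
    "BookHotel":  [0.0, 0.1, 0.9],
}

def select_intents(query_vec, index, token_budget, cost_per_intent=20):
    """Rank intents by similarity to the query and keep as many as fit."""
    ranked = sorted(index, key=lambda name: cosine_similarity(query_vec, index[name]),
                    reverse=True)
    kept, spent = [], 0
    for name in ranked:
        if spent + cost_per_intent > token_budget:
            break
        kept.append(name)
        spent += cost_per_intent
    return kept

query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "I'd like to book a flight"
print(select_intents(query_vec, intent_index, token_budget=40))
# ['BookFlight', 'BookCar']
```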
Thanks! Let us know when you are ready with Vectra and I will provide the Python port. It would be really helpful to have embeddings in local files, as I can then hold them in my cloud storage.
I will try to get something working this week. I wrote half of it this morning, but we sold our house and the movers come tomorrow, so the next few days will be crazy. Could be late Wednesday or Thursday. I’ve had way too much wine to finish it tonight.
Vectra is published and I have a sample that works. Haven’t tested the metadata filtering yet.
Hey,
I’m implementing RAG. I have multiple documents in Word format, some of which contain table-like structures with information. Can you please suggest how I should implement that? I have implemented RAG before, but only with a single text document.
Thank you for your time.