How do you teach end-users how to prompt engineer?

In practice, gpt-4 has been performing remarkably well so far in testing with a very small group. Still, this is a very interesting idea. I do log all queries and responses, though I don't currently have a way of telling which are good and which are bad. That said, the bad ones are usually the ones that could not be answered, and those all get similar responses driven by the system message directive.
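One lightweight way to label those logs automatically, assuming the unanswerable cases all fall back to roughly the same canned reply, is a simple similarity check against that fallback text. The fallback string and threshold here are illustrative guesses, not taken from my actual pipeline:

```python
from difflib import SequenceMatcher

# Hypothetical canned fallback produced by the system message directive
FALLBACK = "I'm sorry, I could not find an answer to that in the documentation."

def looks_unanswered(response: str, threshold: float = 0.8) -> bool:
    """Flag a logged response as 'bad' if it closely matches the fallback text."""
    ratio = SequenceMatcher(None, response.lower(), FALLBACK.lower()).ratio()
    return ratio >= threshold

# Tag each (query, response) pair as it is logged
log = [
    ("How do I reset my password?", "Go to Settings > Security and click Reset."),
    ("What is the meaning of life?",
     "I'm sorry, I could not find an answer to that in the documentation."),
]
labeled = [(q, r, "bad" if looks_unanswered(r) else "good") for q, r in log]
```

That would at least separate the "could not answer" bucket from the rest without needing manual review of every exchange.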

This sounds like an interesting optional add-on feature – one that would probably be even more useful with gpt-3.5-turbo as the query model.
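As a sketch of that add-on, the prompt-editor could be a cheap pre-pass that rewrites the raw user query before it reaches the main query model. Everything here (the instruction wording, function name, and the commented-out call) is a guess at how it might look, not an actual implementation:

```python
REWRITE_INSTRUCTION = (
    "Rewrite the user's question so it is clear, specific, and self-contained. "
    "Return only the rewritten question."
)

def build_rewrite_messages(user_query: str) -> list:
    """Messages for a hypothetical prompt-editor pre-pass (e.g. gpt-3.5-turbo)."""
    return [
        {"role": "system", "content": REWRITE_INSTRUCTION},
        {"role": "user", "content": user_query},
    ]

# The edited query would then replace the raw one in the main pipeline, e.g.:
# edited = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_rewrite_messages(raw_query),
# ).choices[0].message.content

msgs = build_rewrite_messages("pwd reset how")
```

Since the rewrite step only sees one short question, the extra cost and latency per query should stay small.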

This is my current query flowchart:

[flowchart image]

Where would you insert this prompt-editor query?