I have been designing prompts for small businesses and small business apps for around three years, and I will try to help with any problems you are experiencing as a small business owner designing prompts for OpenAI models to boost productivity or solve business problems.
I am unable to provide assistance with non-business prompts, image prompts, API issues, software architecture, etc., as they are outside the scope of my daily work activities.
Please share your business type and business goal (and current prompt, if you have one), and I will do my best to help you find a working solution.
hey there jeffinbournemouth - i’m a digital healthcare content designer at a large consulting firm, working on digital health solutions. i want to upskill in prompt engineering, but there’s an avalanche of mediocre offerings. can you recommend robust sites, books, or courses to learn this skill? this is outside your stated offer - maybe we can trade resources?
I would be more than happy to help you in any way I can.
I have my own ideas on the best way forward for people wishing to get into prompt engineering, and it does not involve any courses (or any expense).
Happy to jump on a Zoom and explain if it could help you. It will be easier and quicker to explain verbally than to type pages.
You can also jump straight in and start here: https://www.promptingguide.ai/
But a better strategy IMHO is to focus on one aspect of your own workflow - and begin your journey by creating a prompt/prompt sequence to automate one of your own repetitive tasks.
This way everything you learn is anchored in reality - a real use case.
Love the way you work @jeffinbournemouth
I have learned that it’s 10X easier to learn prompting if you begin by creating a solution to one of your own problems.
Anchored in reality.
When I teach programming, I focus a lot on pseudocode because if you can pseudocode, you can code in pretty much any language.
Good prompting (to me) feels like a proto-pseudocode.
There are (and will continue to be) people who say you shouldn’t need to be super specific, the computer should be smart enough to figure things out—and they’re not wrong, but we’re not there yet either.
Computers have a tendency to do exactly what you tell them to do, so when you’re programming it pays to be as specific as possible. With large language models they’ll try to do whatever you tell them, but they will also make decisions you might not like if your instructions are at all ambiguous.
So, I think the best way to learn to prompt is to think about how someone who is super-literal would fulfill your request and try to craft your prompt in such a way there is no ambiguity or assumptions remaining.
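To make the "super-literal reader" idea concrete, here is a small sketch (the prompts and constraints are invented for illustration) contrasting an ambiguous request with one that leaves nothing for the model to guess:

```python
# Hypothetical illustration: the same request written ambiguously, then
# rewritten so a "super-literal" reader has nothing left to assume.
ambiguous = "Summarize this report and make it shorter."

specific = (
    "Summarize the report below in exactly 3 bullet points.\n"
    "Each bullet must be one sentence of at most 20 words.\n"
    "Use plain language; do not add opinions or information "
    "that is not in the report.\n\n"
    "Report:\n{report_text}"
)

# The specific version pins down length, format, tone, and scope,
# so the model has no ambiguity to resolve on its own.
prompt = specific.format(report_text="Q3 revenue rose 12% year over year...")
```

The ambiguous version forces the model to decide how short, in what format, and in whose voice; the specific version makes those decisions for it.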
Here is a video I use to drive this point home:
Large language models have advanced by leaps and bounds over the last year, so maybe in 2–3 years it won’t be as important to be as precise. For the time being, though, I think most prompting woes can be solved by remembering that it’s the prompter’s responsibility to be as clear as possible for the model, not the model’s responsibility to divine their intention or desire.
First of all, thank you for taking the time to answer others’ questions!
I’m having trouble making a simple ‘avatar personality’, and I think I fit your description, as we are a small business that manages vacation rentals.
Let me give some context.
A few months ago I was working on a prompt for ChatGPT 3.5, asking it to write an article about a specific topic using a “writing avatar”.
The prompt is around 4,000 characters, and it basically says:
Instructions: write a topic about ‘Text’ and use the style of ‘avatar’
(Here I add the summary of the points I’m interested)
(I give a full description of the personality of how to write the article)
The avatar personality produces coherent text written as the avatar with about 70% accuracy.
I would like to know how you would organize the avatar personality to get more accurate articles, given that I want to write 100 articles and want the avatar’s writing to be easy to identify.
Thank you in advance
Hi Gsc 19m,
Thanks for reaching out! You’re on the right track with your ‘avatar personality’. With the help of GPT-4, you can create a more accurate avatar in just 15 minutes. All you need is a good example article that captures the style, tone, and format you’re aiming for.
Once you’ve generated a couple of great articles, you can then use GPT-3.5 with these articles as examples in the prompt. This should significantly improve the accuracy of your avatar’s writing.
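The few-shot approach described above can be sketched roughly like this (the persona, topics, and helper function are placeholders, not a prescribed implementation): each example article is embedded as a prior assistant turn, so the model imitates that voice when answering the new request.

```python
# Minimal sketch: embed strong example articles as prior assistant turns,
# then ask the model to write a new article in the same voice.
def build_avatar_messages(persona, example_articles, new_topic):
    """Return a chat-completion message list with few-shot examples."""
    messages = [{"role": "system", "content": persona}]
    for topic, article in example_articles:
        # Each example pairs a fake user request with the "ideal" answer.
        messages.append({"role": "user", "content": f"Write an article about {topic}."})
        messages.append({"role": "assistant", "content": article})
    messages.append({"role": "user", "content": f"Write an article about {new_topic}."})
    return messages

messages = build_avatar_messages(
    persona="You are 'Ava', a warm, witty travel writer. Short sentences. British spelling.",
    example_articles=[("packing light", "Packing light is a kindness to your future self...")],
    new_topic="off-season vacation rentals",
)
# This list would be passed as the `messages` argument to the
# chat completions endpoint.
```

Because the examples travel with every request, the same two or three articles anchor all 100 future articles to one consistent voice.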
If you think it would be helpful, I’d be more than happy to jump on a Zoom call and show you how to do this, free of charge. Let me know if this works for you!
Thank you for making yourself available for questions.
I have an idea for a website which I would like to build, essentially reselling the ChatGPT content with a value-added side.
Is that allowed?
Right now it takes up to 30 seconds for ChatGPT 3.5 to respond, though many times it is as quick as five seconds.
If I develop a commercial application, I assume I can make multiple queries with the same credentials?
I have searched the site’s documentation on allowed uses, and though it seems to allow such a use, it is not explicitly stated anywhere I could find.
Thanks in advance for any info you might provide.
Thanks for reaching out with your questions. Yes, you can certainly use the OpenAI API to add functionality to your website tool. I’ve built an app myself, Jaina AI, and I can confirm that the response times from the API do vary; sometimes it’s quick, other times it can take up to 30 seconds.
As for your question about multiple queries with the same credentials, the answer is yes.
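As a rough sketch of what "multiple queries with the same credentials" looks like in practice, here is a thread-pool pattern where `call_model` is a stand-in for a real API call (a real version would send the request with your API key):

```python
# Sketch: issue several requests in parallel with one set of credentials.
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt):
    # Placeholder: in a real app this would call the chat completions
    # endpoint with your API key and return the response text.
    return f"response to: {prompt}"

prompts = ["Summarize listing A", "Summarize listing B", "Summarize listing C"]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(call_model, prompts))
# Every request carries the same credentials; the account's rate
# limits, not the number of threads, are the practical constraint.
```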
Also, it’s best to check the OpenAI terms and conditions. They provide detailed information on the types of apps allowed to be built on the API and other usage guidelines: Usage policies
I hope this helps and wish you the best of luck with your website project.
Thanks Jeff and much appreciated for the response.
That is a great idea! I have some examples and will test this strategy with GPT-4.
Do you think that this would be enough and no need for fine-tuning or similar?
I personally use this strategy and it works well without any need for fine-tuning. However, for high volumes of requests, it might be beneficial to fine-tune a model. This could potentially increase API response speed and reduce cost.
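If you do go the fine-tuning route, training data for chat models is typically prepared as JSONL, one example per line, each example containing a system prompt, a user request, and the ideal assistant reply. A hedged sketch (the persona, filename, and article text are placeholders):

```python
import json

# One training example in the JSONL format used for chat-model
# fine-tuning. A real dataset repeats this for many articles.
example = {
    "messages": [
        {"role": "system", "content": "You are 'Ava', a warm, witty travel writer."},
        {"role": "user", "content": "Write an article about packing light."},
        {"role": "assistant", "content": "Packing light is a kindness to your future self..."},
    ]
}

# Append one line per example; the resulting file is what gets
# uploaded when creating a fine-tuning job.
with open("avatar_training.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```

In effect, fine-tuning bakes the few-shot examples into the model itself, so each request no longer has to carry them in the prompt.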
Another question if you don’t mind. I am very confused by the wording in the Guide for the API where it says
“The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior.”
What does “Assistant messages store previous assistant responses” mean?
From ChatGPT:
“This means that the dialogues or responses that the assistant has previously provided are saved or stored in the assistant messages. This way, the AI can reference those past messages in future interactions and maintain continuity. This helps in creating more engaging and conversational exchanges and context-aware responses.”
Oh, I get it. It is in the context of the surrounding document, not a standalone statement.
I was reading “Assistant Messages” to mean, “Messages from the assistant”.
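That realization can be shown in a few lines (the conversation content here is invented): the API is stateless, so your code appends each assistant reply back into the message list before the next request, which is exactly what "assistant messages store previous assistant responses" refers to.

```python
# Sketch: the API is stateless, so the caller must append each
# assistant reply to the history before the next request.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is Sam."},
]

assistant_reply = "Nice to meet you, Sam!"  # imagine this came from the API

# Store the previous assistant response, then add the next user turn.
history.append({"role": "assistant", "content": assistant_reply})
history.append({"role": "user", "content": "What is my name?"})

# `history` is what gets sent on the next call; without the stored
# assistant turn, the model would have no record of its own reply.
```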