I have been designing prompts for small businesses and small-business apps for around three years, and I will try to help with any problems you are experiencing as a small business owner designing prompts for OpenAI models to boost productivity or solve business problems.
I am unable to help with non-business prompts, image prompts, API issues, software architecture, etc., as they are outside the scope of my daily work.
Please share your business type, your business goal, and your current prompt (if you have one), and I will do my best to help you find a working solution.
hey there jeffinbounremouth - i'm a digital healthcare content designer at a large consulting firm, working on digital health solutions. i want to upskill in prompt engineering, but there's an avalanche of mediocre offerings. can you recommend robust sites, books, or courses to learn this skill? this is outside your stated offer - maybe we can trade resources?
But a better strategy IMHO is to focus on one aspect of your own workflow - and begin your journey by creating a prompt/prompt sequence to automate one of your own repetitive tasks.
This way everything you learn is anchored in reality - a real use case.
When I teach programming, I focus a lot on pseudocode because if you can pseudocode, you can code in pretty much any language.
Good prompting (to me) feels like a proto-pseudocode.
There are (and will continue to be) people who say you shouldn't need to be super specific, that the computer should be smart enough to figure things out, and they're not wrong, but we're not there yet either.
Computers have a tendency to do exactly what you tell them to do, so when you're programming it pays to be as specific as possible. Large language models will try to do whatever you tell them, but they will also make decisions you might not like if your instructions are at all ambiguous.
So, I think the best way to learn to prompt is to think about how someone who is super-literal would fulfill your request and try to craft your prompt in such a way there is no ambiguity or assumptions remaining.
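To make the "super-literal reader" test concrete, here is a small sketch of the same request written ambiguously and then with the ambiguity removed. The task and wording are my own illustration, not from any official guide.

```python
# A hypothetical before/after: the vague prompt leaves length, tone,
# and focus for the model to guess; the specific one does not.

vague_prompt = "Summarize this customer review."

specific_prompt = (
    "Summarize the customer review below in exactly 2 sentences.\n"
    "Use plain English, mention the product name, and state whether "
    "the overall sentiment is positive, negative, or mixed.\n"
    "Do not add information that is not in the review.\n\n"
    "Review:\n<<review text goes here>>"
)

# A super-literal reader could fulfill specific_prompt without
# making a single assumption; that is the goal.
print(specific_prompt)
```

Reading your own prompt as if you were that super-literal person is a quick way to spot the gaps before the model does.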
Here is a video I use to drive this point home:
Large language models have advanced by leaps and bounds over the last year, so maybe in 2-3 years it won't be as important to be precise. For the time being, though, I think most prompting woes can be solved by remembering that it's your responsibility to be as clear as possible for the model, not the model's responsibility to divine your intention or desire.
Hi Jeff.
First of all, thank you for taking the time to answer others' questions!
I'm having trouble making a simple "avatar personality", and I think I fit your description, as we are a small business that manages vacation rentals.
Let me give some context.
A few months ago I started working on a prompt for ChatGPT 3.5 where I requested an article about a specific topic written using a "writing avatar".
The prompt is around 4,000 characters, where basically I say:
Instructions: write an article about "Text" and use the style of "Avatar"
Text
<<>>
(Here I add a summary of the points I'm interested in)
Avatar
<<>>
(Here I give a full description of the personality and how to write the article)
The avatar personality produces coherent text written as the avatar with about 70% accuracy.
I would like to know how you would organize your avatar personality to get more accurate articles, if I want to write 100 articles that are easy to identify as the avatar's writing.
Thanks for reaching out! You're on the right track with your "avatar personality". With the help of GPT-4, you can create a more accurate avatar in just 15 minutes. All you need is a good example article that captures the style, tone, and format you're aiming for.
Once you've generated a couple of great articles, you can then use GPT-3.5 with these articles as examples in the prompt. This should significantly improve the accuracy of your avatar's writing.
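One way to wire this up is to pass each example article as a user/assistant pair in the chat messages, so the model learns the voice from finished output rather than description alone. This is a minimal sketch under my own naming; the avatar description, topics, and article text are placeholders for your real content.

```python
# Few-shot "avatar" prompting: each (topic, article) pair shows the
# model a request and the desired output in the avatar's voice.

def build_avatar_messages(avatar_description, example_articles, new_topic):
    """Build a chat-completion message list from example articles."""
    messages = [{"role": "system", "content": avatar_description}]
    for topic, article in example_articles:
        messages.append(
            {"role": "user", "content": f"Write an article about: {topic}"}
        )
        messages.append({"role": "assistant", "content": article})
    messages.append(
        {"role": "user", "content": f"Write an article about: {new_topic}"}
    )
    return messages

messages = build_avatar_messages(
    "You write as 'Avatar': warm, concise, first person.",
    [("pet-friendly rentals", "Example article text in the avatar's voice...")],
    "beachfront check-in tips",
)
# `messages` would then be sent to the chat completions endpoint.
```

Two or three strong example pairs usually move the needle far more than adding another paragraph of personality description.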
If you think it would be helpful, I'd be more than happy to jump on a Zoom call and show you how to do this, free of charge. Let me know if this works for you!
Thank you for making yourself available for questions.
I have an idea for a website which I would like to build, essentially reselling the ChatGPT content with a value-added side.
Is that allowed?
Right now it takes up to 30 seconds for ChatGPT 3.5 to respond, though many times it is as quick as five seconds.
If I develop a commercial application I assume I can have multiple queries with the same credentials?
I have searched the site's documentation on allowed uses, and though it seems to allow such a use, it is not explicitly stated anywhere I could find.
Thanks in advance for any info you might provide,
T Daniel
Thanks for reaching out with your questions. Yes, you can certainly use the OpenAI API to add functionality to your website tool. I've built an app myself, Jaina AI, and I can confirm that the response times from the API do vary; sometimes it's quick, other times it can take up to 30 seconds.
As for your question about multiple queries with the same credentials, the answer is yes.
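In practice, "multiple queries with the same credentials" just means several requests in flight under one API key. Here is a sketch of the fan-out pattern; `call_model` is a stand-in for a real API call (so this runs without a network), and the worker count is an arbitrary choice for illustration.

```python
# Concurrent requests under a single set of credentials.
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt):
    # In a real app this would send `prompt` to the OpenAI API using
    # the one key configured for your application.
    return f"response to: {prompt}"

prompts = ["question 1", "question 2", "question 3"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # map() preserves input order even though calls overlap in time.
    answers = list(pool.map(call_model, prompts))

print(answers)
```

For a commercial app you would also want to watch your account's rate limits, since all concurrent requests count against the same quota.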
Also, it's best to check the OpenAI terms and conditions. They provide detailed information on the types of apps allowed to be built on the API and other usage guidelines: Usage policies
I hope this helps and wish you the best of luck with your website project.
That is a great idea! I have some examples and will test this strategy with GPT-4.
Do you think this would be enough, with no need for fine-tuning or similar?
I personally use this strategy and it works well without any need for fine-tuning. However, for high volumes of requests, it might be beneficial to fine-tune a model. This could potentially increase API response speed and reduce cost.
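If you do move to fine-tuning later, the training data is uploaded as a JSONL file, one example per line, in OpenAI's chat fine-tuning format. The structure below follows that format; the content is placeholder text standing in for your real topic/article pairs.

```python
# One training example for chat-model fine-tuning, serialized as a
# single line of the .jsonl upload file.
import json

example = {
    "messages": [
        {"role": "system", "content": "You write as 'Avatar'."},
        {"role": "user", "content": "Write an article about: local markets"},
        {"role": "assistant", "content": "The finished article text..."},
    ]
}

line = json.dumps(example)  # one line of the training file
print(line)
```

The upside is that, once trained, each request no longer needs the example articles in the prompt, which is where the speed and cost savings come from at high volume.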
Another question, if you don't mind. I am very confused by the wording in the guide for the API where it says:
"The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior."
What does "Assistant messages store previous assistant responses" mean?
Thanks,
This means that the dialogues or responses that the assistant has previously provided are saved or stored in the assistant messages. This way, the AI can reference those past messages in future interactions and maintain continuity. This helps in creating more engaging and conversational exchanges and context-aware responses.
Oh, I get it. It makes sense in the context of the surrounding document and is not a standalone statement.
I was reading "Assistant Messages" to mean "messages from the assistant".
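To illustrate the two uses the guide describes: an assistant message can be one you hand-write as an example of desired behavior, or one you store after the model replies so the next turn sees the whole thread. This is my own minimal sketch, not text from the guide.

```python
# Assistant messages in a conversation history list.
history = [
    {"role": "user", "content": "What is check-in time?"},
    # Written by you, as an example of desired behavior:
    {"role": "assistant", "content": "Check-in is at 3 pm. Anything else?"},
]

# After a real API call you would store the model's reply the same
# way, then append the next user turn before the next request.
history.append({"role": "user", "content": "And check-out?"})
```

Either way, the "assistant" role marks text spoken *as* the assistant, whether the model generated it or you wrote it yourself.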