I have been trying to do something very simple with OpenAI's API. I am asking it to generate text (product descriptions) in multiple languages from product attributes I provide.
Most of the time it doesn't follow instructions; sometimes the returned JSON has errors, sometimes the text is too short despite a minimum-length instruction, and sometimes it's just a Bad Gateway error. Around 30% of my tokens are wasted.
Since I have a very small and targeted use case, can anyone suggest an easy way to improve the reliability of the system? I am using gpt-3.5-turbo-0613 because gpt-3.5-turbo-1106 is even worse at following instructions.
Welcome to the forum.
What model are you using? What settings?
Do you have an example of your prompts?
Maybe share the set-up via a Playground link?
The models are not great at generating syntactically correct JSON; it generally works better if you ask for Markdown format or even YAML.
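If JSON is still required downstream, a defensive parse can rescue replies that wrap the object in prose or Markdown fences before you count them as wasted tokens. A minimal stdlib-only sketch (the function name is my own, not part of any library):

```python
import json


def extract_json(reply: str):
    """Pull the first JSON object out of a model reply that may
    surround it with extra prose or Markdown code fences.
    Returns None if no parseable object is found."""
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end == -1 or end < start:
        return None
    try:
        return json.loads(reply[start:end + 1])
    except json.JSONDecodeError:
        return None
```

Returning None instead of raising lets the calling code decide whether to retry the request or fall back.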
The models are also not great at paying attention to “everything” – the whole point of the transformer architecture is that the model decides what to pay attention to, through a smallish set of “attention heads” and it won’t be able to pay attention to the rest. When your task involves “look at all the data in detail,” you will typically want to use a more traditional data processing system, rather than a transformer model.
That being said, it sounds like the approach you're suggesting should be possible to prompt correctly with a bit of trial and error. You may have to try a few different phrasings, sets of instructions, and input/output formats before you hit on something that works.
Been using gpt-3.5-turbo-0613 via the API. Tried 1106, but the descriptions got even shorter and it ignored almost all the attributes, so I had to stay with 0613, which is maybe 10-20 times slower.
This is a sample prompt:
For buyers write a very appealing description of a beautiful rug in de, fr, es, it, pt using rug attributes below.
Use every single rug attribute provided in your sentences. Output result in UTF-8 compliant JSON. Each description must be a minimum of THREE HUNDRED WORDS. Emphasize the rug's longevity, craftsmanship, unique design, manufacturing time, quality of materials, texture, investment value & the rug as a form of art.
sku:qk02091,rug genre:Chobi Ziegler,weight(kg):83.6,Manufacture type:Handmade,weave:Hand Knotted,pattern/motif:Floral,design:Chobi Ziegler,rug type:Area Rug,dye:Vegetable/Organic,knotting time(days):360,knotting type:Double,primary color:Maroon,secondary colors:Reddish Brown & Gold,pile material:Wool,Rug Shape:Rectangular,condition:New,width(cm):299,length(cm):439,origin:Pakistan,foundation material:Cotton,size category(cm):300x425
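For what it's worth, assembling a prompt like that from a dictionary of attributes keeps the wording and the attribute line consistent across products. A rough sketch based on the prompt above (the function name and structure are my own, not anything from the API):

```python
def build_prompt(attributes: dict, languages: list[str], min_words: int = 300) -> str:
    """Compose the rug-description prompt from a dict of product
    attributes, a list of target language codes, and a minimum length."""
    attr_line = ",".join(f"{k}:{v}" for k, v in attributes.items())
    langs = ", ".join(languages)
    return (
        f"For buyers write a very appealing description of a beautiful rug "
        f"in {langs} using rug attributes below.\n"
        f"Use every single rug attribute provided in your sentences. "
        f"Output result in UTF-8 compliant JSON. Each description must be "
        f"a minimum of {min_words} words.\n"
        f"{attr_line}"
    )
```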
We have been using the traditional system for a very long time, but it always sounds very robotic. All products sound the same and buyers just don't pay any attention to repetitive text.
Frankly, we have gotten most of the data we needed out of GPT, but the issue I have noticed is the unpredictability. At times it would generate 13k characters of beautiful, natural-sounding text, but at other times it would just throw a wrench in the system and come out with 2-3k of complete gibberish. Apart from a single API call taking 5-10 minutes, the biggest issues I have noticed are the Bad Gateway errors.
But YAML sounds like a good idea; before JSON I was trying to get CSV format, which turned out even worse.
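On the Bad Gateway errors: a 502 is a transient server-side failure, and the usual mitigation is to retry with exponential backoff rather than discard the request. A stdlib-only sketch (the wrapper and its parameters are illustrative, not part of the OpenAI client):

```python
import random
import time


def call_with_retries(make_request, max_attempts: int = 5, base_delay: float = 2.0):
    """Retry a flaky API call (e.g. on a 502 Bad Gateway) with
    exponential backoff plus jitter. `make_request` is any
    zero-argument callable that raises on transient failures."""
    for attempt in range(max_attempts):
        try:
            return make_request()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # 2s, 4s, 8s, ... plus up to 1s of jitter to avoid thundering herd
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```

In practice you would catch only the transient error classes (server errors, timeouts) and let genuine request errors fail immediately.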
13k is actually quite a lot of text – typically the model will generate less than that.
You may wish to use a multi-step process: first generate an outline of what you should say (bullet points, say), then call the model once per bullet point to generate the text for that particular outline item, and glue the pieces together at the end.
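That outline-then-expand flow could be glued together roughly like this; `generate` here is a stand-in for a single chat-completion call, and the prompts are placeholders, not tested wording:

```python
def write_long_description(topic: str, generate) -> str:
    """Two-pass generation: ask for an outline, then expand each
    bullet point in a separate call and join the paragraphs.
    `generate` is any callable mapping a prompt string to model text."""
    outline = generate(f"List 5 short bullet points for a product description of: {topic}")
    # Keep non-empty lines, stripping any leading bullet markers.
    bullets = [line.lstrip("-• ").strip() for line in outline.splitlines() if line.strip()]
    sections = [generate(f"Write one paragraph about: {bullet}") for bullet in bullets]
    return "\n\n".join(sections)
```

Each call stays small and focused, which tends to be easier for the model to follow than one giant instruction.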
Give AlphaWave a try if you want reliable JSON out:
AlphaWave does JSON Schema validation of the model's output and uses a feedback loop to improve the overall reliability of the model's output.
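The validate-and-feed-back idea can be sketched in a few lines even without AlphaWave itself; here `generate` stands in for one model call, and the required keys are an assumption based on the languages in the original prompt:

```python
import json

# Assumed schema for this use case: one description per target language.
REQUIRED_KEYS = {"de", "fr", "es", "it", "pt"}


def validated_json(generate, prompt: str, max_rounds: int = 3):
    """Validate-and-repair loop in the spirit of AlphaWave: parse the
    reply, and if it is invalid, feed the error back to the model and
    retry. Returns the parsed dict, or None after max_rounds failures."""
    current = prompt
    for _ in range(max_rounds):
        reply = generate(current)
        try:
            data = json.loads(reply)
            missing = REQUIRED_KEYS - data.keys()
            if not missing:
                return data
            error = f"missing keys: {sorted(missing)}"
        except json.JSONDecodeError as exc:
            error = f"invalid JSON: {exc}"
        current = (
            f"{prompt}\n\nYour previous reply was rejected ({error}). "
            f"Reply with valid JSON only."
        )
    return None
```

The key point is that the validation error itself goes back into the prompt, so the model can correct the specific mistake instead of regenerating blind.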