I have a prompt that returns JSON, and it's been working great. I've had to write a lot of rules to scrub the data when it comes back, to make sure I only process the JSON in my script.
So when functions were announced I was excited to switch over, but when I did, the content of the results was a lot worse than from my original prompt.
For instance, one of the things I ask GPT to do is develop an image generation prompt.
It looked like this:
For "imagePrompt" please generate a high quality image prompt as plain text (NOT JSON) for Dall-E that will pair nicely with the copy and color selected. Include details such as focal length and composition to create a modern, stunning image that will stand out and make an impact. Make sure there is NO TEXT and NO PEOPLE in the image.
And I get results like this:
High quality image of a shiny and clean car with water droplets on it, shot from a low angle to create a dynamic composition. Focal length should be around 50mm or lower to create a sense of depth and draw attention to the car. The color palette should be bright and vibrant, with blues and greens to complement the orange color of the copy. NO TEXT and NO PEOPLE should be included in the image.
Which is great and produces fantastic results.
Switching it to a function call like this:
"imagePrompt" => [ "type" => "string", "description" => "An image prompt for Dall-E that will create a high qualilty image that will pair nicely with the copy and color, including details such as camera focal length and composition to create a modern, stunning image that will stand out and make an impact.", ],
Gives much worse prompts, which result in images that aren't nearly as good, such as:
A close-up of a lush green lawn with perfectly trimmed edges and a vibrant flower bed in the corner.
Or often just:
Am I using function calls incorrectly?