Hi,
I'm wondering if anyone knows if, or when, it will be possible to use the gpt-4-vision model with the function calling tool?
Please use the search facility:
Step 1: add the tool definition JSON to actions.json

    {
        "name": "text-to-image",
        "value": {
            "type": "function",
            "function": {
                "name": "dall_e.text2image",
                "description": "Understands significantly more nuance and detail, allowing you to easily translate your ideas into exceptionally accurate images. Use this when you need an image of better quality. Only one image will be generated.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "prompt": {
                            "type": "string",
                            "description": "Description of the image"
                        }
                    },
                    "required": [
                        "prompt"
                    ]
                }
            }
        }
    },
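
For context, here is a minimal sketch (not from the original post) of how an entry like this might be loaded and passed to the Chat Completions API. The file layout (a list of name/value objects), the model name, and the example user message are assumptions:

    import json
    from openai import OpenAI

    client = OpenAI()

    # Assumption: actions.json holds a list of {"name": ..., "value": ...} entries
    # like the one above, where "value" is the tool definition itself.
    with open("actions.json", encoding="utf-8") as f:
        actions = json.load(f)

    tools = [action["value"] for action in actions]

    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model; use whichever model you call
        messages=[{"role": "user", "content": "Draw a cat sitting on a windowsill."}],
        tools=tools,
    )

    # If the model decided to call dall_e.text2image, the call appears here.
    tool_calls = response.choices[0].message.tool_calls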
Step 2: implement the function dall_e.text2image

    import json
    import logging
    import uuid

    import requests
    from openai import OpenAI

    logger = logging.getLogger(__name__)

    def text2image(is_fee, prompt):
        try:
            # Generate an image from the prompt produced by the model
            response = OpenAI().images.generate(
                model="dall-e-3",
                prompt=prompt,
                size="1024x1024",
                quality="hd",
                n=1,
                style="vivid",
            )
            # Download the generated image bytes and save them to OSS
            # (oss is the storage helper used in the original post)
            for data in response.data:
                object_key = f"{uuid.uuid4()}.png"
                image_url = oss.save(object_key, requests.get(data.url).content)
                logger.info("image_url: %s", image_url)
                return json.dumps({"image_url": image_url})
        except Exception as e:
            logger.error("no image %s", e)
            return json.dumps({"success": "false"})
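
To close the loop, here is a rough sketch of how the model's tool call could be routed to that function and the result handed back to the model. It reuses client, tools, response, and tool_calls from the sketch after step 1, and the hard-coded is_fee=False default is an assumption:

    import json

    messages = [{"role": "user", "content": "Draw a cat sitting on a windowsill."}]
    # The assistant message containing the tool calls must precede the tool results.
    messages.append(response.choices[0].message)

    for call in tool_calls:
        if call.function.name == "dall_e.text2image":
            args = json.loads(call.function.arguments)
            result = text2image(is_fee=False, prompt=args["prompt"])
            # Feed the tool result back so the model can phrase a final answer.
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": result,
            })

    final = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model, as in the earlier sketch
        messages=messages,
        tools=tools,
    )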
This is a lazy post: