In order for users to better understand the output of GPT-4V, it would be helpful to get attention maps showing which areas of the input image the model focused on.
The AI “maps attention” in the way you want it to.
An artist might approach painting the sky in the dog picture in the following way:
First, they would observe the gradient of colors in the sky, noting the transition from blue to orange. They might start by applying a light wash of blue at the top of the canvas, gradually adding white as they move downwards to create a sense of depth and distance.
Next, they would introduce the orange hues. Starting from where the blue fades into white, they would begin to blend in a warm, vibrant orange, creating a smooth transition between the two colors. This could be achieved by using a wet-on-wet technique, which allows the colors to blend directly on the canvas.
The artist would then paint in the clouds. They might use a dry brush technique with white paint to create fluffy, wispy clouds, allowing some of the underlying blue and orange to show through for a more realistic effect.
Finally, they would add details and highlights. Using a fine brush and a mixture of white and yellow paint, they could add highlights to the edges of the clouds where they catch the light from the setting or rising sun. They might also add subtle touches of purple or pink to enhance the richness and complexity of the sky.
@_j : Can you help me with the OpenAI API code used for submitting the image? We have an enterprise account, but I don't see the option for a model named "GPT-4V".
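For what it's worth, there is no model literally listed as "GPT-4V" in the API; vision has been exposed through vision-capable chat models (originally `gpt-4-vision-preview`, later `gpt-4o`). Below is a minimal sketch using the OpenAI Python SDK (v1 style), sending a local image as a base64 data URL. The filename `dog.jpg`, the prompt text, and the helper function names are just illustrative; check your account's model list for which vision model you actually have access to.

```python
import base64


def encode_image(path):
    """Read a local image file and return it as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def build_vision_request(image_b64, prompt):
    """Build a Chat Completions payload with an inline (data-URL) image."""
    return {
        "model": "gpt-4o",  # vision-capable; the older name was "gpt-4-vision-preview"
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 300,
    }


def send_vision_request(payload):
    """Send the request. Requires `pip install openai` and OPENAI_API_KEY set."""
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**payload)
    return response.choices[0].message.content


# Example usage (needs a real image file and API key):
# payload = build_vision_request(encode_image("dog.jpg"), "Describe the sky in this image.")
# print(send_vision_request(payload))
```

You can also pass a plain `https://` URL in `image_url` instead of a data URL if the image is publicly hosted, which avoids the base64 step.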