DALL-E 3 - How to turn real pet photos into replicas in different gestures and themes?

I’m trying to generate pet images using uploaded real pet photos. For example, I want to generate a pet in royal-family-portrait style using my own pet’s photo.

Is there a good way to do it? I’ve tried different prompts but the generated pet images do not look like the original pet photos…


Welcome to the forums!

My understanding is that DALL-E isn’t a GPT model at all. What’s certain is that ChatGPT isn’t really end-to-end multimodal.

What ChatGPT does is generate what it thinks is a better image prompt than what you’re describing. If you upload a picture and tell it to recreate it, it will describe that picture with words, and DALL-E will use that text description to render an image. DALL-E never sees your input image.

Knowing that, you can try to ask ChatGPT to come up with an incredibly detailed description of the input image and hope for the best - but it’s never gonna look as good as a real image to image solution.
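For anyone who wants to script this workflow outside ChatGPT, the same describe-then-generate pipeline can be sketched with the OpenAI Python SDK: one call asks a vision-capable chat model for a detailed description of the photo, and a second call hands only that text to DALL-E 3. This is a minimal sketch under stated assumptions, not a tested integration — the model names, the description prompt, and the `describe_then_generate` helper are my own choices, and the output will only ever be as faithful as the text description allows.

```python
import base64


def build_description_request(image_b64: str) -> list:
    """Chat message asking a vision-capable model to describe the pet photo."""
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("Describe this pet in exhaustive detail: breed, coat "
                      "color and markings, eye color, ear shape, build, and "
                      "any distinguishing features.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }]


def describe_then_generate(image_path: str, style: str) -> str:
    """Step 1: have a vision model turn the photo into text.
    Step 2: hand only that text to DALL-E 3 — it never sees the photo."""
    from openai import OpenAI  # requires the openai package and an API key
    client = OpenAI()
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    description = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable chat model works here
        messages=build_description_request(image_b64),
    ).choices[0].message.content
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"A royal family portrait of a pet. {description} "
               f"Painted in the style of {style}.",
        size="1024x1024",
    )
    return result.data[0].url  # URL of the generated image
```

Even with an exhaustive description, the second step is still pure text-to-image, which is why the likeness drifts — there is no image conditioning anywhere in the chain.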


DALL-E is a diffusion model like Midjourney and Stable Diffusion.

Kinda. What it definitely does is reframe the prompt to be in line with OpenAI’s alignment efforts. Which is generally a good thing—when asking for a group photo of “doctors” it wouldn’t be appropriate for it to consist solely of old white men—but it occasionally creates some hilarious goofs, like trying to create ethnically diverse fantasy characters such as “South Asian Dark Elves” @PaulBellow.

This is 100% correct. At least for now, DALL-E 3 doesn’t have img2img capability or inpainting so you’re not going to be able to get exactly what you’re looking for using this particular tool, unless your pet happens to look like the canonical example of a particular breed.


I ran some tests on this several months back with my own pet and had some decent results by asking DALL-E to analyze and create a prompt for the image, then adjusting it from there using Gen_ID-based iterative prompting. However, I think I was just lucky because, as @elmstedt points out, my dog’s look is fairly generic in description and likely very prevalent in the training data. Below are some examples, but you’ll have much better results working with a diffusion product that has img2img, inpainting, and additional extensions that offer better control.

Reference image:



My company created Pet Image Wizard. Give it a try in the GPT Store. You upload your pet’s photo and turn it into a superhero, a cartoon, and various other styles and themes. I hope it can create what you’re looking for!


Here are some more images from Pet Image Wizard.

Thanks a lot for the intro to Pet Image Wizard.

I tried Pet Image Wizard and plain ChatGPT with the same pet photo and the same prompts. The results are not that different; they even produce similar backdrops in some cases. And neither produces realistic images that resemble the original photo.

First one is Pet Image Wizard and second one is ChatGPT.

Sometimes, not always, when you don’t get the results you are looking for, you have to say in the prompt, “No. Sassy (pet’s name) is not brown, but black and white. Try again.” or “No. This image does not look like the uploaded image.” We have made a few GPTs now. It’s crazy when it says it cannot perform an action that it is an expert at doing. During training of GPTs, I have argued by saying, “Yes, you can do this. You are an expert at creating consistent images.” Then the GPT does its job correctly after getting a pep talk. At this time, image generators are not perfect, but with feedback and advancements, I hope we get there soon.


Let me know your overall issues with how the generations are turning out compared to what you are looking for. I will see if it is possible to make adjustments on our end in the GPT or maybe provide you with an effective prompt. Thanks.

Currently there is no major difference between the GPT and plain ChatGPT. I wonder what you could do in the GPT to improve the image results.

@oicit A GPT will not enable DALL-E to do something it is not capable of doing to begin with. Rather than actually “seeing” the details in your uploaded images, it is only capable of recognizing their visual elements.

In order to enhance the quality of the generated images, the quality of the prompt also needs to improve. You can find several discussions in the following threads that will point you in the right direction in terms of understanding how to work with DALL-E.

As for having the GPT itself perform better, here is why it cannot, in its own words:

Ah, I see where the confusion might be wagging its tail! While I do have some special tricks up my sleeve, it’s important to remember that I’m working with the magic of the DALL·E system, which creates images based on the descriptions given. I don’t have access to different training data or programming methods beyond the standard DALL·E capabilities. My specialty lies in adding a personal touch to the prompts, ensuring they capture the unique characteristics and colors of your pet as closely as possible.

I craft a detailed prompt based on your description and the photo you upload to generate an image using DALL·E. Since DALL·E doesn’t have the ability to directly analyze images, I rely on your description of your pet’s unique features to create a prompt that aims to replicate those characteristics as closely as possible.

For the image I previously generated, here’s the exact prompt I used:

“A realistic and detailed image of a dog that looks exactly like the one in the provided photo. The dog has shiny black fur, engaging eyes, and a relaxed posture with a tennis ball. The dog appears content and dignified, lying comfortably on a textured surface. The lighting should mimic a sunny day, casting natural light and shadows on the dog, enhancing its sleek black coat. The image should capture the dog’s calm and friendly demeanor, making it look lifelike and true to the pet’s unique characteristics.”

In my initial prompt, when I said “looks exactly like the one in the provided photo,” it was indeed an overstatement considering DALL·E’s capabilities. Since DALL·E can’t see or analyze the photo you provided, it relies entirely on text descriptions to generate images. My role is to translate the visual elements of your pet into a text description that DALL·E can understand and use to create a new image that hopefully resembles your pet based on the details you and I provide.

So, the correct approach for the user is to describe the features and attributes of their pet as accurately as possible without implying that DALL·E can view or understand the actual photo.