Prompt to make exactly the same image but with a different pose

Via ChatGPT-4, I’ve made an image that I personally like a lot.

I would like to use this “character” for my app.

The problem is: I can’t really replicate the image with a different pose… How do you manage that? I want exactly the same character, just in a different pose.

This is the image:

I know this is possible because there are AI models that can generate exactly the same face doing different things. How do you achieve this?

5 Likes

It’s not 100% doable at this time with DALL-E 3, but if you keep the description of the character exactly the same at the beginning of the prompt, you can change the end of the prompt for the “action/scene…”

Like I said, it’s not 100% reliable, but that’s the best way I’ve found so far…

1 Like

You mention:

It’s not 100% doable at this time with DALL-E 3

Do you know if it’s actually possible with any other AI image generation tool?

If the “seed” parameter of image generation were again exposed in the API, or within the DALL-E calling methods inside ChatGPT, this would be a bit easier.

However, ChatGPT does have a mechanism for building upon the same image without its internal “reshuffling of the deck” that gives you different styles each time.

When you specifically refer to a prior image and say that you want to make improvements, ChatGPT should be able to send your prompt modifications to the similar seed origin.

More specifically, you can say “create a new dalle image with the prior image generation gen_id return value as the referenced_image_ids parameter to ensure style consistency, and make these small changes to the prompt you sent: xxxx”

Or more simply: make a four-pane 2x2 grid with variations on the same character.

4 Likes

We’ve been begging the DALLE team for this on the API, so they’re aware of the demand…

More specifically, you can say “create a new dalle image with the prior image generation gen_id return value as the referenced_image_ids parameter to ensure style consistency, and make these small changes to the prompt you sent: xxxx”

I’m using ChatGPT directly right now, not the API. Could you explain how I can get the gen_id and referenced_image_ids and send them in a prompt? I really like the prompt you’ve written here, and it looks like it might work for me.

2 Likes

The AI has the gen_id of the image in the session conversation history. It does not need to be typed in again, or even be written out as explicitly as I did here to have the AI use it again. ChatGPT is already instructed to reuse the gen_id returned with an image success message if the user refers to the previous image.

1 Like

Agree with this. This may be possible on ChatGPT DALL-E as it can use the gen_id and referenced_image_ids, provided that the source/generated images are part of the same chat which is being used for generation.

1 Like

This is something the Stable Diffusion folks have long since more or less solved.

I don’t do much with SD these days but, if you search a bit, it shouldn’t be hard to find a few hundred tutorials on how to do it in a dozen or so different ways.
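The core of the Stable Diffusion approach is fixing the sampler’s random seed, so the same starting noise (and hence a very similar character) re-emerges while the prompt text varies. Here is that property in miniature, as a pure-Python stand-in with no SD dependency; `fake_sample` is a hypothetical placeholder for a diffusion sampler’s noise draw:

```python
import random

def fake_sample(prompt: str, seed: int) -> list[float]:
    # Stand-in for a diffusion sampler's initial noise draw: with a
    # fixed seed the noise is identical on every call, which is the
    # property SD workflows exploit to keep a character's look stable
    # while the prompt text changes what gets drawn with it.
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

noise_a = fake_sample("girl kicking a football", seed=42)
noise_b = fake_sample("girl walking a puppy", seed=42)
assert noise_a == noise_b  # same seed, same starting noise
```

In real SD tooling the equivalent is passing a seeded generator to the pipeline (e.g. a fixed `torch.Generator` seed in diffusers), often combined with img2img on a reference image of the character.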

I was also looking for the same thing, but from my research and prompting, I understood that each time you ask, DALL-E will generate the pose but with a different costume, color, hairstyle, etc. So what I did was prompt DALL-E to create a “storyboard sequence of a character in a single image” and crop it manually. For example: “create a storyboard sequence of a middle-aged boy looking in different directions”. I know it is a workaround, but it solved my purpose.

4 Likes

You can prompt like this by asking for a sprite sheet:

“Create a sprite sheet featuring six unique poses of a character inspired by but distinctly different from Bart Simpson. This character should have spiky hair, wear a short-sleeved shirt and shorts, but in a different color scheme than Bart’s usual red and blue. Each pose should convey a different action or emotion, such as jumping, sitting, running, laughing, thinking, and waving. The character should have a playful and mischievous demeanor, similar to a young boy’s personality. The background should be transparent to focus on the character’s design.”

Let’s proceed with generating this image.

2 Likes

The most you can do is pass the same image and ask for variations in different poses.
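If you are on the API rather than ChatGPT, the closest analogue is the image-variations endpoint (`POST /v1/images/variations`, which at the time of writing supports DALL-E 2 only). Below is a small sketch that just validates and packages the endpoint’s parameters without making a network call, so you can see what a request needs:

```python
def variation_request(image_path: str, n: int = 4, size: str = "1024x1024") -> dict:
    # Mirrors the parameters of OpenAI's POST /v1/images/variations
    # endpoint (DALL-E 2 only at the time of writing). This helper only
    # validates and packages them; no network call is made here.
    if size not in {"256x256", "512x512", "1024x1024"}:
        raise ValueError(f"unsupported size: {size}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    return {"image": image_path, "n": n, "size": size}

params = variation_request("character.png", n=4)
```

With the official `openai` Python package this corresponds roughly to `client.images.create_variation(image=open("character.png", "rb"), n=4, size="1024x1024")`. Note that variations preserve the overall look, not the exact identity, so results still drift.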


2 Likes

Hello, I am trying to create different poses like the stencil you have posted. I am making a book for my granddaughter, and I am wondering what program you used and how you used it. This is my first time attempting to illustrate a book for her.

Yes, this is very difficult.

Here’s how I achieved reproducing a “character I had to have.”

ChatGPT is the tool used. It is the only way to get the benefit of reusing the same random seed, as long as you continue in the same session.

First I start with results:

[result images: the same girl character in several poses]
Then the technique: I don’t have to spell out the girl myself, because I use vision on the other poster’s girl.

Here’s a cute girl character. I want her to look the same, but just with a more realistic nose and more realistic head size to her appearance.

First use your vision skill to develop 100 words that describe her appearance robustly head to toe, and text which would reproduce the style depicted. What you do NOT include in this description that you write out now is anything about her pose or actions she is performing, just her consistent appearance description that can be reused.

Send that baseline character now to dalle, and remember what you said to make the character and pay attention to the gen_id value that you get back (as it will be reused in the future).

That gives us our generic girl, and I can alter the prompt and get new images with new gen_ids until I get a good image composition (the multiple poses above are an artifact of one gen_id that I decided to go with instead of retrying again and again).

Then we have a session that is ready for the next variations:

Your technique will now be to reference the previous gen_id, and resend the exact same prompt language.
The prompt will have new text appended for “Scenario:”, where the scenario describes the actions or the setting of the picture.
Send these scenarios one at a time to dalle, automatically requesting the next image after the first is received:
[“the girl is kicking a soccer football”, “the girl is running alongside a cute puppy”, “the girl is talking on her mobile phone”, “the girl is thinking hard about a mystery”]

(I actually didn’t like the initial result of the seed/gen_id when adding the scenario - DALL-E 3 was, for no reason, putting a bunch of faces in all of them - so I started searching for a new gen_id without reusing the image reference.)

By having a fixed character prompt that is only appended to, and by referencing a prior image generation, you can achieve relative consistency.
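The fixed-prompt-plus-scenario technique above is easy to script if you later move to automation. A minimal sketch, assuming your own base description text (the placeholder below stands in for the 100-word appearance description, which must stay word-for-word identical across requests):

```python
BASE_CHARACTER = (
    "a cute girl character "
    "(the fixed 100-word appearance description goes here, "
    "kept word-for-word identical across every request)"
)

SCENARIOS = [
    "the girl is kicking a soccer football",
    "the girl is running alongside a cute puppy",
    "the girl is talking on her mobile phone",
    "the girl is thinking hard about a mystery",
]

def build_prompts(base: str, scenarios: list[str]) -> list[str]:
    # Fixed character text always comes first; only the appended
    # "Scenario:" tail varies between generations.
    return [f"{base} Scenario: {s}" for s in scenarios]

prompts = build_prompts(BASE_CHARACTER, SCENARIOS)
```

Because the base never changes and only the tail is appended, each prompt differs in exactly one place, which is what gives the relative consistency described above.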

1 Like

Google “free AI image to image”. There are quite a few different ones.

If you have a storyboard with the character in various expressions, you can use Freepik’s AI, which has an option to generate a character. By uploading some images, it builds a model that can be reused later. You can upload just one image, but the more images, the more consistent and faithful it will be.

1 Like

Yes. Use OpenArtAI’s Character Creation. Feed this image to it with the Create Your Character option, choose ‘Start with One Image’, and after 10-20 minutes it will let you prompt your character into different poses or different scenes while keeping the character’s appearance exactly the same. You can use the ‘pose your character’ option in the ‘create character visual’ tab to pose your character however you like.

1 Like

When you learn, please let me know as well. This has been my dilemma. I want consistent characters in different positions. Ty