Today I tried the following. I told ChatGPT to:
- give me the seed number,
- give me the exact prompt sent to DALL-E,
- execute a code snippet in the following form:
```ts
// Code sent to ChatGPT's internal DALL-E tool
namespace dalle {
  text2im({
    size: "1024x1024",
    prompts: ["THE PROMPT"],
    seeds: [123456789]
  })
}
```
None of that worked (though it did a few days ago).
Why is OpenAI suddenly removing features that gave us more control and withholding this information?
Can somebody explain that to me?
When I asked ChatGPT about the change to the interface, it told me the following:
Certainly! The changes in the interface and process for generating images with DALL-E mainly involve how prompts are managed and how the generated images are presented. Here are the key updates:
- Direct Interpretation of Descriptions: Instead of showing or editing code-like scripts for DALL-E, I now directly interpret your descriptive text. This change aims to simplify the process — you provide a description, and I translate that into a prompt for DALL-E. This approach is more in line with natural language processing, focusing on the description’s content rather than its format.
- Non-Display of Exact Prompts: Previously, I could show you the exact prompts sent to DALL-E, including any modifications made to adhere to content policies. However, with the recent changes, these exact prompts are no longer displayed. This shift helps streamline the image generation process and places the focus more on the resulting images than on the specific wording of the prompts.
- Policy Adherence: The updated process continues to strictly adhere to content and ethical guidelines. This means certain types of images or styles (like those of artists whose work is less than 100 years old) aren’t created, and any sensitive content is carefully handled. The system also ensures diversity and inclusivity in the depiction of people.
- Feedback and Refinement: While the initial prompt details are not shown, the emphasis is still heavily on user feedback. You can request adjustments or entirely new images based on your needs, and I’ll translate that feedback into new prompts for DALL-E.
- Efficiency and User Experience: These changes aim to make the interaction smoother and more efficient. By focusing on your descriptions and the images themselves, rather than the mechanics of prompt generation, the process becomes more user-friendly and focused on the creative output.
- User Control: You still have significant control over the images created — the subject, style, composition, and other details. The difference is mainly in how your requests are processed and presented.
These updates are designed to enhance the overall experience with DALL-E, making it more intuitive and aligned with conversational interactions. If you have an image in mind, feel free to describe it, and I’ll assist in bringing your vision to life within this new framework!
Does that mean I can no longer use a seed number in my prompts or access DALL-E 3 directly, without ChatGPT rewriting my prompt? Midjourney looks a lot more interesting after this restrictive update from OpenAI. What is the reasoning behind it?
I just can't believe that this update makes me less precise with DALL-E, not more. That is really a bummer.
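For what it's worth, you can still call DALL-E 3 directly through the Images API, bypassing the chat interface entirely. Here is a minimal sketch, assuming the v1 `openai` Python package and an `OPENAI_API_KEY` in the environment; it at least returns the revised prompt the model actually used, though as far as I can tell the API exposes no seed parameter either:

```python
# Minimal sketch: calling DALL-E 3 directly via the OpenAI Images API.
# Assumes the v1 "openai" Python package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="THE PROMPT",  # your exact wording goes in verbatim
    size="1024x1024",
    n=1,
)

image = response.data[0]
print(image.url)             # URL of the generated image
print(image.revised_prompt)  # the prompt DALL-E 3 actually used
# Note: the Images API accepts no seed parameter, so exact
# reproducibility is (as far as I can tell) not available this way either.
```

So you get the exact prompt back, but reproducible seeds seem to be gone on both paths.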