Image Variations in Swift, image sending issue

Hi there!
So I made this app, and now I wanted to expand it to use image variations as well.

However, when uploading the image data, I am unsure if I am approaching it correctly.
multipart/form-data is in itself kinda over the top, but I managed to find a package that helps with wrapping everything up. Still, I'm not 100% sure the backend is expecting what I am sending :sweat_smile:

In the docs, it says STRING, so I have been trying to send base64-encoded data,

using this code below, with the help of the package:

static func variate(data: Data?, forUser user: String) async throws -> Response {
    guard let url = URL(string: ""),
          let promptData = data?.base64EncodedData() else { return Response(created: 0, data: []) }
    let boundary = try Boundary(uncheckedBoundary: "example-boundary")
    let multipartFormData = try MultipartFormData(boundary: boundary) {
        try Subpart {
            try ContentDisposition(uncheckedName: "image", uncheckedFilename: "image.png")
            ContentType(mediaType: .multipartFormData)
        } body: {
            promptData
        }
    }
    var request = URLRequest(url: url, multipartFormData: multipartFormData)
    request.httpMethod = "POST"
    request.addValue("multipart/form-data", forHTTPHeaderField: "Content-Type")
    request.addValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    return try decodeOrThrow(data: data)
}

After this, I only get this error back:

"message": "Uploaded image must be a PNG and less than 4 MB"

Am I missing something obvious here? Not sure if the TYPE of the image I am sending is wrong or the approach in HOW it is being sent.
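In case it helps to see the bytes: that error usually means the data the server received doesn't start with a PNG signature. Here is a sketch (my own hypothetical helper, not from any package; the field names "image" and "user" are taken from the endpoint docs) that builds the multipart body by hand, so you can see exactly what should go over the wire. The key detail is that the image part carries the raw PNG bytes, not a base64 string:

```swift
import Foundation

// Hypothetical helper: builds a multipart/form-data body by hand.
// The "image" part is the *raw* PNG bytes -- not base64EncodedData().
func makeMultipartBody(boundary: String, imageData: Data, user: String) -> Data {
    var body = Data()
    func append(_ string: String) { body.append(Data(string.utf8)) }

    // File part: raw binary PNG, declared as image/png.
    append("--\(boundary)\r\n")
    append("Content-Disposition: form-data; name=\"image\"; filename=\"image.png\"\r\n")
    append("Content-Type: image/png\r\n\r\n")
    body.append(imageData)          // raw bytes, NOT imageData.base64EncodedData()
    append("\r\n")

    // Plain text part for the user field.
    append("--\(boundary)\r\n")
    append("Content-Disposition: form-data; name=\"user\"\r\n\r\n")
    append("\(user)\r\n")

    append("--\(boundary)--\r\n")   // closing boundary
    return body
}
```

Also note that the request's Content-Type header has to carry the boundary, e.g. `multipart/form-data; boundary=example-boundary`; setting it to a bare `multipart/form-data` (as in the snippet above) leaves the server unable to split the parts.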


Getting the same thing, and I’ve also tried different combinations. The only time I had any success was taking the base64 returned from the generations endpoint and sending that buffer to variations. If I create a buffer from an image myself and send it, it doesn’t work.

If you search the forums, some people had problems with 24-bit vs. 32-bit PNGs, i.e. whether the file has an alpha channel for transparency…
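If you want to check that from code, here is a small sketch (my own helper, pure Foundation, assuming the 24-vs-32-bit reports are about the PNG color type) that reads the color-type byte out of the IHDR chunk: 2 means 24-bit RGB, 6 means 32-bit RGBA with alpha:

```swift
import Foundation

// Hypothetical check: reads the PNG IHDR color-type byte.
// PNG layout: 8-byte signature, then IHDR chunk: length(4), "IHDR"(4),
// width(4), height(4), bit depth(1), color type(1), ...
// Color type 2 = RGB (24-bit), 6 = RGBA (32-bit with alpha).
func pngColorType(_ data: Data) -> UInt8? {
    let pngSignature = Data([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])
    guard data.count > 25, data.prefix(8) == pngSignature else { return nil }
    return data[25]  // the color-type byte
}
```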

Hope this helps!

I’m reviewing all posts/topics related to specifying the “image” parameter for the /images endpoints. At first I was, like the OP, trying to get an app to work. Now I’m down to just getting any POST request, using any technology, to work so I can figure it out.

The only success I’ve had is when I use Postman and specify “image” using the “File” option to fill out the body variable. Unfortunately, I don’t know what Postman actually sends, although by sniffing the call, it looks like a base64-encoded string using UTF-8.

When I try to send the “image” body variable as a string of base64 encoded characters, I get a “Missing image file in request. (Perhaps you specified ‘image’ in the wrong format?)” error. The same file works fine when pushed through Postman as a type “file”… so I need to figure out what Postman is doing that works.
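For what it’s worth, the difference with the “File” option is that Postman sends a binary file part (with a filename in the Content-Disposition), not a string field. The request side of that looks roughly like this sketch (the URL, token, and boundary are placeholders, and `body` would be a multipart payload whose “image” part is the file’s raw bytes):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Sketch of the request setup a tool like Postman effectively performs.
// Two details matter: the Content-Type header carries the boundary, and
// the body is the prebuilt multipart payload with the raw file bytes.
func makeVariationRequest(url: URL, token: String, body: Data, boundary: String) -> URLRequest {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")  // boundary must be here
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    request.httpBody = body
    return request
}
```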

Can’t we get someone from OpenAI to clarify the proper way to send a request from a file, without sample code built on technology that hides the implementation of opening the file, converting it, and setting the parameters in the body?