GPT4 completion model python api

Hello, I used the Python API to play around with GPT3/3.5. I used the Completion class.
I got access to GPT-4 API so I wanted to do the same but I realized that it’s only available through the ChatCompletion class.

Now I didn’t do much research into the difference between the two, but from the little coding I did, I prefer the Completion class. Is this coming to GPT-4 ? (and if someone had documentation about the difference between the two ways to make inference, completion and chatcompletion).
Thank you

I don’t believe so. ChatML is the future, I think… and the future is Bright!

Once you wrap your head around it, it’s a lot simpler and helps protect against prompt injection.

I believe the Chat endpoint has already been added to the Python library…

Is there something specific you don’t like about Chat endpoint?

well, as I said in the original post, I don’t really understand the differences between the two when it comes to what happens with the model.
To me, the completion felt more natural to how any nlp prediction model would work; it has a limited amount of tokens in input, and it tries to predict the following tokens.
ChatCompletion introduces the “roles” that I can’t wrap my head around.

it’s a lot simpler and helps protect against prompt injection.

do you mean injections such as sql injection etc? because I don’t see why ChatCompletion is the answer to that, it’s more about how you send the request than a way to query the model.

Do you mean that we are going to start changing the prompt method to be divided into 2 or more sections?
I read that for got-4 there is the cost of prompt and the cost of response are we still attached to the 8k (or 32k characters) between both?