Enforce GPT-3.5 API Response completion token structure

Hello everyone,

I am prompting the model gpt-3.5-turbo to return a hex code, along with the log probabilities of its completion tokens, for a given colour name. The response looks something like this:

Input: Provide hex code for red colour

Response:

token='#', logprob=-1.6240566e-06
token='FF', logprob=-0.20763995
token='000', logprob=-0.081627764
token='00', logprob=-0.20763995

So it uses 4 completion tokens in this case. My question is whether there is a way to force the model to split its output into tokens in a format I specify. I want the hex code tokenised as pairs of two characters, so that I can estimate the model's confidence for each of the R, G, and B channels individually. I would expect something like this:

token='FF', logprob=-0.20763995
token='00', logprob=-0.081627764
token='00', logprob=-0.20763995
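For context, here is a minimal sketch of the post-processing I would do if the tokens lined up with the channel boundaries (the helper name and data are my own, assuming the token/logprob pairs have already been extracted from the logprobs field of the response). Since the logprob of a sequence of tokens is the sum of their individual logprobs, an aligned tokenisation gives a per-channel confidence directly. It also shows why the current tokenisation defeats this: a token like '000' straddles the green and blue channels, and its logprob cannot be split between them after the fact.

```python
def channel_logprobs(tokens):
    """Regroup (token, logprob) pairs into 2-character hex channels,
    summing logprobs per channel. Returns None if any token straddles
    a channel boundary, in which case regrouping is impossible."""
    channels = []           # list of (hex_pair, summed_logprob)
    buf, buf_lp, pos = "", 0.0, 0
    for tok, lp in tokens:
        start = pos
        pos += len(tok)
        # the token must fall entirely inside one 2-char channel
        if start // 2 != (pos - 1) // 2:
            return None     # e.g. '000' spans two channels
        buf += tok
        buf_lp += lp
        if len(buf) == 2:
            channels.append((buf, buf_lp))
            buf, buf_lp = "", 0.0
    return channels

# A tokenisation that happens to align with the channels: regrouping works.
aligned = [("FF", -0.2076), ("0", -0.05), ("0", -0.03), ("00", -0.2076)]
print(channel_logprobs(aligned))

# The tokenisation I actually get: '000' straddles green/blue.
misaligned = [("FF", -0.2076), ("000", -0.0816), ("00", -0.2076)]
print(channel_logprobs(misaligned))  # None
```

This is why I want to force the split at generation time rather than repair it afterwards.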

I have already tried various system messages; with one of them I managed to remove the # from the response tokens, and got the following:

token='FF', logprob=-0.20763995
token='000', logprob=-0.081627
token='00', logprob=-0.20763995

I also ran some trials using function calling, but had no luck.

So I want to know if there is a way to achieve this.

Thanks in advance for your response.