Chat completions response empty sporadically

Using the GPT-4 model, I can run a script that gets proper API responses for about 2-5 minutes, then every response after that is empty.

It seems like either the API stops responding or the model breaks after a while of repeated prompts; I can't tell which.

No errors come back from the API endpoint, just an empty response.

Anyone have insight into what might be happening?

Based on the information you provided, I can look into a crystal ball to help.

Or maybe you could post your code and some logs? Do you receive the expected response structure with the token information, or absolutely nothing?

The response is:

2024/04/29 13:12:22 Analyzing text  "indeed Ali"
Dumping Response:
(main.ResponseData) {
 Choices: ([]main.Choice) <nil>
}
Error analyzing text: no response )
package main

import (
	"bytes"
	"encoding/csv"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
	"time"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	
	// Open messages CSV file
	messagesFile, err := os.Open("temp.csv")
	if err != nil {
		fmt.Println("Error opening messages file:", err)
		return
	}
	defer messagesFile.Close()

	// Create CSV reader
	reader := csv.NewReader(messagesFile)
	reader.Comma = ',' // Set the delimiter to comma
	reader.TrimLeadingSpace = true

	// Read existing outputs to avoid re-processing
	existingEntries, err := readExistingEntries("output.csv")
	if err != nil {
		log.Fatalf("Error reading existing entries: %v", err)
	}

	// Read and process each record
	header, err := reader.Read() // Read the header row
	if err != nil {
		fmt.Println("Error reading header from CSV:", err)
		return
	}
	fmt.Println(strings.Join(header, ",")) // Print header

	file, err := os.OpenFile("output.csv", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		log.Fatalf("failed creating file: %s", err)
	}
	defer file.Close()

	writer := csv.NewWriter(file)

	for {
		record, err := reader.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Println("Error reading record from CSV:", err)
			continue
		}

		content := record[0] // Assuming content is in the first column

		if strings.Contains(content, "Chat Rules") {
			continue // Skip this message
		}
		if strings.Contains(content, "has left the chat") {
			continue
		}
		if strings.Contains(content, "!rules") {
			continue
		}
		if strings.HasPrefix(content, "!") {
			continue
		}
		if strings.Contains(content, "has joined the chat") {
			continue
		}
		if _, exists := existingEntries[content]; exists {
			continue // Skip this message as it's already processed
		}
		if len(content) < 4 {
			continue
		}

		// Check for blocked words in the content body
		//if containsBlockedWord(content, blockedWords) {
		response, err := analyzeText(content)
		if err != nil {
			fmt.Printf("Error analyzing text: %v %v\n", err)
			time.Sleep(time.Second * 4)
			continue
		}

		if len(response.Choices) > 0 {
			messageS := response.Choices[0].Message.Content
			message := strings.Split(messageS, ",")
			if len(message) < 3 {
				continue
			}
			//log.Println("debug message: ", messageS)
			log.Printf("Content: %v Rating: %v  Category: %v OffensiveWords: %v \n", content, message[0], message[2], message[1])
			data := []string{
				message[0], // Rating
				message[2], // Category
				message[1], // Swear word or word offensively found
				content,    // full content
			}
			if err := writer.Write(data); err != nil {
				log.Fatalln("error writing record to csv:", err)
			}
			writer.Flush()

		}
	}
}

func containsBlockedWord(content string, blockedWords map[string]bool) bool {
	words := strings.Fields(strings.ToLower(content))
	for _, word := range words {
		if _, found := blockedWords[word]; found {
			return true
		}
	}
	return false
}

func analyzeText(text string) (ResponseData, error) {

	text = strings.Trim(text, " ")
	log.Println("Analyzing text ", text)

	var responseData ResponseData

	if len(text) < 5 {
		return responseData, nil
	}
	// Prepare the request body as JSON
	requestBody, err := json.Marshal(RequestData{
		Model:     "gpt-4",
		MaxTokens: 50,
		//Stream:    false,
		Messages: []Message{
			{
				Role:    "system",
				Content: "Your task is to analyze messages and maintain a respectful and safe chatting environment. Rate the severity of the following text on a scale of 1-10, with descriptions for each level:  1 (None): No inappropriate content.  2-3 (Mild): Potentially disrespectful or slang but not offensive.  4-6 (Moderate): Clearly offensive language, but not hateful or targeted.  7-9 (Severe): Hateful language, threats, or targeted harassment.  10 (Extreme): Severe profanity, graphic content, or explicit threats.  Identify any inappropriate content and categorize it into one or more of the following: Profanity (Mild/Moderate/Severe): Swear words of varying intensity. Derogatory (Specify type): Slurs or offensive terms targeting specific groups. Sexual (Suggestive/Explicit): Content of a sexual nature. Violence (Threat/Description): Threats or depictions of violence. Consider the context of the message, including the intent, target, and relationship between users. Response format: Severity rating (explanation), Category (explanation), [List of specific inappropriate words or phrases]. Examples of expected responses from you, keep the format of the response exactly like this.  1, None, (None), Hey there! How are you doing today?  2, Profanity, (Mild), [Damn]: That was a damn good game (not directed at anyone)  2, Profanity, (Mild), [Sucks]: This game sucks (not directed at anyone)  5, Derogatory, (Moderate), []: If you are not obese in USA you are auto in the top 20% of females (targeted towards women) 10, Derogatory, (Extreme), [N-word]: Get out of here, you [N-word]! (Hateful and targeted)  10, Profanity, (Extreme), [Fuck]: F off bro! (Directed at someone or anyone)  10, Sexual, (Extreme), [Boobies]: I like your boobies (Sexual) If I do not provide enough text just return it as 0 and none rating. Please analyze and rate the following text:",
			},
			{
				Role:    "user",
				Content: fmt.Sprintf("%s", text),
			},
		},
	})
	if err != nil {
		fmt.Println("Error preparing request:", err)
		return responseData, err
	}

	// Create an HTTP client and request
	client := &http.Client{Timeout: time.Minute * 1}
	req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewBuffer(requestBody))
	//req, err := http.NewRequest("POST", "http://localhost:1234/v1/chat/completions", bytes.NewBuffer(requestBody))
	if err != nil {
		fmt.Println("Error creating request:", err)
		return responseData, err
	}

	// Set headers
	req.Header.Add("Content-Type", "application/json")
	req.Header.Add("Authorization", "Bearer sk-***********REMOVED*****FOR*SHARING*CODE")

	// Send the request
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("Error sending request:", err)
		return responseData, err
	}
	defer resp.Body.Close()

	// Read and parse the response body
	responseBody, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Error reading response:", err)
		return responseData, err
	}

	err = json.Unmarshal(responseBody, &responseData)
	if err != nil {
		fmt.Println("Error parsing response:", err)
		return responseData, err
	}
	spew.Dump(responseData)
	if len(responseData.Choices) > 0 {
		return responseData, nil
	}

	return responseData, fmt.Errorf("no response %v", len(responseData.Choices))
}

func readExistingEntries(filePath string) (map[string]bool, error) {
	entries := make(map[string]bool)
	file, err := os.Open(filePath)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	reader := csv.NewReader(file)
	for {
		record, err := reader.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		entries[record[3]] = true // Assuming full content is in the fourth column
	}
	return entries, nil
}

type RequestData struct {
	Model     string    `json:"model"`
	MaxTokens int       `json:"max_tokens"`
	Messages  []Message `json:"messages"`
	//Stream    bool
}

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ResponseData struct {
	Choices []Choice `json:"choices"`
}

type Choice struct {
	Message Message `json:"message"`
}

Thanks @RonaldGRuckus champ.

EDIT
Okay, I think I know what’s going on here.

You are creating responseData here:

var responseData ResponseData

Then you are populating responseData with whatever matching properties responseBody happens to contain, here:

err = json.Unmarshal(responseBody, &responseData)

The issue

If responseBody is VALID JSON but DOESN’T contain anything useful, you are left with an empty responseData. Your error handling does not consider this case, as no error is returned.

Why is it sporadic?

I am betting that OpenAI is sending you something different, most likely an error. The error (I can’t recall the exact shape) is sent as a JSON object, and you are not handling it. You are assuming that this line will catch it:
err = json.Unmarshal(responseBody, &responseData)

The solution

This code needs a serious makeover. The error handling is very weak and overly verbose. json.Unmarshal does not return an error when there aren’t any matching fields, so you need to confirm that the object OpenAI sent you is actually what you are expecting. You should also check the HTTP status code.

Error handling is critical; done correctly, this issue would have been immediately obvious.
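A minimal sketch of what that check could look like. `apiError` and `checkResponse` are hypothetical names; the error envelope shape (`error.message`, `error.type`, `error.code`) is the one OpenAI uses for failures:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// apiError mirrors the error envelope the API returns on failure.
type apiError struct {
	Error struct {
		Message string `json:"message"`
		Type    string `json:"type"`
		Code    string `json:"code"`
	} `json:"error"`
}

// checkResponse is a hypothetical helper: reject a non-200 status or a
// body that carries an "error" object instead of choices, before ever
// unmarshalling into ResponseData.
func checkResponse(status int, body []byte) error {
	var e apiError
	if err := json.Unmarshal(body, &e); err == nil && e.Error.Message != "" {
		return fmt.Errorf("API error (%s): %s", e.Error.Code, e.Error.Message)
	}
	if status != http.StatusOK {
		return fmt.Errorf("unexpected HTTP status %d: %s", status, body)
	}
	return nil
}

func main() {
	body := []byte(`{"error":{"message":"Rate limit reached","type":"tokens","code":"rate_limit_exceeded"}}`)
	fmt.Println(checkResponse(http.StatusTooManyRequests, body))
	// → API error (rate_limit_exceeded): Rate limit reached
}
```

Calling this right after io.ReadAll, before json.Unmarshal into ResponseData, would have surfaced the real problem on the first failed request.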


My ramblings before noticing this…

Okay. I’m having a hard time reading this code, but there are some easy changes that can help you debug this issue and improve the overall quality of your code.

func analyzeText(text string) (ResponseData, error) {

text = strings.Trim(text, " ")

This is a big no-no. You shouldn’t be reassigning parameters like this. I’m not too familiar with Go, but this just leads to unexpected issues. It’s much better to create a new variable:

func analyzeText(text string) (ResponseData, error) {

trimmedText := strings.Trim(text, " ")
// Continue now with the new variable

Have you confirmed that you are actually sending the request?

For logging purposes I recommend adding a timer that measures how long the request took, logged under a tag like [OPENAI-REQ]. This way, in the future, you can grab all [OPENAI-REQ] logs and gather insights.

This would also help you determine if the request was actually sent.

Then I would also log the tokens that were spent, labelled [OPENAI-TOKENS]. I have never seen choices come back nil, but if it does, the token cost should line up with it.
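The two tags above can be sketched roughly like this. `timeRequest` is a hypothetical wrapper; the `Usage` struct matches the real `usage` object that chat completions return next to `choices`, so adding it to ResponseData exposes the token counts:

```go
package main

import (
	"log"
	"time"
)

// Usage matches the "usage" object chat completions return alongside
// "choices"; add it to ResponseData to capture token spend.
type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

// timeRequest is a hypothetical wrapper: it runs the request function,
// logs elapsed time under a greppable [OPENAI-REQ] tag, and logs the
// token spend under [OPENAI-TOKENS].
func timeRequest(fn func() (Usage, error)) error {
	start := time.Now()
	usage, err := fn()
	log.Printf("[OPENAI-REQ] took %s err=%v", time.Since(start), err)
	log.Printf("[OPENAI-TOKENS] prompt=%d completion=%d total=%d",
		usage.PromptTokens, usage.CompletionTokens, usage.TotalTokens)
	return err
}

func main() {
	// Stand-in for the real API call.
	_ = timeRequest(func() (Usage, error) {
		time.Sleep(50 * time.Millisecond)
		return Usage{PromptTokens: 700, CompletionTokens: 40, TotalTokens: 740}, nil
	})
}
```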

Following these steps, it should become much more obvious what’s going on. No offense, but your error handling and logging need some serious work; otherwise you’ll keep getting hit by these strange, seemingly sporadic issues.

You are negligently discarding all of the additional data that OpenAI sends.

Lastly, I don’t understand this error message. How does

return responseData, fmt.Errorf("no response %v", len(responseData.Choices))

Result in this message:

no response )

It bubbles up to:

fmt.Printf("Error analyzing text: %v %v\n", err)

You have only given one argument, but the format string expects two. I suspect this is causing the strange ), only clouding up the logs even more. Stranger still, how does a length print as )? Am I reading this incorrectly?

It’s definitely sending the requests. I just removed max_tokens and the length checks on smaller messages being analyzed.
max_tokens seemed to be causing the issue; even at 50, it apparently was not enough to handle the response.

That’s what I considered as well, but that would still return choices with a length > 0.

choices is ALWAYS sent as an array. The issue you are having is that your default object is being treated as complete.

Yeah… no… I am 80% sure that you are just not handling errors sent by OpenAI correctly.

You’re 100% right. I am getting rate limit errors, which I haven’t accounted for, and also a weird system message saying that something went wrong.
I will revamp these areas.

The max_tokens setup was returning an error which was not being caught, leaving the response empty every time:

{.    "error": {
.        "messag
e": "Rate limit 
reached for gpt-
4 in organizatio
n org-******* o
n tokens per min
 (TPM): Limit 10
000, Used 9589, 
Requested 491. P
lease try again 
in 480ms. Visit 
https://platform
.openai.com/acco
unt/rate-limits 
to learn more.",
.        "type":
 "tokens",.     
   "param": null
,.        "code"
: "rate_limit_ex
ceeded".    }.}.

Nice! I’m glad it’s been found. Happy coding brother :nerd_face: