Comparison of Output Completeness Between ChatGPT and Our AI Model

When I use ChatGPT to generate information or code, it provides a complete output. However, when I use the AI model developed by us, it produces incomplete information or code for the same request. Here is my code, please check it:

```python
import openai
import pytesseract

# Point pytesseract at the local Tesseract install
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# OCR the form screenshot into plain text
image_text = pytesseract.image_to_string(r"E:\Chatgpt\ETC Image\CustomerMaster.png")
print("image_text", image_text)

# First call: ask the model to turn the OCR text into HTML
response_html = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": image_text + " Analyze the fields and generate HTML code."}
    ],
    max_tokens=7000,
    temperature=0.7,
)
html_code = response_html["choices"][0]["message"]["content"].strip()
print("Html", html_code)

# Second call: ask for responsive CSS for the generated HTML
response_css = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": html_code + " Generate CSS for the generated HTML and make it responsive with a Bootstrap design. (Don't omit any fields.)"}
    ],
    max_tokens=7000,
    temperature=0.7,
)
css_code = response_css["choices"][0]["message"]["content"].strip()
print("css_code", css_code)
```

  1. Use the model gpt-4o unless you understand why you selected GPT-4.
  2. Do not set max_tokens to such an impossible number on GPT-4, and an impractical one on other models. On current models the parameter is named max_completion_tokens, and a value of 3000 more than covers what the AI will produce for you.
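Those two fixes can be sketched as a small helper that builds the request arguments. This is a hypothetical example, not the poster's code: the `completion_kwargs` name is mine, `max_completion_tokens` and `client.chat.completions.create` are from the current OpenAI Python SDK (v1+), and 3000 is the practical ceiling suggested above:

```python
def completion_kwargs(messages: list) -> dict:
    """Build Chat Completions request arguments following the advice above."""
    return {
        "model": "gpt-4o",  # default to gpt-4o unless GPT-4 is deliberately chosen
        "messages": messages,
        # Newer models use max_completion_tokens rather than the legacy max_tokens;
        # 3000 is more than enough for a single HTML or CSS answer.
        "max_completion_tokens": 3000,
        "temperature": 0.7,
    }

# Usage with the v1 SDK (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**completion_kwargs(messages))
# html_code = response.choices[0].message.content.strip()
```

Keeping the request arguments in one place also makes it easy to swap models later without touching each call site.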

Beyond that, the whole approach is questionable: running programmatic OCR on an image and then cramming poorly worded instructions into a system message alone is flawed.
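One way to fix the prompt-structure problem the reply points at: keep the standing instructions in the system message and put the OCR text and the concrete request in a user message. The helper below is a hypothetical sketch of that split (the function name and wording are mine, not from the thread):

```python
def build_messages(ocr_text: str, task: str) -> list:
    """Separate stable instructions (system) from the task payload (user)."""
    return [
        {
            "role": "system",
            # System message: who the assistant is and what its output must look like.
            "content": (
                "You are a front-end developer. You receive text extracted from a "
                "form screenshot and produce complete, valid HTML. Never omit any field."
            ),
        },
        {
            "role": "user",
            # User message: the specific request plus the extracted data.
            "content": f"{task}\n\nExtracted text:\n{ocr_text}",
        },
    ]
```

With this split, the second call for CSS only needs a different user message; the system instructions stay the same instead of being rebuilt by concatenating model output into a new system prompt.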

See my writings from just yesterday for the foundation needed to communicate with API AI models.
