I'm working on an app that uses the OpenAI API to generate GPT-based output from input submitted through a web form. The app runs on Python with Quart (async) behind an Apache2 server. The server itself is stable.
Every time I submit the form in my browser, the response resolves to {"error":""} with no explanation. I've gone through the logs and they point to a timeout issue, so I added a timeout setting to my code to give the request more time, but the problem keeps recurring. My hunch is that something else is wrong, but I can't put my finger on it.
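For reference, this is roughly the kind of timeout setting I mean (a sketch only; in the full code below I simply pass timeout=10.0 on the request, and the function name here is just for illustration):

```python
import asyncio
import httpx

async def fetch_with_timeout():
    # Explicit httpx timeout: 30 seconds overall, 5 seconds to establish the connection.
    # (My real code just passes timeout=10.0 on the individual request.)
    timeout = httpx.Timeout(30.0, connect=5.0)
    async with httpx.AsyncClient(timeout=timeout) as client:
        response = await client.get(
            "https://api.openai.com/v1/models",
            headers={"Authorization": "Bearer MYAPIKEY"},  # placeholder key
        )
        return response.status_code

print(asyncio.run(fetch_with_timeout()))
```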
Here’s the app’s code:
```python
from quart import Quart, jsonify, request
from openai import OpenAI
import httpx
import json
import logging

app = Quart(__name__)

OPENAI_API_KEY = 'MYAPIKEY'

logging.basicConfig(filename='app.log', level=logging.DEBUG)


@app.route('/')
async def hello_world():
    return 'Hello, World!'


@app.route('/app', methods=['GET', 'POST'])
async def generate_output():
    if request.method == 'POST':
        app_role = "Instruction to GPT."
        form_data = await request.form
        wish_count = int(form_data.get('wish_count', '10'))
        app_command = f"Generate {wish_count} wishes."

        writing_style = form_data.get('writing_style', '')
        if writing_style and writing_style != 'default':
            automatedfaqs_command += f" Generate the wishes in a {writing_style} tone."

        genie_friendly = form_data.get('seo_friendly') == 'on'
        if genie_friendly:
            genie_objective = form_data.get('genie_objective', '')
            genie_tone = form_data.get('genie_tone', '')
            app_command += (
                f" The objective is {genie_objective}. "
                f" The tone is {genie_tone}."
            )

        try:
            async with httpx.AsyncClient() as client:
                api_url = "https://api.openai.com/v1/chat/completions"
                headers = {"Authorization": f"Bearer {OPENAI_API_KEY}"}
                payload = {
                    "model": "gpt-3.5-turbo",
                    "messages": [
                        {"role": "system", "content": app_role},
                        {"role": "user", "content": app_command}
                    ]
                }
                response = await client.post(api_url, json=payload, headers=headers, timeout=10.0)

                logging.debug("Debug message")
                logging.info("Information message")
                logging.warning("Warning message")
                logging.error("Error message")
                logging.critical("Critical message")

                response_data = response.json()

                # Extract the response content
                faqs = response_data['choices'][0]['message']['content'].strip()

                # Return the FAQs and answers in the browser
                return jsonify({'wishes': wishes})
        except Exception as e:
            logging.error(f"An error occurred: {str(e)}")
            return jsonify({'error': str(e)})
    else:
        # Form follows
        return '''
        <form method="post">
            ...
        </form>
        '''
```
Thanks @PaulBellow! Yes, I did test this before with the Flask version of my app and it worked fine. After migrating the code to Quart and Hypercorn, the app stopped working.
I'd start by using a known-working example of an API call, then altering it one piece at a time until it is what you need it to be. That way, whatever is the first change that breaks it, that's your problem (see the sketch below).
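For example, a minimal standalone call to the chat completions endpoint, roughly like this (a sketch; drop in your own key, and note that raise_for_status() surfaces HTTP errors instead of letting them get swallowed):

```python
import asyncio
import httpx

OPENAI_API_KEY = "MYAPIKEY"  # placeholder

async def main():
    async with httpx.AsyncClient(timeout=30.0) as client:
        response = await client.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
            json={
                "model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": "Say hello."}],
            },
        )
        # Fail loudly on non-2xx responses so the real error is visible.
        response.raise_for_status()
        data = response.json()
        print(data["choices"][0]["message"]["content"])

asyncio.run(main())
```

If that runs cleanly from the server's command line, the API call itself isn't the problem, and you can layer your Quart code back on top of it piece by piece.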
Assuming you're genuinely getting an error back (be sure of that first), it means the Flask and Quart versions are not generating the same request at the wire level (bytes). So you should look at the actual stream of bytes going out, including the body and HTTP headers, and see what's different.
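With httpx you can attach event hooks to dump exactly what's being sent, something along these lines (a sketch; the hook names are just illustrative):

```python
import httpx

async def log_request(request: httpx.Request) -> None:
    # Called just before the request goes out; for JSON payloads the body
    # is already built, so request.content holds the raw bytes.
    print(request.method, request.url)
    print(dict(request.headers))
    print(request.content)

async def log_response(response: httpx.Response) -> None:
    # Called as soon as the response headers arrive.
    print(response.status_code, dict(response.headers))

client = httpx.AsyncClient(
    event_hooks={"request": [log_request], "response": [log_response]}
)
```

Then you can diff that output against what your working version sends.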
EDIT: Better yet, in case you haven't done this already, paste your entire code into GPT-4, explain what error you're getting, and ask it to help. That works a lot of the time for me.
I already shared my code with GPT-4 and it said everything looked correct. Personally, my hunch is that something could be wrong in this part of the script, but I don't know what exactly:
```python
@app.route('/app', methods=['GET', 'POST'])
async def generate_output():
    if request.method == 'POST':
        app_role = "Instruction to GPT."
        form_data = await request.form
        wish_count = int(form_data.get('wish_count', '10'))
        app_command = f"Generate {wish_count} wishes."

        writing_style = form_data.get('writing_style', '')
        if writing_style and writing_style != 'default':
            automatedfaqs_command += f" Generate the wishes in a {writing_style} tone."

        genie_friendly = form_data.get('seo_friendly') == 'on'
        if genie_friendly:
            genie_objective = form_data.get('genie_objective', '')
            genie_tone = form_data.get('genie_tone', '')
            app_command += (
                f" The objective is {genie_objective}. "
                f" The tone is {genie_tone}."
            )

        try:
            async with httpx.AsyncClient() as client:
                api_url = "https://api.openai.com/v1/chat/completions"
                headers = {"Authorization": f"Bearer {OPENAI_API_KEY}"}
                payload = {
                    "model": "gpt-3.5-turbo",
                    "messages": [
                        {"role": "system", "content": app_role},
                        {"role": "user", "content": app_command}
                    ]
                }
                response = await client.post(api_url, json=payload, headers=headers, timeout=10.0)

                logging.debug("Debug message")
                logging.info("Information message")
                logging.warning("Warning message")
                logging.error("Error message")
                logging.critical("Critical message")

                response_data = response.json()

                # Extract the response content
                faqs = response_data['choices'][0]['message']['content'].strip()

                # Return the FAQs and answers in the browser
                return jsonify({'wishes': wishes})
        except Exception as e:
            logging.error(f"An error occurred: {str(e)}")
            return jsonify({'error': str(e)})
```