OpenAI API returns empty error

I’m working on an app that uses the OpenAI API to generate GPT-backed output from input in a web form. The app runs on Python with Quart (async) behind an Apache2 server. The server itself is robust.

Every time I submit the form in my browser, the response is `{"error":""}`, with no explanation given. I capture all the logs, and there seems to be a timeout issue. I’ve added a timeout setting to my code to give the app more time, but the problem keeps recurring. My hunch is that something else is wrong, but I can’t put my finger on it :frowning:

Here’s the app’s code:

```python
from quart import Quart, jsonify, request
import httpx
import logging

app = Quart(__name__)

OPENAI_API_KEY = 'MYAPIKEY'

logging.basicConfig(filename='app.log', level=logging.DEBUG)

@app.route('/')
async def hello_world():
    return 'Hello, World!'

@app.route('/app', methods=['GET', 'POST'])
async def generate_output():
    if request.method == 'POST':
        app_role = "Instruction to GPT."
        form_data = await request.form
        wish_count = int(form_data.get('wish_count', '10'))
        app_command = f"Generate {wish_count} wishes."
        writing_style = form_data.get('writing_style', '')
        if writing_style and writing_style != 'default':
            app_command += f" Generate the wishes in a {writing_style} tone."
        genie_friendly = form_data.get('seo_friendly') == 'on'
        if genie_friendly:
            genie_objective = form_data.get('genie_objective', '')
            genie_tone = form_data.get('genie_tone', '')
            app_command += (
                f" The objective is {genie_objective}."
                f" The tone is {genie_tone}."
            )
        try:
            async with httpx.AsyncClient() as client:
                api_url = "https://api.openai.com/v1/chat/completions"
                headers = {"Authorization": f"Bearer {OPENAI_API_KEY}"}
                payload = {
                    "model": "gpt-3.5-turbo",
                    "messages": [
                        {"role": "system", "content": app_role},
                        {"role": "user", "content": app_command}
                    ]
                }
                response = await client.post(api_url, json=payload, headers=headers, timeout=10.0)
                logging.debug("Request to the API completed")
                response_data = response.json()
                # Extract the response content
                wishes = response_data['choices'][0]['message']['content'].strip()
                # Return the wishes in the browser
                return jsonify({'wishes': wishes})

        except Exception as e:
            logging.error(f"An error occurred: {str(e)}")
            return jsonify({'error': str(e)})

    else:
        # Form follows
        return '''
            <form method="post">
                ...
            </form>
        '''
```
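One thing worth noting about the `except` branch: `str(e)` can be empty for some exceptions (a bare `TimeoutError`, for instance), which would produce exactly an empty `error` field in the JSON response. A small sketch of a more informative handler (the `describe_error` helper is hypothetical, just for illustration):

```python
import logging

def describe_error(e: Exception) -> str:
    # type(e).__name__ and repr(e) survive even when str(e) is empty,
    # so a timeout no longer shows up as a blank message.
    return f"{type(e).__name__}: {e!r}"

try:
    raise TimeoutError()  # simulate the failure seen in the logs
except Exception as e:
    msg = describe_error(e)
    logging.error("An error occurred: %s", msg)
```

With this, `msg` carries the exception type even when the exception has no message text.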

Welcome to the forum.

I’d check to make sure your account is working (check Playground)

Then make sure you can do something simple like cURL to the API.
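If you'd rather stay in Python, a sketch of a standalone check that builds the same payload the app sends and posts it outside Quart entirely (the `build_chat_payload` helper is just for illustration, and `MYAPIKEY` is a placeholder):

```python
import json

def build_chat_payload(system_msg: str, user_msg: str,
                       model: str = "gpt-3.5-turbo") -> dict:
    """Build the same chat-completions payload the app sends."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    }

if __name__ == "__main__":
    import httpx  # same HTTP client the app uses
    payload = build_chat_payload("Instruction to GPT.", "Generate 10 wishes.")
    resp = httpx.post(
        "https://api.openai.com/v1/chat/completions",
        json=payload,
        headers={"Authorization": "Bearer MYAPIKEY"},  # placeholder key
        timeout=60.0,
    )
    print(resp.status_code)
    print(json.dumps(resp.json(), indent=2))
```

If this script works but the Quart app doesn't, the problem is in the app/server layer, not the account or the API.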

Maybe check your Apache error logs?

Thanks @PaulBellow! Yes, I did test before with the Flask version of my app and it worked fine. After migrating my app’s code to Quart and Hypercorn, the app stopped working.

Not familiar with either, sorry.

Is apache not logging errors?

Here’s the log:

Done. More logs after the new test:

```
DEBUG:asyncio:Using selector: EpollSelector
INFO:hypercorn.error:Running on IP:8080 (CTRL + C to quit)
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='/var/www/genie/lib/python3.10/site-packages/certifi/cacert.pem'
DEBUG:httpcore.connection:connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=10.0 socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7fef9a5782e0>
DEBUG:httpcore.connection:start_tls.started ssl_context=<ssl.SSLContext object at 0x7fef9b5a6e40> server_hostname='api.openai.com' timeout=10.0
DEBUG:httpcore.connection:start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7fef9a5186d0>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_headers.failed exception=ReadTimeout(TimeoutError())
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
ERROR:root:An error occurred:
```
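Incidentally, that last line explains the empty `{"error":""}`: the handler returns `str(e)`, and `str()` of a bare `TimeoutError` (which httpcore reports as `ReadTimeout(TimeoutError())`) is the empty string. Easy to verify:

```python
# str() of a bare exception with no message is the empty string,
# which is exactly what the handler puts into the 'error' field.
empty = str(TimeoutError())
print(repr(empty))  # → ''
```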

Looks like it’s timing out, so maybe a config setting for Quart and Hypercorn?
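For what it's worth, the timeout the log points at is the read timeout on the httpx call itself, not a Quart/Hypercorn setting. A configuration sketch (the values are arbitrary assumptions; generation can easily take longer than 10 s):

```python
import httpx

# httpx.Timeout lets you configure connect/read/write/pool phases
# separately: keep connection establishment snappy, but give the
# API much longer to produce the response body.
timeout = httpx.Timeout(10.0, read=120.0)

# then: await client.post(api_url, json=payload, headers=headers, timeout=timeout)
```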

Good luck.

I’d start with a known-working example of an API call, then alter it one piece at a time until it matches what you need. That way, the first change that breaks it points you at the problem.


Yes, already did that. It breaks when I switch from Flask to Quart + Hypercorn

Assuming you’re genuinely getting an error back (be sure that’s the case first), that means Flask and Quart are not generating the same request at the wire level (bytes). So you should look at the actual stream of bytes (including the data and HTTP headers) and see what’s different.
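One low-tech way to start that comparison: canonically serialize the payload each version builds and diff (or hash) the results. A sketch, assuming you add the same logging to both the Flask and Quart versions (the `payload_fingerprint` helper is hypothetical):

```python
import hashlib
import json

def payload_fingerprint(payload: dict) -> str:
    # Canonical serialization: identical dict contents always yield the
    # same string, so comparing fingerprints from the Flask and Quart
    # versions shows whether they build identical request bodies.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

a = payload_fingerprint({"model": "gpt-3.5-turbo", "messages": []})
b = payload_fingerprint({"messages": [], "model": "gpt-3.5-turbo"})
# key order doesn't matter: a == b
```

This only covers the JSON body; for headers and the rest of the wire traffic you'd still need a packet capture or a logging hook in the HTTP client.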

EDIT: Better yet, in case you haven’t already, paste your entire code into GPT-4, explain what error you’re getting, and ask it to help! That works a lot of the time for me. :slight_smile:

I already shared my code with GPT-4 and it said everything looked correct. Personally, my hunch is that something is wrong in this part of the script, but I don’t know what exactly:

```python
@app.route('/app', methods=['GET', 'POST'])
async def generate_output():
    if request.method == 'POST':
        app_role = "Instruction to GPT."
        form_data = await request.form
        wish_count = int(form_data.get('wish_count', '10'))
        app_command = f"Generate {wish_count} wishes."
        writing_style = form_data.get('writing_style', '')
        if writing_style and writing_style != 'default':
            app_command += f" Generate the wishes in a {writing_style} tone."
        genie_friendly = form_data.get('seo_friendly') == 'on'
        if genie_friendly:
            genie_objective = form_data.get('genie_objective', '')
            genie_tone = form_data.get('genie_tone', '')
            app_command += (
                f" The objective is {genie_objective}."
                f" The tone is {genie_tone}."
            )
        try:
            async with httpx.AsyncClient() as client:
                api_url = "https://api.openai.com/v1/chat/completions"
                headers = {"Authorization": f"Bearer {OPENAI_API_KEY}"}
                payload = {
                    "model": "gpt-3.5-turbo",
                    "messages": [
                        {"role": "system", "content": app_role},
                        {"role": "user", "content": app_command}
                    ]
                }
                response = await client.post(api_url, json=payload, headers=headers, timeout=10.0)
                logging.debug("Request to the API completed")
                response_data = response.json()
                # Extract the response content
                wishes = response_data['choices'][0]['message']['content'].strip()
                # Return the wishes in the browser
                return jsonify({'wishes': wishes})

        except Exception as e:
            logging.error(f"An error occurred: {str(e)}")
            return jsonify({'error': str(e)})
```

Cool. I bet if you can somehow look at the raw bytes being sent, you’ll find the difference. There’s gotta be some difference.