I have the following Flask app code in ‘app.py’:
from flask import (Flask, Blueprint, request, jsonify, make_response, current_app,
                   render_template, redirect, url_for, Response, stream_with_context,
                   send_from_directory, session, flash, g)
from flask_socketio import SocketIO
from app1 import app1 as app1_blueprint, handle_prompt
from app2 import app2 as app2_blueprint, socketio as app2_socketio
from app3 import app3 as app3_blueprint
app = Flask(__name__)
app.config['SECRET_KEY'] = 'supersecretkey'
app.config['SECURITY_PASSWORD_SALT'] = 'somesalt'
app.register_blueprint(app1_blueprint, url_prefix='/app1')
app.register_blueprint(app2_blueprint, url_prefix='/app2')
app.register_blueprint(app3_blueprint, url_prefix='/app3')
#app1_socketio.init_app(app)
app2_socketio.init_app(app)
socketio = SocketIO(app)
@socketio.on('send_app1_prompt')
def on_send_prompt(data):
    handle_prompt(socketio, data)

@socketio.on_error_default
def default_error_handler(e):
    print('An error occurred: ' + str(e))

# SocketIO event to handle client connection
@socketio.on('connect')
def on_connect():
    print('Client connected:', request.sid)

if __name__ == '__main__':
    socketio.run(app, debug=True)
Its current purpose is to act as the handler for three submodules: app1.py, app2.py, and app3.py. app1 and app2 use the Azure OpenAI Python SDK with the bring-your-own-data (BYOD) adapter, while app3.py uses the regular OpenAI API library with the GPT-4 model to complete its tasks. The function generating the output in the two Azure cases is this:
and the function used to generate output in the regular OpenAI case is this:
As three separate apps, i.e. when run as their own individual modules rather than as submodules in a Blueprint scheme, the code works perfectly. However, now that I’ve consolidated them into one codebase, I get the following error when trying to generate an Azure completion: “Request failed with exception: Unrecognized request argument supplied: dataSources”, and the following error when running the regular OpenAI GPT-4 completion: “Error extracting information: Must provide an ‘engine’ or ‘deployment_id’ parameter to create a <class ‘openai.api_resources.chat_completion.ChatCompletion’>”
The parameters both messages refer to are set correctly in the code; I know this because, as I said, everything worked fine before I copied it all into one codebase.
In the Azure OpenAI SDK cases, I noticed that the error only appeared after the second app was run: if I fired up app.py, went to the subroute for app1.py, and ran it, it worked fine. If I then tried to run app2.py, I would get the error. But if I started with app2.py and then ran app1.py, app2.py would keep working while app1.py failed.
So my theory is that the errors are caused by having multiple API sessions open simultaneously. Does anyone know anything about this, or can anyone offer advice or support on how to proceed? Thanks in advance for any help :)
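To illustrate my theory, here is a minimal, self-contained sketch of what I suspect is happening. The `FakeOpenAI` class, URLs, and `configure_*` functions below are illustrative stand-ins, not my real code; the point is that if the legacy openai SDK keeps its configuration as module-level globals, whichever submodule configures it last wins for all of them:

```python
# Sketch of my theory: the legacy openai SDK (< 1.0) keeps its configuration
# (api_type, api_base, api_version, ...) as module-level globals, so whichever
# submodule configures it last wins for every caller in the process.
# FakeOpenAI and all values below are illustrative stand-ins, not real code.

class FakeOpenAI:
    """Stand-in for the legacy `openai` module's global settings."""
    api_type = "open_ai"
    api_base = "https://api.openai.com/v1"
    api_version = None

openai = FakeOpenAI()  # in the real SDK this is the shared `openai` module

def configure_app1():
    # app1's Azure BYOD setup (resource name is made up)
    openai.api_type = "azure"
    openai.api_base = "https://my-resource.openai.azure.com/"
    openai.api_version = "2023-08-01-preview"

def configure_app3():
    # app3's regular OpenAI setup
    openai.api_type = "open_ai"
    openai.api_base = "https://api.openai.com/v1"
    openai.api_version = None

configure_app1()
configure_app3()        # imported/run later, silently clobbers app1's config
print(openai.api_type)  # -> "open_ai": app1 would now send its Azure-only
                        #    arguments (like dataSources) to the wrong endpoint
```

If this is indeed the mechanism, the symmetric failure order I see (whichever app runs second breaks the first) would follow naturally from this last-write-wins behaviour.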