Editing this post to be deleted. Sorry, I thought this forum was for a different purpose. Great to see so many people sharing insights.
Even if you are providing the retrieved documentation to the AI to enable answering, the language model still applies its own analysis and inference to answer the question, which walks a fine line…
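To make that concrete, here is a minimal sketch of the retrieval-augmented pattern being discussed, assuming the current OpenAI Python client. The `retrieve_passages` helper, the prompt wording, and the model name are placeholders of my own, not details of anyone's actual app; the point is that even with retrieved excerpts in the prompt, the model still reasons over them rather than merely quoting them.

```python
# Hypothetical sketch of retrieval-augmented answering (not anyone's real app).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve_passages(question: str) -> list[str]:
    """Placeholder for whatever search / vector store the app actually uses."""
    return ["<relevant excerpt 1>", "<relevant excerpt 2>"]


def answer_with_context(question: str) -> str:
    # The retrieved documentation goes into the prompt, but the model still
    # applies its own analysis and inference to produce the answer.
    context = "\n\n".join(retrieve_passages(question))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Answer using only the excerpts below. "
                           "Say you are unsure if they are insufficient.\n\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```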
Disallowed usage of our models
We don’t allow the use of our models for the following:
…
Engaging in the unauthorized practice of law, or offering tailored legal advice without a qualified person reviewing the information. OpenAI’s models are not fine-tuned to provide legal advice. You should not rely on our models as a sole source of legal advice.
You can ask your Lexi about that… I just used ChatGPT to produce a string of nonsense words with no legal value:
Based on the OpenAI usage policy you provided, using an OpenAI product, such as GPT-3 or any other language model, to provide tailored legal advice without a qualified person reviewing the information would be considered disallowed usage and could potentially run afoul of OpenAI’s policies.
As a lawyer, you are likely aware that providing legal advice requires a deep understanding of specific legal contexts, regulations, and case law, and involves careful consideration of individual circumstances. OpenAI’s language models are not fine-tuned to provide legal advice and should not be relied upon as a sole source of legal guidance.
If you plan to use an OpenAI-based chatbot to provide consistent, reliable, and plain-language answers to Australian business law questions, it is important to ensure that the responses generated by the chatbot are verified and reviewed by a qualified legal professional. This qualified person should have the expertise and authority to assess the accuracy and appropriateness of the responses provided by the chatbot in the context of Australian business law.
By having a qualified lawyer review the information and responses generated by the chatbot, you can mitigate the risk of engaging in the unauthorized practice of law or offering tailored legal advice without appropriate oversight. This approach aligns with OpenAI’s policies and ensures responsible usage of their language models.
So either the chatbot gives bad advice and proves my point, or it gives good advice and proves my point…
Ultimately it’s not a question that will be decided in court; it will be decided by whoever has the “off” button for your API account.
Welcome to the forum!
I am not going to take sides in a discussion about what is and isn’t allowed by OpenAI, today or in the future. Since this is the developer forum, though, the other members of the community are likely much more interested in the details and performance of the app you are promoting. As it stands, you can expect that many of us have already built and deployed similar apps.
If you want to promote your solution, maybe expand on the technical details of the implementation?