I want to make GPT able to write legal opinions (technical law reports) based on my dataset of previous legal opinions. For example, I would ask GPT to write a legal opinion discussing the feasibility of a taxpayer not paying a product tax, and GPT would write that opinion taking into account my legal thinking on the matter, which can be found in my legal opinion dataset.
ChatGPT (the Web app) can perform this task since it keeps the chat history/context, but it performed poorly because I had copied and pasted only 4 legal opinions. As I have hundreds of legal opinions, I tried using the OpenAI API instead, but it has no way to save chat history, unless we save the whole dataset in a JSON file and load it every time we need it. That would be too expensive, since the OpenAI API charges per token.
Then I finally concluded that the most suitable approach would be to fine-tune a GPT model using my legal opinion dataset. Do you agree?
Thank you in advance!
P.S. Any other thoughts on this legal opinion problem/task are appreciated.
Yes. ChatGPT (Web) is just the poster child OpenAI uses to allay fears about AI; it is not the core of OpenAI's products.
For fine-tuning a language model on the specific domain of Law, the usual recommendation is a Transformers library such as Hugging Face's. These libraries provide a range of pre-trained language models that can be fine-tuned on a specific domain using domain-specific text data. The procedure may look exhaustive, but it is the most experience-backed advice when the domain is Law (a code sketch follows the list):
Install a Transformers library of your choice;
Collect a corpus of Law texts that will be used for fine-tuning the language model;
Choose a pre-trained model that is available through the Transformers library (GPT-2, for example, is an OpenAI model openly available there; OpenAI's hosted models such as GPT-3 are fine-tuned through OpenAI's own API instead);
Load the model using the Transformers library and set the configuration for fine-tuning on the Law domain: adjust hyperparameters, set the maximum sequence length, etc.;
Load the corpus of Law texts and preprocess the data to prepare it for fine-tuning: tokenization, formatting, etc.;
Fine-tune the model on the corpus of Law texts using the Transformers library: set the learning rate, number of training epochs, batch size, etc.;
Evaluate the performance of the fine-tuned model on a validation dataset to determine the accuracy and effectiveness of the fine-tuning process;
Test the fine-tuned model on new Law texts to determine its ability to generalize and provide accurate predictions;
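To make the list above concrete, here is a minimal sketch of that flow in Python, assuming GPT-2 as the pre-trained model and a single plain-text corpus file named legal_opinions.txt (both the model choice and the file name are assumptions, not something from this thread):

```python
# Minimal sketch: fine-tune GPT-2 on a plain-text corpus with Hugging Face
# Transformers. "legal_opinions.txt" is a hypothetical file name.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # an OpenAI model openly available on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Split the corpus into fixed-length blocks for causal language modeling
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="legal_opinions.txt",
    block_size=512,  # maximum sequence length
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-legal",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
    save_steps=500,
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=train_dataset,
).train()
```

TextDataset is the simplest route for a single text file; for a larger corpus, the datasets library is the more modern alternative.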
If you would rather not manage the training code yourself, you may consider OpenAI's hosted fine-tuning through their API instead (a sketch of this flow follows the list):
Select an OpenAI model that is suitable for the Law domain, such as GPT-3. There are different models available with varying levels of performance, so choose the one that best suits your needs;
Prepare a dataset of text examples that are relevant to the Law domain, such as legal texts or case studies. For OpenAI fine-tuning, the dataset should be in JSONL format, with one prompt/completion example per line;
Upload the dataset via OpenAI's API using your API key; the API provides a Files endpoint for uploading fine-tuning data, as described in the API documentation;
Use OpenAI’s API to fine-tune the pre-trained language model on your dataset;
Once the fine-tuning process is complete, test the performance of the fine-tuned model on a separate dataset of examples to evaluate the fine-tuning process;
Use the fine-tuned model to generate text for the Law domain. Adjust the parameters as needed to generate text that is relevant to a specific case.
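For reference, here is a minimal sketch of that hosted flow using the openai Python library as it existed around the time of this thread (the pre-1.0 interface); the file name, prompt text, and " END" stop token are hypothetical conventions, not requirements:

```python
# Minimal sketch: OpenAI hosted fine-tuning (pre-1.0 openai library).
import openai

openai.api_key = "YOUR_API_KEY"

# legal_opinions.jsonl holds one prompt/completion pair per line, e.g.:
# {"prompt": "Feasibility of not paying the product tax:\n\n###\n\n",
#  "completion": " <your legal opinion text> END"}

# Upload the dataset through the Files endpoint
upload = openai.File.create(
    file=open("legal_opinions.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a base GPT-3 model
job = openai.FineTune.create(training_file=upload.id, model="davinci")

# After the job finishes (fine_tuned_model is empty until then), fetch the
# fine-tuned model's name and query it
job = openai.FineTune.retrieve(job.id)
completion = openai.Completion.create(
    model=job.fine_tuned_model,
    prompt="Feasibility of not paying a product tax on imports:\n\n###\n\n",
    max_tokens=500,
    stop=[" END"],
)
print(completion.choices[0].text)
```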
I know that everything above sounds like a herculean task at first glance, but it becomes easier if you treat these tasks as intellectual games: a puzzle, a crossword, or whatever. The models were made for text, and they fit Law quite well; Law is a lot of text, probably the largest text corpus of any domain. You are on the right path.
Also, you might try using the OpenAI API with open-source ChatGPT-like interfaces such as chatbotUI.com (there's a GitHub repo for that project). It has lots of nice features that would work well for this use case.
It is not my recommendation; it is the recommendation of someone else who performed a similar job. He tried a few transformer libraries and chose this one for his case: Law in another country. There are probably many better options, but I consider the Hugging Face Transformers libraries reliable and compatible with the openly released OpenAI models. Since I don't know the size of the Law corpus for this project, I offered it as a suggestion only.
However, to make sure you get an opinion/interpretation based on the law of the region your dataset covers, you'll have to create two sets of embeddings:
One for the relevant law and the other for your dataset.
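A minimal sketch of building those two embedding sets, using the pre-1.0 openai library (the statute and opinion strings are hypothetical placeholders):

```python
# Minimal sketch: embed the relevant law and the opinion dataset separately.
import openai

openai.api_key = "YOUR_API_KEY"

def embed(texts):
    # One API call can embed a batch of strings
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [item["embedding"] for item in resp["data"]]

law_texts = [
    "Article 12: The product tax applies to all goods sold at retail.",
    "Article 13: Exported goods are exempt from the product tax.",
]
opinion_texts = [
    "Opinion 2021-04: The taxpayer may defer the product tax when...",
    "Opinion 2022-11: In my view the exemption applies because...",
]

law_vectors = embed(law_texts)          # one set for the relevant law
opinion_vectors = embed(opinion_texts)  # one set for your own opinions
```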
Hi @sps, thanks for the answer. Do you have any thoughts on how the OpenAI embeddings API could generate texts, i.e., legal opinions? As far as I understand, it is useful for classic embedding tasks.
Embeddings don't generate text. They can be used to find semantically similar pieces of text within a set of documents. That semantically similar content is then supplied to a completion model to generate text, legal opinions in your case.
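A minimal sketch of that retrieve-then-generate pattern, again with the pre-1.0 openai library and numpy for cosine similarity; the documents, question, and completion model choice are all hypothetical:

```python
# Minimal sketch: find the most similar stored opinion, then pass it to a
# completion model as context.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"

documents = [
    "Opinion 2021-04: The taxpayer may defer the product tax when...",
    "Opinion 2022-11: In my view the exemption applies because...",
]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

doc_vectors = embed(documents)

question = "Is it feasible for the taxpayer to avoid the product tax?"
q_vector = embed([question])[0]

# Cosine similarity between the question and every stored opinion
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = documents[int(np.argmax(scores))]

# Supply the most similar opinion to a completion model
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        f"Relevant prior opinion:\n{context}\n\n"
        f"Using the reasoning above, write a legal opinion answering: {question}\n"
    ),
    max_tokens=500,
)
print(completion.choices[0].text)
```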
Just wondering if you ever completed your project or found a solution elsewhere. Although I don't specifically need legal opinions, I would like to train a model on specific state regulatory statutes and corresponding rules & regulations so it can help with certain regulatory questions. My only concern is that there are so many inconsistencies within the laws surrounding certain industries that I am unsure how a model would react if those inconsistencies were what it was trained on.