My application/service generates a JSON response and pushes it to IoT devices (the count is in the millions). The JSON response is generated based on data setup targeted at a specific category of devices (existing version, region, model, etc.). Sometimes the JSON response gets deployed to the wrong device (one it was not targeted for), which in turn makes that device non-functional. I have a huge historical database of good JSON responses and bad/error ones. Is it possible to train an AI model on this data and then use it to consume future JSON responses and alert when bad ones go through, so the process can be stopped? Please share your thoughts.
If I understood correctly, the issue is that incorrect JSON configurations are being sent to IoT devices, causing malfunctions. The goal is to predict whether a newly generated JSON configuration is likely to cause an issue based on historical data of good and bad responses.
It is possible, but the OpenAI API is not the ideal tool for this kind of problem. LLMs are good with unstructured data (i.e. free text), whereas you are dealing with structured JSON objects.
I would try classical machine learning instead, more specifically tree-based models (e.g. a random forest). You can use the scikit-learn Python library for that.
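Here is a minimal sketch of that idea, assuming each historic record is a JSON payload plus a good/bad label. The field names ("region", "model", "firmware_version") and the toy data are hypothetical placeholders for whatever keys your responses actually contain:

```python
import json

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy historic data: (json_response, label), where label 1 = bad deployment.
# In practice you would load millions of rows from your database.
history = [
    ('{"region": "EU", "model": "X1", "firmware_version": "2.3"}', 0),
    ('{"region": "EU", "model": "X1", "firmware_version": "2.4"}', 0),
    ('{"region": "US", "model": "X1", "firmware_version": "1.0"}', 1),
    ('{"region": "US", "model": "X2", "firmware_version": "1.1"}', 1),
]

# Flatten each JSON response into a flat feature table.
rows = [json.loads(payload) for payload, _ in history]
labels = [label for _, label in history]

df = pd.json_normalize(rows)   # nested keys become "a.b" columns
X = pd.get_dummies(df)         # one-hot encode categorical fields
y = labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Scoring a new response before deployment: flatten it the same way and
# align its columns with the training features.
new_payload = '{"region": "EU", "model": "X2", "firmware_version": "2.3"}'
new_X = pd.get_dummies(pd.json_normalize([json.loads(new_payload)]))
new_X = new_X.reindex(columns=X.columns, fill_value=0)
print("predicted bad" if clf.predict(new_X)[0] else "predicted good")
```

The key detail is the column alignment at prediction time: a new payload must be encoded against the same one-hot columns the model was trained on, otherwise the classifier will reject it or silently misinterpret the features.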
Thank you for your response
Look into structured output as well. Or maybe loading a large set of good examples into a RAG pipeline and adding a quality control agent could identify potential problems before they roll out.
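A minimal sketch of that quality-control-agent idea, assuming the OpenAI Python SDK. In a real RAG setup the good examples would be retrieved from a vector store; here a couple of known-good configs are pasted straight into the prompt, the model name is a placeholder, and the field names are the same hypothetical ones as above:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

good_examples = [
    '{"region": "EU", "model": "X1", "firmware_version": "2.3"}',
    '{"region": "EU", "model": "X1", "firmware_version": "2.4"}',
]
candidate = '{"region": "US", "model": "X1", "firmware_version": "1.0"}'

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    response_format={"type": "json_object"},  # force a JSON verdict back
    messages=[
        {
            "role": "system",
            "content": (
                "You are a quality control agent for IoT configuration payloads. "
                "Compare the candidate payload against the known-good examples and "
                'reply with JSON: {"verdict": "good" | "bad", "reason": "..."}'
            ),
        },
        {
            "role": "user",
            "content": "Known-good examples:\n"
            + "\n".join(good_examples)
            + "\n\nCandidate payload:\n"
            + candidate,
        },
    ],
)

print(response.choices[0].message.content)  # e.g. {"verdict": "bad", "reason": "..."}
```

You could run this as a gate in the deployment pipeline and only push the payload to devices when the verdict is "good", while still keeping the cheaper tree-based classifier as the first-pass filter.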