GPT-4 undoubtedly has mind-blowing conversational capabilities, but it's limited to talk and images. Any venture into more specific domains with formally strict rules, hard facts, and discrete structured data (science, math, economics, for example) delivers nonsense. That's to be expected, so no surprise.
Are there any plans to directly interface GPT with domain-specific models or rules-based reasoning systems?
One application immediately comes to mind: building and tuning domain-specific deep learning models. This is still a daunting task, even for experts, and keeping up with the latest research and platforms consumes a great deal of time.
People doing this full-time ask for north of $400k annually, which is out of reach for SMBs, especially in niche markets. If there were a GPT + domain-specific combination, it would empower SMBs to coexist side by side with big tech, which is consistent with the agenda of OpenAI (and a good thing).
We could use a GPT assistant that helps with building model architectures, feature extraction, auto-encoding, optimization, testing, etc. In other words, a model that helps build new models.
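To make the idea concrete, here is a toy sketch of the kind of assistant-in-the-loop cycle I mean. The `propose_architecture` function is a hypothetical stand-in for a GPT call that would read the history of tried configurations and scores and suggest the next one; in this sketch it simply cycles through a fixed candidate list, and scikit-learn stands in for whatever domain-specific framework the SMB actually uses:

```python
# Toy sketch: an "assistant" proposes architectures, the harness trains
# and scores them, and the results feed back into the next proposal.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

CANDIDATES = [
    {"hidden_layer_sizes": (16,)},
    {"hidden_layer_sizes": (32, 16)},
    {"hidden_layer_sizes": (64,)},
]

def propose_architecture(round_idx, history):
    # Hypothetical placeholder: in the envisioned system, an LLM would
    # inspect `history` (configs plus validation scores) and propose the
    # next architecture. This stub just iterates over a fixed list.
    return CANDIDATES[round_idx % len(CANDIDATES)]

# Synthetic data standing in for a real domain-specific dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

history = []
for i in range(len(CANDIDATES)):
    cfg = propose_architecture(i, history)
    model = MLPClassifier(max_iter=500, random_state=0, **cfg)
    model.fit(X_tr, y_tr)
    history.append((cfg, model.score(X_val, y_val)))

best_cfg, best_score = max(history, key=lambda t: t[1])
print(best_cfg, round(best_score, 3))
```

The interesting part is not the search loop itself (plain AutoML tools do that already) but replacing the stub with a model that can reason about the domain, the data, and the research literature when making its proposals.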