Any OpenAI consultants out there?

We want to build an OpenAI-based chatbot to assist with technical customer support for our products.
Feel free to DM me if you are interested. Will share more details there.

You should consider “holding off” until OpenAI releases their ChatGPT API, coming very soon.

Does anyone know how the OpenAI API differs from the upcoming ChatGPT API for the tech support chatbot use case?

@ruby_coder Do you think ChatGPT will replace embeddings for Q&A? I hadn’t heard of that one.

Hi @curt.kennedy and thanks for the @ mention.

Embeddings are a fundamental component of the GPT architecture, so I do not see embeddings being replaced.
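For context, the embeddings-for-Q&A approach being discussed usually means retrieving the most relevant document for a question by vector similarity. Here is a minimal sketch, assuming the pre-1.0 `openai` Python library and a couple of hypothetical support documents (the API key and document texts are placeholders):

```python
import numpy as np
import openai  # assumes the pre-1.0 openai Python library

openai.api_key = "sk-..."  # your API key

# Hypothetical support documents to search over.
docs = [
    "To reset the device, hold the power button for 10 seconds.",
    "Firmware updates are installed from the Settings > System menu.",
]

def embed(texts):
    # Embed one or more texts with the embeddings endpoint.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(d["embedding"]) for d in resp["data"]]

doc_vectors = embed(docs)

def top_doc(question):
    # Return the document most similar to the question (cosine similarity).
    q = embed([question])[0]
    sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vectors]
    return docs[int(np.argmax(sims))]

print(top_doc("How do I reboot my unit?"))
```

The retrieved document is then typically pasted into the prompt of a completion call so the model can answer from it.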

For Q&A applications currently being developed with the OpenAI API, my guess is that the ChatGPT API (when it comes out) may speed up development time for many of them.

My crystal ball is as foggy as yours, so I think it is prudent to see exactly what the ChatGPT API will bring to developers. I think it is possible an embeddings endpoint may also be available in the ChatGPT API, but that decision is far above my pay grade in OpenAI world.

Regarding embeddings and fine-tuning, I have noticed that many developers are attempting to use embeddings and fine-tuning sub-optimally to do what a front-end rules-based expert system would accomplish. In my mind, developers who have rigid, easily defined Q&A responses would be “better off” not trying to fine-tune a model, but instead having a non-GPT front-end handle the basic Q&A and referring to a GPT language model only those questions an expert system handles poorly.

In other words, I think it is probably sub-optimal in many use cases to rely only on a language-model component with a lot of (expensive) fine-tuning, when using both a natural language model (GPT) component and an expert-system front-end component would be more cost-effective and provide better Q&A results.

Normally, well-designed systems are built with more than one architectural component, so (personally speaking) as a systems engineer, I find it sub-optimal when developers attempt to fine-tune a natural language model to mimic an expert system. It is (in my view) better engineering to consider multiple AI components, for example an expert-system front-end plus a natural-language-model back-end.
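To make the idea concrete, here is a minimal sketch of that kind of hybrid front-end, assuming the pre-1.0 `openai` Python library; the rules table, keywords, and prompt are all hypothetical illustrations, not a prescribed design:

```python
import openai  # assumes the pre-1.0 openai Python library

openai.api_key = "sk-..."  # your API key

# Hypothetical rigid Q&A handled entirely by the rules-based front-end,
# with no language-model call (and no per-token cost).
RULES = {
    "warranty": "Our standard warranty is 12 months from the date of purchase.",
    "return policy": "Unused products can be returned within 30 days.",
}

def answer(question):
    q = question.lower()
    # Expert-system front-end: match rigid questions against the rules table.
    for keyword, canned_reply in RULES.items():
        if keyword in q:
            return canned_reply
    # Fall back to the language model for everything the rules don't cover.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"You are a technical support assistant.\n\nQuestion: {question}\nAnswer:",
        max_tokens=200,
        temperature=0,
    )
    return resp["choices"][0]["text"].strip()

print(answer("What is your warranty?"))       # answered by the rules front-end
print(answer("My device keeps overheating"))  # answered by the GPT back-end
```

The point is only the split: deterministic answers stay cheap and predictable in the front-end, and the model handles the long tail.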

Of course, the ChatGPT API will bring another component into the developer’s toolbox as well.

Hope this makes sense.



@ruby_coder I agree that multiple AI and even non-AI components working together is the way to go. I normally know of this as “Hybrid AI”, and it’s very powerful. I agree with the point on well-designed systems having multiple architectural components too. And hey, I was even a Systems Engineer for many years. So kudos for thinking system-wide!

We’ll see what the ChatGPT API provides. My guess is that it will work much like it does now: you send over some identifier letting it know which conversation it’s in, and you talk to it. But you already know you can do this today with the current API by feeding it past context. It will be interesting.
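“Feeding it past context” with the current API looks roughly like this, as a minimal sketch assuming the pre-1.0 `openai` Python library; the prompt format and in-memory history are just one way to do it:

```python
import openai  # assumes the pre-1.0 openai Python library

openai.api_key = "sk-..."  # your API key

history = []  # (user, assistant) turns for a single conversation

def chat(user_message):
    # Rebuild the prompt from past turns so the model "remembers" them.
    prompt = ""
    for user, assistant in history:
        prompt += f"User: {user}\nAssistant: {assistant}\n"
    prompt += f"User: {user_message}\nAssistant:"
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0,
        stop=["User:"],
    )
    reply = resp["choices"][0]["text"].strip()
    history.append((user_message, reply))
    return reply

print(chat("My router keeps dropping the connection."))
print(chat("I already tried that, what else can I do?"))
```

In a real chatbot you would also trim or summarize older turns to stay under the model’s context limit.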


Me too! I worked as a systems engineer for many years before I was a coder.

Of course, as you said, one of the main roles of a systems engineer is to think at the component and system level, and never to let the overarching system solution be biased by component-level engineers and their favorite (or trendy, shiny, new) technologies :slight_smile:
