I’ve built several WordPress plugins using OpenAI’s “Completions” format with text-davinci-003, which is now listed as “legacy”.
I was able to build these plugins almost exclusively by asking ChatGPT to help me write and edit the necessary code based on my ideas.
I’m now wanting to revisit and expand upon my original, simpler ideas, where I was primarily building proofs of concept. However, I’m reluctant to move forward without better understanding a couple of key challenges.
(1) How different is the new “Chat” API compared to text-davinci-003? Will I be rewriting/refactoring the entire codebase, or making minor modifications to fit the new API?
(2) My biggest pain point is the number of tokens per API call with text-davinci-003; will I have this same challenge with the GPT-4 Chat API? If I want my WordPress plugins to source a larger volume of content, and I want to craft more expansive outputs, what solutions should I be seeking?
Reminder: I’m not a developer. Just a guy trying to connect some dots to build cool stuff.
Thanks in advance!
You mention that you are not a developer; well, you kind of are if you are building cool things. One of the issues you are going to face fairly quickly is the wall you hit when it comes to understanding, in code terms, what your AI is creating for you. Without an understanding of why a particular bit of code was created, you can start to get bogged down in loops. Those loops start with an overly complex piece of code that reaches the length limits the AI is prepared, or able, to work with. Once you hit and then exceed that limit, you have a monumental undertaking on your hands when it comes to debugging or making changes.
I guess the upshot would be: if you are serious about this, then you will need to either learn Python, JS, maybe PHP, SQL, and a few other bits, or hire/work with someone who does.
To answer your specific questions:
The difference is enough that you will need to recreate most of it. Not from scratch, as the basics will be the same, but the implementation will change.
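To give you a feel for the scale of the change: the main practical difference is the request shape. The old Completions API takes a single prompt string, while the Chat API takes a list of role-tagged messages. A minimal sketch in Python (the prompt text here is just a placeholder, and no network call is made):

```python
# Old Completions-style request body: one flat prompt string.
completion_payload = {
    "model": "text-davinci-003",
    "prompt": "Summarize this article: <article text here>",
    "max_tokens": 256,
}

# New Chat-style request body: a list of role-tagged messages.
# The system message is an optional instruction that frames the task.
chat_payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful summarizer."},
        {"role": "user", "content": "Summarize this article: <article text here>"},
    ],
    "max_tokens": 256,
}
```

So your plugin logic around the call mostly survives; it’s the payload you build, the endpoint you hit, and where you read the reply out of the response that change.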
The token limit goes up from 4k to 8k to 16k and eventually 32k with chat models, so this need not be an issue when combined with other methods and techniques.
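One of those techniques is chunking: split your source text into pieces that each fit in a call, process each piece (e.g. summarize it), then combine the results in a final call. A rough sketch, using word count as a crude stand-in for real token counting (a real plugin would use a proper tokenizer to measure sizes):

```python
def chunk_text(text, max_words=500):
    """Split text into chunks of at most max_words words.

    Word count is only a rough proxy for tokens; token counts
    are usually somewhat higher than word counts, so leave headroom.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each chunk then goes into its own API call, and the per-chunk outputs get stitched together or summarized again.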
Appreciate your thoughtful feedback!
I am definitely already hitting the types of walls you mention, and it’s part of the reason I decided to “pause” the projects in their proof-of-concept state. Not sure I have the capacity to learn Python/JS/PHP/SQL, etc., fast enough to make a dent, from a time standpoint. Especially when considering how fast the AI world is moving.
It sounds like the leap to Chat models will alleviate some of my immediate concerns; I didn’t realize it could handle 32k tokens. That being said, others have told me to look into Python if I’m trying to use data in bulk (ex: compile many different encyclopedia articles and sources for use with the API).
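For that bulk-data idea, one thing I’ve picked up is that you usually don’t stuff every article into one prompt; you select the few sources most relevant to the current question and include only those. Here is a very naive sketch of that selection step, scoring by keyword overlap (real pipelines typically use embeddings and vector similarity instead, but the shape is the same):

```python
def rank_sources(question, articles, top_n=3):
    """Pick the top_n articles most relevant to the question.

    Relevance here is naive keyword overlap, a stand-in for
    embedding-based retrieval; it only illustrates the pattern of
    filtering sources before building the prompt.
    """
    q_words = set(question.lower().split())

    def score(article):
        return len(q_words & set(article.lower().split()))

    return sorted(articles, key=score, reverse=True)[:top_n]
```

The selected articles then get concatenated into the prompt, so the total stays under the model’s token limit even when the full library is huge.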
I guess I’ll start with converting my text-davinci-003 plugins to the gpt-4 model… but I worry a bit that once I do, that too will be legacy!
Just to keep you informed, the 32k models are not yet public, but will be Soon™, so anything more long-term than a few months should realistically include that as a possible route. Cost increases with length, as the compute required scales with the square of the token count under the current model design.
Making a start on converting to chat models sounds like an excellent first step. Other devs are always more willing to join or help out with projects that have already made a start but have hit a specific issue, rather than the nebulous “I have this cool idea, now tell me how to code it from scratch with detailed instructions for a non-coder, and please do this for free” type of request that this forum gets quite a bit of.
Yep, I’m sure I’ll be returning with questions that are more specific! To give you a little bit of an idea, here are two examples of plugins/sites I’ve built with ChatGPT/OpenAI:
Fairly straightforward Q&A style sites but they’ve been fun to make and helpful to learn the ropes. It’s hard to expand upon the concept given my current limitations, but I’ll make the gpt-4 API leap and go from there. Thanks a bunch!