(iii) use output from the Services to develop models that compete with OpenAI;
I’ll make sure to stop that when I raise a few billion and start competing. Thanks for the heads up
Caching responses in your database is OK? I mean, it would help reduce load on the API.
That’s a model, just a different architecture.
What if I create documents, e.g. summaries, and save them to a database, then send them to a TTS to create podcasts?
As long as you’re not using it to train generally GPT-competitive models, you’re probably OK.
I’m using it to train a BERT model that can run on a CPU for an exceedingly narrow use case.
I am requesting job title alternatives for the job search plugin, and I store them in the database.
E.g. Programmer - Software Developer, Coder, etc.
Next time someone searches for Programmer I don’t call the API again.
It’s more to speed up the response, since the API is so slow. I guess I also save a few cents per month.
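The pattern is roughly this (a minimal sketch in Python just to show the shape of it; the table, prompt, and model name are placeholders, not the actual plugin code):

```python
# Look up stored alternatives first; only call the OpenAI API on a cache miss.
import json
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
db = sqlite3.connect("jobtitles.db")
db.execute("CREATE TABLE IF NOT EXISTS alternatives (title TEXT PRIMARY KEY, names TEXT)")

def job_title_alternatives(title: str) -> list[str]:
    row = db.execute("SELECT names FROM alternatives WHERE title = ?", (title,)).fetchone()
    if row:
        # Cache hit: no API call, instant response.
        return json.loads(row[0])
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f'List common alternative job titles for "{title}" as a JSON array of strings.'}],
    )
    # Assumes the model returns a clean JSON array; real code should validate this.
    names = json.loads(resp.choices[0].message.content)
    db.execute("INSERT INTO alternatives VALUES (?, ?)", (title, json.dumps(names)))
    db.commit()
    return names
```

On a cache hit there is no API call at all, which is where both the speed-up and the small cost saving come from.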
Yeah, exactly. You’re not competing with OpenAI.
I mean, where would it end? As soon as you stored any state from their models and reused it, that would count as using their data to train your model, whatever it might be.
I am not training a GPT. Maybe that’s how it is meant?
I have been doing PHP programming for nearly 20 years, primarily creating and supporting modules for the Drupal CMS.
I’ve been working with the OpenAI API for the last 6 months, pretty much all day every day for the past month or so. I am writing code, in PHP, to build a chat completion system that works in the Drupal environment. I use GPT-4 primarily as a coding assistant.
My experience? I’ve not gotten weaker as a programmer, I’ve gotten stronger. I’ve never written cleaner, more efficient, more modularized object-oriented code in my life. Furthermore, I’ve gotten to know new Drupal features and intricacies that I never before bothered to learn. I know everyone’s experience is different, but I have to say I’ve gotten a lot smarter and a lot more technically adept, not just in building systems to support AI interaction (what a great many of us are doing here in the first place), but in my “supposed” area of expertise as well.
And these can be clarified with sub-bullets too. GPT can review a doc for spelling, grammar, and punctuation, but also structure, content, factual accuracy, bias, wordiness, etc.
I use the GPT-4 API to play management consultant and choose which charts to include in a sales report.
I have no idea why GPT-4 should know which plots are more or less insightful, but it seems to work.
I “tell” GPT-4 the structure of my dataset, the plots it can choose from, and the structure of the expected JSON file.
It takes about 4,000 tokens (around 20 cents) and a few moments to get my list of 30 charts, which I feed into a Python app. The user can override the suggestions, but generally speaking they are good enough. An analyst would take more than an hour to come up with 30 blank slides and load their parameters into my app.
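The call looks roughly like this (a simplified sketch; the column names, plot list, and JSON shape below are stand-ins for my real ones):

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

dataset_schema = "region (str), month (date), revenue (float), units_sold (int), channel (str)"
allowed_plots = ["bar", "line", "stacked_bar", "scatter", "pie"]

prompt = (
    "You are a management consultant preparing a sales report.\n"
    f"Dataset columns: {dataset_schema}\n"
    f"Allowed plot types: {allowed_plots}\n"
    'Return only JSON of the form {"charts": [{"title": ..., "plot_type": ..., "x": ..., "y": ...}, ...]} '
    "with exactly 30 entries, most insightful first."
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Assumes the model returns clean JSON; real code should validate before use.
charts = json.loads(resp.choices[0].message.content)["charts"]
# Each entry becomes a slide/parameter set in the plotting app,
# and the user can still override any suggestion.
```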
Make-work boilerplate content generation is just that, though.
I guess GPT4 is great at revealing all the stuff we never should have done in the first place.