Sorry to hear that it didn’t work for you, best of luck to your project.
Hi, do you mind dropping the link to the Google Sheet that worked? I made a copy of the Google Sheet and ran into the same issue you mentioned in a previous post.
I found out how to make it work. You need to open Apps Script and approve the permissions, and then it works perfectly.
Thanks. I’ll take another look. Here is the script that worked for me:
const SECRET_KEY = ""; // your OpenAI API key, e.g. sk-xxxxxxxxxx
const MAX_TOKENS = 200;
const gpt_model = "text-davinci-003";

function OpenAI(prompt, temperature = 0.7, model = gpt_model) {
  const url = "https://api.openai.com/v1/completions";
  const payload = {
    model: model,
    prompt: prompt,
    temperature: temperature,
    max_tokens: MAX_TOKENS,
  };
  const options = {
    contentType: "application/json",
    headers: { Authorization: "Bearer " + SECRET_KEY },
    payload: JSON.stringify(payload),
  };
  // Call the completions endpoint and return the first choice's text.
  const res = JSON.parse(UrlFetchApp.fetch(url, options).getContentText());
  return res.choices[0].text.trim();
}
I made a new copy of the Google Sheet and tried it again. I accepted the security settings, but no luck. I must have some other weird setting.
Hi @johnfaig
Can you also check if you are:
- Using the latest version of the sheet: GPT3-On-Custom-Data - Google Sheets
- Using the API key without the Bearer keyword, meaning that it should only contain the value sk-xxxxxxxxxx
Thanks for your help. It works now.
I made a copy of the Google Sheet from the link you just provided. I'm not sure if it was a previous version, but you need to do some serious editing to your sample formulas. For example, simply pressing [F2] to edit and then pressing [ENTER] doesn't cut it. I added a leading apostrophe to make it a text label, then removed it, and that worked. In some cases, I've copied formulas around. It seems like Google Sheets might have some caching.
Good to know that you were able to get it up and running.
Cheers!
This sheet has been super helpful for our experiments. Just want to flag that it looks like it may need some updates to support the just recently updated ChatGPT. It seems the API endpoint is slightly different for those models.
FYI the docs for that are here, including how to use ChatGPT for the completion use case:
Thanks for sharing Matt.
Right now it's running the text-davinci-003 model. I'll definitely test out the new ChatGPT model as well and make changes on the default settings tab. Have you tested it out yet?
What is your experience with the new ChatGPT model?
I haven’t had a chance to play with ChatGPT via API, but I’ve done side-by-side comparisons with davinci-003 in API vs ChatGPT in browser and in our use cases it’s been night and day in response quality. I’ve been waiting for the day we can plugin ChatGPT instead.
I think for the sheets version you will probably want a new “Chat Completions” sheet and a new setting under Settings for the chat model. Reason is that it’s a slightly different API end point and a slightly different parameter format so the GPT3 and the chat models aren’t interchangeable. Then have a separate sheets function that accepts a range of cells rather than a single value, to correspond to the array that the API endpoint accepts.
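To make that concrete, here's a rough sketch of what such a chat function might build from a range of cells. I've never seen the sheet's actual script for this, so the function name, the row shape ([role, content] pairs), and the defaults are all my own assumptions:

```javascript
// Hypothetical helper: turn a 2-column range of [role, content] rows
// into a chat-completions payload. Empty rows are skipped.
function buildChatPayload(range, model = "gpt-3.5-turbo", temperature = 0.7) {
  const messages = range
    .filter((row) => row[0] && row[1]) // drop blank rows from the range
    .map((row) => ({ role: String(row[0]), content: String(row[1]) }));
  return {
    model: model,
    messages: messages, // chat models take a message array, not a single prompt
    temperature: temperature,
  };
}
```

The custom function would then POST this payload to https://api.openai.com/v1/chat/completions and read res.choices[0].message.content rather than res.choices[0].text, which is exactly why the GPT-3 and chat functions can't share one code path.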
Spitballing here, but that should work (I’ve never written scripts for sheets)
Hi everyone.
I have updated the Google Sheet + GPT3 to support the new ChatGPT API.
When running on the new ChatGPT API, you can cut your cost by a factor of 10 and get much safer responses, so I would highly recommend doing the upgrade.
Also, I have kept this Google Sheet in the same structure as the old one, so it should be fairly easy if you are migrating from the old sheet to this new one. Good luck on your AI journey, and please comment below and share your results. Cheers!
Great work Nelson!
Where can we find your updated version - would love to play around
Thanks for your kind words.
I updated the original link at the top.
But this is the newest version of the spreadsheet.
Hi Nelson.
Thanks for making this sheet available for free here. Is it possible for you to make one that can send requests to the GPT-4 API? Also, how can I use this embeddings search to answer more complex questions that require information from more than one cell of embeddings?
(post deleted by author)
Hi @arjunsanyal9, how are you?
The Google Sheet should work with GPT4 if you change the model.
You can modify it on the Settings tab; have you tried it?
See OpenAI API for reference.
I believe the embeddings search is already searching for all the rows in the data sheet.
Does that work for you? If not, can you tell me more about what you are trying to do?
Thanks.
Nelson
Hi Nelson,
Thanks for your reply. I meant that, let's say, the data tab of your sheet has a few cells with info about John, and I get embeddings for them in column B. Now if my question is "Can you tell me John's age, where he lives, and where he works?", the answer will require information from all three cells. So how can I retrieve information from more than one cell to answer this question?
Cell A1 - John is 28 years old
Cell A2 - John lives in New Jersey
Cell A3 - John works for Microsoft
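One common way to handle this (a plain-JavaScript sketch, not the sheet's actual script) is to score every data row against the question's embedding and pass the top few matches into the prompt together, instead of only the single best match:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSim(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// rows: [{ text, embedding }]. Returns the k best-matching texts joined
// into one context block, so facts spread across cells land in one prompt.
function topKContext(rows, queryEmbedding, k = 3) {
  return rows
    .map((r) => ({ text: r.text, score: cosineSim(r.embedding, queryEmbedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.text)
    .join("\n");
}
```

With k = 3, all three John facts would be concatenated into the context the completion sees, so the model can answer the compound question in one go.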
What you’re asking is precisely the nature of GPT Plugins. But it is possible to do this today if you shape the solution in a real-time fashion.
- Imagine an embeddings-based chatbot that uses vectors that point to answers (in a database).
- Imagine those answers also contain “pointers” to dynamic values related to that vector.
- Such “pointers” are actually API references to the latest values for these dynamic values.
- The ChatBot app uses embeddings to locate the best answer; it executes the API references, bundles all the data into a sensible prompt, and sends it to completion.
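The pointer-resolution step above could be sketched roughly like this. The placeholder syntax and the resolver map are purely illustrative assumptions on my part; in a real build each resolver would wrap an API call for the latest value:

```javascript
// Hypothetical sketch: a stored answer contains {{name}} placeholders
// that are resolved against live data sources at query time, just before
// the assembled text is bundled into the completion prompt.
function resolveAnswer(template, resolvers) {
  // Replace each {{name}} token with its resolver's return value;
  // unknown tokens are left untouched.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    resolvers[name] ? String(resolvers[name]()) : match
  );
}
```

For example, resolveAnswer("Units in stock: {{stock}}", { stock: () => fetchStockCount() }) would inline the current count into the answer text before it is sent to the model.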
It’s not rocket science. It’s a simple matter of architecture and programming.
I’ve done it here using Coda and CustomGPT, and here using Mem, Airtable, and Pinecone.
And it goes even deeper. You're touching on the DevOps side of employing AGI in actual business cases. But not DevOps in the traditional sense; this is training-curation DevOps.
The advent of usable AI models by domain experts has made obvious the need for operationally efficient patterns to help everyday domain experts and business people organize, curate, and transform their information into something AI models can digest. From Google sheets to Airtable, email messages, and PDFs - all of it will vie for inclusion into the future training sets we craft to put these new AGI tools to good use.
Consider the analytics you need to know if your corpus curation process is improving the experience. This is exactly the approach I used with Coda (instead of Google Sheets) to craft something useful for my team.
Your suggestion of a Chat Completions sheet and versioning attributes is critical. My Coda-based process factors all these variables into the workflow.
But the workflow also needs to support a way to test quickly and with an equal measure of refinement. Model curators should be able to quickly assess and change the corpus and then try to ensure there’s no regression.
Google Sheets is a good place to start, but my experience has shown it is not likely to carry us to the finish line. You need an environment that supports content development with deep tentacles into other data sources. It needs to be as intelligent as the AI solutions you are trying to build. This has made it clear to me that GPT itself is needed in many aspects of the corpus-creation process.
For example, when I discover a poorly performing query in an FAQ system, I may need to assimilate two other vectors that are performing well and that each contain part of the correct answer. Using Coda's relationship capabilities and OpenAI completions, I can automate the assimilation of a new answer that addresses the poorly performing vector. This is made possible by using keyword extractions that align the other answers, and completions that paint the new answer. With a few final edits in place, I then have a new vector and all the data needed to know that this new thing must be tested.