Ensuring gpt-4 Model is Used in v1/chat/completions Endpoint for Spreadsheet Integration

Greetings OpenAI community,

After successfully integrating the OpenAI API into a Google Spreadsheet application using Apps Script, aimed at autofilling empty cells, I’ve directed my requests to utilize the gpt-4 model via the v1/chat/completions endpoint. The setup works, and I’m able to generate responses. However, the nature of the generated content often seems reminiscent of what I would expect from gpt-3.5, given its occasionally limited recent internet knowledge and depth on certain topics.

To cross-verify, I have tested the gpt-4 model directly in the OpenAI Playground with similar prompts, where gpt-4 is explicitly selectable, and noticed a marked improvement in response quality and relevancy. This contrast raises my concern about whether my application is indeed harnessing the full potential of gpt-4.

Here’s the code I’m using:

function onOpen() {
  var ui = SpreadsheetApp.getUi();
  ui.createMenu('OpenAI')
      .addItem('Generate Queries for Empty Cells', 'generateQueriesForEmptyCells')
      .addToUi();
}

function generateQueriesForEmptyCells() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var lastRow = sheet.getLastRow();
  var lastColumn = sheet.getLastColumn();

  for (var i = 2; i <= lastRow; i++) {
    for (var j = 2; j <= lastColumn; j++) {
      var currentCell = sheet.getRange(i, j);
      var cellContent = currentCell.getValue();

      // getValue() can return numbers or dates, so coerce to a string before trimming
      if (!cellContent || String(cellContent).trim() === "") {
        var facilityName = sheet.getRange(i, 1).getValue();
        var infoCategory = sheet.getRange(1, j).getValue();

        if (facilityName && infoCategory) {
          // "Look up <facility>'s <category> on the web and show it with a real URL"
          var prompt = facilityName + "の" + infoCategory + "をwebで調べて実在するURLと共に示してください";
          var response = callOpenAI(prompt);
          currentCell.setValue(response);
        }
      }
    }
  }
}


function callOpenAI(prompt) {
  var apiKey = "MY_API_CODE"; // replace with your API key
  var url = "https://api.openai.com/v1/chat/completions";

  if (!prompt) {
    return "Error: Prompt is empty or null";
  }

  var maxTokens = 100;
  var temperature = 0.7;
  var presencePenalty = 0.0;

  var payload = JSON.stringify({
    model: "gpt-4",
    messages: [{
      "role": "user",
      "content": prompt
    }],
    max_tokens: maxTokens,
    temperature: temperature,
    presence_penalty: presencePenalty
  });

  var options = {
    method: "post",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer " + apiKey
    },
    payload: payload,
    muteHttpExceptions: true
  };

  try {
    var response = UrlFetchApp.fetch(url, options);
    var jsonResponse = JSON.parse(response.getContentText());

    if (jsonResponse.error) {
      Logger.log("Error: " + jsonResponse.error.message);
      return "API error: " + jsonResponse.error.message;
    }

    if (jsonResponse.choices && jsonResponse.choices.length > 0 && jsonResponse.choices[0].message) {
      return jsonResponse.choices[0].message.content.trim();
    } else {
      Logger.log("Error: Response has no choices or message");
      return "No response generated";
    }
  } catch (e) {
    Logger.log("Error: " + e.toString());
    return "Error in fetching response";
  }
}

Given this discrepancy, I am seeking advice on:

  1. Confirming the gpt-4 model is processing my requests, especially considering gpt-4 can be explicitly selected and tested in the Playground.
  2. Identifying if there are any additional parameters or settings I might be overlooking to fully leverage gpt-4 capabilities in my application.

Has anyone faced similar challenges or can offer insights into ensuring the advanced capabilities of gpt-4 are effectively utilized, particularly when direct comparisons with Playground tests suggest a difference in performance?

Your guidance and suggestions will be highly valued.

Thank you for your support!

Hi and welcome to the Developer Forum!

You have max_tokens set to 100, which may be too small to capture proper answers. More importantly, your prompt asks the model to "look it up on the web and show it with a real URL", and that is not something the API can do: the model has no web access, so it will tend to invent plausible-looking URLs. You need to do the web search in your own code and then pass the results to the model. ChatGPT handles this for you behind the scenes, so there may be some confusion there. As for confirming the model, the JSON response includes a "model" field you can log to see exactly which model served each request.
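To illustrate the "search first, then pass results to the model" pattern, here is a minimal sketch. The function name buildSearchAugmentedPrompt and the shape of the result objects are made up for illustration; the results themselves would come from whatever search API you choose:

```javascript
// Sketch: do the web search yourself, then fold the results into the prompt
// so the model answers from real data instead of pretending it browsed.
// The result objects ({ title, url, snippet }) are an assumed shape.

function buildSearchAugmentedPrompt(question, searchResults) {
  // Number each hit so the model can cite them by URL.
  var context = searchResults
    .map(function (r, i) {
      return (i + 1) + ". " + r.title + " - " + r.url + "\n   " + r.snippet;
    })
    .join("\n");

  return (
    "Using only the web search results below, answer the question and cite the URLs.\n\n" +
    "Search results:\n" + context + "\n\n" +
    "Question: " + question
  );
}
```

The string this builds would then be sent as the user message in your existing callOpenAI function.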



Thanks for your advice on my previous question about using GPT for spreadsheet automation. I see my mistake in expecting the API to directly search the web.

Could you guide me on how to implement a web search in my script, or point me to similar discussions here?

Thanks again for your help!

Giving the AI a tool specification that it can call is pretty straightforward:

tool_list = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Gets top results of web search engine with page descriptions. Use for recent events or news.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search term or keywords",
                    }
                },
                "required": ["query"],
            },
        },
    }
]

Fulfilling that tool call with your code needs a separate request you make to a search API. Besides paying for Bing per 1,000 queries, there are other shims that scrape result pages or access sketchy APIs that do the same.
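Wiring that up, roughly: when the model's reply comes back with tool_calls, your code runs the search, sends the result back as a "tool" role message, and calls the API again. A sketch in JavaScript, assuming the standard Chat Completions response shape (runWebSearch is a stand-in for whatever search backend you use):

```javascript
// Sketch of fulfilling a web_search tool call. The assistant message shape
// (tool_calls with id, function.name, function.arguments) follows the
// Chat Completions tools API; runWebSearch is a hypothetical helper.

function handleToolCalls(assistantMessage, runWebSearch) {
  // The assistant message itself must be echoed back, followed by one
  // "tool" message per call, before asking the model to answer again.
  var followUp = [assistantMessage];

  (assistantMessage.tool_calls || []).forEach(function (call) {
    if (call.function.name === "web_search") {
      var args = JSON.parse(call.function.arguments); // arguments arrive as a JSON string
      var results = runWebSearch(args.query);
      followUp.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(results)
      });
    }
  });

  return followUp; // append these to the conversation and call the API again
}
```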

Often the search results alone are enough information. Getting the full contents of modern web pages is harder.