I have created a script to test the possibility of assessing students with AI. I am looking at the coefficient of variation when assessing the same student submission 100 times with AI. I am comparing different OpenAI models with Claude and DeepSeek. The script works perfectly for GPT-4o, GPT-4o-mini, Claude, and DeepSeek, but I get an ERROR message when running the script with o1 or o1-mini.
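For reference, the statistic I compute per model is simply CV = standard deviation / mean over the 100 marks. A minimal sketch of that calculation, assuming the marks have already been extracted as numbers:

function coefficientOfVariation(values) {
  // Mean of the 100 numeric marks
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  // Population variance around the mean
  const variance = values.reduce((a, b) => a + Math.pow(b - mean, 2), 0) / values.length;
  // CV = standard deviation divided by the mean
  return Math.sqrt(variance) / mean;
}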
The script is:
function GPT4o(Input) {
  const GPT_API = "MY API KEY HERE";
  const BASE_URL = "https://api.openai.com/v1/chat/completions";
  const headers = {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${GPT_API}`
  };
  const options = {
    headers,
    method: "POST", // chat/completions is a POST endpoint
    muteHttpExceptions: true,
    payload: JSON.stringify({
      "model": "gpt-4o",
      "messages": [
        { "role": "system", "content": "You are an experienced educator." },
        { "role": "user", "content": Input }
      ],
      "temperature": 0.1
    })
  };
  // Parse the JSON body and return just the model's reply
  const response = JSON.parse(UrlFetchApp.fetch(BASE_URL, options));
  // console.log(response)
  console.log(response.choices[0].message.content);
  return response.choices[0].message.content;
}
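For context, I collect the 100 repeated assessments with a simple wrapper like the sketch below; in my real script the rubric and the student submission come from a Google Sheet and the marks are written back, so the prompt here is just a placeholder:

function run100Assessments() {
  // Placeholder prompt; in practice the rubric and the student submission are read from a sheet
  const prompt = "RUBRIC AND STUDENT SUBMISSION GO HERE";
  const marks = [];
  for (let i = 0; i < 100; i++) {
    marks.push(GPT4o(prompt));
  }
  // 100 assessments of the same submission, used for the coefficient of variation
  console.log(marks);
}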
The only two things I modify are the function name and the model:
function GPT4o(Input) becomes function GPT4omini(Input)
and
"model": "gpt-4o", becomes "model": "gpt-4o-mini",
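Since only the function name and the model string differ between these variants, the same thing can also be written once as a shared helper with thin per-model wrappers. A sketch, assuming the same key, endpoint, prompt, and temperature as above:

function callChatModel(model, Input) {
  const GPT_API = "MY API KEY HERE";
  const BASE_URL = "https://api.openai.com/v1/chat/completions";
  const options = {
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${GPT_API}`
    },
    method: "POST",
    muteHttpExceptions: true,
    payload: JSON.stringify({
      "model": model, // the only thing that changes between my functions
      "messages": [
        { "role": "system", "content": "You are an experienced educator." },
        { "role": "user", "content": Input }
      ],
      "temperature": 0.1
    })
  };
  const response = JSON.parse(UrlFetchApp.fetch(BASE_URL, options));
  return response.choices[0].message.content;
}

function GPT4omini(Input) {
  return callChatModel("gpt-4o-mini", Input);
}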
When I do the same for o1 and o1-mini I get an error message:
TypeError: Cannot read properties of undefined (reading '0') (line 32).
For example, what I did is:
function GPT4o(Input) becomes function GPTo1(Input)
and
"model": "gpt-4o", becomes "model": "o1",
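The TypeError is thrown on the line that reads response.choices[0], so response.choices must be undefined for o1. To see what the API actually returns in that case, I can log the raw body before parsing it. A sketch of the o1 variant with extra logging, building on the commented-out console.log(response) idea already in the script:

function GPTo1(Input) {
  const GPT_API = "MY API KEY HERE";
  const BASE_URL = "https://api.openai.com/v1/chat/completions";
  const options = {
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${GPT_API}`
    },
    method: "POST",
    muteHttpExceptions: true,
    payload: JSON.stringify({
      "model": "o1",
      "messages": [
        { "role": "system", "content": "You are an experienced educator." },
        { "role": "user", "content": Input }
      ],
      "temperature": 0.1
    })
  };
  // muteHttpExceptions is already true, so a non-200 answer comes back as a normal response object
  const raw = UrlFetchApp.fetch(BASE_URL, options);
  console.log(raw.getResponseCode()); // HTTP status of the call
  console.log(raw.getContentText()); // full body, including any "error" object the API returns
  const response = JSON.parse(raw.getContentText());
  if (!response.choices) {
    return "API error: " + raw.getContentText();
  }
  return response.choices[0].message.content;
}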
Is there something I need to do on the backend to allow this script to use o1 and o1-mini, or do those models work differently, so they cannot be called with the same script that calls 4o and 4o-mini?