Are there any free ChatGPT models that can be used for dev testing a personal project?

Title says it all.

I’m not too worried about throttling (TooManyRequests errors) or about having the most efficient model. Is there any free model that can be used for personal projects?

Welcome to the Forum!

There is no option to use the API for free, even for personal projects. OpenAI phased out free trial API credits earlier this year. You now need to fund your developer account with a minimum of USD 5 in order to use the API.

Just curious: I can use the ChatGPT prompt UI without limit, but I cannot access the same models via the API?

Like, I’ve just signed up and am able to freely use ChatGPT using the chat window.

ChatGPT and the API are two different products and hence cannot be compared from a billing/cost perspective.

I’m new to this; can you explain this a bit? I thought the API would internally call one of the models that ChatGPT is built on top of. Am I missing something?

Thank you!

ChatGPT is essentially a UI built on top of the GPT models, complemented and enhanced by other capabilities such as the code interpreter and browsing.

Just like a direct API call, it interacts with the different models in the background. However, via ChatGPT you have significantly less control over the individual parameters than you would via the API, and you face other constraints, such as a shorter model context length.
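For illustration, here is a sketch of the kind of per-request parameters the API exposes that the ChatGPT UI does not. The model name and values are just examples, and actually sending the request still requires a funded account:

```python
# Parameters you can tune per request via the API but not in the ChatGPT
# web UI. Sketch only: "gpt-4o" is an example model name and the values
# are arbitrary; an API key is still needed to actually send this.
request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.2,   # lower = more deterministic output
    "max_tokens": 256,    # hard cap on the response length
    "top_p": 0.9,         # nucleus sampling cutoff
    "stop": ["\n\n"],     # custom stop sequences
}

# With a funded account you would pass these straight through:
#   client.chat.completions.create(**request)
```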

1 Like

Given that there is no free model, which one is the cheapest? I am just learning Python and running code over and over. I guess I need to comment out the API calls, since there is no free way to test-run scripts.

https://openai.com/api/pricing/
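If you only want to exercise your script’s logic while learning, one option is to stub the client instead of commenting the API calls out. This is a minimal sketch (my own stand-in, not an OpenAI feature) that mimics just the response shape used in this thread:

```python
# A tiny stand-in for the OpenAI client so scripts run without credits.
# Sketch only: it mimics just the attributes the examples in this thread
# access (response.choices[0].message.content).
from types import SimpleNamespace


class FakeChatCompletions:
    def create(self, **kwargs):
        # Echo the last user message so tests can see the round trip.
        text = f"[stub reply to: {kwargs['messages'][-1]['content']}]"
        return SimpleNamespace(
            choices=[SimpleNamespace(message=SimpleNamespace(content=text))]
        )


class FakeClient:
    def __init__(self):
        self.chat = SimpleNamespace(completions=FakeChatCompletions())


# Swap this for the real OpenAI(...) client once your account is funded.
client = FakeClient()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)  # → [stub reply to: ping]
```

Because the fake object exposes the same attribute path as the real response, the rest of the script does not need to change.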

You can apply here and use your GITHUB_TOKEN to do something similar and do some research (there are models from OpenAI as well as other providers) :wink::
https://github.com/marketplace/models

# Run this model in Python:
#   pip install openai
import os
from openai import OpenAI

# To authenticate with the model you will need to generate a personal
# access token (PAT) in your GitHub settings. Create your PAT by following
# the instructions here:
# https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model="gpt-4o",
    temperature=1,
    max_tokens=4096,
    top_p=1
)

print(response.choices[0].message.content)

I’ve just been trying out Llama 3.2 with function calling locally, and I must say it is definitely good enough for local dev now!

Not ChatGPT, but are you hung up on the brand?

This is a graphic of my chatbot using local functions and the default 3B Llama 3.2 model, run locally via Ollama. Fantastic and very quick! (Caveat: I have a 3080 Ti.)


The speed is incredible and I believe this is the first local model I’ve seen being able to handle function calling (semi-)reliably.

The other beauty of this is I have not had to change any code to accommodate it! :tada:
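For anyone wondering why no code changes are needed: Ollama serves an OpenAI-compatible endpoint, so the same client code can target a local model by swapping only the base URL and model name. The sketch below shows a tool definition in the standard OpenAI tools schema; `get_weather` is a hypothetical example function, and the port shown is Ollama’s default:

```python
# Sketch: Ollama exposes an OpenAI-compatible endpoint on localhost:11434,
# so the same OpenAI-client code targets local Llama 3.2 by changing only
# the base_url and model name, e.g.:
#   client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
#   client.chat.completions.create(model="llama3.2",
#                                  messages=messages, tools=tools)
# "ollama" is a dummy key; the local server does not check it.

# A tool definition in the standard OpenAI "tools" schema.
# get_weather is a hypothetical example function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

print(tools[0]["function"]["name"])  # → get_weather
```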

2 Likes