johnwoo
February 2, 2024, 10:53am
1
I use the same API key for gpt-4, which works fine.
But when I do
from transformers import GPT4LMHeadModel, GPT4Tokenizer
I get a "cannot import name" error.
I've installed openai and transformers (from the GitHub repo);
importing GPT2LMHeadModel works.
Any idea?
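For reference, here is the GPT-2 path that does work for me — a minimal sketch, assuming the standard "gpt2" checkpoint from the Hugging Face Hub:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# GPT-2 weights are published on the Hugging Face Hub, so these classes exist;
# there are no GPT4* equivalents because GPT-4 weights are not available there.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))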
Diet
February 2, 2024, 11:42am
2
Hi!
Why are you doing that? transformers is for running models from Hugging Face.
GPT-2 is on Hugging Face, but GPT-3, 3.5, and 4 aren't! You'll probably want to use the openai library instead:
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
https://platform.openai.com/docs/api-reference/chat
3 Likes
Diet:
gpt-4-0125-preview
I don't want to call the API; I want to train/use a customized GPT-4 model. And I saw others (in a Google search) using the same code.
I can confirm that will not work.
2 Likes
vb
February 2, 2024, 3:04pm
5
Interesting. Can you share some of these confusing results?
johnwoo
February 2, 2024, 11:13pm
6
https://www.tome01.com/fine-tuning-openai-gpt-4-deep-dive-into-the-process-and-python-implementation
1 Like
vb
February 2, 2024, 11:18pm
7
Thanks for sharing.
Google is getting overwhelmed by AI-generated content, and this "thing" is apparently one of the many culprits.
In short: GPT-4 fine-tuning is currently in limited beta and only available to users with a background in fine-tuning the older models via the OpenAI platform.
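If the goal is a customized model, the supported path is fine-tuning through the OpenAI API, not transformers. A minimal sketch against an older base model (the training file name and the gpt-3.5-turbo base are assumptions, not something from that article):

from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on the uploaded file.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)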
johnwoo
February 3, 2024, 10:04am
8
The following post was from a real data scientist:
https://mangeshkendre01.medium.com/artificial-intelligence-insights-701a53a0f3da
Diet
February 3, 2024, 10:15am
9
Looks more like an SEO experiment to me.
1 Like