Introducing ChatGPT and Whisper APIs

I’m wondering the same thing. I couldn’t find any examples

This new endpoint is amazing. Uploaded some example code in Node for anyone who’s having trouble integrating.

Old generation of “an angry dwarf” (text-davinci-003):

Grievous was an angry dwarf. He had a scowl on his face and a piercing gaze that made others avoid his presence. His unkempt hair was full of tangles and his clothing was in tatters. His drooping moustache seemed to mimic his perpetual frown and he spoke with a growl, conveying all his bitter resentment through every word.

When someone did the slightest thing to annoy him, he would start yelling and stomping his feet. He kept a sharp eye on everything around him, looking for any sign of disrespect or trouble. He felt as if the world had done him wrong, so he kept all those around him at a distance, no matter how kind they were. His ranting about all the injustice in life served as an expression of his anger and frustration.

And yet, despite all this aggression, there was something sad about Gregg’s expression - as though he was desperately trying to make sense of the world around him, but all he found was pain and suffering instead of joy and peace.

New generation of “an angry dwarf”:

The dwarf stands at barely four feet tall, yet his fiery temper makes him seem larger than life. His broad shoulders are hunched forward in a stance that screams aggression, and his thick, muscular arms chisel with veins as if coiled to spring into action at any moment. His bushy eyebrows nearly meet in the middle of his forehead, furrowing deep with rage as he glares through piercing blue eyes. A thick beard and mustache frame his scowling lips, which are twisted in a permanent sneer as if daring anyone to cross him.

His clothing is utilitarian, consisting of roughspun trousers and shirt with heavy leather boots laced up to his knees. A sturdy leather vest provides some protection to his torso but doesn’t hinder his movements in the slightest. He carries an axe in one hand and a short sword in the other, both weapons gleaming with a deadly sharpness that reflects his intense focus on combat.

There’s a thunderous quality to the dwarf’s voice that booms like a clap of thunder when he speaks, emphasizing each word with a forceful punch. Even when he’s not yelling, there’s an underlying edge of anger that vibrates through every syllable. It’s clear that this dwarf has a short fuse and is not afraid to let loose with his formidable strength if provoked.

Despite all this, though, there’s a sense of honor and loyalty that lies beneath the dwarf’s anger. Once someone earns his trust, he’ll fight ferociously to protect them, even giving his own life if necessary. But those who cross him or betray him will find themselves at the receiving end of his fury, facing a force that seems unstoppable until justice is served.

The old one was good, but the new completion gives more fine control over tone and such.

2 Likes

I plugged it into my new Scratchpad tool and the fiction it outputs is amazing. Still wish context window was a little bit bigger, though! :wink:

2 Likes

Hi, I really need your advice and input on this.

I’m really impressed with the new ChatGPT API for two reasons:

  • It costs 10% of what Davinci does
  • It’s ChatGPT (more natural responses)

But my focus is on a project that repurposes (fine-tunes) the AI for a special use case.

My question: it seems the ChatGPT API doesn’t allow fine-tuning, so:

  • Is the only possible way to “tune” the chatbot to supply context in the prompt?
  • What is the maximum amount of data that can be inserted into the “context” prompt (the 4,096-token limit)?

Thanks in advance, and thanks for your helpful presence in this community.

Hi,

I have been learning fine-tuning with GPT-3 (davinci) to create chatbot apps
(trained with product/company data, or even AMA (ask me anything) data).

For many reasons, costing 10% of Davinci makes GPT-3.5 a much better option for me.

How do I continue my project with GPT-3.5 (the ChatGPT API), given that it doesn’t allow fine-tuning at the moment?

(But from the examples on the OpenAI page, it does seem to accept custom data.)
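Since the chat models can’t be fine-tuned right now, a common workaround is to inject your custom data into the system message on every request. A minimal sketch of that pattern (the function name and the example facts are hypothetical, not from OpenAI’s docs):

```python
def build_messages(company_facts, user_question):
    """Build a chat payload that grounds the model in custom data
    by placing it in the system message (no fine-tuning needed)."""
    context = "\n".join(f"- {fact}" for fact in company_facts)
    return [
        {"role": "system",
         "content": "Answer using only the facts below.\n" + context},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(
    ["Our product ships worldwide.", "Support hours are 9am to 5pm UTC."],
    "When can I reach support?",
)
# `messages` is what you would pass to the chat completions endpoint.
```

The trade-off versus fine-tuning is that the injected facts consume part of the 4,096-token context window on every call.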

If you want to get Whisper transcriptions in Python and not deal with the overhead of the OpenAI Python bindings, here is the code I came up with:

import os
import requests

# Read the API key from the environment
api_key = os.environ.get("ApiKey")

headers = {
    "Authorization": f"Bearer {api_key}",
}

# Open the audio file in binary mode; the context manager closes it afterwards
with open("/local/path/to/your/file/audio.wav", "rb") as audio_file:
    files = {
        "file": audio_file,
        "model": (None, "whisper-1"),
    }
    response = requests.post(
        "https://api.openai.com/v1/audio/transcriptions",
        headers=headers,
        files=files,
    )

response.raise_for_status()
print(response.json())
5 Likes

You are welcome, @zhihong0321

:slight_smile:

Well, as for me, I think “tuning” has a distinct meaning in generative AI, so it’s not a term I would use, but I know what you mean.

Actually, according to the chat method docs, it seems all the role keys contribute to the textual information the API uses in a chat completion.

The API docs on this point are awkwardly worded, so it’s easy to understand how you might be confused. Yes, the current maximum for max_tokens in a chat completion is 4,096 tokens.
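Since every role’s content counts against that window, a common pattern is to trim the oldest messages before each call. A rough sketch (the four-characters-per-token estimate is a crude approximation, not the real tokenizer, and the function name is my own):

```python
def trim_history(messages, max_tokens=4096, reserve_for_reply=500):
    """Drop the oldest non-system messages until a rough token
    estimate of the whole payload fits in the context window."""
    def rough_tokens(msg):
        return len(msg["content"]) // 4 + 4  # crude per-message estimate

    budget = max_tokens - reserve_for_reply
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(rough_tokens, system + rest)) > budget:
        rest.pop(0)  # drop the oldest exchange first
    return system + rest
```

For production you would want a real tokenizer count instead of the length heuristic, but the trimming logic stays the same.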

You are welcome again.

:slight_smile:

Great news! Thank you so much for sharing it.

Just wondering if there’s ever going to be a plan to allow custom roles? I’m finding it very restrictive for my particular use case.

You can “proxy” the roles as a developer. The only limitation currently is that there are only three roles.

However, in your app you can call them whatever you want and just map them to the API’s role keys before you submit your API call.
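As a sketch of that mapping idea (the speaker names here are hypothetical; only "system", "user", and "assistant" are real API roles):

```python
# Map app-specific speaker names onto the three roles the API accepts.
ROLE_MAP = {
    "narrator": "system",
    "player": "user",
    "wizard": "assistant",
}

def to_api_messages(turns):
    """Convert app-level (speaker, text) turns into API messages."""
    return [{"role": ROLE_MAP[speaker], "content": text}
            for speaker, text in turns]

msgs = to_api_messages([("narrator", "Be terse."), ("player", "Hello!")])
# msgs is now a valid `messages` list for the chat completions endpoint.
```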

HTH

:slight_smile:

1 Like

Thanks for the fast reply! The issue is I want to share conversational context between two separate instances of openai.ChatCompletion (for lack of a better term, let’s call them characters). I want to treat them as if they’re in the same room so they should be able to answer questions about what I’ve said to the other. I’ve created a ConversationManager but when I pass the same instance of ConversationManager to the two separate instances of openai.ChatCompletion, it gets confused and the two characters become one. (Not sure I’ve explained that correctly)
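One way to keep two “characters” distinct while sharing what was said is to keep a single shared transcript, tag each line with the speaker’s name in the content, and give each character its own system message when you build its request. A sketch, with entirely hypothetical names (this is not part of the openai library):

```python
class SharedRoom:
    """Shared transcript; each character gets its own system prompt
    but sees everything said in the room, attributed by name."""

    def __init__(self):
        self.transcript = []  # list of (speaker_name, text)

    def say(self, speaker, text):
        self.transcript.append((speaker, text))

    def messages_for(self, character, persona):
        # The character's own lines become "assistant" turns;
        # everyone else's lines arrive as "user" turns.
        msgs = [{"role": "system", "content": persona}]
        for speaker, text in self.transcript:
            role = "assistant" if speaker == character else "user"
            msgs.append({"role": role, "content": f"{speaker}: {text}"})
        return msgs

room = SharedRoom()
room.say("Alice", "The door is locked.")
room.say("Bob", "I have the key.")
# Each list below would go to a separate openai.ChatCompletion.create()
# call, one per character, so the two never merge into one persona.
alice_view = room.messages_for("Alice", "You are Alice, a cautious scout.")
bob_view = room.messages_for("Bob", "You are Bob, a cheerful locksmith.")
```

Because each character only ever sees its own lines as "assistant" turns, the two requests stay separate personas while still sharing the full conversation.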

This is amazing! Does anyone know when we can expect to be able to use the ChatGPT API with Zapier? It’s not showing up under models. :slight_smile:

I’m getting this same result for many commands:

usage: openai api [-h]
{engines.list,engines.get,engines.update,engines.generate,chat_completions.create,completions.create,deployments.list,deployments.get,deployments.delete,deployments.create,models.list,models.get,models.delete,files.create,files.get,files.delete,files.list,fine_tunes.list,fine_tunes.create,fine_tunes.get,fine_tunes.results,fine_tunes.events,fine_tunes.follow,fine_tunes.cancel,fine_tunes.delete,image.create,image.create_edit,image.create_variation,audio.transcribe,audio.translate}

positional arguments:
  {engines.list, … (same list as above)}
                       All API subcommands

options:
  -h, --help           show this help message and exit

I also get it for this command: openai api fine_tunes.prepare_data -f C:\Games\Notepad++\prompteg.json

Nice!!!

@PaulBellow or @ruby_coder and everyone, would you have an example of a simple view using "stream": true? I was using a version I created in Python, but I couldn’t access the delta content with the same code.

Solved! :blush: Yupiii!

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.9,
    max_tokens=111,
    top_p=1,
    stream=True,
    messages=[
        {"role": "user", "content": "AHi  he is my life"},
    ],
)

collected_events = []
completion_text = ""

for event in response:
    collected_events.append(event)
    for choice in event["choices"]:
        # Each streamed chunk carries a partial message in "delta";
        # the "content" key is absent in the first and last chunks,
        # so fall back to an empty string instead of str()-ing the dict.
        delta_text = choice["delta"].get("content", "")
        completion_text += delta_text
        print(f"Text received: {delta_text}")

2 Likes

Thanks for sharing the news about the new GPT-3.5 Turbo model! As someone interested in fine-tuning GPT models for specialized tasks, I’m curious about the cost implications of this approach with the new model, assuming fine-tuning becomes available in the future (it hasn’t yet been announced whether it will be)…

Fine-tuning the full GPT-3 model for specific tasks can be prohibitively expensive, even with the option to be billed only for the tokens used in requests. I’m hoping that the new GPT-3.5 Turbo model will offer more cost-effective options for fine-tuning, as well as improved performance in chat-based applications.

Overall, I’m excited to see how the GPT-3.5 Turbo model can improve chatbot technology and other specialized applications, while also being more cost-effective than the full GPT-3 model for fine-tuning. Thank you for sharing this news, and I look forward to learning more about the cost implications of fine-tuning with the GPT-3.5 Turbo model in the coming weeks and months.

Hello,
great news!
We are using text-davinci-003 from cURL. Is there any doc resource or detail on what should be changed to use GPT-3.5?

thanks

2 Likes
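Not an official migration guide, but the request shape changes roughly like this: the endpoint moves from /v1/completions to /v1/chat/completions, the "prompt" string becomes a "messages" list, and the model becomes "gpt-3.5-turbo". A sketch of the two payloads side by side:

```python
# Old completions-style payload (text-davinci-003).
old_payload = {
    "model": "text-davinci-003",
    "prompt": "Write a haiku about the sea.",
    "max_tokens": 64,
}

# New chat-style payload (gpt-3.5-turbo): the prompt text moves into
# a "messages" list, and the reply comes back in
# choices[0]["message"]["content"] instead of choices[0]["text"].
new_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Write a haiku about the sea."},
    ],
    "max_tokens": 64,
}
# POST new_payload as JSON to https://api.openai.com/v1/chat/completions
# with the same "Authorization: Bearer ..." header you already use in cURL.
```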

Will there be a playground for ChatGPT?

1 Like

Is this a way of addressing “memory”, for lack of a better term? At the moment, I am still taking the previous response and passing it into the next request. But I would love a better implementation and am wondering if this is it?

Thank you!
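With the chat format, the usual approach to this kind of “memory” is exactly that, but structured: keep one running messages list and append both what the user said and what the model replied before the next call. A minimal sketch (the helper name and example texts are hypothetical):

```python
history = [{"role": "system", "content": "You are a helpful assistant."}]

def remember(user_text, assistant_text):
    """Append one completed exchange so the next request carries it."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})

remember("My name is Ana.", "Nice to meet you, Ana!")
remember("What's my name?", "Your name is Ana.")
# `history` is the full `messages` argument for the next
# openai.ChatCompletion.create() call; the API itself stores nothing
# between requests, so this list is the only "memory" there is.
```

The limit is still the context window, so older turns eventually have to be trimmed or summarized.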

That’s absolutely amazing news! :heart_eyes:

As a senior full-stack dev, I’m pivoting into this field because of the huge potential to build products using OpenAI’s API. It provides great leverage! I’m offering my skills to implement OpenAI and improve your products, as well as create new products and services.

Feel free to reach out to me on this forum or on linkedin if you’re interested.

https://www.linkedin.com/in/catalinwaack/

We upgraded from text-davinci-003 to GPT-3.5.
FANTASTIC!

I guess now we can start thinking about moving to production