Here is what I compiled on the API pricing for GPT-4 (8K and 32K context) and GPT-3.5-turbo (the ChatGPT API).
Openai.com lists GPT-3.5-turbo pricing as a single "usage" rate of $0.002/1K tokens, but for GPT-4 it lists separate prompt and completion prices per 1K tokens. So if someone sends a 1K prompt and gets a 1K response (i.e. completion), their GPT-3.5-turbo cost is 2 × $0.002 = $0.004, whereas the same token counts on the GPT-4 8K model would be $0.03 (1K prompt) + $0.06 (1K completion) = $0.09. Right?
So for the ChatGPT API (GPT-3.5-turbo), a 2K-token call/response costs $0.004 vs. $0.09 on the GPT-4 8K context model, roughly 22.5x less.
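The arithmetic above can be sketched as a small cost calculator. The per-1K-token rates are the ones quoted in this thread (early-2023 figures; check openai.com/pricing for current values), and the model keys are just labels I made up for this example:

```python
# Per-1K-token prices in USD, as quoted in the post above.
# GPT-3.5-turbo was listed with one flat "usage" rate; GPT-4 has
# separate prompt and completion rates.
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
    "gpt-4-8k":      {"prompt": 0.03,  "completion": 0.06},
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD for a single API call with the given token counts."""
    p = PRICES[model]
    return (prompt_tokens / 1000) * p["prompt"] \
         + (completion_tokens / 1000) * p["completion"]

# 1K-token prompt plus 1K-token completion, as in the example above:
print(f"gpt-3.5-turbo: ${call_cost('gpt-3.5-turbo', 1000, 1000):.4f}")  # $0.0040
print(f"gpt-4-8k:      ${call_cost('gpt-4-8k', 1000, 1000):.4f}")       # $0.0900
```

This confirms the $0.004 vs. $0.09 comparison: identical traffic costs 22.5x more on the GPT-4 8K model.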
I was in on the first round of Bing. It was OK in the beginning, hit or miss along the way, and pretty much worthless yesterday: it literally just gave me ad-infested links to third-party sources for direct Windows admin questions, and no answers. Today? 180 degrees. Perfect, straight-to-the-point answers.
I haven't noticed any ads being shown to me (or I didn't notice). It just gives a few links at the bottom of each reply, which I never click, and they are nonintrusive.
I always have two browser windows open, one with ChatGPT and the other with Bing Chat. Because I'm on free ChatGPT, I know ChatGPT is often unavailable, but Bing keeps working. And it formats source code like ChatGPT does, which is great, since I'm a programmer.