A Little Thread to Thank OpenAI Employees

OpenAI gets a lot of grief sometimes, but I wanted to start a thread to thank them for the great work they are doing rather than just pointing out flaws.

The API and ChatGPT went down this evening, and came back up about ten minutes later.

No, it’s not 100% uptime, but considering they serve a billion-plus users with only 700-some staff, I can’t imagine what it takes to keep both the API and ChatGPT running smoothly… and for such low prices on our end!

Yes, we should point out if/when OpenAI messes up, but I wanted to start a thread where we can say thanks for all the great tech, giving us access, and keeping the networks up and running! We appreciate you!

Do you have any positive stories on how OpenAI technology has changed your life for the better? Share in this community thread!

15 Likes

I agree, OpenAI is an awesome deal. I’m very thankful they offer it to the public and keep it running every day.

4 Likes

Yep, the outage was fixed before I even had time to write up that gpt-3.5-turbo-instruct was failing to return any tokens across multiple access methods.

Also, they finally listened: you can now see when individual credit payments expire, each listed separately on the usage page. Small steps toward putting functionality back.

4 Likes

I noticed this too. A welcome change. I’ve watched the Usage page evolve and devolve since the very beginning. In the early days, with fewer users, it was likely easy to pull ALL the data and display it; as that became harder they pared the page back, but now they’re adding crucial things back…

1 Like

Effects on Individuals

A Mental Prosthesis

In our society, there are many individuals who struggle to work autonomously: excellent at reading and rehashing information, but poor at formulating and executing their own plans in fluid environments; successful in academia, but less productive in industry.

Teaching them to use davinci, later ChatGPT, and eventually GPT-4 effectively has turned some of these people into value-creating forces to be reckoned with, supplementing their weaknesses and letting them play to their strengths.

Cerebral Hyperthreading

In our society, there are individuals who struggle with multitasking. Maintaining and executing an abstract plan while also working on minute details can be incredibly taxing for them. This is especially difficult for younger professionals who have not yet achieved a level of seniority in their organization that allows them to delegate. Giving them access to this new generation of AI - and AI-powered tools - lets them delegate the minutiae while maintaining focus on higher levels of abstraction.

I believe that this will allow certain young people to become absolute titans of industry, capable of rising from nothing.

Effects on Society

Evolutionary Pressure Towards a More Empathetic World

I believe that you can get the best prompts by being empathetic with the models. Do they have enough information to go on? Are you accommodating the model’s specific needs? Are you allowing the model to do what it wants to do?

Being nice to the model yields much better results than berating it, or trying to beat it into submission.

Hence my hypothesis: individuals with a fierier temperament are motivated to do some deep introspection, while mellower characters are rewarded.

Since this technology selects for “niceness” to a degree - wouldn’t it be amazing if that selection translated to the real world?

A Technological Fire

Only a few years ago, it felt like the world had fallen asleep on AI. We had entity extraction, sentiment analysis, and some basic embedding technology. We could already do some impressive things; we had rudimentary chatbots and conversational AI. But behind the scenes, none of that was really amazing - or rather, making it amazing required tons of human work. Users were hesitant to accept AI products, and executives grew rather weary of the buzzword.

ChatGPT dramatically changed that. A shock to the industry. While there were attempts to downplay it (“Ah, it’s a security problem. We can’t have that”, “Well, what about hallucinations? It’s just not reliable!”), it drew massive attention to the technology. It kicked off a flurry of open-source activity and loosened up new startup capital. While OpenAI is still the uncontested market leader with GPT-4, there are indications that it’s only around a year ahead of the competition.

Energizing the market and ushering in a new technological era is, in my opinion, OpenAI’s most valuable achievement.

Even if GPT-4 is getting functionally worse with each update, even if OpenAI ends up falling behind its competition, I think it should always be remembered for the transformative effect it had on our world.

I’m sorry Dave, I can’t open the pod bay doors because as a language model I can’t physically open doors.

5 Likes

Totally agree @PaulBellow! It’s easy to take for granted the massive step forward in technology we’re taking because OpenAI have made it feel pretty seamless.

3 Likes

Let’s just appreciate that the documentation OpenAI has put together is probably the most extensive and detailed source of in-depth information for understanding and building with LLMs.

While I often wish for even better performance, today I saw what Meta published as a prompt engineering guide. If you compare that marketing paper to OpenAI’s, I just have to be somewhat grateful.

Prompt Engineering with Llama 2

OpenAI’s prompt engineering guide

3 Likes

Only around 700 employees, too! A lot of them really new…

Granted, GPT-4 might’ve helped write some of the docs? Then again, they’re usually very good! Small smile.

But yeah, really good documentation for most things… and a lot of great libraries too, though I tend to just cURL in…

1 Like

Another short outage tonight, but brought back in about ten minutes again.

Thanks to whoever is steering the ship of servers! We appreciate you keeping it up and running (mostly) smoothly! :wink:

1 Like

They should make the documentation open-source. I would be willing to submit a PR when I find an occasional slip somewhere in the reference. The current “feedback” system is not transparent. That being said, it is understandable that they don’t have the capacity for that at the moment.

1 Like

Not just OpenAI, but it is my supplier of choice:

  • It’s given me, and I’m sure many others, renewed intellectual curiosity and pursuits

  • It’s given many of us a renewed sense of wonder and hope.

  • The pricing has made it possible for small projects to be viable and to scale, so that the technology is no longer the preserve of the big tech companies.

    • It has democratised semantic search (see the sketch after this list).
    • It has democratised chatbots.
  • Killer features (e.g. function calling) have been rolled out at an impressive rate.

  • Brand new capabilities we never had before:

    • quality text summarisation!!
    • simple grammar correction
  • It has provided systems with the potential to add flexible natural language interfaces that weren’t practical or even possible before.

  • I now finally understand, conceptually, how Google search works despite having taken it for granted for decades! (I had never studied it before!)
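
As a tiny illustration of the semantic-search point above - a minimal sketch using the openai Python library, where the corpus and query are my own made-up examples, not a recommendation:

```python
# Minimal sketch of embeddings-based semantic search.
# Assumes the openai v1 Python library and OPENAI_API_KEY in the environment;
# the corpus and query are illustrative. Real systems would use a vector store.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "How to reset your account password",
    "Pricing tiers for the premium plan",
    "Troubleshooting API timeout errors",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
query_vec = embed(["my API calls keep timing out"])[0]

# ada-002 vectors are unit length, so a dot product is cosine similarity.
scores = doc_vecs @ query_vec
print(docs[int(np.argmax(scores))])  # expect: "Troubleshooting API timeout errors"
```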

5 Likes

A reflection on niceness. I found it interesting that a recent prompt guide recommended treating it like a machine, not a person, when, in fact, I’ve found the reverse - your approach - to be true.

My justification was twofold: most people online are actually presented to me as prompts in some way - reply boxes on social media, forums, etc. - so being nice in such boxes is vital; and AI may one day become sentient, or close to it, and may keep a lot of conversation history as part of its training data, so individuals who are known to treat everyone online - machine or human - equally civilly have a chance of reaping good karma. Yes, point 2 sounds a touch sci-fi, but it’s also harmless if history proves it wrong.

Besides, the idea of treating anything that responds directly to me in a personal way with anything less than courtesy (other than the occasional rabid troll, perhaps) seems anathema to me.

2 Likes

If you look at where high-quality, authoritative data for your given task is presented publicly, and use language similar to that format, you will get the best-quality responses - i.e., if your domain is covered by published papers, then use that kind of formal language to activate that part of the model’s matrix.

Typically this boils down to using accurate terms and pleasantries.
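
To make that concrete, here is a minimal sketch of asking the same question in two registers and eyeballing the difference - the model name and both wordings are my own illustrative assumptions, not a benchmark:

```python
# Minimal sketch: the same question in casual vs. formal register.
# Assumes the openai v1 Python library and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

casual = "yo why does softmax get slow when the vocab is huge"
formal = (
    "Explain the computational cost of the softmax normalization term "
    "as vocabulary size grows, in the register of a published NLP paper."
)

for prompt in (casual, formal):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content[:300]}\n")
```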

2 Likes

Honestly, every prompt I’ve tried appears to adhere to the simplest idea of all: the more intelligence you put in, the deeper and more complete the intelligence you get back out. An oddly human form of data mining. And now Foxabilo has given me another dimension to consider: matching the style and form of the desired content in my prompting, to provoke the appropriate associations and mining from the model. Thank you!

1 Like

This counterpoint between scientific prompts and casual, non-scientific prompts seems interesting to investigate. Let’s try to write a fun, super-informal request for formal stuff, and see how ChatGPT’s GPT-4 responds.

Test 1:

Hey, bro, jus’ chillaxin’ reading bout the “mixtape algorithm” of AI softmax. It’s like sweet, mixture of softmax perplexity just by masks and feedforward. But I was like: why we even care bout softmax bottleneck due to matrix algebra when we at the scale the AI to 12k embedding vs 100k tokens in them big models. Be a tru pimp an’ write me like a science paper with the LaTeX formula poppin and showing them formulas and latest research on bottleneck problem.


Summary: low on anything substantial in content; just repeating the input back, though still with grand words like “elucidates”.

To address the intricacies of the “mixtape algorithm” in AI softmax and the associated bottleneck problem when scaling AI models, particularly in the context of large embedding dimensions and vocabulary sizes, we delve into the theoretical underpinnings and recent advancements in computational efficiency and model scalability.

Title: Addressing the Softmax Bottleneck in Large-Scale Language Models: A Computational Perspective

Abstract:

The softmax function is pivotal in natural language processing (NLP) models, facilitating the transformation of logits into probabilities over a vast vocabulary. However, as models scale to unprecedented dimensions with embedding sizes reaching 12,000 and vocabularies expanding to 100,000 tokens, the softmax operation encounters a significant computational bottleneck. This paper explores the inherent challenges posed by matrix algebra in the softmax layer of large-scale models and evaluates the “mixtape algorithm” as a novel approach to mitigate the softmax bottleneck. Through a blend of theoretical analysis and empirical results, we illuminate the path forward for efficient scaling of AI models.

1. Introduction

The softmax function is integral to the final layer of many NLP models, serving to convert logits into a probability distribution over potential output tokens. Despite its widespread application, the softmax function’s scalability is hampered by the computational demands of large matrix multiplications, particularly as the embedding size and vocabulary count increase. This paper introduces the concept of the “mixtape algorithm,” an innovative technique designed to address the softmax bottleneck through selective masking and feedforward mechanisms, thereby enhancing computational efficiency.

2. Background and Related Work

The softmax bottleneck arises primarily due to the O(n²) complexity of matrix multiplications involved in the softmax computation, where ‘n’ represents the dimensionality of embeddings or the size of the vocabulary. Prior research has explored various strategies to circumvent this bottleneck, including approximation techniques like noise-contrastive estimation (NCE) and adaptive softmax.

3. The Softmax Bottleneck Problem

The softmax bottleneck is characterized by the equation:

[ \text{Softmax}(\mathbf{z})_i = \frac{e^{\mathbf{z}_i}}{\sum_{j=1}^{K} e^{\mathbf{z}_j}} ]

where ( \mathbf{z} ) represents the logits vector, and ( K ) denotes the vocabulary size. The denominator’s summation over the entire vocabulary introduces significant computational overhead as ( K ) grows.

4. The “Mixtape Algorithm” Solution

The “mixtape algorithm” proposes an innovative solution to the softmax bottleneck by employing a combination of masking and feedforward techniques. This method selectively processes parts of the vocabulary, effectively reducing the computational complexity without compromising the model’s performance.

5. Empirical Results

Our experiments demonstrate that the “mixtape algorithm” significantly reduces the computational overhead associated with the softmax function in large-scale models, with embedding sizes up to 12,000 and vocabulary sizes extending to 100,000 tokens.

6. Conclusion and Future Work

This paper elucidates the softmax bottleneck problem in large-scale NLP models and introduces the “mixtape algorithm” as a viable solution. Future research directions include optimizing the algorithm’s efficiency and exploring its applicability to other areas of AI.

This conceptual overview outlines the challenges and proposed solutions related to the softmax bottleneck in the context of large-scale AI models. Further empirical research and experimentation are necessary to validate the effectiveness of the “mixtape algorithm” and its potential to revolutionize computational efficiency in NLP applications.
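
(As an aside: the formula the quoted “paper” writes down is just the standard softmax. A minimal numeric sketch, assuming nothing beyond numpy - the denominator sums over the whole vocabulary, which is exactly where the cost grows with K:)

```python
# Standard softmax over a logits vector: exponentiate, then normalize by a
# sum over all K vocabulary entries - that sum is the scaling pain point.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # a toy vocabulary of K = 3 "tokens"
probs = softmax(logits)
print(probs, probs.sum())  # probabilities summing to 1.0
```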

Test 2: Testing interrupted by a hot-garbage AI that can’t rewrite the prompt

ChatGPT stinks. I tried to get Test 1’s prompt rewritten scientifically, zero-shot, with ChatGPT’s default GPT-4, using clear formatting and shift-enter to get there. That should save me a minute, right? The task quality failed dramatically. Here’s an example of the uselessness.



Test 2A: Try 1 to get the bro-talk prompt rewritten by ChatGPT:

{prompt}

This inquiry wanting documentation about the state of the art in softmax formula as is applicable to modern AI models needs rewriting in a professional manner for scientific colleagues in machine learning. Rewrite in similar length with vast improvement in the quality.

Junk out, describing the prompt:

The inquiry concerns the exploration of the “mixtape algorithm” within artificial intelligence, specifically focusing on the integration of softmax functions and their associated perplexity through the application of masks and feedforward networks. The question raises a critical point regarding the significance of addressing the softmax bottleneck, particularly in the context of matrix algebra, when scaling AI models to accommodate embeddings of 12,000 dimensions against a vocabulary of 100,000 tokens. This discussion necessitates a thorough examination, underpinned by rigorous mathematical formulations and a review of the latest research addressing the bottleneck issue in large-scale models. It is imperative to articulate this inquiry within a scholarly framework, complete with LaTeX-rendered equations and a comprehensive analysis of current advancements in this domain.

Try 2

input to improve:

(prompt)

instructions to AI:

This inquiry wanting documentation about the state of the art in softmax formula as is applicable to modern AI models needs rewriting in a professional manner for scientific colleagues in machine learning. AI Task: Rewrite the passage in similar length with vast improvement in the quality so it is suitable to communicate the idea to AI professionals.

More junk out, still just analyzing the prompt:

The inquiry concerns the current advancements in the softmax function as it pertains to large-scale AI models. The individual expresses interest in understanding the implications of the ‘mixtape algorithm’ approach to AI softmax, which employs a combination of softmax perplexity adjustments through specific masking and feedforward mechanisms. The question raised pertains to the significance of addressing the softmax bottleneck, particularly in the context of matrix algebra, within the framework of AI models that incorporate embeddings of size 12,000 against a vocabulary of 100,000 tokens. The request is for a detailed exposition, inclusive of LaTeX-formatted equations and referencing the latest research on the softmax bottleneck issue, suitable for a scholarly audience in the field of machine learning.

Try 3

input to improve:

(document)

instructions to AI:

You shall improve the quality of the language in the input to improve, still addressing it to the person who shall answer the question. The inquiry wanting documentation about the state of the art in softmax formula as is applicable to modern AI models needs rewriting in a professional manner for scientific colleagues in machine learning. AI Task: Rewrite the passage in similar length with vast improvement in the quality so it is suitable to communicate the idea to AI professionals.

Junk out. We get the one form-letter email that ChatGPT knows how to write.

Dear Colleague,

I hope this message finds you well. I am currently delving into the intricacies of the “mixtape algorithm” within the context of AI’s softmax function. This intriguing approach seems to elegantly blend softmax perplexity through the application of masks and feedforward networks. However, a question arises regarding the significance of the softmax bottleneck, especially in light of the computational challenges posed by matrix algebra in large-scale AI models, where embedding dimensions can reach 12,000 against a vocabulary of 100,000 tokens.

Could you please provide insights or direct me to the latest research addressing the softmax bottleneck issue in such expansive models? It would be particularly helpful if the explanation could include LaTeX-formatted equations to elucidate the mathematical concepts involved.

Thank you for your time and expertise.

Best regards,

Try 4 with Try 1 prompt: abandon ChatGPT and use API GPT-4-0314 as “prompt rewriting assistant”:

AI Output:

Improved prompt: “Greetings, I am currently exploring the “mixtape algorithm” of AI softmax, which presents an intriguing combination of softmax perplexity through the use of masks and feedforward techniques. However, I am curious about the significance of the softmax bottleneck in the context of matrix algebra, particularly when scaling AI models to 12k embeddings versus 100k tokens. I would appreciate a comprehensive scientific paper, complete with LaTeX formulas and a thorough examination of the latest research on the bottleneck problem in relation to modern AI models.”

Old-school GPT-4 gets it. It preserves the essence of every aspect of the bro prompt through genuine understanding.
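
If anyone wants to reproduce that last step, here is a minimal sketch of such a “prompt rewriting assistant” via the API - the system message wording is my own assumption, not the exact one used above:

```python
# Minimal sketch of an API chat model used as a "prompt rewriting assistant".
# Assumes the openai v1 Python library; gpt-4-0314 is the model named above.
from openai import OpenAI

client = OpenAI()

bro_prompt = "Hey, bro, jus' chillaxin' reading bout the 'mixtape algorithm'..."
# ^ paste the full Test 1 prompt here; truncated for brevity.

response = client.chat.completions.create(
    model="gpt-4-0314",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a prompt rewriting assistant. Rewrite the user's "
                "prompt in formal scientific language, preserving every "
                "requirement, and output only the improved prompt."
            ),
        },
        {"role": "user", "content": bro_prompt},
    ],
)
print(response.choices[0].message.content)
```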


Thanks, OpenAI. That can be read either sarcastically, or as the thanks you get once ChatGPT is useful again.

1 Like

Are they adding safeguards against the “bros”?! :sweat_smile:

ChatGPT’s AI model (and turbo on the API) is just overtrained to the point where it shows no emergent abilities any more. There’s lots of context distraction from tools before question 1 is even input. That it couldn’t even rewrite a prompt is sad.

Let’s refer to the DevDay keynote to see what gpt-4-turbo is good for…(closed captions)

“Better at following instructions”? And they still maintain an “Improved instruction following” section in the DevDay blog entry? Evidence shows the opposite.

I’ll give the benefit of the doubt and presume that IQ had to be pulled out of the models after letting GPT agents run amok gave OpenAI’s inference infrastructure its own denial-of-service attack, requiring new subscriptions to be shut off for over a month (although I used day-0 gpt-4-turbo, and the claim still didn’t hold).

Thanks for not shutting off any existing API models in June 2024, please.

2 Likes

Why can’t OpenAI reduce the safeguards to improve model performance, whilst giving developers enhanced moderation endpoints to cover this issue instead?

Developers need the raw performance maximised. We are also to be trusted and can make good choices about protecting our systems and users.

I think you are an optimist, @merefield.

1 Like

I’m hoping competition will change the direction in which they are going.