Is GPT-3 good at math? Let the answers speak for themselves!

Dear OpenAI Staff:

~~ ~~ ~~ ~~
This post discusses GPT-3’s ability to solve math questions. It takes a detailed look at a previously conducted study: I examine how the study was run, offer my own insights, and open the topic up for discussion so others can share their varying opinions and views.

This post contains 4 (pretty cool) screenshots of GPT-3 (Davinci) doing basic algebra, taken by me, and 1 screenshot by Qasim Munye. I reviewed what’s allowed regarding posting screenshots on social media, but I believe this is different since it is within the OpenAI Community, where this topic is freely discussed. Simply let me know if there are any changes that need to be made to the post!
-Nick
~~ ~~ ~~ ~~

Prologue: I’m fairly new here!

~~ ~~ ~~ ~~
Hello world!! I hit the ground running with an unavoidable smile after being completely taken aback by everyone’s AWESOME energy on here and all the MINDBLOWING projects being showcased and demo’ed. Seems like every project has a creativity level over 9000+. Although I may not get as creative as some of y’all, I seriously do hope to be a good contributor here. I have lots to learn and much to teach, with the ultimate goal of helping as many people as I can.

I totally forgot to introduce myself to the OpenAI Community, but I can get that done here really quick. My name is Nicholas Hickam (DutytoDevelop). I started my career & hobby at 11 when I saw people botting on Runescape, where a program plays Runescape for you with total autonomy, saving you from exerting energy on repetitive tasks. What most users saw as just another script to help them advance in a game, I saw as the building blocks for real-world systems that automate and control anything within reach. In addition to programming, I like using mathematics to describe how systems work, from how energy flows throughout the Universe to predicting the behavior of any system.

I’ve since acquired a Computer Science Bachelor’s degree with a minor in Mathematics, and like any new guy on the block, I’m constantly searching for people and places where we can solve huge problems and improve humanity. I like to believe that somewhere out there is an amazing team that helps solve problems well beyond the scope of humankind, like superheroes do!

~~ ~~ ~~ ~~

Overview:

~~ ~~ ~~ ~~
Alright, if any of the content below is inaccurate, biased, or just slightly off base, let me know so I can do what I need to do to fix any misconceptions. I am not an A.I. expert, but I will say that after getting acquainted with GPT-3 and all the amazing things it is capable of, I figured I’d make a catchy, informative post that looks into a study that really just took a slap at GPT-3 without much constructive criticism. I’m no expert, but I think there’s a learning experience in looking into this study either way.

After some digging around on the Internet, I started coming across articles saying that GPT-3 isn’t good at math, which was quite surprising considering it was able to correctly identify that a 10-year-old boy has asthma given a brief list of symptoms. Normally only a medical professional would have been able to make that diagnosis. Here’s the screenshot of GPT-3’s very accurate medical diagnosis:


Image courtesy of Qasim Munye, Twitter (Source). [Language model settings not provided.]

With the conflicting reports regarding GPT-3’s capabilities, I want to address the confusion I have and let this post be insightful to others who are also in the midst of learning about GPT-3. I came across a study that concluded that GPT-3 isn’t good at math. Before we jump to conclusions about what GPT-3 can and can’t do, I’d like to point out that GPT-3 picks up quite a lot from very little training data, so my personal stance is that GPT-3 wasn’t taught correctly during the study rather than that it has poor math skills. I apologize if this post seems rather spontaneous, but I believe the study largely jumped to conclusions, and if what I’ve seen holds up, then there’s certainly room for debate here.

I don’t have as much training data as the study in question, but the quality of the data I’ve looked over can certainly support the arguments I’ll present. This post aims not only to understand GPT-3 better, but also to help clear up misconceptions found on the Internet by sharing our own findings, so that what comes out of this discussion has value going forward.

Action Time:

~~ ~~ ~~ ~~
Alright, the study conducted by the researchers used a dataset consisting of 12,500 problems from high school math competitions. Those problems vary in difficulty, and each one falls into one of seven general areas of mathematics, ranging from geometry and algebra to number theory.

The researchers gave GPT-3 “scratch space” so that it could show its work before arriving at the final answer, or it could simply provide a solution directly; in both scenarios, GPT-3’s training time was limited, with more training time allowed on later runs. The study concluded that GPT-3 was only about 25% accurate at best (when no “scratch space” was used to show work and more training time was allotted on the dataset).

Now, 25% sounds quite low to me, so I wanted to get an idea of how GPT-3 performs when answering algebra questions I gave it, and which methods help it learn. The questions are expressed not only mathematically, with symbols and numerals, but also through natural language, where the math problem reads more like a word problem.

Here’s GPT-3 demonstrating substitution of variables to answer algebraic questions:

Davinci_Problem_Solving
Given 3 algebraic sample problems, we see that GPT-3 was able to correctly answer the 4th problem with ease.


This second run consisted of 13 questions, 5 containing only 1 algebraic variable and 8 containing 2 algebraic variables.

Given enough sample problems, we see GPT-3 demonstrating the ability to assign a numerical value to a corresponding symbol and then solve the algebraic equation through substitution. We simply gave GPT-3 enough sample problems for it to identify what the question is asking, given that the symbols have meaning within any equation. After reviewing the token probabilities, I do see that it has learned to assign values to x and y and solve the equation.

Now, although I did succeed in teaching GPT-3 basic algebra, there are a few caveats keeping GPT-3 from working alongside gifted math prodigy Terence Tao anytime soon. When you look at the screenshots above, you’ll see that the numbers I used were small and rarely went much beyond double digits. This is because GPT-3 saw any number with two or more digits as something other than just the numerical value of that number. It’s not a good start for proving my argument, but that’ll change.

During the development of this very post, I decided to try expressing the problems differently, giving the model the same data but phrased another way, to see how it performs. When the algebra problems were phrased in natural language, GPT-3 not only handled large numbers with ease, it also seemed to pick everything up faster while being less error-prone at the same time. Looking back, we may find the algebra problems written with symbols easier to read, but we can’t ignore the best way to interact with GPT-3. By definition, GPT-3 is a language model, so communicating with it seems to work best when the data is expressed in natural language. This can be done by simply rephrasing any question as a word problem. This isn’t only applicable to mathematics: applications where GPT-3 could help with chemistry or biology problems may have better chances right off the bat if communication stays as close to natural language as possible, improving the odds that GPT-3 understands and applies the information given to it:

When the same questions are formatted differently, matching the kind of data GPT-3 was trained on, we see a deeper understanding of algebra from GPT-3:


Expressing the algebra problems in written-out word form instead of numerals allowed GPT-3 to better understand what was being asked of it, and it did very well at adding large numbers despite them being spelled out.


Second screenshot showcasing GPT-3 answering algebra questions that it couldn’t answer before, once the questions are expressed in natural language.

It seems like we’ve done a few things to help GPT-3 not only understand algebra problems but also learn how to solve them confidently. We see that, given enough well-defined sample problems, it is most certainly possible to train GPT-3 to get the output you desire. The biggest factor I see here is getting enough well-formed training data that builds upon earlier ideas, so GPT-3 can slowly develop an understanding of math of various complexities.
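For anyone who wants to try this themselves, here is a rough sketch of the kind of few-shot, word-problem-style prompt I’m describing, using the openai Python package. The exact wording, settings, and expected completion are illustrative rather than the precise ones from my screenshots:

```python
import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

# Illustrative few-shot prompt: algebra phrased in natural language rather than symbols.
prompt = (
    "If x is equivalent to four, what is seven plus x? The answer is eleven.\n"
    "If x is equivalent to twelve, what is thirty plus x? The answer is forty-two.\n"
    "If x is equivalent to nine, what is fifteen plus x? The answer is"
)

response = openai.Completion.create(
    engine="davinci",   # base Davinci engine, as in the screenshots above
    prompt=prompt,
    max_tokens=5,
    temperature=0,      # deterministic output makes the arithmetic easy to check
    stop=["\n"],
)
print(response["choices"][0]["text"].strip())  # ideally "twenty-four"
```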

Let’s address the elephant in the room here: using a language model such as GPT-3 to solve math problems won’t be as effective as models specifically geared towards answering math problems, because, very simply, it was designed to be given text such as a phrase or a sentence and return a text completion in natural language, as stated by OpenAI. Mathematics is a science conveyed through a symbolic language, which is not the everyday language GPT-3 was trained on. What I want to convey in this analysis is whether GPT-3 can nonetheless apply its training data to show an effective understanding of algebra, and that the biggest reason the study got poor results, from my own understanding, is the formatting of the data the model was trained on.

What I see here is potential, lots of potential in fact. With that, I’d like to return to the study. Taking a quick peek at the dataset the university provided to GPT-3 shows they encountered the same issue I ran into when training GPT-3, and although it’s not devastatingly bad, the model has far less confidence and accuracy when it doesn’t learn correctly:

I’ve highlighted the areas that are problematic for GPT-3 when it goes over these questions. You can see that numerals and symbols are used quite often, and like I’ve said before, training with data that includes mathematical symbols and numerals will work in some cases. But by rephrasing the question in word form and replacing the equals sign with the word “equivalent,” you’ll notice improvement, though only if the model has first been properly trained on probability and all the other topics covered in the Khan Academy pre-training dataset.

Conclusion:

~~ ~~ ~~ ~~
Am I on the right track in saying that the study did not appear to train GPT-3 correctly for it to properly answer the questions? What have y’all done to help teach GPT-3 new concepts and ideas? I didn’t even need to fine-tune GPT-3 for it to quickly learn basic algebra, but would fine-tuning GPT-3 really give me that much more control over the training process? I’m sure I could teach it other mathematical concepts with the right data and time.

What is your view on this study and how I approached it? Which observations were spot-on, on the right track, or mostly wrong, so I can improve? I realize that there wasn’t much training data on my end, but the fact is that we did teach GPT-3 algebra, and I can only assume that if GPT-3 is properly trained, it can also learn other mathematical concepts.
~~ ~~ ~~ ~~

Summary:

  • Since this is my first time really contributing here, I would appreciate constructive criticism!
  • When fine-tuning a model, is too much training data a potential risk even if the data is well-defined?
  • Thank you for taking the time to read over this post. I put a lot of effort into this post and would like to improve it if possible.
  • P.S. For those who were already notified of this post several hours ago, I sincerely apologize. It was nowhere near complete, and there didn’t seem to be a way to delete it entirely, so I’m now reposting.
  • P.P.S. Cool links y’all may like (Actually free programming books & a comprehensive Python cheatsheet)

Don’t blame GPT-3, blame the teacher (not the creators, just to be clear)!

13 Likes

Thanks for taking the time to write the post! Here’s our publication from a while ago, tackling a harder part of mathematics, proofs: [2009.03393] Generative Language Modeling for Automated Theorem Proving

With regards to basic arithmetic you’re right: using more language generally helps, but as you can see from your own screenshots, the model, even when it’s right, is often uncertain about the correct answer. I remember a while ago somebody tried representing larger numbers with their digits delimited by an underscore, to ensure that each digit is tokenized as a single token. For example, 1557 would be represented as 1_5_5_7.
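A tiny helper along those lines might look like the snippet below; whether the underscore trick actually improves results for a given model and tokenizer is something you’d have to test empirically:

```python
import re

def delimit_digits(text: str) -> str:
    """Rewrite every multi-digit number so each digit stands alone,
    e.g. '1557' becomes '1_5_5_7', as described above."""
    return re.sub(r"\d{2,}", lambda m: "_".join(m.group(0)), text)

print(delimit_digits("What is 1557 plus 2304?"))
# -> What is 1_5_5_7 plus 2_3_0_4?
```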

4 Likes

Thank you @boris! Yes, I did notice that even with the well-defined data, GPT-3 was still uncertain when giving an answer. If I really wanted to, I could continue trying to teach GPT-3 more mathematics, but I don’t see much coming of it without fine-tuning, and I’m only just now getting into that topic. It’s a learning curve, for sure, but I’m happy to have discovered a much better way to phrase data for GPT-3.

GPT-3 absolutely nails converting numbers to their word form counterpart almost right away:
View this preset on the Playground here
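The preset itself isn’t reproduced here, but a hypothetical few-shot prompt in the same spirit might look like this:

```python
# Hypothetical prompt in the spirit of the Playground preset linked above:
# a couple of worked examples, then a number for the model to convert.
prompt = (
    "Convert each number into its English word form.\n\n"
    "Number: 418\n"
    "Words: four hundred eighteen\n\n"
    "Number: 7602\n"
    "Words: seven thousand six hundred two\n\n"
    "Number: 93\n"
    "Words:"
)
# Sending this to the Completions endpoint should ideally yield "ninety-three".
```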

this is an A+ quality post :heart_eyes: :ok_hand: thank you for sharing

1 Like

I think a more robust approach might be to use GPT-3’s or Codex’s language-modelling power to generalize and translate natural-language concepts into algorithms, and then run those algorithms to make the exact calculations. Something like this should work:
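The screenshot isn’t reproduced here, but the rough shape of that pipeline, sketched with the openai package and the Davinci-Codex engine mentioned later in this thread (the prompt wording, the add function, and the exec step are my own illustration, not the original example), would be:

```python
import openai  # assumes an API key with Codex access

task = ("Write a Python function add(a, b) that adds two non-negative integers "
        "given as strings of digits, without converting the whole strings with int().")

completion = openai.Completion.create(
    engine="davinci-codex",
    prompt=f"# {task}\n",
    max_tokens=256,
    temperature=0,
)
generated_code = completion["choices"][0]["text"]

# Run the generated algorithm for the exact calculation.
# (Review generated code before exec'ing it outside a throwaway sandbox, and note
# this assumes the model really did emit a working add() function.)
namespace = {}
exec(generated_code, namespace)
print(namespace["add"]("123456789123456789", "987654321987654321"))
```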

2 Likes

All the examples are SMALL INTEGERS, in particular positive integers.

Once you move to reals or large integers, none of the known neural nets performs anywhere worthy of notice!

The rest of the examples are arithmetic-based, including the arXiv paper, which again is quite simple, and you might not need any AI to handle those at all!

To keep hyping these simple examples over and over only does injustice to the OpenAI techs! Although I have no commitment to OpenAI and vice versa, I just feel these hyped-up simple examples are going to drive sophisticated experts in the field away.

I will post examples tonight, which I have been planning to do for some weeks, so you understand what is in demand.

I see from my experience where OpenAI fits in math, algebra, and geometry, but we need a better forum to focus energy on sophisticated applications.

Obviously these statements are not well received, but someone needs to say them.

Dara

Hello @dara,

First of all, thank you for your input! I’d like to hear your reasoning since this is exactly why I created this post. I see now that I should have gone into more detail regarding a few key points I made to address the counter-arguments you gave me. As a side-note, I don’t believe this post does any injustice to the OpenAI crew. This post is meant to be a learning experience built upon and discussed by whoever wishes to provide their insights and perspective regarding GPT-3 and what GPT-3 is capable of.

:stop_sign: Clarification:

Since I didn’t communicate this as clearly as I would’ve liked, allow me to further explain why I used the training data I did and why I believe GPT-3 is capable of understanding math to some degree. My analysis aimed to show that GPT-3 has the core ability needed to understand a math problem and solve it with correct reasoning. I designed the practice problems to see if GPT-3 was able to learn one area of mathematics; once the language model demonstrates that, then theoretically what’s to stop it from learning more mathematical concepts?

During my analysis, I did have limitations, but I did my best to work with what I had. Please review the limitations I faced, as they may help explain why the training data wasn’t the best, though I believe it was still valuable:

  1. I saw that GPT-3 had a tough time understanding numbers; however, just like natural language, there is a pattern in the way numbers are expressed (word form, numerals, or any other form). I’m sure that with the right training data, GPT-3 can accurately and confidently quantify the numbers expressed to it. I did look into how I could teach GPT-3 to quantify numbers properly, but I found that I didn’t have as good an understanding of how GPT-3 interprets input as I do now, and I found myself re-constructing my training data over and over. On top of that, I realized that I would need far more training data as well. I’m sure there’s a way to tackle it, but I have other projects I’d like to work on with GPT-3 and other topics to explore, so I want to stay as productive as possible.

  2. Going forward, let’s assume that we successfully trained GPT-3 to properly quantify any number given to it. We then need to see if GPT-3 can pick up and understand basic arithmetic and algebra. We do see that GPT-3 was able to learn how to solve the problem. However, since I didn’t train GPT-3 to quantify numbers confidently, as @boris pointed out, the language model had a harder time solving the equations because it wasn’t confident about the numbers we were giving it. The screenshots showing the completion with the full token-probability view reveal that the least confident parts of the input were the numbers (the snippet below shows how these per-token probabilities can be pulled through the API). Now, I’m unsure how much of this was due to the model adjusting to a change in the pattern with each new sample problem, but knowing that GPT-3 does have trouble quantifying numbers, I’m sure that affected the confidence level.
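(For reference, the same per-token probabilities shown in the Playground can be requested through the API with the logprobs parameter; the prompt below is just an illustrative example, not one of my actual training runs.)

```python
import openai  # assumes the openai package and an API key in the environment

# Request per-token log probabilities so we can see where the model is least confident,
# mirroring the probability view in the Playground screenshots.
response = openai.Completion.create(
    engine="davinci",
    prompt="If x is equivalent to nine, what is fifteen plus x? The answer is",
    max_tokens=3,
    temperature=0,
    logprobs=5,  # also return the top 5 alternatives for each generated token
)

choice = response["choices"][0]
for token, logprob in zip(choice["logprobs"]["tokens"],
                          choice["logprobs"]["token_logprobs"]):
    print(f"{token!r}: {logprob:.3f}")
```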

:dna: My Main Point:

When GPT-3 hasn’t learned how numbers are supposed to be quantified and we still introduce a higher-level concept like arithmetic prematurely, we automatically inherit the problems of the lower-level concept. Sure, you can feed it more training data to account for the loss, but I suspect it would take considerably more data to train the model accurately.

Given well-defined training data, i.e. a series of math problems, GPT-3 can understand the structure of the problem well enough to correctly identify how every part of the problem affects the final answer. With that, any similar problem presented to GPT-3 will already be understood, since the model has been trained enough to know how the parts interact to produce the correct answer!

By ensuring that lower-level concepts are properly understood by the model, we prevent further errors when training GPT-3 on higher-level concepts. Yes, I am aware that I could’ve trained the model better, but that would’ve taken more time and resources to perfect, and I was trying to show y’all that my data could still get the main point across. Since we were able to get the model to demonstrate a higher-level concept such as basic algebra, so long as we didn’t use big numbers, it suggests that had we perfected the lower-level concept, we would’ve seen the model do basic algebra with much bigger numbers, and more confidently.

Also, I’m not saying there’s only one way of training GPT-3; I’m sure there are several other ways that would allow the model to learn a concept! The key point is that as long as the model is learning correctly, GPT-3 should be able to continuously build upon previous concepts to learn more advanced topics. My examples were meant to conceptualize how GPT-3 could be taught in order to correctly develop an understanding of higher-level mathematics.

:bookmark: Summary:

To sum up what I’m saying, here is the rundown of why I believe GPT-3 can be good at math:

  1. If we look past the poor confidence caused by my not teaching GPT-3 to properly quantify numbers, we see that GPT-3 was pretty good at answering basic algebra questions with the numbers it could quantify. I’m sure that had we successfully taught GPT-3 to quantify numbers first, it would be much easier to teach it higher-level mathematical concepts.

  2. With the right training data, I’m sure we can teach GPT-3 how to quantify numbers correctly so that it can go on to learn more advanced topics.

  3. If it can learn one mathematical concept, why can’t it learn another one? Theoretically, we should be able to build upon multiple concepts given that we have trained them correctly.

:question: Questions I Now Have:

  1. What are some things that I did or didn’t do that may have hindered its learning process when answering the math questions I gave it? Please exclude the example of teaching GPT-3 to quantify numbers correctly.

  2. Since I only fed GPT-3 the question “If x is equivalent to [first number], what is [second number] plus x?”, I found that giving GPT-3 the same question worded differently slightly threw off its overall confidence when answering. When training GPT-3 to get good at a concept, would feeding it training data with the same problems simply worded differently help the language model separate the concept from the structure of the question itself? (See the sketch after this list for the kind of variation I mean.)

  3. In addition to the projects I have planned, are there any topics that you’d like to see explored?

  4. Are there any topics that you found hard or almost impossible to teach GPT-3 that you’d like to share?
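Here is the kind of wording variation I have in mind for question 2, as a small, purely illustrative helper (the templates are my own, not training data I actually used):

```python
# Emit the same algebra fact under several surface forms, so the model has to separate
# the concept (substitution and addition) from any single phrasing of the question.
templates = [
    "If x is equivalent to {a}, what is {b} plus x? The answer is {c}.",
    "Let x equal {a}. Adding {b} to x gives {c}.",
    "Suppose x is {a}. Then x plus {b} comes out to {c}.",
]

def make_examples(a, b):
    return [t.format(a=a, b=b, c=a + b) for t in templates]

for line in make_examples(4, 7):
    print(line)
```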

:robot: Please feel free to share your thoughts and ideas, fellow community members!

It’s interesting to understand the current downsides. With these types of problems I’d expect a sufficient amount of data and fine-tuning to be able to solve the problem efficiently at some point, as demonstrated on the more difficult mathematical problem of theorem proving in the publication I shared above: [2009.03393] Generative Language Modeling for Automated Theorem Proving

1 Like

I am sorry if I was discouraging; that was not my purpose. Obviously, if I did not see value in your work I would not have commented.

A. We need to UNDERSTAND what GPT-3 can do and what GPT-3 cannot do.

OpenAI IMHO needs 1000s of hardcore application and systems people to develop, not fans. But that is just my single opinion and should be worth a bit, but not much.

B. If you pay close attention to any of the math examples around here, you can see they are of an arithmetic nature: integer forms or simple single-variable polynomial forms. Those do not require any AI solvers; there are hundreds of well-known algorithms that handle them.

  1. Try using these numbers in functional expression form:

sqrt(1.03001987)*cos(1.9099875)*exp(-0.0135789)

I would be curious to see how GPT-3 or any other neural-net application is able to process these expressions. Mind you, I know how to do these myself using alternative systems! So the question is to understand the abilities of GPT-3, not to bang on your good work.

I will post some stuff in a separate thread here, so you can bang on my work :slight_smile:

Personally I think you are moving in the right direction, and again I did not mean to be discouraging.

Dara

4 Likes

I added some stuff from our side of the wall

These are Free Form Generative textual interfaces which output code+geometry and so on.

Review this please:

annulus&&x>y

This is a partial mathematical phrase that assumes the generative AI has a reference for annulus, Boolean algebra, and x>y.

Note that user could have input:

-1.00238765*x>sqrt(3.098779)*y

I do not know any Neural Net algorithm which could:

  1. Strip off the English and work on annulus&&x>y as an algebraic expression. I have written my own elaborate Recurring Graph Neural Nets that do that, and it was quite hard and required a custom learned net.

  2. I’d like to know how GPT-3 could handle linearized mathematical expressions.

However, I do know how to add custom learned nets to GPT-3 if I had under-the-hood knowledge, but somehow I find no support from OpenAI to build such systems.

So far as I know, OpenAI’s open source is officially closed by Microsoft, so it’s not clear to me who to call at OpenAI.

Dara

@DutytoDevelop Very interesting post, thank you. I think that while GPT-3 may be good at simple arithmetic and even algebra, the question I would like answered is whether it has actually internalized the idea of what it means to sum two numbers together. In theory, for it to actually understand the algorithm of addition, it should be able to understand how to add in general. If you ask it to add two-digit numbers, for example, GPT-3 may have seen in its vast amount of training data all possible combinations of two-digit arithmetic, since there are only so many ways to arrange two-digit numbers for addition. What would be more impressive would be to see it correctly add two ten-digit numbers, or in general numbers with enough digits that it could not possibly have seen those exact numbers before. Then we would have to admit that GPT-3 knows what it means to do addition.

1 Like

As has been pointed out, GPT-3 does have problems with direct arithmetical calculations, especially with big numbers or floating-point numbers. But it’s really good at language modelling, so why not use this strong side of it to tackle the other problems? Look at this example:

This code (generated by Codex) does exactly what it was told to do: addition of numbers of arbitrary length.
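The generated code itself isn’t reproduced here, but the idea it implements is ordinary digit-by-digit string addition. An illustrative, hand-written version of the same technique (not the exact Codex output) looks like this:

```python
def add_strings(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings, one column at a time,
    so the result is exact regardless of length."""
    result = []
    carry = 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        digit_a = int(a[i]) if i >= 0 else 0
        digit_b = int(b[j]) if j >= 0 else 0
        carry, digit = divmod(digit_a + digit_b + carry, 10)
        result.append(str(digit))
        i -= 1
        j -= 1
    return "".join(reversed(result))

print(add_strings("987654321987654321", "123456789123456789"))  # 1111111111111111110
```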

2 Likes

The above algorithm (3 steps of addition) was taken from the internet, so it was relatively easy for Codex to convert it into code. But what if we ask GPT-3 to create an algorithm by itself? So here we go:

In this example I just asked it to make “An algorithm of arithmetic addition of two numbers. Consider the numbers as strings of digit literals.” All the rest was generated. Notice that the second piece of code is quite similar to the first one!

And in the end, I asked it to explain the code. That was funny :slight_smile: But I think it explained it quite well. Fascinating, isn’t it?

Just to clarify about my last example, I used Davinci-Codex everywhere, and I ran it in two steps. The first was to generate the code from the prompt “An algorithm of arithmetic addition of two numbers. Consider the numbers as strings of digit literals.” The second step was to explain the code, so I just added “Explain the algorithm:” at the end and it produced all the rest. Here is the preset (OpenAI API)

1 Like

That is impressive. Did you use the same session for both results? In other words, is it possible that it generated the first result by following the instructions in the header, and for the second result just reused the same response?

1 Like

Thanks for the input boris!

@dara,

No worries, I respect the constructive criticism and open discussion that we’re having.

We need to UNDERSTAND what GPT-3 can do and what GPT-3 cannot do.

Essentially you’re wanting to find out how GPT-3 processes input on a token-by-token basis so that you know how GPT-3 will answer.

Try using these numbers in functional expression form:
sqrt(1.03001987)*cos(1.9099875)*exp(-0.0135789)

This is a high-level mathematical concept. In order for GPT-3 to understand this, you’d need to teach it how to:

  1. Properly handle decimal numbers
  2. Perceive the sqrt() function correctly and, provided the training data has clear-cut steps needed for it to understand how to calculate square root functions longhand, produce the correct answer
  3. Same as Step 2, but for the cosine and exponential functions

Assuming there’s enough training data that is well-defined, I don’t see why GPT-3 wouldn’t be able to perform the calculation that you provided.
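For reference, here is the target value such training would need to reproduce, computed the conventional way with Python’s math module rather than by GPT-3:

```python
import math

# Dara's test expression, evaluated numerically as a reference point.
value = math.sqrt(1.03001987) * math.cos(1.9099875) * math.exp(-0.0135789)
print(value)  # roughly -0.333
```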

Hello @jpoirier! Like @m-a.schenk stated, memory between sessions is not possible for GPT-3. However, I’d like to clarify that I wasn’t looking to have GPT-3 generate an algorithm to solve equations; instead, I’m aiming to teach GPT-3 itself how to quantify numbers and understand mathematical concepts, so that given the right training data and a problem of the kind the training data shows how to solve, GPT-3 can solve it with the correct problem-solving steps.

Even though memory between sessions is not possible, you can fine-tune models with your own datasets so that the model does have an understanding of those concepts going forward instead of having to teach it over and over again. As stated by OpenAI here, you would need to send a request to fine-tune the Davinci model to the OpenAI team.
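If you do go the fine-tuning route, the training file is just prompt/completion pairs in JSONL form. Here is a minimal sketch of what an algebra dataset like mine might look like (the wording and the "->" separator are illustrative choices, not a required format):

```python
import json

# A couple of training examples in the JSONL prompt/completion layout that
# OpenAI's fine-tuning endpoint expects; note the leading space in each completion.
examples = [
    {"prompt": "If x is equivalent to four, what is seven plus x? ->",
     "completion": " eleven"},
    {"prompt": "If x is equivalent to twelve, what is thirty plus x? ->",
     "completion": " forty-two"},
]

with open("algebra_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```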

I do not think anyone could teach GPT-3 anything substantial about algebras and so on. That I know for a fact about any Neural Network model.

That is why people are suggesting and working on alternative techs, such as ourselves.

However, that was not my proposal. I need to know if I could build a Hybrid system where GPT-3 does what it is good at, and we add our techs dealing with the algebraic and geometrical systems.

I seem not able to get anyone here or at Open-AI to discuss openly, pun intended.

Nor are we interested in any learning systems to solve equations, though we have a number of innovative algorithms for that purpose.

The equation solvers are for another purpose.

When you say something like below about a Landscape:

"add some trees to region … " and some being some form of Existential Quantifier i.e. you do not tell the AI to learn how to place trees in some way of its own, you give the AI Semantics to attach to the word some.

If we could do that, then GPT Hybrid systems would fly to the moon :slight_smile:

Otherwise GPT is stuck constantly needing to learn every operation for its semantics.

You do that in your brain and language when you learn from childhood:

“Snakes are beautiful.”

“Snakes are dangerous.”

"Beautiful" is not semantics you learn!

"Dangerous" is semantics you are taught by your parents and the environment.

But these very similar sentences require two disparate Semantics in order to make sense.

The weakness here is that everyone is trying to make GPT learn new stuff (semantics), and even if it could, it is a bad idea.

Dara

In my second example (the one I gave the preset link for), there was no algorithm in the header. The explanation of the code was generated from the code itself, which suggests that it has a good grasp of the semantic concepts related to “addition”; it just has to be presented in the correct format.

I agree that hybrid systems are the way forward, at least for now, because in production-ready environments we need explainability and verifiability of these systems. So I’m advocating for neuro-symbolic hybrids. What I’m currently trying to implement is a system that would effectively integrate knowledge graphs with neuro-symbolic reasoning. GPT-3 plays an important role here as a means to disambiguate concepts and transform them into logical forms for the “reasoner” to work with. Ideally, with the symbolic subsystem we are able to achieve continuous learning as well as long-/short-term memory. Also, leveraging First Order Logic or Real Valued Logic, we are able to achieve sound and verifiable reasoning.

1 Like

We are working on similar concepts.

Neuro-symbolic hybrids: we have created such hybrids in Wolfram Mathematica and they show great promise. They are Recurring Graph Networks; the graphs are the expression-tree structures for the operators, functions, and arguments … all seems to be working.

BUT even at 90%+ accuracy, the expression trees the neuro-symbolic hybrid produces sometimes contain syntax errors. We can do much better soon, though.

As for memory, unless I misunderstand it, we can provide Content Associative Memory stored in our Symbolic Cloud for arbitrary random access. These Content Associative Memories are ACTIVE, i.e. they can have their own learning algorithms or other update mechanisms.

To me the role of GPT is clear; somehow I get confused by what is presented on the OpenAI sites and this forum.
D

1 Like