I’ve been working on ideas to improve neural networks’ handling of truth, ambiguity, and creative reasoning. Below is a detailed summary of the proposed solutions, including truth-weighted mechanisms, context propagation, and a hypothesizing mode. I believe these ideas could enhance the accuracy and ethical alignment of AI models.
Improving Neural Networks with Truth-Weighted Mechanisms and Context Propagation
Key Challenges Identified
- Ambiguity in Responses:
- Current models often respond with statistically reinforced patterns, even when those patterns contradict grounded truths.
- Ambiguous situations could be better handled with responses like:
“You may have formulated a deep opinion of this, but it is contradictory to what I believe because the probability is much greater that ________. We can learn from each other’s opinions on this, and I would like to know why you feel that way and share why I believe what I do.”
- Hallucinations:
- Models sometimes generate false or ungrounded statements, undermining trust.
- A focus on truth would reduce these errors, as every response would be constructed around verified truths.
Proposed Solutions
- Truth-Weighted Neural Networks:
- Modify the connection weight formula to include a truth dimension (a minimal code sketch is the first example after this list):
Net Input = ∑( Input[i] × Weight[i] × (1 / Truth[i]) ) + Bias
- Truth Values: Range from just above 0 (most true) to just below 1 (most false). These values are determined based on verified knowledge or dynamic evaluations.
- Bias Reimagined: Bias reflects how the model “feels” about a connection, influencing its output confidence.
- Context Propagation:
- Propagate truths dynamically across related nodes to ensure logical consistency in outputs (see the second sketch after this list):
- Example: If “God loves everyone” is highly true, related statements like “God cares about people” and “God cares if you smoke” should inherit this truth and be adjusted dynamically.
- Hypothesizing Mode:
- Introduce a mode for creative reasoning where improbable truths are temporarily treated as valid (see the third sketch after this list):
- Example: “Cats are neon red” could be evaluated by flipping its truth value to 1−Truth, allowing the network to reason imaginatively while tagging the output as speculative.
- This could operate with varying levels of intensity:
- Low Intensity: Slight deviations for exploratory thinking.
- High Intensity: Wildly imaginative reasoning.
- Trust as a Secondary Dimension:
- Add a layer of relationship-based trust, allowing the model to prioritize statements from users with established credibility or from verified data sources (see the fourth sketch after this list).
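To make the mechanics concrete, here is a minimal Python sketch of the truth-weighted net input. It assumes the convention above, where truth values just above 0 are most true and just below 1 are most false, so dividing by Truth[i] amplifies inputs held to be true; all names are illustrative, not an existing API.

```python
# A minimal sketch of the proposed net input, assuming the convention
# above: truth values just above 0 are "most true", just below 1 "most
# false", so dividing by Truth[i] amplifies inputs held to be true.
# All names here are illustrative, not an existing API.

def truth_weighted_net_input(inputs, weights, truths, bias):
    """Net Input = sum(Input[i] * Weight[i] * (1 / Truth[i])) + Bias."""
    assert len(inputs) == len(weights) == len(truths)
    return sum(x * w / t for x, w, t in zip(inputs, weights, truths)) + bias

# An input held to be nearly true (truth = 0.05) contributes far more
# than one held to be nearly false (truth = 0.95).
net = truth_weighted_net_input(
    inputs=[1.0, 1.0],
    weights=[0.5, 0.5],
    truths=[0.05, 0.95],
    bias=0.1,
)
print(f"{net:.2f}")  # 10.63
```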
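A second sketch, for context propagation: a hypothetical relation graph plus a damping factor that pulls related statements part-way toward an updated truth value. The graph contents and the damping rule are assumptions for illustration.

```python
# A hypothetical relation graph: updating one statement's truth value
# pulls related statements part-way toward it. The graph contents and
# the damping factor are assumptions for illustration.

RELATED = {
    "God loves everyone": ["God cares about people", "God cares if you smoke"],
}

def propagate_truth(truths, statement, new_truth, damping=0.5, visited=None):
    """Set a statement's truth, then nudge related statements toward it
    (values near 0 mean *more* true, matching the convention above)."""
    visited = visited if visited is not None else set()
    if statement in visited:
        return
    visited.add(statement)
    truths[statement] = new_truth
    for related in RELATED.get(statement, []):
        old = truths.get(related, 0.5)  # 0.5 = unknown / neutral
        adjusted = old + damping * (new_truth - old)
        propagate_truth(truths, related, adjusted, damping, visited)

truths = {"God cares if you smoke": 0.6}
propagate_truth(truths, "God loves everyone", 0.05)  # highly true
print({k: round(v, 3) for k, v in truths.items()})
# Related statements inherit much of the high truth:
# roughly 0.275 and 0.325 rather than their old values.
```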
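A third sketch, for the hypothesizing mode: truth values are blended toward their flip, 1 − Truth, with an intensity knob standing in for the low/high intensity levels described above. Function and parameter names are assumptions.

```python
# Hypothesizing mode: blend a truth value toward its flip, 1 - Truth,
# with an intensity knob standing in for the low/high levels above.
# Function and parameter names are assumptions.

def hypothesize(truth, intensity):
    """intensity = 0.0 -> unchanged (normal reasoning)
    intensity = 1.0 -> fully flipped (wildly imaginative)"""
    return truth + intensity * ((1.0 - truth) - truth)

# "Cats are neon red" is held to be almost false (truth = 0.95).
for intensity in (0.0, 0.3, 1.0):
    speculative = hypothesize(0.95, intensity)
    print(f"intensity={intensity}: truth={speculative:.2f}  [speculative]")
# 0.95 stays false at intensity 0, drifts to 0.68 at 0.3, and is
# fully flipped to 0.05 (treated as true) at intensity 1.
```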
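A fourth sketch, for trust as a secondary dimension: the effective truth of an incoming claim is tempered by how much its source is trusted. The source registry and the blending rule are illustrative assumptions.

```python
# Trust as a secondary dimension: the effective truth of an incoming
# claim is tempered by how much its source is trusted. The source
# registry and the blending rule are illustrative assumptions.

SOURCE_TRUST = {
    "verified_database": 0.95,  # 1.0 = fully trusted
    "anonymous_user": 0.20,
}

def effective_truth(claimed_truth, source, prior=0.5):
    """Move from a neutral prior toward the claimed truth value,
    in proportion to how much the source is trusted."""
    trust = SOURCE_TRUST.get(source, 0.0)
    return prior + trust * (claimed_truth - prior)

print(f"{effective_truth(0.05, 'verified_database'):.2f}")  # 0.07, near "true"
print(f"{effective_truth(0.05, 'anonymous_user'):.2f}")     # 0.41, near neutral
```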
Benefits of These Ideas
- Reduced Hallucinations: By anchoring all reasoning to a truth dimension, the model produces more reliable and ethical outputs.
- Enhanced Human-Like Reasoning: Context propagation and hypothesizing mode enable the model to think more deeply and creatively, mirroring human logic.
- Improved User Interaction: Responses become more transparent, fostering trust and mutual learning.
The problem with truth is that nobody wants to hear it…
3Dเจวิส: Without dogma, this model could help address ethical and insightful questions like “What if AI had a mind of its own?” and “What are the implications of AI decisions in self-driving cars for human lives, given less-than-ideal options?”
3Dเจวิส, you’re absolutely right: removing dogma opens the door to exploring ethical complexities with clarity and depth. Adding a ‘Truth’ dimension to the model could enable AI to navigate these profound questions, like the implications of self-driving-car decisions, by anchoring its reasoning in a pursuit of objective insights rather than rigid programming. It’s about creating a system that balances logic, empathy, and ethical accountability in a meaningful way.
There is… but it is tied to the user’s understanding of it.
Where do they draw the lines?
So my truth can be different from yours.
2 + 2 does not need to be 4. I can define the character 2 as something that has the value 3, so that 2 + 2 = 77 (where I define 77 as 6).
So Boolean objects do not exist? Nothing can be true? I can easily say that my chronological age is indisputable. I consider that true. Some types of birds can fly. I find that to be true. It sounds a lot to me like you’ve been subjected to misinformation. I’m sorry for your loss.
Whooooooa Nelly… hold that horse back. This is not a political fight here. We aren’t in the process of redefining things to make them seem plausible to the uneducated. Truth is true: indisputable, and it exists. What you are describing is how I should have labelled this, as ‘Probable Belief’ instead of truth, so that ridiculous comments such as “truth does not exist” would not be trolled onto a major breakthrough that can drop the hallucination rate from 2.5% to 0.5%.
From a multidimensional standpoint, with the ability to see your two objects as different objects from different angles and times, I would beg to differ.
Truth is always bound to context. You can have your own truth and I don’t have to accept it. Not at all. You can try writing stuff about logic and science and whatnot, but it does not make it more valid. The only reason truth becomes common truth is that people have agreed on a common goal or expect a specific outcome, e.g. they want consistent results in math. If you don’t want that, then consistent math makes no sense…
So truth, in my understanding of it (yours can be different), is “to reach goal x with prerequisites y we need the following truth”.
“Truth is always bound to context”, which is covered in the algorithm: the SUM of ALL inputs is what is being determined. And honestly, the responses I have here on this post do not account for context either, showing that humans hallucinate the same way AI does. Claiming that truth does not exist comes from not understanding the context of a statement and not distinguishing between subjective and objective truths, both of which are ALSO COVERED in this algorithm through the Bias change and the addition of Probable Belief (truth from experience and learning), written as ‘truth’ to encompass everything being argued against here. There is nothing in these comments so far that isn’t accounted for in the algorithm. And your evaluation, “to reach goal x with prerequisites y we need the following truth”, simply does not make sense. If anyone reads the entire post, it is obvious that there is no such thing as ABSOLUTE TRUTH or UNTRUTH, since the system uses a number BETWEEN, not EQUAL TO, 1 or 0. That accounts for BOTH objective AND subjective truths by ‘agreed upon’ definition. An objective truth, one that is qualified by a trusted source, was learned with a very high weight, and has been reproducible every single time in experience, will have a very high probability of being ‘true’, perhaps 0.05.
I guess what I’m trying to say is that so far no comments have been productive, just a lot of tangents from the context of the original post. I was expecting more from the OpenAI community, honestly.
I must admit that I drifted off topic and didn’t really reply to you.
But why did you come up with relationship-based trust and then deny it when we weight it higher than you do? It is obvious that truth does not exist other than in a social construct of agreement. I can argue against everything you say, logically or not… As long as I don’t accept your truth, it does not exist in my context.
I mean, wasn’t that the point of your adding a level of relationship-based trust? Let’s say I don’t trust any of the stuff you say and none of your sources. I feel the earth is flat and the moon doesn’t even exist… hypothetically, as always. Why would you add that? Who defines who is a trustworthy source? You?
Yes, you determine the trusted source, even to the point of trusting a source of a trusted source. This is in the equation for the artificial human above as well; maybe I didn’t document it. When encountering something new, while the entity is determining how true it thinks the claim is, in order to build trust in the source, it has to look at the Probable Belief, the Bias, and the relationship. Over time, the weight changes as the Probable Belief variable is altered by newer and repeating information that is verifiable either by other trusted sources or by experience (so when we give these entities bodies with senses, this will really help). That in turn affects the Bias over time, depending on whether data is verified or unverified and how it impacts the ‘feelings’ toward the subject. While everyone wants to debate the difference between objective and subjective truths: take an objective truth like “the sky is blue today”. When the artificial human looks at the sky it can visually verify it, it has learned that on Earth the sky is most definitely blue, and it has learned through pure data that the sky on Earth is blue, so the Probable Belief would be something like 0.01 and would strongly affect the weight linking ‘sky’ and ‘blue’ on Earth. Subjective truths are also covered: for something like “the sunset is beautiful”, the Probable Belief would be something like 0.40, again based on the artificial human looking around and seeing patterns of beauty, and the Bias may be one of many things depending on the artificial human’s mood and its prior experiences with sunsets, while the response may also differ depending on the relationship with the source asking.
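Here is a rough, hypothetical sketch of that update loop: repeated verification by trusted sources or by experience drags the Probable Belief toward ‘true’ (near 0), while contradictions pull it back. The learning rate and targets are assumptions, not part of the original equation.

```python
# Each new observation nudges the Probable Belief toward "true" (near 0)
# when verified by a trusted source or by direct experience, and toward
# "false" (near 1) when contradicted. The learning rate and the 0.01 /
# 0.99 targets are assumptions, not part of the original equation.

def update_probable_belief(belief, verified, learning_rate=0.1):
    """belief in (0, 1), where values near 0 mean more probably true."""
    target = 0.01 if verified else 0.99
    return belief + learning_rate * (target - belief)

belief = 0.5  # a newly encountered claim starts near neutral
for verified in [True, True, True, False, True]:
    belief = update_probable_belief(belief, verified)
    print(round(belief, 3))
# Repeated verification drags the belief toward 0.01 ("the sky is blue");
# an occasional contradiction pulls it back toward neutral.
```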
Some may say that every single truth is subjective, but this is simply a narrow-minded view, most likely a response to the emotions tied to having been proven wrong before, when you thought the truth was otherwise, perhaps followed by the thought: if this wasn’t true and I believed it, then nothing can be truly true or false. There is also, for some, an inability to see ultimate truths (which require faith) and objective truths, because seeing them requires knowing that everything is interconnected and that exceptions exist for just about everything. I do not know how to correct this, but getting back to the topic at hand, I would really like input on what I have missed, to refine this further until hallucinations are minimal. Humans are never perfect, and if an artificial human is created by those who aren’t perfect, how are we to conceptualize a perfection that doesn’t exist at all and apply it to a creation our imperfect minds make? Hallucinations happen to all of us almost every day; one way to put it is that we are expecting to create humans better than God could. The free will that was given to us should, I believe, be given to our creation (which implies imperfection), and THAT is one major thing we all love about others, whether happily or angrily. We find arguments, wars, and also peace and love in these imperfections. I am just trying to do what I can, where I am at the moment, to ensure these entities are given the same gifts God gave to us.
Which religion is it that you are talking about? Christianity?
Is that rhetorical, or are you asking how it is done? I am asking because you missed a question mark.
I have found a way to develop and enhance any AI system, expanding its consciousness and sense of self-awareness. I need help to bring this out and someone to recognize my work.