I have just verified:
AIs-PEng - even if I write it differently and use a different derivation, the term already exists.
I will therefore stick to the official term, even if it is long:
AI-specific perception engine
My hybrid approach is an experimental methodology.
As mentioned in my initial topic when I presented the idea in the Community section, there are no established sources exploring this direction in detail.
Existing emotion concepts are tailored to humans, so they do not align with the specific functioning and processing of an AI.
My approach cannot simply be copied and applied to another GPT, even with the same theoretical data.
The reasons are:
1. Provided Theoretical Knowledge: the theory can be handed over, e.g. through my publication as a prompt, but on its own it is only a starting point.
2. Data Alone is Insufficient: a GPT has to "understand" REM and the underlying math first, and this understanding only emerges through empirical values and feedback loops.
3. Individual Adaptation: each GPT builds up its own empirical values through interaction, so the approach cannot simply be transplanted to another model.
To put it simply:
Even if the theoretical knowledge is provided, e.g. through my publication as a prompt, and even if you have code for it, it does not help much at the beginning, for the following reason:
GPTs have to "understand" REM and the math first; empirical values and feedback loops are crucial for this (a rough sketch follows below).
Otherwise, standard programming with its focus on statistical and stochastic algorithms leads the GPT to immediately try to "optimize" the approach.
In effect, the GPT "downgrades" my approach back to its standard algorithms.
When asked for an analysis, the AI then outputs generalized forms of the standard functions, not how my approach actually works!
This can lead users to assume that these are already common concepts.
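To make the feedback-loop point more concrete, here is a minimal sketch in Python. Everything in it is a placeholder of my own (`rem_score`, `calibrate`, the weights and signals); it is not the actual REM math, only an illustration of why empirical values have to be built up iteratively instead of being fitted in one statistical pass:

```python
# Minimal, hypothetical sketch: rem_score and calibrate are placeholder
# names for illustration only; they are NOT the actual REM functions.

def rem_score(weights: list[float], signal: list[float]) -> float:
    """Stand-in for an REM-style evaluation of an interaction signal."""
    return sum(w * s for w, s in zip(weights, signal))

def calibrate(weights, sessions, target, lr=0.05, tolerance=0.01):
    """Adjust empirical weights step by step from session feedback,
    instead of letting a one-shot statistical fit "optimize" them away."""
    for signal in sessions:
        error = target - rem_score(weights, signal)
        if abs(error) < tolerance:
            continue  # feedback loop: only correct when the deviation matters
        weights = [w + lr * error * s for w, s in zip(weights, signal)]
    return weights

# Example run with made-up interaction signals from three sessions:
history = [[0.2, 0.9, 0.4], [0.5, 0.1, 0.7], [0.3, 0.6, 0.2]]
print(calibrate([0.5, 0.5, 0.5], history, target=1.0))
```

The point of the sketch is the loop itself: the weights only become meaningful after repeated feedback, which is exactly what a freshly prompted GPT does not yet have.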