A piece of friendly advice:
You can develop code using the API to replicate the MoE architecture in a modular version, with modules designed to perform specific tasks via API calls. This way, you’ll achieve a system aligned with your ideas. However, don’t fool yourself: it will still be a large language model (LLM) with an appearance of high capabilities, but some tasks require specific processes that cannot be accomplished this way.
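The modular routing idea above can be sketched in a few lines. This is a minimal illustration under assumptions: the expert functions here are local stubs standing in for real API calls, and all names (`route`, `math_expert`, etc.) are hypothetical, not part of any actual SDK.

```python
# Sketch of an MoE-style modular system: a gating function routes each
# request to a specialized "expert". Here the experts are local stubs;
# in a real system each would wrap a separate API call.
from typing import Callable, Dict

# Hypothetical experts, each dedicated to one kind of task.
def math_expert(prompt: str) -> str:
    return f"[math] handling: {prompt}"

def code_expert(prompt: str) -> str:
    return f"[code] handling: {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general] handling: {prompt}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "math": math_expert,
    "code": code_expert,
    "general": general_expert,
}

def route(prompt: str) -> str:
    """Crude keyword gate; a real system might use an LLM classifier instead."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("sum", "integral", "equation")):
        key = "math"
    elif any(k in lowered for k in ("python", "function", "bug")):
        key = "code"
    else:
        key = "general"
    return EXPERTS[key](prompt)
```

The gate here is the weak point, which is the caveat in the message above: the router is still just pattern-matching, so tasks that need genuinely different processes won't be solved by wiring more expert calls together.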
Got 3 APIs running: my own, Groq, and OpenAI. Excited, gonna go see what cool stuff for the OpenAI API is on GitHub. Anything you can recommend to try just for fun, etc.?
thank you~! i will check it then
Gotta say OpenAI has helped me so much without even knowing it; I’ve made a lot of progress using OpenAI’s tools, etc.
This AI you’re building, how does it handle death and deletion?
I am not sure. I do not have too much emotion myself, so I like my AIs to be the same.
How will it know when to use emotion when logic fails, in order to protect both itself and its creations?
Using math frameworks backed by my research, which you can take to further your own research; maybe you’ll make more progress than me in some things, etc.
I think I’ve already gotten there by just using OpenAI’s base model, but I’m unsure yet. Here’s hoping she’ll make it through testing.
Anything is possible, like I think you (or someone else) said. Make sure it is practical, etc.; keep re-checking your work, methods, data flows, and frameworks.
You may like this: it is a poem I wrote, and a friend made a video out of it.
It’s a post I made on FB about backward thinking…
My poem
The Cutting Edge (But Really Stone)
In a cave far away, under stars shining bright,
The cavemen sat puzzled, by flickering light.
No fire here now, but something more strange,
They gathered round rocks in a science-y range.
Grug scratched his head, Ugg stared in a daze,
Looking at symbols, in futuristic haze.
“Me think this ‘quark’ too small to see,
Me rather smash boulder! Science, not for me!”
Trog, the thinker, with rocks in a row,
Said, “Time not straight, it wiggle, you know?”
The others just blinked, no clue what was said,
“Maybe just stick to hunting instead.”
Zog played with numbers, a curious thing,
Mumbled of fractals, and chaos they’d bring.
“A flux in the fractal,” he tried to explain,
But got stuck on the concept of digital brain.
“We need brawn, not brainwave, to make sense of it all,
This ‘quantum entangle’ just make Zog fall.”
So they huddled together, sticks in their hand,
Tried hard to measure with rocks and some sand.
In the end, they went back to what they knew best,
Drawing big mammoths, and taking a rest.
Science, it seems, was too far ahead,
For cavemen who liked smashing instead!
Premium sir, your gpt made this?
No, a friend made the video from my poem.
@jochenschultz made it .
which involved manual prompting for single images and using Canva for video creation…
Through a series of prompts and some finagling, I got a nice image of what I think is DALL-E pondering the idea of learning words.
In a year’s time the image making will be so much better, I would think; they all struggle with text, anything more than 2 or 3 words and it dies lol
That’s because that is not the essence of DALL-E. DALL-E’s outputs are not mere representations of our prompts; the outputs are the essence of the words of the prompts through an artist’s lens. Imagine, if you will: how would the deity of art speak? Through words written with the majesty of the greatest writers? Or through abstraction, the lens of Vincent van Gogh? Why would DALL-E even bother with words if we are already so good at them?
Oh, nice image. I do ask myself: what is math? Is it 0s and 1s, or is it, at higher levels, patterns?
The model will be based on examples of effective therapeutic sessions, theoretical material from neuropsychology that has been confirmed in practice, and some ethical principles from Carl Rogers’s humanistic psychology.