I don’t know, I certainly don’t think it will happen soon.
Typing is (or at least can be) fast and reading is (for most people) even faster.
“Normal” verbal communication is on the order of 100–150 words per minute. While that is quite a bit faster than most people type, the speed bottleneck in language generation is the human deciding what to communicate. People also don’t typically sustain that speaking rate for long, whereas they can generally hold their peak typing speed much longer.
Verbal communication is also only 1/3 to 1/2 as fast as most people read. I don’t know about you, but if someone is talking at me at 400–500 WPM I am going to try to get away from them as quickly as I can.
Edit:
For reference, John Moschitta Jr. was clocked at 586 WPM.
End edit
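To make those numbers concrete, here is a quick back-of-envelope comparison in Python. The rates are rough assumptions pulled from this thread, not measurements:

```python
# Rough channel throughputs, in words per minute. All values are assumptions
# taken from the discussion above, not measured data.
RATES_WPM = {
    "speech (typical)": 125,      # midpoint of the 100-150 WPM range
    "typing (proficient)": 70,    # assumed; varies widely by person
    "reading (typical)": 300,     # roughly 2-3x speech, per the estimate above
    "speech (Moschitta)": 586,    # the record pace mentioned in the edit
}

WORDS = 500  # e.g. a medium-length email

for channel, wpm in RATES_WPM.items():
    print(f"{channel:>22}: {WORDS / wpm:.1f} min for {WORDS} words")
```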
On top of that, we haven’t even mentioned editing. People much smarter than I am will probably figure it out eventually, but I have zero concept of how one would edit even a moderate-length passage using only voice in a way that is on par with typing for speed and accuracy.
I don’t personally think that speed of interaction per se is the most important metric here.
While sync communication (like voice) may be slower in some cases, I think it is at the same time more efficient in many others.
For me the problem with any async communication is that you constantly need to get back into its flow, which takes a lot of your attention and significantly decreases the processing resources you have for the discussion at hand.
In sci-fi movies they often depict holographic images that can be manipulated with hand gestures, without needing headsets or gloves. That, combined with speech, could be a possible interim replacement for typing.
Although typing does allow for some privacy, I can’t see keyboards and mice being relevant in a few years. Just like the fax is not relevant today.
“Dad, what’s a computer mouse?”
After that, I could imagine some form of headset that reads and transmits human thoughts based on brain activity being plausible.
Can’t see anything that could go wrong here. Surely.
I definitely can see less typing in the future. I already spend a lot of my time speaking to GPT on Android. I’ve done some vehicle work using a GPT that does the retrieval for me. It has lied to me a couple of times about torque values, but it’s getting there.
I can see typing becoming more of a niche thing for professionals. Trying to verbalize code sounds like an absolute nightmare. But then again, AI could probably just take a very high-level concept and do all the coding for me.
I’ve seen recent news about images being more clearly derived from brain activity. But I’ve seen only basic results for “reading” brain activity and interpreting exactly what it means.
Are you referring to thoughts being meaningfully and accurately interpreted, or just identification of some form of electrical activity in a particular region of the brain? Because the latter can’t provide a UI.
I guess the “thoughts = input” problem is interesting, and to be successful it involves lots of integration, which means latency.
I’m guessing most folks have 1-10 thoughts per second (not scientific, but just speculating).
So for this AI engine to de-interleave and integrate these thoughts into something cohesive and meaningful, it’s going to take some time and some “thought integration” to form an unambiguous, complete thought.
This adds a lot of latency. But it would be interesting to see what “incomplete thoughts” it picks up.
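As a toy illustration of where that latency comes from, here is a minimal sketch, assuming thought fragments arrive as strings and a thought only counts as complete after a quiet period with no new fragments. Every name here is hypothetical:

```python
import time
from dataclasses import dataclass, field

# A toy model of the "de-interleave and integrate" idea above. Fragments are
# buffered, and a thought is only emitted after a quiet period with no new
# input - which is exactly where the latency comes from.

@dataclass
class ThoughtIntegrator:
    quiet_period_s: float = 0.5   # assumed settle time before a thought is "done"
    _fragments: list = field(default_factory=list)
    _last_at: float = 0.0

    def feed(self, fragment: str) -> str | None:
        """Buffer a fragment; emit the integrated thought once input went quiet."""
        now = time.monotonic()
        complete = None
        if self._fragments and (now - self._last_at) > self.quiet_period_s:
            complete = " ".join(self._fragments)  # integrate buffered fragments
            self._fragments = []
        self._fragments.append(fragment)
        self._last_at = now
        return complete
```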
Maybe the model can “autocomplete” your thoughts to solve this problem … OK, bad idea
This is a philosophical question of free will. Personally, I don’t think that I am choosing “my next token” based on probabilities and prior tokens.
I will say, to stay within the bounds of communicating in English, I need to respect p(y|x), but this is really grammar, not thought.
I’m thinking I/we/humans are much more than token probs.
So yeah, the process is similar as far as adhering to the communication protocol goes, but the thoughts themselves are nothing like token prediction.
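For anyone unfamiliar with the notation, here is a toy picture of what “choosing the next token from p(y|x)” means. A real LLM conditions on the whole context; this hand-made bigram table is just a stand-in:

```python
import random

# Toy illustration of "next token based on probabilities and prior tokens".
# The probability table is invented so p(next | previous) is visible.
P_NEXT = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def sample_next(prev: str) -> str:
    dist = P_NEXT.get(prev, {"<end>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

print(sample_next("the"))  # e.g. "cat", drawn from p(y | x="the")
```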
I just remember taking the college version of a “Philosophy of Mind” class.
And I was very vocal, arguing that we live in some strange, small-dimensional universe, etc., while the free-will debate was raging.
I do think it’s philosophy, because it’s impossible for me to prove we are 100% indistinguishable from some super-advanced AI network.
So while I am not a prolific philosopher (wish I were), I can’t convincingly say we aren’t.
I mean, what if a 3000 lbs nano-wire mesh network brain was trained on everything known to man, including humility and not making itself even known to mankind…
I think we’ll soon have memory augmentation; it’ll start as a high-value Alzheimer’s treatment and will interface with an AI translation layer. Ultimately the memory stores will become common, as interfacing with learned or purchased memories will again be high value. Once you reach that point, YOU, the person, are just the memories your brain processes; swap the brain for a digital replacement and you have immortality as data.
I tend to stick to Popper’s principles when thinking about such things. Obviously my data is very limited, and I may be wrong in trying to apply those principles without enough underlying data. But imo it is still better than pondering Russell’s teapots (for me, at least).
Whenever I think up a new Russell’s teapot, I stop myself and ask: do I have (or can I at least theoretically get) any evidence?
I’ve been curious for a while about how many people say that our memory is getting shorter. I agree with this at this very specific moment, but imo it is a step towards a much better memory. We’ve had the data for a while already, but there was no efficient interface to retrieve it. Gen AI seems to me like that missing link.
Having watched enough sci-fi movies, I definitely think purchasing memories is going to be a thing.
But what about purchasing skills?
Example:
Ever want to be a master at writing C++ code? No problem, here purchase it for $5.99.
And how would that even work? Can the human brain even absorb it quickly? Or is it some sort of RAG, where it soaks in over time and you depend less and less on the external data source?
Also, if you have Alzheimer’s and your “hardware” (brain) is unable to absorb this, does it become 100% RAG-based? Are you then essentially the LLM, reading and smoothing over whatever is in your immediate short context window (short because you have Alzheimer’s)?
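A very loose sketch of that “soaks in over time” idea: answer from an internal store when possible, fall back to the external source, and copy retrieved items inward so external dependence decays. All names are made up; no real implant or RAG system works this way:

```python
# Deliberately naive consolidation loop for the hypothetical purchased skill.
internal_memory: dict[str, str] = {}              # what the brain has "absorbed"
external_store = {                                # the purchased skill package
    "head bolt torque": "90 Nm",
    "C++ move semantics": "transfers ownership instead of copying",
}

def recall(query: str) -> str:
    if query in internal_memory:                  # fully internalized: no external call
        return internal_memory[query]
    answer = external_store.get(query, "unknown")
    internal_memory[query] = answer               # "soak in": consolidate inward
    return answer

recall("head bolt torque")   # first call hits the external store
recall("head bolt torque")   # second call is already internal
```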
Oh, don’t even get me started. I have a personal blog entry from 10-15 years back where I contemplated the extremely inefficient way we learn new skills (literally hundreds and thousands of repetitions) and how in the future we could just upload stuff.
And again, I believe what we have been experiencing over the last 30 years is a preparation of our brains for such upload-ability.
I’m not qualified to determine exactly what the issue with Alzheimer’s is, but it seems to be an inability to create short- to medium-term memories; pre-encoded knowledge seems to remain. So if you could have an augmentation that acts as a new short- to medium-term store, couple that with a long-term vector store, and have a bridge/memory multiplexer so that new information gets retained, recall requests get processed, and those memories are made available to the brain… that would seem to do it (see the sketch below).
Now, I have glossed over some absolutely huge technical hurdles with that “memory multiplexer”, and as for quite how one detects what memory the brain is trying to recall… not a clue. Most of my future predictions come with a large hand-wavy part where AGI solves these things trivially.
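In that hand-wavy spirit, here is a toy Python sketch of the bridge: new items land in a short-term buffer, overflow is consolidated into a long-term vector store, and recall is nearest-neighbor lookup. The embed() function is the giant unsolved part, so it is a fake stand-in here:

```python
import math

def embed(text: str) -> list[float]:
    # Placeholder: a character-frequency vector. A real system would need an
    # actual encoding of neural activity, which nobody has.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

short_term: list[str] = []                        # the new short/medium-term store
long_term: list[tuple[list[float], str]] = []     # the long-term vector store

def retain(memory: str) -> None:
    short_term.append(memory)
    if len(short_term) > 3:                       # assumed buffer size
        item = short_term.pop(0)
        long_term.append((embed(item), item))     # consolidate into long-term store

def recall(cue: str) -> str | None:
    if not long_term:
        return None                               # nothing consolidated yet
    return max(long_term, key=lambda m: cosine(m[0], embed(cue)))[1]
```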
I think AGI and whatever comes after that will be the “gold standard” … but what do you do about all these “old” or “underdeveloped” human brains?
I can see a massive market for this. I know brain implants are on the horizon.
Can you imagine open source brain implants?
pip install some_cool_skill_or_memory
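Pure whimsy, but if skills really were pip-installable, loading one might look something like this. Every name below (the entry-point group especially) is invented:

```python
from importlib.metadata import entry_points

# Speculative: a pip-installed "skill" package could advertise itself via an
# entry point. The group name "brain.skills" is made up for this joke.
def load_skill(name: str):
    for ep in entry_points(group="brain.skills"):
        if ep.name == name:
            return ep.load()
    raise LookupError(f"skill {name!r} not installed - try pip install?")
```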
Another random thought: should we even worry about AGI? Especially if we all get AGI implants? And what about implants that are open source and not “controlled”?