When will typing die out, paving the way for fully voice interfaces?

What a scary thought. Learning something from a book would be considered the poor man’s way, while the rich can Matrix-load new skills and talents and become superhuman.

I’m sure this is the dream of the elite.

I wonder if it’s truly possible though. Like, if you somehow injected the known knowledge of the universe into Kim Kardashian, would it even do anything? Surely all that knowledge would decay into nothingness, because, I mean, would she ever even bother to exercise this profound knowledge past self-indulgence and pseudo-business?

Not entirely an insult (pseudo-business aside, she still makes BANK). I guess I question how much of a difference it would even make if her personality doesn’t care to exercise it, or navigate it correctly.

May you live forever. Probably one of the scariest curses I could ever receive.

I would swap “adventurous” with “ignorant” and/or “desperate”. I would equate this kind of adventurous with deciding to climb Mount Everest in sandals and a t-shirt while tripping on mushrooms.

It reminds me of the roid ragers in the gym. Sculpted AF but rocking 10 lbs, still doing cutesy workouts as their poor heart cries. They could have just done it the natural way, the way life intended. But they didn’t, and it becomes obvious.

Of course, there are those who have already peaked and juice for that extra powah. Even that comes with some serious side effects down the road.

Agreed. I’ve seen coders with emacs and it’s just insane.

Fair point. Being able to say “Can you please turn on dark mode at sundown” is much easier than trying to navigate the settings menu and figuring out how to connect it to day/night cycles. Until it doesn’t work and then we are enfeebled and sad :smiling_face_with_tear:

1 Like

It is a huge cognitive load, as some of the settings are often not easy to find, and I’m always thinking: why do I have to waste so much of my attention simply making my notifications work exactly like I want (or name any other settings situation)?

1 Like

Like this? :rofl:

Microsoft has been pretty good at moving the settings around in Windows. Maybe they’re trying to confuse us so we have to use their AI? :rofl:

4 Likes

iOS is the same type of nightmare!

1 Like

My car pretty much already does this.
I got lost in menus the other day trying to turn off the windshield wipers in a light mist, and finally just pushed a steering wheel button and said ‘wipers off’. Done.
I’ve also opened YouTube, changed various other settings, …
Granted, the apps on a Tesla are a much more constrained environment than on my phone, and the processing of my instruction is probably shipped out to the cloud. Still…

1 Like

Summary created by AI.

The discussion, titled “When will typing die out, paving the way for fully voice interfaces?”, centers on the future of user interfaces, from voice commands to brain-to-cloud neuro-chips. TonyAIChamp initiates the conversation with a question about the barriers to developing voice interfaces. bruce.dambrosio shares his experience with voice interfaces, indicating his occasional preference for quiet typing, while Man speculates about futuristic interfaces involving quantum entanglement and criticizes human limitations. curt.kennedy and Man bring humor to the discussion, with the latter considering the implications for the next generation.

elmstedt makes a compelling argument about the importance of text formatting in conveying emotions and tone, suggesting typing may not completely disappear. In response to Man’s earlier predictions, RonaldGRuckus and Foxabilo discuss potential advances in brain implants and memory augmentation, with RonaldGRuckus envisioning turning coding into a verbal activity, a future he acknowledges would entail new dependencies on AI. curt.kennedy extends this conversation, contemplating the possibility of purchasing skills and the ethics of brain implants.

The discussion later includes reservations about implants. RonaldGRuckus expresses opposition to implants due to possible risks, urging people to consider the whole body’s systems beyond the neural network of the brain. curt.kennedy seconds these concerns, emphasizing the interconnectedness of body systems. At the end, N2U provides an amusing response, linking a video about a highly proficient coder as evidence for RonaldGRuckus’s comment on emacs, and jests that Microsoft intentionally complicates settings so that users rely more on its AI.

Summarized with AI on Dec 23 2023
AI used: gpt-4-32k

3 Likes

What you see depends on how big the hole in the fence is. Asimov, in what he said was his favorite story, “The Last Question”, took a cosmic-sized time scale to paint a dim future for humanity and a bright future for AI.

Back to interacting with tech post-keyboarding, I’d say the biggest contributor is an actual will to do it. As long as it’s one thing to do among many things, the way forward is filled with starts and stops and endless ineffectiveness; once the idea moves from being a thing to being a context for things, then we can be flexible, effective, and sure on a path to fulfillment. A space for miracles begins to light our way.

Last piece, returning to the first: I think our vision and sense of ourSelves scales what we see as possible, which opens paths and the commitment to see them realized.

1 Like

Text is never going away, ever. What do you think reference documentation is going to be written in, YouTube videos? Do you think everyone who wants to read a book wants to listen to it? Text will be a legacy format only used when deaf people need to communicate remotely with one another, lol. These questions are intended to be sarcastic, but that last one would be a hilarious premise.

Anyway, with that in mind, there will be a need for typing, but it doesn’t have to be done by hand. The way this would work is that what you are saying is not dictation. It could be interpreted as “spell out these words from the current cursor position”, but the general action taking place is a repeated transformation of the current text into whatever it becomes after your voice input is parsed. For this to be better than dictation, it would need to be both more immediate than is currently possible and more capable of re-evaluating previous parts of the dialogue. The style of input is not dictating words, unless that’s what you ask it to do; it’s more task-oriented, with an end goal being worked towards by someone you are watching perform it and giving directions to.
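A minimal sketch of that repeated-transformation loop, assuming the openai Python client, Whisper for transcription, and a gpt-4o editing prompt (the model choice, prompt, and `apply_voice_edit` helper are illustrative assumptions, not anything prescribed in the thread):

```python
# Hypothetical voice-editing loop: each utterance is treated as an edit
# instruction that transforms the whole current text, not as dictation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def transcribe(audio_path: str) -> str:
    """Turn a recorded voice snippet into plain text with Whisper."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


def apply_voice_edit(current_text: str, instruction: str) -> str:
    """Ask the model to return the document after applying the spoken edit."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You rewrite documents. Apply the user's spoken instruction "
                    "to the current text and return only the full revised text."
                ),
            },
            {
                "role": "user",
                "content": f"Current text:\n{current_text}\n\nInstruction:\n{instruction}",
            },
        ],
    )
    return response.choices[0].message.content


# Each pass replaces the document with its transformed version.
document = ""
for snippet in ["draft1.wav", "draft2.wav"]:  # recorded utterances
    document = apply_voice_edit(document, transcribe(snippet))
print(document)
```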

With the premise we’ll need reference documentation in the future, there may be space left for typing.

Again, thinking from today and considering the world won’t change, the answer is no. But that’s an imaginary static world we don’t live in.

Agree. I don’t see that as a distant future.

1 Like

Regarding books: I wonder how they will continue.

ChatGPT, and GPT-4 by extension, are not only generally much more knowledgeable than us, but also speak on our level.

I think people will spend more time speaking with these models. More than reading a book. Almost like a friend. It kind of fits with our society.

I admire books, I can’t read them though. I read a page, and absorb nothing. I have the attention span that requires me to re-read what I’m writing before I submit it. Much more than I’d like to admit. I feel like my generation and those below me feel the same way :joy:

So, I wouldn’t want to listen to a book, but I would love to learn the same concepts through a casual verbal conversation with a familiar face.

2 Likes

Great points, Ronald!

I love reading as well, but I read less and less (this trend started for me way before GPT).

But I don’t really think it is as bad as many people think. At least not for the practical knowledge.

One idea in this domain that excites me very much is the following. Just like the printing press democratised access to knowledge, GenAI is making another huge leap here. The printing press gave those who possessed knowledge the ability to share it in one single form: in the language they spoke and with whatever ability to explain things they had. Now, with the GenAI revolution, we can see that was a huge limiting factor.

Now we basically have a printing press that prints books on each topic for each individual user.

1 Like

Champs, how about narrowing the topic down to the initial thought I didn’t quite convey in the top post.

What do you think is now hindering a 100% voice interface for controlling devices? And what’s your prediction on the timeline for it?

So, excluding cases where typing is the main activity (like coding), I’m mainly focusing on navigation (changing settings, opening apps, performing certain actions in apps, etc.).

I’d suggest one limiter is the ambiguity of spoken (or written) natural language.
GPT often responds in a way that I can see makes sense given what I said, but that was not my intention.
I wouldn’t want anything that could take dangerous or irreversible actions to do that.

Interesting point, Bruce!

Though it seems like GPT normally acts like this only when some context is missing. When controlling a device, I think it is more straightforward, plus 1-2 back-and-forths seem to save the day.
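To make that back-and-forth concrete, here is a minimal sketch of a confirmation gate for risky voice commands, assuming the speech has already been parsed into an action name; the action names and the `confirm` callback are purely illustrative, not any real device API:

```python
# Hypothetical dispatcher for voice commands over device settings.
# Irreversible actions cost one extra confirmation turn, which is the
# "1-2 back-and-forths" safety net discussed above.

ACTIONS = {
    "enable_dark_mode": {"reversible": True},
    "disable_wipers": {"reversible": True},
    "factory_reset": {"reversible": False},
}


def run_action(name: str) -> None:
    print(f"Executing {name}...")  # stand-in for a real device API call


def handle_command(action: str, confirm) -> None:
    """Execute a parsed voice command, confirming irreversible ones first."""
    meta = ACTIONS.get(action)
    if meta is None:
        print(f"Sorry, I don't know how to '{action}'.")
        return
    if not meta["reversible"] and not confirm(f"'{action}' cannot be undone. Proceed?"):
        print("Okay, cancelled.")
        return
    run_action(action)


# Example: the intent parser (not shown) mapped the speech to "factory_reset".
handle_command("factory_reset", confirm=lambda q: input(q + " (yes/no) ") == "yes")
```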

Hey everyone,
A total voice interface is a great way to move ahead. However, voice alone makes interactions complicated in certain cases.
Clicking a button is much easier than asking the system to click the button.
In cases where we need some analysis and an action, voice would be the best way to go.

Hi Akila

Interesting point. But clicking a button is rarely the main activity. It is just a step in the process or a confirmation, so using voice instead of that button makes sense in my opinion.

Ever thought about eye tracking as the future of mobile interfaces?

Imagine controlling your phone just by looking at it. It’s not only super precise and quick, but also silent, perfect for those times when talking to your device isn’t an option.

Plus, it feels natural, like the phone is reading your mind, understanding your gaze. It could also be a real help for those who find voice or touch commands tough.

What do you think?

While eye tracking may seem like something from the future, in my opinion it will have very limited usage. It doesn’t really solve the problem of complex navigation, as you still need to go through all of the same steps you do when typing/tapping.

1 Like

I almost exclusively use voice to interact with my AIs, unless I am writing things that need to be more precise, like code.

I went so far as to create a little app that transcribes my speech. It lets me press a button and immediately start recording an audio snippet that will be transcribed to text using the Whisper model. It then places the text wherever your cursor currently is.
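A minimal sketch of that kind of push-to-talk flow (not the actual app), assuming the sounddevice, soundfile, pyautogui, and openai packages; hotkey handling and error cases are left out:

```python
# Hypothetical push-to-talk dictation: record a snippet, transcribe it with
# Whisper, then type the text at the current cursor position.
import sounddevice as sd
import soundfile as sf
import pyautogui
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
SAMPLE_RATE = 16_000


def record_snippet(seconds: float, path: str = "snippet.wav") -> str:
    """Record from the default microphone and save a WAV file."""
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()
    sf.write(path, audio, SAMPLE_RATE)
    return path


def transcribe(path: str) -> str:
    """Send the recording to the Whisper API and return the text."""
    with open(path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text


if __name__ == "__main__":
    text = transcribe(record_snippet(seconds=5))
    pyautogui.typewrite(text)  # "types" the result wherever the cursor is
```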

1 Like

In theory it sounds great. In practice it would be ruined by annoying people (advertisers). I have worked on a screen that displayed advertisements; it had a built-in camera to track when people were actually watching it. If they could track exactly what you were looking at, that would be awful.

It reminds me of the Black Mirror episode with the merit points.

2 Likes