We saw some big announcements from Google about new features for Google Assistant that add a ton of functionality through deep integration with the rest of the apps on your phone, but possibly the biggest news was talked about the least: Assistant has truly broken out and is ready to take on the rest of the digital world.
The most obvious ways this is happening surround Google’s Nest Hub and other smart displays. New tools for educational use and ways to offer better choices or better entertainment are obviously going to be a focus when you’re talking about a device that is really just a window into your Google account and content to consume.
Assistant is a better personal assistant when it goes everywhere with you.
Likewise, the new Chromecast with Google TV and the new Nest Audio are content-consumption devices, but they too will benefit from new features like being able to detect the doorbell ringing or a leaky pipe (that's super-cool). But I think the most impactful features, the ones we'll notice really do make a difference, are how all of this is expanding into the internet: both the "of Things" kind and the internet we just use all the time without realizing it.
Features like Google login integration through Assistant, or vertical custom intents (where you tell an app to do a custom action the developer has implemented), also work great with a PWA (Progressive Web App). Done right, these integrations mean Assistant is no longer tied to an app on your phone.
Assistant will always be prioritized with your Android phone in mind, but it can branch out.
The idea of telling an app on your phone to order your favorite smoothie from your favorite smoothie bar is cool. Moving that into being able to tell a smart speaker or your TV to do the same thing is even cooler. The idea of saying it to the webpage with the menu in front of you changes the whole experience, same for ordering it from the scrolling marquee in the window of an adjacent bookstore or the screen on the seat in front of you at a sporting event (whenever we can have those again). These examples aren’t necessarily better than using Assistant on your phone, but they are different.
Using Assistant to recognize who you are and the context of what you're saying, then act on it once the recognition part is done, is known as ambient computing: you're connected everywhere, to everything, all of the time.
The idea itself makes most of us a bit uneasy. It should, because if things are not done the right way, it becomes an actual privacy nightmare that should make you wake up in the middle of the night drenched in sweat. It's nigh impossible to make it work right 100% of the time, and when things don't work, the potential for abuse is very high.
We all know what Ambient Display is, but the easy way to think of ambient computing is to look at how Google Assistant's Voice Match works. Your phone (or your smart display or another Google Assistant device) can "voiceprint" the way you speak. It's definitely not 100% accurate 100% of the time, but when the conditions are right, it works exactly as described. This is how you can ask for your agenda or make a Duo call on a shared device in your living room.
The important part of ambient computing is the sounds that aren’t your voice.
For years, Google has been interested in ways to improve ambient computing, and much of that work has centered on sounds that are not your voice. The fact that Google has devised a way for your Nest Mini to let you know your toilet is overflowing or your dogs are barking means Google is getting better at analyzing background sounds without transmitting any data back to its servers. Your phone, your Nest device, or even your Chromebook could be listening and processing all this data to check that you really are you and that you're in a place where you're expected to be when asking for something.
None of this data is saved or sent anywhere once it's processed and you're authenticated, but Assistant can have a very high level of confidence that it's really you, so when you do ask it to do a thing, it can do it on your behalf.
That's still pretty scary, I admit. And right now, I don't think the digital brains behind Assistant are there yet. But this is the path we're heading down, and the steps along it can be very small. Today, Assistant is more ambient-focused and new features make it more functional; if I'm at work and my dogs are barking, my Nest speaker can send me a message. Tomorrow, the same thing could happen, but I could also pull up a live feed from a smart camera. The next step could involve interaction through a smart doorbell, a call to emergency services if needed, or my face and voice on a Nest Hub telling my dogs to shush.
Privacy still needs to be the most important thing about ambient computing.
There are plenty of ways to keep developing the tech without it becoming invasive on the privacy front. Once it’s all good enough, we can be given the chance to opt into something like using Google Pay through Assistant at a smart terminal or telling our car stereo our Starbucks order while we’re driving. We just need to be sure that the tech isn’t easy to hack and that our privacy is respected.
This whole idea could just fail, too. Nothing about future tech is written in stone, and for every good idea we're using today, hundreds of others failed and were rejected. I just think it's exciting to see a small glimpse of what things could be. The next few years of work on ambient computing are going to be fun to watch, even if we never get to try any of it.
If you’re going to end up streaming through a smart display, you may as well go big or go home. The Nest Hub Max comes with a great speaker, a roomy display, and quick gestures that make Netflixing in the kitchen less messy than it should be.