One of the best ways to understand the potential of the Google Assistant is to watch how fast the voice-activated helper can now bring up Beyoncé’s Instagram page.
“Hey, Google,” says Meggie Hollenger, a Google program manager, using the wake words that trigger the software on her smartphone. Then it’s off to the races as she shoots off 12 commands in rapid-fire succession.
“Open the New York Times … Open YouTube … Open Netflix … Open Calendar … Set a timer for 5 minutes … What’s the weather today? … How about tomorrow? … Show me John Legend on Twitter … Show me Beyoncé on Instagram … Turn on the flashlight … Turn it off … Get an Uber to my hotel.”
As she asks each question, the phone pops up the new information. The whole sequence takes 41 seconds. She doesn’t have to repeat the wake words between commands. When she makes the request to see what Beyoncé is up to, the Assistant not only launches the Instagram app, it automatically takes us directly to the pop star’s page so we can see the latest photos she’s shared with her 127 million followers. Likewise, when Hollenger asks for an Uber, the software already knows where she’s staying.
Three years after CEO Sundar Pichai introduced his AI-driven virtual assistant to the world, Google is previewing the “next generation” of the Assistant at its annual I/O developer conference on Tuesday. The Google Assistant can now deliver answers up to 10 times faster than it did before. That speed boost could help turn around the perception that voice assistants are too laggy and inaccurate, a big deal if companies like Google and Amazon want to take these digital helpers further into the mainstream.
Making Google Assistant a success is key for the world’s biggest search service, which delivers answers to over a trillion searches a year. Many of us are moving away from looking for information by typing on our computers and are instead talking to our smartphones and smart speakers. Google is now racing against Amazon, with its Alexa voice assistant, and Apple, with Siri, to give us the instant gratification we increasingly expect from our always-connected devices.
That’s why Google invited me to its global headquarters in Mountain View, California, a few days before I/O to see the biggest update yet of its make-or-break Assistant.
It’s fascinating — and a little bit scary.
The next-gen digital assistant is the headliner in a new slate of features that showcase Google’s world-class artificial intelligence and engineering chops. The Assistant isn’t only faster, but smarter, with Google counting on breakthroughs it’s made in neural network research and speech recognition over the past five years to set itself apart from rivals.
And it’s getting more personal. You’ll be able to add family members to a list of close contacts. When you ask the Assistant for directions to your mom’s house, for instance, it knows who your mom is and where she lives. Another feature lets the Assistant automatically fill out forms on the web after you make a verbal request for actions like booking a rental car or ordering movie tickets.
“We could potentially see a world where actually talking to the system is a lot faster than tapping on the phone,” says Manuel Bronstein, vice president of product for the Google Assistant. “And if that happens — when that happens — you could see more people engaging.”
But all that highlights the massive cache of data Google already holds on billions of people across the planet. It also underscores how much more personal information it’s going to need to collect from us to bring the true vision of Google Assistant to life.
The Assistant is now on 1 billion devices, mostly because it comes preinstalled on phones running Android, the world’s most popular mobile operating system. Many of Google’s other services — Gmail, YouTube, Maps, the Chrome browser — also serve more than 1 billion people a month. All these services are useful and innovative, but their lifeblood is the data you feed the company every day through your search history, email inbox, video viewing habits and driving directions.
Of course, this is all predicated on the Assistant actually working as billed. Google wouldn’t let me try it for myself, and my colleagues and I weren’t allowed to video record the demo. Instead, Google provided us with a preshot marketing video. Hollenger also read from a script, following a cheat sheet of written commands. So it’s unclear how deft the software would be in carrying out the sometimes meandering requests of regular people on their mobile phones and smart home devices.
The demo even had a few stumbles. While the jumps from app to app are snappy, Hollenger had to repeat queries once or twice because the software didn’t process her requests on the first try. In other demos, though, Hollenger used the Assistant to dictate texts and emails with striking accuracy. The system can also tell the difference between what she wants written in the email and what’s a general command. For example, when she says, “Send it,” the software sends the email instead of typing “Send it” in the email body.
Still, the Assistant is sure to be the subject of discussion — and perhaps controversy.
“There are positives and negatives and tradeoffs,” says Betsy Cooper, director of the Aspen Tech Policy Hub. “With the Google Assistant …”