Google’s future is useful, creepy and everywhere: nine things learned at I/O
With Google Assistant coming to the iPhone, the company hopes to kill off Siri and wants to ‘see’ inside your home as it reiterates its AI-first approach
By Olivia Solon in San Francisco
May 18 2017
There were whoops and cheers from developers as Google announced the incremental ways it is strengthening its grip on many aspects of people’s lives at its annual developer conference, Google I/O.
There were no jaw-dropping major product launches nor executives proclaiming their utopian vision of the future (ahem, Mark Zuckerberg). Instead there was a showcase of features, powered by artificial intelligence, designed to make people more connected – and more reliant on Google.
“We are focused on our core mission of organising the world’s information for everyone and approach this by applying deep computer science and technical insights to solve problems at scale,” said CEO Sundar Pichai.
By combining the personal data harvested from its users with industry-leading (and human Go player-beating) artificial intelligence, Google is squeezing itself into spaces in our everyday interactions where it hasn’t been before, filling in the gaps and oozing into new territory like a sticky glue that is becoming harder and harder to escape.
Here’s what the key I/O announcements tell us about Google’s future.
1. AI is Google’s USP
Google reiterated that the company has shifted from a mobile-first to an AI-first approach. This means putting AI at the core of all of its new products, whether that’s improving image recognition in Google Assistant or beating human players at Go.
2. Google wants to ‘see’ as well as ‘hear’ your surroundings
Lens is Google’s answer to Facebook’s augmented reality Camera Effects platform. It comprises a set of vision-based computing capabilities, built into Google Assistant and the Photos app, that work to ‘understand’ what you’re looking at. So you can point the camera at a flower and it will identify the species, or automatically connect to a wifi network by showing the camera the log-in details printed on the sticker on the router. You can also hold your camera up to a restaurant in the street and see reviews.
3. Google Assistant is getting smarter
Google’s equivalent of Siri, Google Assistant, is embedded in Android devices including smartphones, watches and Google Home. Google’s Scott Huffman noted that Assistant would become even more conversational over the coming months, allowing you to accomplish tasks with a quick chat.
In addition to having voice recognition, Google Assistant, drawing on Lens, can now take in, understand and have conversations about what you see. For example, if you are in Japan but don’t read Japanese you can hold the Assistant up to a sign advertising some street food and it will “read” and translate the text. You can then ask “what does it look like?” and Google will know that the “it” refers to the name of the food written on the sign and it will pull up pictures of the dish.
“It comes so naturally to humans and now Google is getting really good at conversations too,” said Huffman.
4. Google Home is getting creepier (and more useful)
The voice-activated smart speaker Google Home will now start offering “proactive assistance” rather than waiting for you to say “OK, Google” to wake it up. For example, it might notify you if you have to leave your house earlier than expected because traffic is particularly heavy. Perhaps the company will start proactively advertising to customers in the future?
Less creepy is the option to make hands-free calls from the Google Home speaker. You simply ask it to dial any landline or mobile number in the US or Canada and it will do so for free. The device can also now recognize up to six different voices in a household and adapt to personal preferences accordingly.