Google kicked off their 2015 I/O event with a keynote that was inspiring to me both as an Android user and as a computer scientist. It didn’t have the pomp and circumstance of past events, but it got right to the heart of Google’s biggest push: contextual computing.
For years, Google has been amassing data from their users: the ads we view, the ads we act on, the searches we perform, the emails we receive, and more. But what they did with all that data was a bit lackluster. Until today.
There were too many announcements to cover individually, but at the heart of it all was Now on Tap: a feature that ties your data together and presents you with the information you need, when you need it, without you having to ask for it. It’s exactly the future that was promised when computers and networking first gained prominence back in the 1960s.
Sci-fi authors of the day ran wild with what could be: portable computers, video phone calls, devices that fit in your pocket, deep-learning AI, and virtual assistants. Star Trek got our minds racing and our hearts set on seeing that wild future come true. Movies like 2001: A Space Odyssey struck both fear and awe into us over what AI could do for us (or to us). And while we’ve surpassed some of those predictions, we’ve yet to see consumer products use deep-learning AI to give us genuinely better virtual assistance.
(Photo courtesy of The Verge)
With Now on Tap, along with other advancements in Google’s apps, it feels like we’re taking a step beyond simply having a virtual assistant by our side; we’re using technology to evolve humanity by seamlessly embedding it into our daily lives. There is no need for a separate “companion”. It exists as part of ourselves, whether in our phones, our smartwatches, or other yet-to-be-realized wearable technology.
Google hasn’t simply provided you with an assistant. They’ve made you an enhanced version of yourself.
If you missed the keynote, you can relive it by reading through The Verge’s live blog, or wait for the recording to be posted on Google’s I/O site.