The spring geekfest goes on. About a week ago Facebook held F8 at San Jose's McEnery Convention Center. Now we are fresh off Google I/O 2018, probably the biggest developer and product conference (7,000+ attendees), held near the Googleplex in Mountain View. Here's my short summary of what we saw at Shoreline Amphitheatre this year.
AI for everyone
It's very clear that Google continues to pursue the vision it announced two years ago: to be AI-first and to make the world of technology AI-first. Year after year, the keynotes and sessions are increasingly dominated by announcements about research and real-world applications of Artificial Intelligence.
Healthcare, automotive, and the technology surrounding our everyday lives: development in all of these fields is moving faster than ever before.
Two years ago we saw that Google's neural network could detect signs of diabetic retinopathy by analysing images of the eye. Today, the same images can help predict the five-year risk of a heart attack or stroke. We humans didn't even know what to look for, while new machine learning techniques found correlations that make these predictions up to 70% accurate.
Today, based on health records, the technology can predict better than ever before how long patients will stay in the hospital, their inpatient mortality risk, and the likelihood of unexpected readmission, so doctors and nurses can act proactively before things get worse.
AI can also make our lives easier by improving accessibility. Captions visually assigned to the person who is speaking (especially when people are talking over one another), or new ways of communicating (a Morse code mobile keyboard), are just some examples of how technology can accommodate people's unique needs.
AI talking to us 🤖🔊
There is also great progress in voice user interfaces. Speech is our natural way of communicating, and it doesn't require our full attention (we can drive a car and speak at the same time). Some time ago I wrote an article about how VUIs are changing our lives, and today Google is showing us that we are a couple of steps further along.
So what is the best way to lower the entry threshold for voice interfaces? Make the conversation more natural for people.
That's why at I/O we saw announcements of six new voices, the ability to keep a natural back-and-forth conversation going without repeating "Hey Google", multiple actions in one request ("What's the weather in Warsaw and London?"), and Custom Routines, where "Hey Google, nap time!" can dim the lights, turn off the TV, play a calming sound for the next 30 minutes and then wake you up.
And when your kids ask for a game nicely, Pretty Please will use positive reinforcement to encourage polite conversation.
Probably the biggest VUI announcement was Google Duplex. This AI system will not only handle more complex tasks for you (making a restaurant reservation or scheduling a hairdresser appointment for a given day and time) but will also place a traditional phone call if the local business doesn't support online booking.
And if the business happens to be closed for a holiday, Google Assistant will update Google's information, so other users will see it in search results and Google Maps.
There are still a lot of unknowns, and the technology isn't perfect yet. But who knows: maybe sometime in the future you will send money to your loved ones with no formalities at all (a trusted bot will do it for you) whenever the currency rate is good enough, whether it's the middle of the day or night. AI will keep an eye on it for you, 24/7. All through your voice.
More intelligent solutions will also improve our quality of life. Sure, the next version of Android, P, will have even better power management thanks to Adaptive Battery (prioritising energy for the apps we use the most).
But there is one thing even friendlier to our device's battery: using the device less during the day.
And this is the actual goal of Google's product ecosystem: to help us focus on what matters most. The Digital Wellbeing project aims to help us create healthy habits, disconnect from virtual worlds and information streams, and find the right balance.
How? It starts with understanding how we use technology. Android's app dashboard will give us daily insight into notifications and time spent in particular apps. YouTube, for example, will suggest a break if you have been watching videos for more than an hour at a stretch.
And we'll have much finer control over Do Not Disturb modes. Wind Down will fade your display to grayscale to remind you to switch off for the night. DND is no longer only about sound and vibration: you will also be able to hide visual notifications and set auto-responses in messaging apps. And if you want to do this without tapping on your phone, you can simply flip it face down or ask Google Assistant for quiet time.
These features and more can be found on Google's dedicated Digital Wellbeing site: wellbeing.google.
Side note: you might wonder why it's so important for Google to have users spend less time on their devices and apps. Of course, care for users' health may play the most significant role. But have you ever wondered how much energy mobile devices consume globally? And it's not just the energy drawn from their batteries.
For more, I highly encourage you to read one of Increment magazine's articles: The secret energy impact of your phone.
Even more AI
What else? Among other things, travellers will get a lot of updates to Maps (e.g. navigating through the city with the device camera and AR-style hints displayed on top of the real world).
Software engineers can now deploy and use ML solutions through Firebase, while data scientists will be able to use TPU 3.0 for their computations. And when your machine learning models are ready to use, TensorFlow Lite lets you deploy them to devices with limited resources, like mobile phones (Android and iOS) or a Raspberry Pi.
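As a rough sketch of what that deployment path looks like: you convert a trained model to the compact TensorFlow Lite format, then run it with the lightweight interpreter that ships on mobile and embedded devices. The tiny untrained model below is just a placeholder standing in for a real trained one, and the code assumes the current `tf.lite` API.

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in model; in practice you would convert your trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert the Keras model to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# Run the converted model with the TFLite interpreter -- the same runtime
# that runs on Android, iOS, and Raspberry Pi.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)  # (1, 2)
```

On a phone or a Raspberry Pi you would ship only the converted flatbuffer and the interpreter, not the full TensorFlow runtime.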
If, like me, you enjoy mobile photography, then thanks to AI, Google Photos will suggest relevant actions, such as brightening, sharing, rotating, or archiving a photo, right as you're looking at it. And if you look at the world through Google Lens, you will be able to select, copy, and paste real-world text or find information about photographed objects like books or clothes, in real time.
That's just the tip of the iceberg. If you are curious what else was shown at Google I/O 2018, take a look at the 100 things announced this year.
Isn’t it too much?
This question is raised more and more often these days, and there is probably no single answer. Advancements in AI and new technologies appear so fast that few people in the world can keep up with them. It can feel even scarier when those changes touch the fields of our everyday lives. But we need to get used to it, because this is how the world works: it moves forward and brings us new challenges and possibilities.
In the past there were people who were afraid of assembly lines and automation, and later people who never knew the world before them. The same happened with computers and the Internet. And soon there will be people who won't know the world before AI and autonomous cars.
What we can do, as people living through this transformation in AI and computing, is make sure the future we're building is a better one. As creators, we should be careful and empathetic while building the new world, and as humans, we should never stop asking tough questions, out loud, especially on behalf of those who remain unheard. Questions about the impact of technology and its role in improving quality of life for people all around the world.