Google I/O 17 wrap-up: From mobile, through VR/AR, to Artificial Intelligence

On May 17–19 at Shoreline Amphitheatre, Google organised the 9th edition of one of the biggest developer conferences in the world — I/O17. This year I also had the great pleasure of spending that time in Mountain View with thousands of software engineers and techies from all over the globe. In this post I have collected all the important announcements and experiences from this amazing 3-day festival, just in case you would like to know what’s coming in the world of new technology.

TL;DR
If you are reading this on your mobile device and want a really short recap of the Google I/O 2017 Keynote, I encourage you to check my Medium Series:

AI-first

Do you remember the key announcements from I/O16? Probably the most important one was the transition from mobile-first to AI-first.
And here we are — one year later we have a voice-controlled device: Google Home. Google Assistant is a fact, and Machine Learning is everywhere — also in our mobile devices, cars and smartwatches (literally doing some computation without accessing cloud solutions). Again, during I/O17 Google’s CEO Sundar Pichai spent a great part of the Keynote showing the progress of the AI revolution.

What does it bring us? Even more intelligence in Google Photos, like best-shot selection or sharing suggestions (hey, your friend is in this photo, maybe you want to send it to them?). But there will also be a completely new level of vision understanding — Google Lens. The camera will be able to give more context (what kind of flower we are looking at) but also take actions (automatically apply a scanned WiFi password or book cinema tickets based on a poster photograph). Do you still receive paper bills? Soon the whole payment process can be as quick as taking a photo — Google Lens and Google Assistant (together with partner integrations) will do the rest for you.

More AI use cases: Gmail/Inbox smart replies (did you know that 12% of responses on mobile devices are sent this way?), AutoML — neural networks that build other neural networks, so we can spend our time thinking about what to achieve, not how. Even more cloud AI services, partnerships and benefits for the best researchers. And the new TPU, Google’s hardware to make cloud Machine Learning even faster.
These and many other AI initiatives finally have a new home on the Internet: Google.ai.

Google Assistant

The number of announcements and talks related to voice/text interfaces during I/O17 shows that it’s not only an app but a completely new ecosystem which can work across all of your devices. Google Assistant is available on Google Home, smartphones (including iPhones!), smartwatches, TVs and soon also in cars and any other piece of hardware (thanks to Android Things).
Finally, besides just talking, you can communicate with the Assistant through text messages (so you can use it outside, in crowded places, without looking weird 😉). And the response will come in the best possible format — if words are not enough, the Assistant will use one of your screens (phone, TV, tablet) to display the intended information. So you can ask Google Home for directions to your favourite store and the map will be sent to your mobile phone or car!

There is of course much more — hands-free calling, multilingual support, Bluetooth, proactive notifications (the Assistant will be able to get back to you with information at the best possible time, e.g. when your flight is delayed or a traffic jam could make you late for a scheduled meeting).
But probably the biggest announcement here is that Google Assistant now supports transactions and payments 💸.
Why is it so important? The numbers show that most people really don’t like any kind of input field — the biggest drop-offs happen while entering payment details or during the sign-up process. And it makes sense: we would like to order food or send some money to friends without filling in yet another form (because we have already done this so many times in the past!). And this is where the Assistant can help, especially with first-timers. With a simple permission request it can provide all the needed data (user or payment details) to our app or platform, so the user won’t need to care about it anymore.

Android O

Even if we still don’t know whether it’ll be Oreo or Oatmeal Cookie, there are a couple of great things coming to our mobile devices. Full picture-in-picture support will give you the possibility to keep a video conversation going while looking at your calendar or notes. Even more intelligence — to improve battery life, security and user experience. With Smart Selection your device knows the meaning of text, so it’ll be able to suggest Maps for a selected address or the dialer for a phone number. Who knows, maybe when this API is opened up, you will be able to send some money to a friend just by selecting their name?
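For curious developers, here is a minimal sketch of what entering picture-in-picture mode could look like on Android O, based on the preview APIs (exact signatures may still change before the final release):

    import android.app.Activity;
    import android.app.PictureInPictureParams;
    import android.util.Rational;

    public class VideoCallActivity extends Activity {

        @Override
        protected void onUserLeaveHint() {
            // When the user taps Home during a call, shrink the activity
            // into a small floating window instead of stopping the video.
            PictureInPictureParams params = new PictureInPictureParams.Builder()
                    .setAspectRatio(new Rational(16, 9)) // keep a video-friendly shape
                    .build();
            enterPictureInPictureMode(params);
        }
    }

The activity also has to declare android:supportsPictureInPicture="true" in the manifest, otherwise the system rejects the request.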

O comes with another great initiative — Android Go. With the fast-growing global mobile market, especially in emerging countries, we need a system which works well on slower devices and limited data connections. And Go is an adaptation of Android O to meet those requirements — to make our apps truly global.

Side note: if you are a Web developer, it may also be very interesting for you how Twitter built their Lite app (<1MB) using Progressive Web App technology.

#AndroidDev
A lot of great announcements for developers: official support for Kotlin (with no plans to deprecate Java at any point in the future!), Android Studio 3.0 focused on speed improvements (including the build process), with better debugging features to make our work even easier. And if you are just starting to build a world-class app, don’t miss Architecture Components — guidelines and solutions for making our code robust and scalable.
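To give a taste of the new stack, here is a minimal sketch of a ViewModel exposing LiveData, based on the android.arch.lifecycle packages announced at I/O17 (they are still in alpha, so names may change):

    import android.arch.lifecycle.LiveData;
    import android.arch.lifecycle.MutableLiveData;
    import android.arch.lifecycle.ViewModel;

    // A ViewModel survives configuration changes (like screen rotation),
    // so the UI can re-subscribe to the same data after being recreated.
    public class GreetingViewModel extends ViewModel {

        private final MutableLiveData<String> greeting = new MutableLiveData<>();

        public LiveData<String> getGreeting() {
            return greeting; // expose a read-only view to the UI
        }

        public void loadGreeting() {
            // In a real app this value would come from a repository or network.
            greeting.setValue("Hello, I/O17!");
        }
    }

In an activity you would obtain it with ViewModelProviders.of(this).get(GreetingViewModel.class) and observe the LiveData, which automatically stops delivering updates when the screen is destroyed.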

And Instant Apps — native, lightweight apps which don’t require installation, announced at I/O16 — are now available for everyone to develop. Keep an eye on them — early partners report noticeable (double-digit) growth in purchases and content views.

For those who want to bring some intelligence directly into their apps, soon there will be TensorFlow Lite and a Neural Network API — an entire Machine Learning stack to make using our pre-trained models as easy as possible.
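Since TensorFlow Lite hasn’t shipped yet, treat the snippet below as a speculative sketch of what on-device inference could look like in Java: the Interpreter class and its run method are assumptions based on the announcement, not a final API.

    import org.tensorflow.lite.Interpreter;
    import java.io.File;

    public class DigitClassifier {

        public static void main(String[] args) {
            // A pre-trained model converted to the mobile-optimised Lite format.
            File model = new File("mnist.tflite");

            try (Interpreter interpreter = new Interpreter(model)) {
                float[][] input = new float[1][784];  // a flattened 28x28 image
                float[][] output = new float[1][10];  // scores for digits 0-9

                // Inference runs entirely on-device, no cloud round trip needed.
                interpreter.run(input, output);
            }
        }
    }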

AR/VR

Google also invests a lot of resources in Augmented and Virtual Reality. Besides new Daydream-enabled devices, more apps and VR headsets, there were two really interesting notes:

VPS — Visual Positioning System, Google’s extension to GPS for really precise indoor location. Thanks to Tango we’ll be able to locate ourselves inside buildings — with the addition of voice guidance it could be extremely helpful for visually impaired people navigating through the world.

AR in education — the most exciting announcement: Tango will be actively used in education! After the successful adoption of Cardboard (more than 2 million students have taken part in virtual trips), Google announced the continuation of the program — Expeditions AR (formerly Expeditions).

This time students will be able to gather around virtual objects placed in 3D space, thanks to Tango-enabled devices. If you want to see something very similar, I highly encourage you to check out one of the latest Tango integrations: the ArtScience Museum in Singapore.

Everything else

There is of course much, much more. During I/O17 we could see 2 keynotes (the main one and a keynote for developers) and take part in more than 150 sessions. We could see, hear or even try new YouTube 360 videos, new updates in Firebase (Crashlytics as the official crash reporter, new tools for performance analysis), Cloud for IoT, Android Auto and new smartwatches with Android Wear 2.0.

If you would like to see literally all of the announcements from the 3 days of I/O17, Google published a blog post with everything covered in one place:

And if you want to immerse yourself in the details, videos from all 156 sessions are available now on YouTube!

Open source your code!

Besides all of these fascinating announcements, there is one particular takeaway I’d like to share with you. More than a year ago Google open sourced TensorFlow. They shared years of their experience with everyone to make Machine Learning easier. Since then, a lot of projects have appeared from people around the world who solve real issues without full knowledge of the theory behind TensorFlow or ML. But they have great ideas (check the inspiring video below!). By giving them the proper tools, we can make all these ideas happen.

That’s why you and your dev team should share your code. Because it’s really likely that somewhere out there is someone you could help make the world a better place. We, the tech team behind Azimo, do it.

Do you?

Hello world Google Home: Github Actions — building the first agent for Google Assistant in Java

Some time ago I published an unofficial Google Actions SDK written in Java. The source code and documentation can be found on Github: Google-Actions-Java-SDK. The library can also be downloaded from Bintray jCenter:

The goal of this project is to give Android/Java developers the possibility to build solutions for Google Home without learning a new language (the official Actions SDK is written in Node.js). Continue reading “Hello world Google Home: Github Actions — building the first agent for Google Assistant in Java”
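To give a flavour of the idea, here is a tiny, self-contained sketch of an agent’s logic in plain Java. All class and method names here are illustrative stand-ins rather than the actual SDK API (see the repository for real usage); only the assistant.intent.action.MAIN identifier comes from the Actions platform:

    // HelloWorldAgent and handleRequest are hypothetical names for this
    // example, not part of the Google-Actions-Java-SDK API.
    public class HelloWorldAgent {

        // The Assistant platform calls our webhook with the recognised intent;
        // we return the text the Assistant should speak back to the user.
        public String handleRequest(String intent) {
            if ("assistant.intent.action.MAIN".equals(intent)) {
                return "Hello world! Greetings from a Java agent.";
            }
            return "Sorry, I didn't get that.";
        }

        public static void main(String[] args) {
            HelloWorldAgent agent = new HelloWorldAgent();
            System.out.println(agent.handleRequest("assistant.intent.action.MAIN"));
        }
    }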

Facebook F8 2017 in 4 minutes: Augmented reality, chat bots, AI and mind reading

By the time you read these words, F8, the Facebook Developer Conference, is over. Like every year, Mark Zuckerberg and his staff showed us what they have been working on in recent months and what the future of social media will look like. If by any chance you missed the keynote videos (day 1, day 2) and didn’t have a chance to attend the McEnery Convention Center in San Jose, we are here with a short summary of the announcements worth remembering.

After the revelations of F8 2016, Facebook’s direction stays unchanged in 2017 — extend our reality as much as possible with technologies like Augmented Reality, Virtual Reality, 3D cameras and everything else that could enrich the real world and make the physical distance between people negligible.

This year, among others, we could see:

Camera Effects — a platform which turns our mobile devices into AR devices. You will be able not only to add text or stickers to your photos and videos, but also to interact with them in a 3D model of the world, drawing “around” a cup of your favourite coffee or simply putting something on a real-virtual table.

Facebook Spaces — a virtual world in which, thanks to VR systems from Oculus, you will see and even interact with your friends and family, no matter if they are tens or thousands of kilometers away.

AR Studio — for those who have artistic souls and for whom the gallery of camera effects isn’t enough, there is a tool which allows building our own effects and animations.

And much, much more — AI analysing people’s movement in real time, a new version of the 3D 360 camera which gives us the possibility to see places which weren’t really photographed but were modelled by its software.

There are not many tech companies at Facebook’s scale, but most of them (like Google, Amazon, Netflix and others) are very keen to share their knowledge and experience with the world. Azimo’s tech team proudly takes part in this contribution too, so we are even happier to hear about these great initiatives:

Facebook Developer Circles — similarly to Google Developer Groups, or Apple educating the next generation how to code, Facebook builds its own knowledge-sharing groups to help local communities, startups, entrepreneurs and developers succeed.

Open source — it also remains unchanged that a lot of new technologies and solutions invented by Facebook are shared with developers for free. Not to mention the things released over the year, at F8 we could hear about some new projects for Android, JavaScript (React) and VR. There was also a great announcement in the field of Artificial Intelligence: Caffe2 — a Deep Learning framework optimised to work on mobile devices (another one, after Google’s TensorFlow). For sure it’ll bring more intelligence into mobile apps, especially in the fields of computer vision, speech and language processing.

Connectivity — there are still a lot of places in the world where internet connection is limited or unavailable. Facebook continues its research in this field to make this medium of endless information widely distributed and high-speed.

One year after Facebook launched the beta of the Messenger Platform, it seems that chatbots and rich messaging experiences will stay with us for longer. During F8 we could hear about the new list of features packed into Messenger Platform 2.0. Here is a list of the most important ones:

Discover Tab — that’s right, Messenger bots have their own “app store” where they are grouped by category, popularity, location and more.

Smart Replies — currently this feature is a bit mysterious, available to a limited audience of businesses (mostly restaurants in the US). While probably everyone has heard about Artificial Intelligence, it’s still not easy to use in practice. Even the simplest implementations require well-qualified experts spending weeks to make them real. For small businesses it’s still beyond reach.
With Smart Replies Facebook would like to change this situation — their AI is able to analyse a fan page, generate answers to frequently asked questions and use them in a chatbot conversation. No additional implementation should be required on the business side. With a few clicks you should be able to build your own chatbot powered by wit.ai, automatically answering questions like what the opening hours are or what today’s special is.

Chat Extensions — when Facebook presented Messenger bots one year ago, they worked as standalone apps with a conversational interface. Now, with Chat Extensions, you can finally add them directly into a group or individual conversation. And it changes a lot! Let’s assume you would like to send some money to a friend. Previously the chatbot had to ask who your friend is and how you would like to accomplish this action. Now that the bot knows the context (the people taking part in the conversation), the setup steps aren’t needed — you can focus on what to do, not how.

Parametric Messenger Codes — QR-like codes for Messenger which can open a bot conversation with a given context. Just imagine — you want to buy “this” hat. Instead of talking to the clothing store’s chatbot and greeting it, simply scan the Messenger Code on the label. Your conversation will start with the most relevant question: “Would you like to buy it now? Yes/No.”

M — an AI-powered assistant currently available to a really small group of users. Have you heard about Google Assistant, Samsung’s Bixby or Siri? That’s right — Facebook is catching up in this area too.

At the end, there is something that impressed us the most — Building 8. Like Google X, Facebook also wants to have its own moonshots. What is it? Wireless communication with brainwaves and speechless “talking” through skin. The technology of tomorrow is coming. To make the world a better place.

And that’s all from us, but definitely not everything that was announced at F8. Do you want more? You can see videos from all the F8 talks.

Historical intro to AI planning languages: Not only Machine Learning drives our autonomous cars

This is my 2nd publication in the field of Artificial Intelligence, prepared as part of a project in my AI Nanodegree classes. This time the goal was to write a research paper about important historical developments in the field of AI planning and search. I hope you will like it 🙂.

Planning, or more precisely automated planning and scheduling, is one of the major fields of AI (among others like Machine Learning, Natural Language Processing, Computer Vision and more). Planning focuses on the realisation of strategies or action sequences executed by:

  • Intelligent agents — autonomous entities (software or hardware) able to observe the world through different types of sensors and perform actions based on those observations (see the sketch after this list).
  • Autonomous robots — physical intelligent agents which deliver goods (factory robots), keep our houses clean (intelligent vacuum cleaners) or discover other worlds on space missions.
  • Unmanned vehicles — autonomous cars, drones and robotic spacecraft.
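As a rough illustration of the first item, here is a toy sense-act loop in Java: the agent receives percepts from its sensors and decides on an action. All names are invented for this example.

    import java.util.Arrays;
    import java.util.List;

    // A toy model of the sense-act loop described above.
    interface Agent {
        String act(List<String> percepts);
    }

    public class ThermostatAgent implements Agent {

        // A trivial policy: turn the heater on when it gets cold.
        @Override
        public String act(List<String> percepts) {
            return percepts.contains("temperature-low") ? "heater-on" : "idle";
        }

        public static void main(String[] args) {
            Agent agent = new ThermostatAgent();
            System.out.println(agent.act(Arrays.asList("temperature-low"))); // prints heater-on
        }
    }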

Continue reading “Historical intro to AI planning languages: Not only Machine Learning drives our autonomous cars”

Understanding AlphaGo: How AI beat us in Go — game of profound complexity

One of the required skills of an Artificial Intelligence engineer is the ability to understand and explain highly technical research papers in the field. One of my projects as a student in the AI Nanodegree classes is an analysis of a seminal paper in the field of game playing. The target of my analysis was Nature’s paper about the technical side of AlphaGo — Google DeepMind’s system which, for the first time in history, beat an elite professional Go player, winning 5 games to 0 against the European Go champion, Fan Hui.

The goal of this summary (and my future publications) is to make this knowledge widely understandable, especially for those who are just starting their journey in the field of AI or those who don’t have any experience in this area at all.

The original paper — Mastering the game of Go with deep neural networks and tree search:

http://www.nature.com/nature/journal/v529/n7587/full/nature16961.htm

Continue reading “Understanding AlphaGo: How AI beat us in Go — game of profound complexity”

Learning the Artificial Intelligence: Why I’m taking the AI Nanodegree program at Udacity

There is a time in our lives when we mostly learn. Then, over time, we learn less and use our knowledge more. And very often it’s not a one-way process, but more like a cycle which ends and starts from the beginning.
For me, now is the moment when a new cycle is slowly starting.

Continue reading “Learning the Artificial Intelligence: Why I’m taking the AI Nanodegree program at Udacity”

Iron Man’s Jarvis — is it still a fiction?

Who doesn’t dream about Iron Man’s suit? An infinite power source — the Vibranium Arc Reactor, the ability to fly and dive thanks to Repulsors and oxygen supplies, almost indestructible single-crystal titanium armor with extremely dangerous weaponry.

Since we’re still years or even decades (are we?) away from having even a prototype of a flying metal suit, there is one piece of it which may be closer than we think.

JARVIS

While the Vibranium Arc Reactor is the heart of the Iron Man suit, the equally important thing is its brain — Jarvis.
“Jarvis is a highly advanced computerized A.I. developed by Tony Stark, (…) to manage almost everything, especially matters related to technology, in Tony’s life.” Does it sound familiar? Continue reading “Iron Man’s Jarvis — is it still a fiction?”