Learning Artificial Intelligence: why I'm taking the AI Nanodegree program at Udacity

There is a time in our lives when we mostly learn. Then, over time, we learn less and use our knowledge more. And very often it's not a one-way process but more like a cycle, which ends and starts again from the beginning.
For me, now is the moment when a new cycle is slowly starting.

My professional career is all about mobile. I'm Head of Mobile at Azimo, leading a team of developers. I spend most of my time coding and making sure that everyone is happy: engineers with challenging problems to solve, the business with met deadlines, the company with new opportunities coming from research, and, most important of all, users satisfied with our product. Since at work I don't always have enough time for coding, I also do it after hours. I build open source projects, share my knowledge on Medium and my blog, and sometimes speak at conferences and meet-ups. All about mobile, of course.

But besides that (and my addiction to Netflix), my greatest passion is curiosity about the future: the future of high-tech, Artificial Intelligence 🤖, and the world.

The (future of) AI

Believe it or not, Artificial Intelligence is already all around us. We use it every day when we ask Siri for directions to the closest store, when we're informed about traffic jams before we leave for a meeting, or when we get personal suggestions for TV series.
And we are still alive. Skynet doesn't rule the world, and grey goo hasn't eaten us and the whole planet.
Instead, we have technology which can describe a photograph to help blind people "see" it. We have cars which are (or soon will be) able to drive their owner to the hospital in an emergency, or which, connected to the city network, can help solve traffic jams by load-balancing routes. There are of course many other examples, like disease detection and translation (even between language pairs never explicitly seen by the system). But besides these huge things, there are also smaller but still great opportunities for AI which can affect thousands or millions of people.

From mobile first to AI first

Smart data analysis and machine learning are enabling a new era of technology. In the mobile-first world we used to build apps for the masses. The more people like an app, the better. We do A/B testing, analyse funnels and drop-offs, and tweak things to adjust our product to a bigger segment of users. But a segment does not mean everyone.

What if instead we could build the app for each person separately? An app which adjusts its interface or behaviour to the user. An app which creates smart shortcuts for repeatable actions. Are you walking right now? You would probably be happy with a less detailed user interface, a brighter call-to-action, and a couple fewer steps to accomplish your intention.
Are you in the city in the middle of the night? There is a big chance that you want to go home, so that could be the first action proposed when you open the new Uber app.
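As a toy illustration (the context signals and configuration names here are invented, not from any real app), such context-driven adaptation could start as a handful of simple rules long before any machine learning is involved:

```python
from dataclasses import dataclass

# Hypothetical context signals an app could derive from device sensors.
@dataclass
class Context:
    is_walking: bool
    hour: int       # 0-23, local time
    in_city: bool

def suggest_ui(ctx: Context) -> dict:
    """Pick a UI configuration based on the user's current context."""
    config = {"detail_level": "full", "primary_action": "explore"}
    if ctx.is_walking:
        # On the move: less detail, bolder call-to-action, fewer steps.
        config["detail_level"] = "minimal"
    if ctx.in_city and (ctx.hour >= 23 or ctx.hour < 5):
        # Late night in the city: going home is the most likely intent.
        config["primary_action"] = "ride_home"
    return config

print(suggest_ui(Context(is_walking=True, hour=1, in_city=True)))
```

The interesting step, of course, is replacing these hand-written rules with ones learned from each user's behaviour.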

And what if there is a hidden correlation between a user's age, the weather, their nationality, and their current activity? Something which means the user needs a better explanation of how to get things done on our platform. Or maybe there is one common thing which users keep asking your customer support team about. Why not let algorithms look for specific patterns in the data to adjust the app, or feed them with conversations (both requests and answers) to automate support replies?
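To hint at the idea, here is a minimal sketch (the messages and stop-word list are invented for illustration; a real system would use proper NLP): even naive word counting over support requests can surface the dominant topic your users struggle with.

```python
import re
from collections import Counter

# Hypothetical support requests; in a real product these would come
# from the customer support inbox.
requests = [
    "How do I cancel my transfer?",
    "Can I cancel a transfer after sending?",
    "Where can I see my transfer status?",
    "How do I cancel my last transfer?",
]

# Words too common to carry a signal (tiny, hand-picked list).
STOPWORDS = {"how", "do", "i", "my", "a", "can", "after", "where", "the"}

def top_patterns(messages, n=3):
    """Count the most frequent meaningful words across support messages."""
    words = []
    for msg in messages:
        words += [w for w in re.findall(r"[a-z]+", msg.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

print(top_patterns(requests))  # "transfer" and "cancel" dominate
```

From a signal like this, the product team might learn that cancelling a transfer needs a more prominent place in the app, or an automated reply.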

These and endlessly more ideas can be realised thanks to proper data analysis and smaller or bigger bits of Artificial Intelligence.

Artificial Intelligence Nanodegree program

I am super excited to be in the very first group of people accepted into the Artificial Intelligence Nanodegree program from Udacity.

https://www.udacity.com/course/artificial-intelligence-nanodegree–nd889

My expectations? For sure, I'm not going to build a self-driving car, and I'm not part of a team researching and building speech recognition either. But I will definitely be one of those who use them. While I'm only going to be an owner of such a car at some point in the future, I will be much closer to voice command systems and natural language processing for sure. Sooner or later a lot of products will move from mobile apps to voice commands for a simple reason: speech is less distracting than touch and visual interaction. We can talk to our car and still drive it. We can do things around the home like cooking, cleaning, or watching TV, and in the meantime book cinema tickets, check the latest headlines, or switch the lights off with our voice. The truth is that for some of these things mobile apps were invented only because we didn't know how to interact with them "directly", without visual distractions like the small screens of our phones.

But AI is not only about the human-machine interface. There are a lot of direct and indirect use cases waiting to be discovered. I'm really keen to find those which can be applied to the mobile world, to prove that AI-first is not something completely new but rather a natural evolution of mobile-first. And this is what I'd like to get from these classes: another level of imagination and understanding which can make the good even better.

Years ago, Alan Kay said:

The best way to predict the future is to invent it.

So here I am. Ready for the next challenge.

Author: Mirek Stanek

Head of Mobile Development at Azimo. Artificial Intelligence adept 🤖. I dream big and do the code ✨.
