Google Home at home 🏠

Voice interfaces are slowly showing up in our lives. In most cases, they replace the complexity of mobile devices. But there are also new features which make sense only when they are used with our voice alone.

For a quick summary of where Google is with its Google Assistant, take a look at this video from the last Google Developers Day (starting at 33:00):

Continue reading “Google Home at home 🏠”

3 great TED talks about Artificial Intelligence

Artificial Intelligence is here. It is still in a very limited form, but there are more and more areas where we, as humanity, are soundly beaten by “intelligent” machines. From the simplest calculators, which are vastly better than us at math, to Google Translate, which can translate whole sentences with proper grammar and more human-like language than most people in the world.

Yes, AI will take our jobs; it already does. But should we be afraid of it? I believe we shouldn’t. Instead, we need to adapt to the new reality, as has happened many times in human history (the agricultural, industrial, and digital revolutions, to name just a few).

Some call it another revolution (the 4th industrial revolution?), others just an evolution that has been happening since the world began. But no matter what you call it, thanks to machines and different kinds of artificial intelligence we will surely reach a new level as humanity. There is so much potential in us: we all have passion, purpose, and dreams.

Now just imagine what can happen to the world when there is something that can replace us in tedious, repetitive tasks. Or if we could boost our creativity and passion with help from machines and algorithms which are never distracted and can work without stopping.

Of course, the transition to “the new world” will be hard. Adaptation will require revolutionary, global changes in how we live. And to start, we need to understand where we are and what is coming.
There are already people who are trying to do this. Here are 3 of them, standing on the TED stage and telling us about the future of AI and humanity. I highly encourage you to invest 45 minutes to catch up on what they wanted to share with us:

Throwback Thursday – building a mobile app

About one year ago I had the privilege of sharing my experience with the mobile dev community at the Mobiconf conference in Cracow. Because of my introverted nature, it was a big personal challenge to stand next to a screen a couple of meters high and talk to an international audience. I also don’t like to watch recordings of myself, but keeping in mind the message I wanted to share, I believe it’s good to revisit this particular one from time to time.

As developers (and engineering team leaders), it is always good to remember what the biggest value is that we can give to the company. In a time of data-driven development, where companies constantly learn about their users and adjust the product to meet expectations, I believe that the most important thing is the ability to iterate fast: to deliver new value in the shortest release cycles with a product free of bugs.

And this is what my presentation is about: why we should test, use dependency injection, and automate the delivery process. I use an Android app as an example, but the same rules can be applied to iOS or any other client-side platform.
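To illustrate the idea (a minimal Kotlin sketch of my own, not code from the talk; the class and interface names are hypothetical), constructor injection keeps business logic behind an interface, so it can be unit-tested with a fake and iterated on quickly without touching a real network or the Android framework:

```kotlin
// Hypothetical example: the calculator depends on an interface, not a concrete network client,
// so tests can inject a fake implementation.
interface ExchangeRateApi {
    fun latestRate(from: String, to: String): Double
}

class TransferCalculator(private val api: ExchangeRateApi) {
    fun convert(amount: Double, from: String, to: String): Double =
        amount * api.latestRate(from, to)
}

// In a unit test we inject a fake; no network, no emulator, instant feedback.
class FakeApi : ExchangeRateApi {
    override fun latestRate(from: String, to: String) = 1.25
}

fun main() {
    val calculator = TransferCalculator(FakeApi())
    check(calculator.convert(100.0, "GBP", "EUR") == 125.0)
    println("Conversion logic verified without touching a real API")
}
```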

Here is the video recording of my talk:

(Book) Walt Disney

I’ve just finished another great biography: Walt Disney: An American Original.

It is a book about a great dreamer, the creator of Mickey Mouse and Disneyland. This is an American story about a poor kid living on a farm who became a millionaire, but who never measured his wealth by the amount of money he collected. For Walt Disney, money was just a tool to accomplish his master plan: to entertain people.

Disney dedicated his life to doing it. And there was always something else to do to make the world better: Disney movie production, Disneyland, Disney World, and the never-finished prototype city of tomorrow, EPCOT. He was realizing his plan until the last days of his life.

Walt wasn’t cryonically frozen, as the biggest urban legend about him claims. He’s not coming back. But his legacy will surely live with us for centuries!

Udacity + Google scholarships for digital skills

Today, during Google Developers Day in Kraków, we saw the announcement of a Google Developers + Udacity cooperation which brings 60,000 seats in their online courses (Web developer and Android developer, 30k seats each). The scholarship is dedicated to residents of Europe, Russia, Egypt, Israel and Turkey. All accepted students will have an opportunity to learn Android or Web development during a 3-month course with help from Udacity and Google mentors.

Web and Android Scholarship Program

The top 10% will have an opportunity to take part in a Nanodegree program for free, which ends with certification and a great chance to find a real job in the chosen area.

As a former student of a Udacity Nanodegree program (the Artificial Intelligence Nanodegree in my case), I can only say: thank you Google, thank you Udacity! This amazing initiative, with its high-quality materials, will bring tons of value to the IT industry all over the world:

  • For students and future developers, who will have a chance to learn how to build a real product from experts working with those technologies on a daily basis,
  • For employers, who will have a chance to hire well-motivated junior devs with practical experience and a great will to gain new skills,
  • For the entire IT industry (including Google, Udacity and others), because more IT specialists mean faster growth in this area,
  • And for people all around the world. Cheap or free online access to knowledge means far fewer limitations for those who have a great will to learn and want to become professionals in the future.

Let’s build bridges, not walls.

The Keyword — Funding 75,000 Udacity scholarships to bridge the digital skills gap.

Daily Writing

This is my second day of the daily writing challenge. I’m still not sure how far I’ll go, but I’m well motivated to give it a try. Why? Because I’d like to see if it works and what the benefits are, and most importantly, because finally I can! As a non-native English speaker, I spent a lot of time practicing this language (including the written form). And it seems that finally, in less than 30 minutes, with no (or little) help from Google Translate, I can produce something meaningful.

Besides various inspirations, mostly from medium.com, my biggest motivations are Tobias van Schneider with his weekly emails and the daily blog by Fred Wilson, a venture capitalist who has been writing every day since 2003 (14 years!).

Currently, I have no plan behind this: no reminder, no dedicated time during the day. I would like to see how natural it is for me.
I’m writing in Google Docs, post by post. Subject? The single most prominent thing on my mind. Goal? Write at least 6 posts per week. If I can’t make it, I’ll think about stricter rules and a long-term plan.

Quote of the day:

Success is continuously improving who you are, how you live, how you serve, and how you relate. (source)

Basic Android app analytics in <60min: one step towards data-driven development

Every big tech company today is data-driven. Products are built based on collected data more often than on internal opinions. It’s very likely that at this moment some of the apps on your device are serving you A/B test variants, checking how a new layout, text or even functionality affects your activity and engagement.
The biggest companies have dedicated Business Intelligence teams, their own data warehouses, custom analytics tools and big flat screens in conference rooms showing real-time charts.
And most important of all: an endless audience waiting to be analysed in pie charts and bars 📊.
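As a taste of how low the entry barrier is (a hedged sketch of my own; I’m assuming Firebase Analytics as the tool, and the event and parameter names are made up for illustration), logging a custom event from an Android app takes just a few lines of Kotlin:

```kotlin
import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics

// Hypothetical event: report that a user completed a money transfer,
// with a couple of parameters we might want to chart later.
fun reportTransferCompleted(analytics: FirebaseAnalytics, currency: String, amountCents: Long) {
    val params = Bundle().apply {
        putString("currency", currency)
        putLong("amount_cents", amountCents)
    }
    analytics.logEvent("transfer_completed", params)
}

// Usage, e.g. inside an Activity:
// val analytics = FirebaseAnalytics.getInstance(this)
// reportTransferCompleted(analytics, "EUR", 1250L)
```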

Continue reading “Basic Android app analytics in <60min: one step towards data-driven development”

WWDC17: the most important announcements in the Apple world

WWDC17 is the third of 2017’s big product/technology conferences to take place in Silicon Valley (at the McEnery Convention Center in San Jose).
After Facebook F8 and Google I/O, this time we saw Tim Cook and Apple’s team showing what’s new and what the future holds in the world of Macs, iPhones, smartwatches and other products. If you didn’t have a spare 2 hours to watch the keynote, let me walk you through the most interesting things presented this year.

Continue reading “WWDC17: the most important announcements in the Apple world”

Google I/O 17 wrap-up: from mobile, through VR/AR, to Artificial Intelligence

On May 17–19 at the Shoreline Amphitheatre, Google organised the 9th edition of one of the biggest developer conferences in the world: I/O17. This year I again had the great pleasure of spending that time in Mountain View with thousands of software engineers and techies from all over the globe. In this post I have collected all the important announcements and experiences from this amazing 3-day festival, in case you would like to know what’s coming in the world of new technology.

TL;DR
If you are reading this on your mobile device and want a really short recap of the Google I/O 2017 keynote, I encourage you to check my Medium Series:

AI-first

Do you remember the key announcements from I/O16? Probably the most important one was the transition from mobile-first to AI-first.
And here we are: one year later we have a voice-controlled device, Google Home; Google Assistant is a fact; and Machine Learning is everywhere, including our mobile devices, cars and smartwatches (literally doing some computation without accessing cloud solutions). Again, during I/O17 Google’s CEO Sundar Pichai spent a great part of the keynote showing the progress of the AI revolution.

What does it bring us? Even more intelligence in Google Photos, like best-shot selection or sharing suggestions (hey, your friend is in this photo, maybe you want to send it to them?). But there will also be a completely new level of vision understanding: Google Lens. The camera will be able to give more context (what kind of flower we are looking at) but also take actions (automatically apply a scanned WiFi password or book cinema tickets based on a photo of a poster). Do you still receive paper bills? Soon the whole payment process can be as quick as taking a photo; Google Lens and Google Assistant (together with partner integrations) will do the rest for you.

More AI use cases: Gmail/Inbox smart replies (did you know that 12% of responses on mobile devices are sent this way?), and AutoML, neural networks that build other neural networks, so we can spend our time thinking about what to achieve, not how. Even more cloud AI services, partnerships and benefits for the best researchers. And the new TPU, Google’s hardware for doing cloud Machine Learning even faster.
These and many other AI initiatives finally have a new home on the Internet: Google.ai.

Google Assistant

The number of announcements and talks related to voice/text interfaces during I/O17 shows that it’s not only an app but a completely new ecosystem which can work across all of your devices. Google Assistant is available on Google Home, smartphones (including iPhones!), smartwatches and TVs, and soon also in cars and any other piece of hardware (thanks to Android Things).
Finally, besides just talking, you can communicate with the Assistant through text messages (so you can use it outside, in crowded places, without looking weird 😉). And the response will come in the best possible format: if words are not enough, the Assistant will use one of your screens (phone, TV, tablet) to display the intended information. So you can ask Google Home for directions to your favourite store and the map will be sent to your mobile phone or car!

There is of course much more: hands-free calling, multilingual support, Bluetooth, and proactive notifications (the Assistant will be able to get back to you with information at the best possible time, e.g. when your flight is delayed or a traffic jam could make you late for a scheduled meeting).
But probably the biggest announcement here is that Google Assistant now supports transactions and payments 💸.
Why is it so important? The numbers show that most people really don’t like any kind of input fields; the biggest drop-offs happen while entering payment details or during the sign-up process. And it makes sense: we would like to order food or send some money to friends without filling in yet another form (because we have already done it so many times in the past!). This is where the Assistant can help, especially for first-timers. With a simple permission request it can provide all the needed data (user or payment details) to our app or platform, so the user won’t need to care about it anymore.

Android O

Even if we still don’t know whether it will be Oreo or Oatmeal Cookie, there are a couple of great things coming to our mobile devices. Full picture-in-picture support will give you the possibility of keeping a video conversation going while looking at your calendar or notes. There is even more intelligence to improve battery life, security and user experience. With Smart Selection your device knows the meaning of text, so it will be able to suggest Maps for a selected address or the dialer for a phone number. Who knows, maybe when this API is opened up, you will be able to send some money to a friend just by selecting their name?
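For example (a minimal sketch based on the Android O APIs, not code from the sessions), an activity that declares android:supportsPictureInPicture="true" in the manifest can switch into picture-in-picture mode with a single call:

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.os.Build
import android.util.Rational

// Minimal sketch: put a video Activity into picture-in-picture mode on Android O and newer.
fun Activity.enterPipIfSupported() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val params = PictureInPictureParams.Builder()
            .setAspectRatio(Rational(16, 9)) // keep the 16:9 video shape in the small window
            .build()
        enterPictureInPictureMode(params)
    }
}

// A typical place to call it is when the user presses Home during playback:
// override fun onUserLeaveHint() { enterPipIfSupported() }
```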

O comes with another great initiative: Android Go. With the fast-growing global market for mobile, especially in emerging countries, we need a system which works well on slower devices and limited data connections. Go is an adaptation of Android O to meet those requirements and make our apps truly global.

Side note: if you are a Web developer, it may also be very interesting to you how Twitter built their Lite app (<1MB) with Progressive Web App technology.

#AndroidDev
A lot of great announcements for developers: official support for Kotlin (without deprecating Java at any time in the future!), and Android Studio 3.0, focused on speed improvements (including the build process) and better debugging features to make our work even easier. And if you are just starting to build a world-class app, don’t miss the Architecture Components: guidelines and solutions for making our code robust and scalable.
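To give a flavour of what the Architecture Components look like, here is a small, hypothetical ViewModel + LiveData sketch of my own (using the original android.arch.lifecycle packages from 2017): the ViewModel survives configuration changes such as rotation, and the UI simply observes the data in a lifecycle-aware way.

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

// Hypothetical example: a ViewModel holding a list of user names.
// It outlives configuration changes (e.g. screen rotation), so the data is not reloaded.
class UsersViewModel : ViewModel() {

    private val users = MutableLiveData<List<String>>()

    fun getUsers(): LiveData<List<String>> {
        if (users.value == null) {
            // In a real app this would be an async repository or network call.
            users.value = listOf("Ada", "Grace", "Linus")
        }
        return users
    }
}

// In an Activity or Fragment (the observer is lifecycle-aware, so no manual unsubscribing):
// val viewModel = ViewModelProviders.of(this).get(UsersViewModel::class.java)
// viewModel.getUsers().observe(this, Observer { names -> render(names) })
```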

And Instant Apps, the native lightweight apps which don’t require installation, announced at I/O16, are now available for everyone to develop. Keep an eye on them: early partners report noticeable (double-digit) growth in purchases and content views.

For those who want to bring some intelligence directly to their apps, there will soon be TensorFlow Lite and the Neural Network API: an entire Machine Learning stack to make using our pre-trained models as easy as possible.

AR/VR

Google is also investing a lot of resources in Augmented and Virtual Reality. Besides new Daydream-enabled devices, more apps and VR headsets, there were two really interesting notes:

VPS (Visual Positioning System), Google’s extension to GPS for really precise indoor location. Thanks to Tango we’ll be able to locate ourselves inside buildings; with the addition of voice guidance it could be extremely helpful for visually impaired people navigating the world.

AR in education, the most exciting announcement: Tango will be actively used in education! After the successful adoption of Cardboard (more than 2 million students could take part in virtual trips), Google announced a continuation of their program: Expeditions AR (formerly Expeditions).

This time students will be able to gather around virtual objects placed in 3D space thanks to Tango-enabled devices. If you want to see something very similar, I highly encourage you to check out one of the latest Tango integrations: the ArtScience Museum in Singapore.

Everything else

There is of course much, much more. During I/O17 we could watch 2 keynotes (the main one and the keynote for developers) and take part in more than 150 sessions. We could see, hear or even try new YouTube 360 videos, new updates in Firebase (Crashlytics as the official crash reporter, new tools for performance analysis), Cloud for IoT, Android Auto, and new smartwatches with Android Wear 2.0.

If you would like to see literally all of the announcements from the 3 days of I/O17, Google published a blog post with everything covered in one place:

And if you want to immerse yourself in the details, videos from all 156 sessions are available now on YouTube!

Open source your code!

Besides all of these fascinating announcements, there is one particular takeaway which I’d like to share with you. More than a year ago Google open sourced TensorFlow, sharing years of their experience with everyone to make Machine Learning easier. Since then, a lot of projects have appeared from people around the world who solve real issues without full knowledge of the theory behind TensorFlow or ML. But they have great ideas (check the inspiring video below!). By giving them the proper tools, we can make all these ideas happen.

That’s why you and your dev team should share your code. It’s really likely that somewhere out there is someone you could help make the world a better place. We, the tech team behind Azimo, do it.

Do you?