Effective mobile engineering

Building a mobile app isn’t only about coding. It is about the entire process: automation and testing, code architecture, and of course the people behind all of that. I wrote about all of this in my latest blog post, Fail safe, not fast.

Today you can also see the video from my presentation at Mobiconf 2018.
I talked about our experiences building mobile apps at Azimo. So if you are curious how a relatively small team can effectively build an app for the global market, I invite you to watch:

I also had a chance to share my insights during this year’s Google DevFest in Coimbra, Portugal. Slides from the updated presentation can be found on my SpeakerDeck.

I hope you’ll enjoy it. 🍿📺
Soon I’ll publish more posts about effective mobile engineering. Stay in touch!

Multi-module Android project codebase: Basic setup with Dagger 2, unit tests, Jacoco reports and more

Breaking a monolith into microservices is a well-known approach to making backend solutions extensible and maintainable at scale, by bigger teams. Since mobile apps have become more complex, very often developed by teams of tens of software engineers, this concept is also gaining traction on mobile platforms. There are many benefits to having apps split into modules/features/libraries:

  • features can be developed independently
  • project structure is cleaner
  • building process can be way faster (e.g., running unit tests on a module can be a matter of seconds, instead of minutes for the entire project)
  • great starting point for instant apps
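To make the module split concrete, here is a minimal sketch of what the project layout could look like in a `settings.gradle.kts`. The module names below are illustrative examples of a typical split, not taken from the actual repository:

```kotlin
// settings.gradle.kts: declares the app shell plus feature/library modules.
// Module names are hypothetical examples of a common multi-module layout.
include(
    ":app",            // thin application module that wires everything together
    ":base",           // shared code: networking, DI setup, common utilities
    ":feature-search", // a feature developed and tested independently
    ":feature-repos"   // another feature, buildable on its own
)
```

With a layout like this, `./gradlew :feature-search:test` runs only that module’s unit tests, which is where the seconds-instead-of-minutes speedup comes from.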

In case you are struggling with a multi-module Android project, I have created an example app showing how to deal with things like:

  • Dependency injection with Dagger 2
  • Unit tests
  • Jacoco test coverage reports

Even though these are very basic things, it can take some time to make them work correctly in a multi-module Android app. To keep working solutions in one place, I have created an example project on my GitHub account. More will come over time (ProGuard, instrumentation testing, instant apps), but even at this stage it is worth sharing.
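The Jacoco part is typically the fiddliest one in a multi-module setup, because each module produces its own execution data. The usual trick is a root-level task that merges the per-module data into a single report. A hedged sketch in Gradle Kotlin DSL follows; the task name, directory paths, and test task names are common defaults and assumptions, not copied from the repository:

```kotlin
// Root build.gradle.kts: aggregate Jacoco coverage across all modules.
// Paths follow typical Android/Gradle conventions and may need adjusting
// to match your module structure and build variants.
tasks.register<JacocoReport>("jacocoFullReport") {
    // Run every subproject's unit tests first so execution data exists.
    subprojects.forEach { dependsOn("${it.path}:testDebugUnitTest") }

    // Collect each module's .exec file produced by the unit test runs.
    executionData.setFrom(
        fileTree(rootDir) { include("**/jacoco/testDebugUnitTest.exec") }
    )
    // Point at the source and compiled-class directories of every module.
    sourceDirectories.setFrom(
        subprojects.map { file("${it.projectDir}/src/main/java") }
    )
    classDirectories.setFrom(
        subprojects.map { fileTree("${it.buildDir}/intermediates/javac/debug") }
    )
    reports {
        html.required.set(true) // human-readable report
        xml.required.set(true)  // for CI tooling
    }
}
```

The important design point is that coverage is merged at the root rather than reported per module, so the report reflects the whole app even though tests run module by module.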

Just take a look: MultiModuleGithubClient. Your feedback is warmly welcomed!

 

Google I/O ’18: AI everywhere

The spring geekfest goes on. About a week ago we had Facebook F8, taking place at San Jose’s McEnery Convention Center. Now we are just past Google I/O 2018, probably the biggest developer and product conference (7,000+ attendees), held near the Googleplex in Mountain View. Here’s my short summary of what we saw at Shoreline Amphitheatre this year.

Continue reading Google I/O ’18: AI everywhere

Facebook F8 2018: Data protection, AI ethics, people-first

F8, Facebook’s annual event for software engineers and entrepreneurs, is over. If you couldn’t be at the McEnery Convention Center in San Jose on May 1st–2nd to get your $200 Oculus Go for free, here are some takeaways from Zuck himself and the Facebook team.

If you quickly compare 2017 and 2018, you will realize that the main theme is a bit different this time. “Keep building services for connecting people” now has a second part: “keep people safe”. And this was the starting point of Mark Zuckerberg’s show.

Continue reading Facebook F8 2018: Data protection, AI ethics, people-first

Why do you code? What is your reason for doing maintenance, tests, CI/CD, and refactoring?

If you watch TED Talks it’s pretty likely that you have seen one of the most viewed presentations: “How great leaders inspire action” by Simon Sinek.

The model proposed by Simon explains where the leadership success comes from. Apple, Wright brothers, Martin Luther King — they all have one thing in common. Something that makes people follow them — their dreams, their vision, their plans.

It is the well-explained “why”.

Continue reading Why do you code? What is your reason for doing maintenance, tests, CI/CD, and refactoring?

How VUIs change our lives: Voice user interfaces, a great step in the mobile-first to AI-first transition

A couple of days ago Google published the 2017 summary of their voice-first solutions: Google Home (hardware) and Google Assistant (software). It seems that a new way of interacting with technology is knocking on our door. With “Google Home usage increased 9X this holiday season over last year’s” and one Google Home Mini sold every second since its premiere, it has become clear that voice interfaces are slowly moving out of the early adoption stage and have begun to settle for good in our homes and minds.

But what is so revolutionary about VUIs, and what are the real benefits of having voice-controlled devices around?

Continue reading How VUIs change our lives: Voice user interfaces, a great step in the mobile-first to AI-first transition

Surface Capabilities in Google Assistant Skills: Adjust your conversation to audio and screen surfaces

This post was published in Chatbots Magazine: Surface Capabilities in Google Assistant Skills.

This post is part of a series about building a personal assistant app, designed for voice as the primary user interface. More posts in the series:

  1. Your first Google Assistant skill
  2. Personalize Google Assistant skill with user data
  3. This post

Continue reading Surface Capabilities in Google Assistant Skills: Adjust your conversation to audio and screen surfaces

Personalize Google Assistant skill with user data: Actions on Google permissions handling

This post was published in Chatbots Magazine: Personalize Google Assistant skill with user data.

This post is part of a series about building a personal assistant app, designed for voice as the primary user interface. More posts in the series:

  1. Your first Google Assistant skill
  2. This post
  3. Surface capabilities in Google Assistant skills

Continue reading Personalize Google Assistant skill with user data: Actions on Google permissions handling

Your first Google Assistant skill: How to build a conversational app for Google Home or Google Assistant

This post was published in Chatbots Magazine: Your first Google Assistant skill.

Smart home speakers, assistant platforms, and cross-device solutions let you talk to your smartwatch and see the result on your TV or your car’s dashboard. Personal assistants and VUIs are slowly appearing around us, and it’s pretty likely that they will make our lives much easier.
Because of my great faith that natural language will be the next human-machine interface, I decided to start a new blog post series and build open source code showing how to create a new kind of app: conversation-oriented, device-independent assistant skills that give us freedom in the platform and hardware we use.
And that will bring the most natural interface for humans: voice.

This post is part of a series about building a personal assistant app, designed for voice as the primary user interface. More posts in the series:

  1. This post
  2. Personalize Google Assistant skill with user data
  3. Surface capabilities in Google Assistant skills

Continue reading Your first Google Assistant skill: How to build a conversational app for Google Home or Google Assistant