Building a mobile app isn’t only about coding. It is the entire process: automation and testing, code architecture, and of course the people behind all of that. I wrote about all of this in my latest blog post, Fail safe, not fast.
Today you can also watch the video of my presentation at Mobiconf 2018.
I talked about our experiences building mobile apps at Azimo. So if you are curious how a relatively small team can effectively build an app for the global market, I invite you to watch this:
I also had a chance to share my insights during this year’s Google DevFest in Coimbra, Portugal. Slides from the updated presentation can be found on my SpeakerDeck.
I hope you’ll enjoy it. 🍿📺
Soon I’ll publish more posts about effective mobile engineering. Stay tuned!
Breaking a monolith into microservices is a well-known way to make backend solutions extensible and maintainable at scale, by bigger teams. As mobile apps have become more complex, often developed by teams of tens of software engineers, this concept is also gaining traction on mobile platforms. There are many benefits to splitting an app into modules/features/libraries:
features can be developed independently
project structure is cleaner
the build process can be much faster (e.g., running unit tests for a single module can take seconds instead of the minutes needed for the entire project)
In case you are struggling with a multi-module Android project, I have created an example app showing how to deal with things like:
Dependency injection with Dagger 2
Jacoco test coverage reports
Even though these are very basic things, it can take some time to make them work correctly in a multi-module Android app. To keep working solutions in one place, I have created an example project on my GitHub account. More will come over time (ProGuard, instrumentation testing, instant apps), but even at this stage it is worth sharing.
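To give a flavour of the Jacoco part, a merged coverage report across modules can be sketched roughly like this in the root build script. This is only an illustration, not the exact setup from the example project: the module names, build paths, and `.exec` file locations below are assumptions and depend on your Android Gradle plugin version.

```groovy
// Root build.gradle — hypothetical sketch of a merged Jacoco coverage
// report for a multi-module project. Paths are illustrative; adjust
// them to your own modules and AGP version.
apply plugin: 'jacoco'

task jacocoFullReport(type: JacocoReport) {
    // Collect sources, compiled classes, and execution data from all modules
    sourceDirectories = files(subprojects.collect { "${it.projectDir}/src/main/java" })
    classDirectories = files(subprojects.collect { "${it.buildDir}/intermediates/classes/debug" })
    executionData = files(subprojects.collect { "${it.buildDir}/jacoco/testDebugUnitTest.exec" })

    reports {
        xml.enabled = true
        html.enabled = true
    }
}
```

Running `./gradlew jacocoFullReport` after the unit tests would then produce a single report covering all modules, instead of one report per module.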
Smart home speakers, assistant platforms, and cross-device solutions let you talk to your smartwatch and see the result on your TV or your car’s dashboard. Personal assistants and VUIs are slowly appearing around us, and it’s pretty likely that they will make our lives much easier. Because of my strong belief that natural language will be the next human-machine interface, I decided to start a new blog post series and build open source code showing how to create a new kind of app: conversation-oriented, device-independent assistant skills that give us freedom in the platform and hardware we use.
And they will bring the most natural interface for humans: voice.
This post is part of a series about building a personal assistant app designed for voice as the primary user interface. More posts in the series:
As technical people, we usually see AI solutions as a bunch of really smart algorithms operating on statistical models and doing nonlinear computations. In general, something extremely abstract, with its roots in programming languages.
But, as the term “neural network” may suggest, many of those solutions are inspired by biology, primarily the biological brain.
Some time ago, DeepMind researchers published the paper Neuroscience-Inspired Artificial Intelligence, in which they highlight AI techniques that directly or indirectly come from neuroscience. I will try to sum it up, but if you would like to read the full version, it can be found under this link:
One of many definitions describes AI as hypothetical intelligence created not by nature but artificially, in an engineering process. One of its goals is to create human-level Artificial General Intelligence. Many people argue about whether such an intelligence is even possible, but there is one thing which proves it is: the human brain.
It seems natural that neuroscience is used as a guide or an inspiration for new types of architectures and algorithms. Biological computation very often works better than mathematical and logic-based methods, especially when it comes to cognitive functions.
Moreover, if current, still far-from-ideal AI techniques can be found at the core of brain functioning, it’s pretty likely that at some point in the future the engineering effort will pay off.
Finally, neuroscience can also be a good validation for existing AI solutions.
Every big tech company today is data-driven. Products are increasingly built based on collected data rather than internal opinions. It’s very likely that at this moment some of the apps on your device are serving you A/B test variants, checking how a new layout, text, or even functionality affects your activity and engagement.
The biggest companies have dedicated Business Intelligence teams, their own data warehouses, custom analytics tools, and big flat screens in conference rooms showing real-time charts.
And most important of all: an endless audience waiting to be analysed through pie charts and bars 📊.
Some time ago I published an unofficial Google Actions SDK written in Java. The source code and documentation can be found on GitHub: Google-Actions-Java-SDK. The library can also be downloaded from Bintray jCenter:
This is my second publication in the field of Artificial Intelligence, prepared as part of my project in AI Nanodegree classes. This time the goal was to write a research paper about important historical developments in the field of AI planning and search. I hope you will like it 🙂.
Planning, or more precisely automated planning and scheduling, is one of the major fields of AI (among others like Machine Learning, Natural Language Processing, and Computer Vision). Planning focuses on the realisation of strategies or action sequences executed by:
Intelligent agents — autonomous entities (software or hardware) able to observe the world through different types of sensors and perform actions based on those observations.
Autonomous robots — physical intelligent agents which deliver goods (factory robots), keep our houses clean (robot vacuum cleaners), or explore other worlds on space missions.
Unmanned vehicles — autonomous cars, drones or robotic spacecrafts.
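At its core, classical planning can be framed as search over states and actions: find a sequence of actions that takes the agent from an initial state to a goal state. As a minimal sketch (a toy, hypothetical domain of my own, not from the paper), here is breadth-first search producing a plan for an agent moving between three rooms:

```java
import java.util.*;

// A minimal sketch of planning as state-space search: breadth-first
// search from an initial state to a goal state, returning the action
// sequence (the plan). The room layout A - B - C is an illustrative toy domain.
public class BfsPlanner {

    // Successor function: for each room, the rooms reachable in one move.
    static final Map<String, List<String>> MOVES = Map.of(
            "A", List.of("B"),
            "B", List.of("A", "C"),
            "C", List.of("B"));

    static List<String> plan(String start, String goal) {
        Deque<String> frontier = new ArrayDeque<>(List.of(start));
        // For each visited state, the shortest action sequence that reaches it.
        Map<String, List<String>> plans = new HashMap<>();
        plans.put(start, new ArrayList<>());
        while (!frontier.isEmpty()) {
            String state = frontier.removeFirst();
            if (state.equals(goal)) return plans.get(state);
            for (String next : MOVES.get(state)) {
                if (!plans.containsKey(next)) {
                    List<String> extended = new ArrayList<>(plans.get(state));
                    extended.add("move " + state + "->" + next);
                    plans.put(next, extended);
                    frontier.addLast(next);
                }
            }
        }
        return null; // no plan exists
    }

    public static void main(String[] args) {
        System.out.println(plan("A", "C")); // [move A->B, move B->C]
    }
}
```

Real planners replace this brute-force search with richer action representations (preconditions and effects) and heuristics, but the underlying idea of searching for an action sequence is the same.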
One of the required skills of an Artificial Intelligence engineer is the ability to understand and explain highly technical research papers in the field. One of my projects as a student in AI Nanodegree classes is an analysis of a seminal paper in the field of game playing. The target of my analysis was Nature’s paper on the technical side of AlphaGo — the Google DeepMind system which, for the first time in history, beat an elite professional Go player, winning 5 games to 0 against the European Go champion, Fan Hui.
The goal of this summary (and my future publications) is to make this knowledge widely understandable, especially for those who are just starting their journey in the field of AI or who don’t have any experience in this area at all.
The original paper — Mastering the game of Go with deep neural networks and tree search: