Breaking a monolith into microservices is a well-known approach to making backend solutions extensible and maintainable at scale, by bigger teams. As mobile apps have become more complex, often developed by teams of tens of software engineers, this concept has been gaining traction on mobile platforms as well. There are many benefits to splitting an app into modules/features/libraries:
- features can be developed independently
- project structure is cleaner
- building process can be way faster (e.g., running unit tests on a module can be a matter of seconds, instead of minutes for the entire project)
- great starting point for instant apps
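To make the split concrete: it is declared in Gradle configuration. A minimal sketch (the module names here are hypothetical, not the ones from the example project) could look like this:

```groovy
// settings.gradle: a hypothetical three-module split
include ':app', ':feature-repositories', ':core-network'

// feature-repositories/build.gradle: features are library modules, not applications
apply plugin: 'com.android.library'
dependencies {
    implementation project(':core-network')   // a feature depends on shared core modules
}
```

Only the `:app` module applies the application plugin; everything else builds as a library, which is what makes independent development and faster incremental builds possible.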
If you are struggling with a multi-module Android project, I have created an example app showing how to deal with things like:
- Dependency injection with Dagger 2
- Unit tests
- JaCoCo test coverage reports
Even though these are very basic things, it can take some time to make them work correctly in a multi-module Android app. To gather working solutions in one place, I have created an example project on my Github account. More will come over time (ProGuard, instrumentation testing, instant apps), but even at this stage it is worth sharing.
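The coverage report is a good example of the extra work a multi-module setup requires: each module produces its own execution data, so a single project-wide number means merging them. A rough sketch of such a merged task (the module names are hypothetical, and the exact DSL and class-file paths vary between Gradle and Android Gradle Plugin versions):

```groovy
// root build.gradle: hypothetical merged JaCoCo report across modules (Gradle 5+ DSL)
apply plugin: 'jacoco'

task jacocoMergedReport(type: JacocoReport) {
    def modules = ['app', 'feature-repositories']
    sourceDirectories.from(modules.collect { "$it/src/main/java" })
    classDirectories.from(modules.collect { fileTree("$it/build/intermediates/javac") })
    executionData.from(fileTree(rootDir) { include '**/*.exec' })
}
```

The working, version-pinned variant lives in the example project itself.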
Just take a look: MultiModuleGithubClient. Your feedback is warmly welcomed!
This post was published in Chatbots Magazine: Your first Google Assistant skill.
Smart home speakers, assistant platforms, and cross-device solutions let you talk to your smartwatch and see the result on your TV or your car’s dashboard. Personal assistants and VUIs are slowly appearing around us, and it’s pretty likely that they will make our lives much easier.
Because of my strong belief that natural language will be the next human-machine interface, I decided to start a new series of blog posts and build open source code showing how to create a new kind of app: conversation-oriented, device-independent assistant skills that give us freedom in the platform and hardware we use.
And they will bring the most natural interface for humans: voice.
This post is part of a series about building a personal assistant app, designed with voice as the primary user interface. More posts in the series:
- This post
- Personalize Google Assistant skill with user data
- Surface capabilities in Google Assistant skills
Continue reading “Your first Google Assistant skill How to build conversational app for Google Home or Google Assistant“
As technical people, we usually see AI solutions as a bunch of really smart algorithms operating on statistical models and doing nonlinear computations: in general, something extremely abstract, with its roots in programming languages.
But, as the term “neural network” may suggest, many of those solutions are inspired by biology, primarily the biological brain.
Some time ago, DeepMind researchers published the paper Neuroscience-Inspired Artificial Intelligence, in which they highlighted AI techniques that directly or indirectly come from neuroscience. I will try to sum it up, but if you would like to read the full version, it can be found under this link:
Roots of AI
One of many definitions describes AI as a hypothetical intelligence created not by nature but artificially, through an engineering process. One of its goals is to create human-level, general artificial intelligence. Many people argue about whether such an intelligence is even possible, but there is one thing that proves it is: the human brain.
It seems natural that neuroscience is used as a guide or an inspiration for new types of architectures and algorithms. Biological computation very often works better than mathematical and logic-based methods, especially when it comes to cognitive functions.
Moreover, if current, still far-from-ideal AI techniques can be found at the core of brain functioning, it’s pretty likely that at some point in the future the engineering effort will pay off.
Finally, neuroscience can also serve as a good validation for existing AI solutions.
In current AI research, there are two key fields that took root in neuroscience: Reinforcement Learning (learning by taking actions in an environment to maximise reward) and Deep Learning (learning from examples, such as a training set that correlates data with labels). Continue reading “Where does AI come from? Summary of “Neuroscience-Inspired Artificial Intelligence”“
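Reinforcement Learning in particular is small enough to show in a few lines. Below is a toy sketch of my own (not from the paper) of tabular Q-learning: an agent in a five-state corridor learns, purely from reward, that moving right leads to the goal.

```java
import java.util.Random;

// Tabular Q-learning in a 5-state corridor: the agent starts at state 0,
// reward +1 is given only for reaching the terminal state 4.
public class CorridorQLearning {
    static final int STATES = 5, LEFT = 0, RIGHT = 1;

    public static double[][] train(int episodes) {
        double[][] q = new double[STATES][2];          // Q-value table, starts at zero
        double alpha = 0.5, gamma = 0.9, epsilon = 0.2;
        Random rnd = new Random(42);
        for (int e = 0; e < episodes; e++) {
            int s = 0;
            while (s != STATES - 1) {                  // episode ends at the goal
                // epsilon-greedy action selection: mostly exploit, sometimes explore
                int a = rnd.nextDouble() < epsilon
                        ? rnd.nextInt(2)
                        : (q[s][RIGHT] >= q[s][LEFT] ? RIGHT : LEFT);
                int next = a == RIGHT ? Math.min(s + 1, STATES - 1) : Math.max(s - 1, 0);
                double reward = next == STATES - 1 ? 1.0 : 0.0;
                // Q-learning update: move the estimate toward reward + discounted future value
                q[s][a] += alpha * (reward + gamma * maxQ(q[next]) - q[s][a]);
                s = next;
            }
        }
        return q;
    }

    static double maxQ(double[] row) { return Math.max(row[0], row[1]); }

    public static void main(String[] args) {
        double[][] q = train(200);
        System.out.println(q[0][RIGHT] > q[0][LEFT]);  // → true: moving right was learned to be better
    }
}
```

Nothing here knows the corridor's layout in advance; the preference for moving right emerges only from the reward signal, which is exactly the learning loop the paper traces back to dopamine-based reward prediction in the brain.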
Every big tech company today is data-driven. Products are more often built based on collected data than on internal opinions. It’s very likely that at this very moment some of the apps on your device are serving you A/B test variants, checking how a new layout, text, or even functionality affects your activity and engagement.
The biggest companies have dedicated Business Intelligence teams, their own data warehouses, custom analytics tools and big flat screens in conference rooms showing real-time charts.
And, most important, an endless audience waiting to be analysed in pie or bar charts 📊.
Continue reading “Basic Android app analytics in <60min One step towards data-driven development“
Some time ago I published an unofficial Google Actions SDK written in Java. The source code and documentation can be found on Github: Google-Actions-Java-SDK. The library can also be downloaded from Bintray jCenter:
The goal of this project is to give Android/Java developers the possibility to build solutions for Google Home without learning a new language (the official Actions SDK is written in Node.js). Continue reading “Hello world Google Home Github Actions — building first agent for Google Assistant in Java“
This is my second publication in the field of Artificial Intelligence, prepared as part of my project in the AI Nanodegree classes. This time the goal was to write a research paper about important historical developments in the field of AI planning and search. I hope you will like it 🙂.
Planning, or more precisely automated planning and scheduling, is one of the major fields of AI (among others like Machine Learning, Natural Language Processing and Computer Vision). Planning focuses on the realisation of strategies or action sequences executed by:
- Intelligent agents — autonomous entities (software or hardware) able to observe the world through different types of sensors and perform actions based on those observations.
- Autonomous robots — physical intelligent agents which deliver goods (factory robots), keep our house clean (intelligent vacuum cleaners) or discover outer worlds in space missions.
- Unmanned vehicles — autonomous cars, drones or robotic spacecrafts.
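At its core, classical planning can be framed as search through a state space for an action sequence that reaches a goal. A toy sketch (the delivery domain, state encoding and all names below are made up for illustration) using plain breadth-first search:

```java
import java.util.*;

// A toy sketch of planning as state-space search: breadth-first search finds
// the shortest action sequence from a start state to a goal state.
// Domain: a robot must deliver a package from location B to location C.
public class DeliveryPlanner {
    // A state is encoded as "robotLocation,packageLocation";
    // package location "R" means it is carried by the robot.
    static final String[] LOCATIONS = {"A", "B", "C"};

    // Applicable actions in a state, as {actionName, resultingState} pairs.
    static List<String[]> successors(String state) {
        String robot = state.split(",")[0], pkg = state.split(",")[1];
        List<String[]> out = new ArrayList<>();
        for (String loc : LOCATIONS)
            if (!loc.equals(robot))
                out.add(new String[]{"move " + robot + "->" + loc, loc + "," + pkg});
        if (pkg.equals(robot))                           // precondition: same location
            out.add(new String[]{"pickup", robot + ",R"});
        if (pkg.equals("R"))                             // precondition: carrying it
            out.add(new String[]{"drop", robot + "," + robot});
        return out;
    }

    public static List<String> plan(String start, String goal) {
        Map<String, String[]> parent = new HashMap<>();  // state -> {prevState, action}
        parent.put(start, new String[0]);                // sentinel: start has no parent
        Deque<String> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {
            String state = queue.poll();
            if (state.equals(goal)) {                    // rebuild the action sequence
                LinkedList<String> actions = new LinkedList<>();
                for (String cur = state; parent.get(cur).length > 0; cur = parent.get(cur)[0])
                    actions.addFirst(parent.get(cur)[1]);
                return actions;
            }
            for (String[] next : successors(state))
                if (parent.putIfAbsent(next[1], new String[]{state, next[0]}) == null)
                    queue.add(next[1]);
        }
        return null;                                     // goal unreachable
    }

    public static void main(String[] args) {
        // Robot starts at A, package at B; goal: package delivered to C.
        System.out.println(plan("A,B", "C,C"));
        // → [move A->B, pickup, move B->C, drop]
    }
}
```

Real planners describe domains declaratively (e.g. in PDDL) and use heuristics instead of blind breadth-first search, but the state/action/goal structure is the same.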
Continue reading “Historical intro to AI planning languages Not only Machine Learning drives our autonomous cars“
One of the required skills of an Artificial Intelligence engineer is the ability to understand and explain highly technical research papers in this field. One of my projects as a student in the AI Nanodegree classes was an analysis of a seminal paper in the field of game playing. The target of my analysis was the Nature paper on the technical side of AlphaGo — the Google DeepMind system which, for the first time in history, beat an elite professional Go player, winning 5 games to 0 against the European Go champion, Fan Hui.
The goal of this summary (and my future publications) is to make this knowledge widely understandable, especially for those who are just starting their journey in the field of AI, or who don’t have any experience in this area at all.
The original paper — Mastering the game of Go with deep neural networks and tree search:
Continue reading “Understanding AlphaGo How AI beat us in Go — game of profound complexity“
Voice interfaces are definitely the future of interaction between people and technology. Even if they won’t replace mobile apps (at least in the coming years), they will certainly extend their possibilities. It means that for many mobile programmers, assistants like Actions on Google or Amazon Alexa will be the next platforms on which to build their solutions. Continue reading “Building Google Actions with Java Move your code from Android to Google Assistant“