Surface Capabilities in Google Assistant Skills
Adjust your conversation to audio and screen surfaces

This post was published in Chatbots Magazine: Surface Capabilities in Google Assistant Skills.

This post is a part of a series about building a personal assistant app, designed with voice as the primary user interface. More posts in the series:

  1. Your first Google Assistant skill
  2. Personalize Google Assistant skill with user data
  3. This post

Continue reading

Personalize Google Assistant skill with user data
Actions on Google — permissions handling

This post was published in Chatbots Magazine: Personalize Google Assistant skill with user data.

This post is a part of a series about building a personal assistant app, designed with voice as the primary user interface. More posts in the series:

  1. Your first Google Assistant skill
  2. This post
  3. Surface capabilities in Google Assistant skills

Continue reading

Your first Google Assistant skill
How to build a conversational app for Google Home or Google Assistant

This post was published in Chatbots Magazine: Your first Google Assistant skill.

Smart home speakers, assistant platforms, and cross-device solutions that let you talk to your smartwatch and see the result on your TV or your car’s dashboard: personal assistants and VUIs are slowly appearing around us, and it’s pretty likely that they will make our lives much easier.
Because of my great faith that natural language will be the next human-machine interface, I decided to start a new series of blog posts, backed by open source code, showing how to create a new kind of app: conversation-oriented, device-independent assistant skills that give us freedom in the platform or hardware we use and bring the most natural interface for humans: voice.

This post is a part of a series about building a personal assistant app, designed with voice as the primary user interface. More posts in the series:

  1. This post
  2. Personalize Google Assistant skill with user data
  3. Surface capabilities in Google Assistant skills

Continue reading

Where does AI come from?
Summary of “Neuroscience-Inspired Artificial Intelligence”

As technical people, we usually see AI solutions as a bunch of really smart algorithms operating on statistical models and doing nonlinear computations: in general, something extremely abstract, with its roots in programming languages.
But, as the term “neural network” may suggest, many of those solutions are inspired by biology, primarily the biological brain.

Some time ago, DeepMind researchers published the paper Neuroscience-Inspired Artificial Intelligence, in which they highlight AI techniques that directly or indirectly come from neuroscience. I will try to sum it up, but if you would like to read the full version, it can be found at this link:

https://deepmind.com/documents/113/Neuron.pdf

Roots of AI

One of many definitions describes AI as hypothetical intelligence created not by nature but artificially, in an engineering process. One of its goals is to create human-level Artificial General Intelligence. Many people argue about whether such an intelligence is even possible, but there is one thing that proves it is: the human brain.

It seems natural that neuroscience is used as a guide or an inspiration for new types of architectures and algorithms. Biological computation very often works better than mathematical and logic-based methods, especially when it comes to cognitive functions.
Moreover, if current, still far-from-ideal AI techniques can be found at the core of brain functioning, it’s pretty likely that the engineering effort will pay off at some point in the future.
Finally, neuroscience can also be a good validation for existing AI solutions.

In current AI research, there are two key fields that took root in neuroscience: Reinforcement Learning (learning by taking actions in an environment to maximise reward) and Deep Learning (learning from examples, such as a training set which correlates data with labels).

Continue reading
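The two learning paradigms mentioned above can be contrasted in a few lines of toy Python (my own illustrative sketch, not code from the paper): a tiny epsilon-greedy bandit agent that learns only from reward signals, next to a 1-nearest-neighbour classifier that learns from labelled examples.

```python
import random

# Reinforcement Learning in miniature: an epsilon-greedy agent learns
# which slot-machine arm pays more, purely from noisy reward signals.
def bandit(true_means, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)  # estimated value of each arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))            # explore
        else:
            arm = max(range(len(true_means)), key=values.__getitem__)  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return values

# Supervised learning in miniature: a 1-nearest-neighbour classifier
# "learns" by memorising (data, label) pairs from a training set.
def nearest_neighbour(train, x):
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

train = [(0.1, "low"), (0.2, "low"), (0.9, "high"), (1.0, "high")]
```

After enough steps, the bandit's value estimates converge toward the true arm means, while the classifier simply looks up the closest memorised example: reward-driven trial and error versus learning from labels.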

Basic Android app analytics in <60min
One step towards data-driven development

Every big tech company today is data-driven. Products are more often built based on collected data rather than internal opinions. It’s very likely that at this very moment some of the apps on your device are serving you A/B test variants, checking how a new layout, text, or even functionality affects your activity and engagement.
The biggest companies have dedicated Business Intelligence teams, their own data warehouses, custom analytics tools, and big flat screens in conference rooms showing real-time charts.
And, most importantly, an endless audience waiting to be analysed with pie charts and bar charts 📊.
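As a taste of the mechanics behind such A/B tests, here is a minimal sketch (my own illustration, not from the post) of deterministic variant bucketing: hashing a stable user id keeps each user in the same variant across sessions, with no server-side state. The function name and experiment key are hypothetical.

```python
import hashlib

def ab_variant(user_id, experiment, variants=("A", "B")):
    """Assign a user to an experiment variant deterministically.

    The same (experiment, user_id) pair always hashes to the same
    variant, so a user never flips between layouts mid-experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

In a real analytics pipeline the chosen variant would be logged as an event property, so activity and engagement can later be compared per variant.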

Continue reading

Hello world Google Home
Github Actions — building first agent for Google Assistant in Java

Some time ago I published an unofficial Google Actions SDK written in Java. The source code and documentation can be found on Github: Google-Actions-Java-SDK. The library can also be downloaded from Bintray jCenter:

The goal of this project is to give Android/Java developers the possibility to build solutions for Google Home without learning a new language (the official Actions SDK is written in Node.js).

Continue reading

Historical intro to AI planning languages
Not only Machine Learning drives our autonomous cars

This is my second publication in the field of Artificial Intelligence, prepared as part of my project work in the AI Nanodegree classes. This time the goal was to write a research paper about important historical developments in the field of AI planning and search. I hope you will like it 🙂.

Planning, or more precisely automated planning and scheduling, is one of the major fields of AI (among others like Machine Learning, Natural Language Processing, and Computer Vision). Planning focuses on the realisation of strategies or action sequences executed by:

  • Intelligent agents — autonomous entities (software or hardware) able to observe the world through different types of sensors and perform actions based on those observations.
  • Autonomous robots — physical intelligent agents which deliver goods (factory robots), keep our house clean (intelligent vacuum cleaners) or discover outer worlds in space missions.
  • Unmanned vehicles — autonomous cars, drones or robotic spacecrafts.
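The planning idea described above can be sketched in a few lines of Python (my own toy illustration, not taken from the paper): a STRIPS-style forward planner where states are sets of facts, actions have preconditions plus add and delete lists, and breadth-first search finds the shortest action sequence reaching the goal. The vacuum-cleaner domain below is a hypothetical example.

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first forward search over STRIPS-style states."""
    start = frozenset(start)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # every goal fact holds in this state
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:       # action applicable: preconditions hold
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                    # goal unreachable

# Toy domain: a vacuum cleaner in two rooms, both dirty.
# Each action = (name, preconditions, add list, delete list).
ACTIONS = [
    ("left",   frozenset({"at-B"}),            frozenset({"at-A"}),    frozenset({"at-B"})),
    ("right",  frozenset({"at-A"}),            frozenset({"at-B"}),    frozenset({"at-A"})),
    ("suck-A", frozenset({"at-A", "dirty-A"}), frozenset({"clean-A"}), frozenset({"dirty-A"})),
    ("suck-B", frozenset({"at-B", "dirty-B"}), frozenset({"clean-B"}), frozenset({"dirty-B"})),
]
```

Calling `plan({"at-A", "dirty-A", "dirty-B"}, frozenset({"clean-A", "clean-B"}), ACTIONS)` yields the shortest cleaning sequence; dedicated planning languages like STRIPS and PDDL express exactly this action/precondition/effect structure declaratively.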

Continue reading

Understanding AlphaGo
How AI beat us in Go — game of profound complexity

One of the required skills of an Artificial Intelligence engineer is the ability to understand and explain highly technical research papers in the field. One of my projects as a student in the AI Nanodegree classes is an analysis of a seminal paper in the field of Game-Playing. The target of my analysis was Nature’s paper about the technical side of AlphaGo — the Google DeepMind system which, for the first time in history, beat an elite professional Go player, winning 5 games to 0 against the European Go champion, Fan Hui.

The goal of this summary (and my future publications) is to make this knowledge widely understandable, especially for those who are just starting their journey in the field of AI or those who don’t have any experience in this area at all.

The original paper — Mastering the game of Go with deep neural networks and tree search:

http://www.nature.com/nature/journal/v529/n7587/full/nature16961.htm

Continue reading

Building Google Actions with Java
Move your code from Android to Google Assistant

Voice interfaces are definitely the future of interaction between people and technology. Even if they won’t replace mobile apps (at least in the coming years), they will surely extend their possibilities. It means that for many mobile programmers, assistants like Actions on Google or Amazon Alexa will be the next platforms to build their solutions on.

Continue reading

FrameMetrics — realtime app smoothness tracking

A couple of months ago, when Android Nougat was announced, among new features like Multi-window, enhanced notifications, VR support, 72 new emojis 👍, and others, there was a new addition to the monitoring tools: the Frame Metrics API.

Continue reading