Personalize Google Assistant skill with user data

Actions on Google — permissions handling

This post was published in Chatbots Magazine: Personalize Google Assistant skill with user data.

This post is part of a series about building a personal assistant app designed for voice as the primary user interface. More posts in the series:

  1. Your first Google Assistant skill
  2. This post
  3. Surface capabilities in Google Assistant skills

In the first post we built the very first version of WaterLog, a Google Assistant skill which lets us track daily water intake by voice or by text written in natural language.

https://assistant.google.com/services/a/id/12872514ba525cc6/

Personalising experience with user data

Today we’ll make use of permissions and basic user information to make our app‘s experience a bit more personal. It’ll also fix one of the biggest limitations of our app: it is not aware of the user’s timezone.
The WaterLog app lets the user sum up the amount of water drunk during the day. But until now, “the day” was calculated according to the Firebase Cloud Function’s local time. Every time we load data from the database, the day starts at 0:00 UTC. That means that for users in New York the day starts at 20:00, in London at 0:00, in Warsaw at 1:00, and so on.
The Google Assistant API doesn’t give us information about the user’s local time, so we need to calculate it on our own. Fortunately, we can ask the Assistant for the user’s device location and use it to find the local timezone.
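To see the problem concretely, here is a small illustration (plain Node.js, nothing project-specific) of how the same instant, 0:00 UTC, maps to different local wall-clock times:

```javascript
// Illustration only: the same instant (0:00 UTC) shown in different timezones.
const midnightUtc = new Date(Date.UTC(2017, 11, 1, 0, 0)); // 1 Dec 2017, 0:00 UTC

function localTime(timeZone) {
  return new Intl.DateTimeFormat('en-GB', {
    timeZone, hour: '2-digit', minute: '2-digit', hourCycle: 'h23'
  }).format(midnightUtc);
}

console.log(localTime('America/New_York')); // 19:00 (20:00 during daylight saving time)
console.log(localTime('Europe/London'));    // 00:00
console.log(localTime('Europe/Warsaw'));    // 01:00
```

So a user in Warsaw who logs a glass of water at 0:30 local time would have it counted towards the previous day, unless we store and apply the user’s timezone.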

“Update settings” scenario

Let’s start by adding a new scenario to our app. Here are example conversations:

Happy path

WaterLog: 
User: Update settings
WaterLog: Sure, to customise settings, I’ll just need to get your name from Google. Is that ok?
User: Yes
WaterLog: Ok Mirosław. My clock shows me 10:24 am. If that’s not your time, to adjust my settings to your timezone, I’ll just need to get your street address from Google. Is that ok?
User: Sure
WaterLog: Ok, now it’s all set!
//End of conversation

User denies permission

WaterLog: 
User: Change settings
WaterLog: Sure, to customise settings, I’ll just need to get your name from Google. Is that ok?
User: No
WaterLog: Sure thing! You can ask me to personalise your experience whenever you want.
//End of conversation

As you can see, our app now has the ability to ask for the user’s name (which will be used to make our conversations a bit more personal) and the user’s device location (so we’ll be able to find the local timezone).
Here you can find the official documentation about user information.

Dialogflow Agent

Let’s extend our Dialogflow agent with two new intents:

update_settings

This intent is fired when the user asks to update app settings.

— Config —
Action name: update_settings
User says:

Fulfillment: ✅ Use webhook


user_data

This intent is fired under the hood by Google Assistant every time the user responds to a permission request (accepts or denies it). Based on our scenarios, here are the places in our app where this happens:

WaterLog: Sure, to customise settings, I’ll just need to get your name from Google. Is that ok?
User: Yes ← here
WaterLog: Ok Mirosław. My clock shows me 10:24 am. If that’s not your time, to adjust my settings to your timezone, I’ll just need to get your street address from Google. Is that ok?
User: Sure ← here

To handle it properly, we need to configure this intent a bit differently:

— Config —
Action name: user_data
User says: leave it empty
Events: actions_intent_PERMISSION (one of the special built-in events which can be fired by the Actions on Google platform; the full list can be found in the documentation).
Fulfillment: ✅ Use webhook


That’s it. Our Dialogflow agent is now able to handle our new scenarios for customising app settings. If you would like to see the full config, you can download it from the repository (WaterLog.zip file, tag: v0.1.1) and import it into your agent.

The code

Now let’s build the backend implementation. We’ll add handlers for the newly defined intents in assistant-actions.js and index.js:

The logic for conversation.actionUpdateSettings() is pretty straightforward:

We use Dialogflow helpers to ask the user for permissions. The askForPermission(context, permission) method generates a permission request which looks like this: “{your context}, I’ll just need to get your name from Google. Is that ok?”.
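As a rough sketch of what that handler can look like (the stub below stands in for the real actions-on-google DialogflowApp object, and the names are illustrative rather than copied from the repo):

```javascript
// Illustrative sketch; `app` stubs the actions-on-google DialogflowApp surface.
const SupportedPermissions = {
  NAME: 'NAME',
  DEVICE_PRECISE_LOCATION: 'DEVICE_PRECISE_LOCATION'
};

function actionUpdateSettings(app) {
  // Generates: "Sure, to customise settings, I'll just need to get your name
  // from Google. Is that ok?"
  app.askForPermission('Sure, to customise settings', SupportedPermissions.NAME);
}

// Minimal stub showing the call shape:
const app = {
  requested: null,
  askForPermission(context, permission) {
    this.requested = { context, permission };
  }
};
actionUpdateSettings(app);
console.log(app.requested.permission); // NAME
```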

When the user replies to this question (“yes”, “no”, “sure”, “of course”, “no way”, etc.), our Firebase Cloud Function will call conversation.actionUserData():

The logic should be self-explanatory. What is important here is to remember that this method is called when:

  • The user denies the permission request (dialogflowApp.isPermissionGranted() returns false)
  • The user grants the permission. When we ask for the name and the device location in the same conversation session, the second request can contain data from the first one (so: user name + device location).
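Under those assumptions, actionUserData can be sketched roughly like this. The stub simulates the second, fully granted permission round; all names are illustrative, not copied from the repo:

```javascript
// Illustrative sketch of actionUserData; `app` stubs the DialogflowApp surface.
function actionUserData(app, userStore) {
  if (!app.isPermissionGranted()) {
    app.tell('Sure thing! You can ask me to personalise your experience whenever you want.');
    return;
  }
  const name = app.getUserName();           // available once NAME was granted
  if (name) userStore.givenName = name.givenName;

  const location = app.getDeviceLocation(); // available once location was granted
  if (!location) {
    // First grant carried only the name; now ask for the device location.
    app.askForPermission(
      `Ok ${userStore.givenName}. To adjust my settings to your timezone`,
      'DEVICE_PRECISE_LOCATION'
    );
    return;
  }
  userStore.coordinates = location.coordinates;
  app.tell("Ok, now it's all set!");
}

// Stub simulating the second request, with both permissions granted:
const store = {};
const app = {
  isPermissionGranted: () => true,
  getUserName: () => ({ givenName: 'Mirosław' }),
  getDeviceLocation: () => ({ coordinates: { latitude: 52.23, longitude: 21.01 } }),
  tell(message) { this.finalMessage = message; }
};
actionUserData(app, store);
console.log(store.givenName, '/', app.finalMessage);
```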

Side note about permissions

If you checked the SupportedPermissions documentation, you have probably noticed the DEVICE_COARSE_LOCATION permission. So why do we ask for the exact device location when all we want is a timezone? Two reasons:

  • There are libraries which help to translate exact coordinates into a timezone (moment-timezone). Coarse location returns just a city and a zip code (which, with some additional work using a geocoding API, should give similar results).
  • Unfortunately, at the time of development there was no way to get any information from the Google Assistant API for DEVICE_COARSE_LOCATION. Probably a bug, possibly my oversight.

More app logic

While that is pretty much it when it comes to the Actions on Google SDK, there were also some other things to code:

The TimeManager class helps with time-related operations. Here our app converts coordinates into a timezone and saves it to the Firebase Realtime Database. It also returns platform time and user-local time, and calculates the local time for the start of the day.
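Once the timezone name is known, the “start of the day” logic can be sketched with nothing but the Node standard library. This is an illustrative sketch, not the repo’s TimeManager; the idea is to key saved drink entries by the date in the user’s timezone, so they roll over at the user’s local midnight instead of 0:00 UTC:

```javascript
// Sketch of start-of-day logic, assuming the timezone name is already stored.
class TimeManager {
  constructor(timeZone) {
    this.timeZone = timeZone;
  }

  // "YYYY-MM-DD" of the current day in the user's timezone. Entries saved
  // under this key naturally belong to the user's local day.
  currentDayKey(now = new Date()) {
    // The en-CA locale formats dates as YYYY-MM-DD.
    return new Intl.DateTimeFormat('en-CA', { timeZone: this.timeZone }).format(now);
  }
}

const instant = new Date(Date.UTC(2017, 11, 2, 2, 0)); // 2 Dec 2017, 2:00 UTC
console.log(new TimeManager('Europe/Warsaw').currentDayKey(instant));    // 2017-12-02
console.log(new TimeManager('America/New_York').currentDayKey(instant)); // 2017-12-01
```

The same instant falls on two different “days” depending on the user’s timezone, which is exactly the behaviour the original app was missing.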

Now that we have the user’s given name, we can also use it to personalize our experience a bit:
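For instance, a response formatter can fall back gracefully when the name was never shared (hypothetical helper, not taken from the repo):

```javascript
// Hypothetical helper: prefix responses with the user's given name when we have it.
function personalise(message, givenName) {
  return givenName ? `${givenName}, ${message}` : message;
}

console.log(personalise('you drank 500ml of water today.', 'Mirosław'));
// Mirosław, you drank 500ml of water today.
console.log(personalise('you drank 500ml of water today.'));
// you drank 500ml of water today.
```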

Unit tests

Did I mention that you should always write unit tests? They make your work a million times faster, especially in long-term coding.

$ npm test:

Source code

Full source code of WaterLog app with:

  • Firebase Cloud Functions
  • Dialogflow agent configuration
  • Assets required for app distribution

can be found on Github:

https://github.com/frogermcs/WaterLog-assistant-app/

Code described in this post can be found under release/tag v0.1.1.

Thanks for reading! 😊

Author: Mirek Stanek

Head of Mobile Development at Azimo. Artificial Intelligence adept 🤖. I dream big and do the code ✨.
