====== API.AI Tutorial ======
\\
\\ \\
{{ youtube> }}
\\ \\
===== Overview =====
API.AI is a cloud-based interface that allows developers to create conversational dialogs with their users and to bring Natural Language Understanding (NLU) and Machine Learning (ML) into their applications. The API.AI framework is based on a web GUI that lets developers create custom Intents, Entities, Actions, and Integrations for their Agents. API.AI integrates with many platforms such as Facebook Messenger, Amazon Alexa, Google Home, and even Microsoft Cortana. It is a very powerful tool for building interactive chat bots and agents that make your app's user experience more seamless.

Agents are the individual conversation or command packages, each with its own set of intents, entities, and actions. An example of an agent is a chatbot for finding new recipes for common foods. The Agent is really the assistant as a whole, and API.AI allows you to manage multiple agents through its web GUI. For more info on Agents, see the API.AI documentation at [[https:// ]].

Intents in API.AI are meanings that are mapped to a user's speech or text input. If a user asked "Ok Google, what's the weather like?", the agent would match that phrase to a weather intent and extract any parameters it needs from the wording.

Entities are data fields that are to be filled by user input. Let's take the example of an AI assistant that finds clothes based on the parameters that the user sets. The entities for an app like this would be things like clothing_type, color, size, and price.

Finally, Actions are what the agent executes when an intent is triggered. This can be done with webhooks, with SDK integration in your own app, or with the agent's built-in speech responses.

===== Downloads =====
To implement API.AI in an Android app, we simply need to download the [[https:// |API.AI Android SDK]].

===== Creating an API.AI Agent =====

First, you will need to register for an API.AI account. This is best done with an existing Google developer account. You can do this on the [[https:// |API.AI website]].

==== Agent ====

Once you have created your account, create a new agent using the top-left tab of the web GUI and fill in the details as you please. Now we have an agent to work with. For this tutorial, we will use the example of an agent that sets the wake-up time for a given date. This could later be integrated with things such as the Nest API and Hue API to control lighting and temperature in wake-up scenarios. Here is a picture of what your Agent should end up looking like after you create it:
\\ \\
{{ dylanw: }}
\\

As you can see, API.AI has generated both a client and a developer token for use when developing your own API.AI apps. We will use these later when integrating the SDK on Android.

==== Entities ====

The first step in customizing your agent is adding entities to store any data that you may gain from the users' requests. For our purposes, we will mostly rely on the built-in system entities. There are many of these for common data types such as location, date, time, etc. Since all we need is a date and a time, we don't need to create any custom entities. However, for tutorial purposes we can add a custom entity for the mood we want to be woken up to. Here is a picture of what that looks like:
\\ \\
{{ dylanw: }}
\\

As you can see, we can define multiple moods that the user may be in, and even synonyms for those moods. We will provide some of the basic ones that we can think of, and then check "Allow Automated Expansion" so that API.AI can also match mood values we did not explicitly list.

==== Intents ====

The next step is creating intents for your users' commands. We will name our intent set_wakeup. To start, enter some phrases that you would imagine a user saying to trigger this intent. Try to vary your commands, and include examples that leave out some parameters as well. Here is what that might end up looking like:
\\ \\
{{ dylanw: }}
\\

As you can see, API.AI has automatically mapped some of the words in the commands to the system entities that we will use. This happens automatically here, but you can also do it manually for your own entities by highlighting the word that you want to fill an entity and selecting the appropriate entity from the drop-down menu.
\\ \\
{{ dylanw: }}
\\

Below our User Says section, we have the action that will be performed. We will define the name for this action as set_wakeup (this does not need to be the same as our intent, but it can be). Since we had parameters auto-highlighted, the date and time parameters already appear in the action's parameter table, where we can mark which ones are required or optional.
\\ \\
{{ dylanw: }}
\\

Finally, we need to define the speech response for our agent. We can insert parameters into the response by using the $parameter format, and we can also add multiple responses that are chosen depending on which parameters are filled. For example, a response like "Ok, I will wake you up at $time on $date." only makes sense when both parameters are present. As you can see, we want to vary our responses to cover a wide combination of optional parameters.

Note that there is a section at the top of the intent called Contexts. This is where we can build more advanced conversations by defining input and output contexts for intents. For this tutorial's purposes, we will not use contexts.

Once we have fully defined our intent, we are ready to move on to the Android SDK.

===== Setting up Android Studio Project =====

For this tutorial we will use the sample app to interface with our custom agent. First, we need to set up the SDK in Android Studio. To do this, follow the same steps outlined in [[nest_tutorial# |the Nest tutorial]].

Once you have imported the project, do a clean build to ensure no errors occur. If any do occur, fix them by installing the necessary libraries/packages.
\\ \\
{{ dylanw: }}
\\

Finally, in order to connect the agent we created to the SDK, we need to provide the Client Access Token, which can be found under the Agent Settings. Copy this token and add it to the ACCESS_TOKEN field of the Config.java file. Save your changes and rebuild the project. If there are no errors, we are ready to deploy.
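
After pasting the token, the relevant part of Config.java should look roughly like the minimal sketch below. The exact file contents depend on the version of the sample app, so treat the package name and anything other than the ACCESS_TOKEN field as assumptions:

<code java>
package ai.api.sample;  // the package name may differ in your copy of the sample app

public class Config {
    // Client Access Token copied from the agent's settings page in the API.AI console.
    // Replace the placeholder with your own token before building.
    public static final String ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN";
}
</code>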

===== Deployment =====

For deployment, we will follow the same steps as outlined in [[nest_tutorial# |the Nest tutorial]].

===== Testing =====

In order to test, simply use the Button Sample in the app. This will capture your speech, send it through Google's speech recognition, and pass the recognized text to our API.AI agent, which should come back with the action, parameters, and one of the speech responses we defined for the set_wakeup intent.
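
If you would rather trigger the agent from your own activity instead of the sample's button, the flow looks roughly like the sketch below. This is only a sketch: the class, package, and method names follow the api.ai Android SDK bundled with the sample app and should be checked against the version you downloaded, and WakeupActivity is a made-up name for illustration.

<code java>
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

import ai.api.AIListener;
import ai.api.android.AIConfiguration;
import ai.api.android.AIService;
import ai.api.model.AIError;
import ai.api.model.AIResponse;
import ai.api.model.Result;

public class WakeupActivity extends Activity implements AIListener {

    private AIService aiService;
    private TextView resultTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        resultTextView = new TextView(this);
        setContentView(resultTextView);

        // The Client Access Token from Config.java identifies our set_wakeup agent.
        AIConfiguration config = new AIConfiguration(Config.ACCESS_TOKEN,
                AIConfiguration.SupportedLanguages.English,
                AIConfiguration.RecognitionEngine.System);

        aiService = AIService.getService(this, config);
        aiService.setListener(this);

        // In the sample app this is triggered by the button; here we start right away.
        aiService.startListening();
    }

    @Override
    public void onResult(AIResponse response) {
        // The agent returns the matched action, any filled parameters
        // (date, time, mood), and the speech response defined in the web GUI.
        Result result = response.getResult();
        resultTextView.setText("Action: " + result.getAction()
                + "\nParameters: " + result.getParameters()
                + "\nSpeech: " + result.getFulfillment().getSpeech());
    }

    @Override
    public void onError(AIError error) {
        resultTextView.setText(error.toString());
    }

    // Remaining AIListener callbacks are not needed for this example.
    @Override public void onAudioLevel(float level) {}
    @Override public void onListeningStarted() {}
    @Override public void onListeningCanceled() {}
    @Override public void onListeningFinished() {}
}
</code>

The sample app already declares the INTERNET and RECORD_AUDIO permissions that speech input requires; if you adapt this sketch in a fresh project, remember to add them to your manifest.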

For questions, clarifications, etc., contact dwallace.