API.AI is a cloud-based interface that allows developers to create dialogs with their users and to bring Natural Language Understanding (NLU) and Machine Learning (ML) into their applications. The API.AI framework is built around a web GUI in which developers create custom Intents, Entities, Actions, and Integrations for their Agents. API.AI integrates with many platforms, such as Facebook Messenger, Amazon Alexa, Google Home, and even Microsoft Cortana. It is a very powerful tool for building interactive chatbots and agents that make the user experience even more seamless for your app.
  
Agents are the individual conversation or command packages, each with its own set of intents, entities, and actions. An example of an agent is a chatbot that finds new recipes for common foods. The Agent is really the assistant as a whole, and API.AI lets you manage multiple agents through its web GUI. For more info on Agents, visit the [[https://docs.api.ai/docs/concept-agents|API.AI Documentation]].
  
Intents in API.AI are meanings that are mapped to a user's speech or text input. If a user asks "Ok Google, what's the weather like?", the corresponding intent would be get_weather_info or something along those lines. Intents are simply a way for developers to translate user input (speech or text) into actionable data. For more info on Intents, visit the [[https://docs.api.ai/docs/concept-intents|API.AI Documentation]].
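To make the intent mapping concrete, here is a minimal sketch that sends a text query to an agent and reads back the matched intent name. It assumes the API.AI Java SDK; class and method names such as AIDataService and getIntentName() are from the SDK as we recall it, so verify them against the version you download, and the token string is a placeholder:

<code java>
import ai.api.AIConfiguration;
import ai.api.AIDataService;
import ai.api.AIServiceException;
import ai.api.model.AIRequest;
import ai.api.model.AIResponse;

public class IntentDemo {
    public static void main(String[] args) throws AIServiceException {
        // Placeholder: use the Client Access Token from your Agent Settings.
        AIConfiguration config = new AIConfiguration("YOUR_CLIENT_ACCESS_TOKEN");
        AIDataService dataService = new AIDataService(config);

        // Send the raw user utterance as a text query.
        AIRequest request = new AIRequest("what's the weather like?");
        AIResponse response = dataService.request(request);

        // The agent maps the utterance to an intent, e.g. get_weather_info.
        System.out.println("Matched intent: "
                + response.getResult().getMetadata().getIntentName());
    }
}
</code>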
  
Entities are data fields that are to be filled by user input. Take the example of an AI assistant that finds clothes based on parameters the user sets. The entities for an app like this would be things like clothing_type, color, and size. When a user says "Ok Google, find me red shirts in large", API.AI maps the value "red" to color, "shirt" to clothing_type, and "large" to size. This process is called slot-filling: it fills the entities needed for an action with the necessary data. For more info on Entities, visit the [[https://docs.api.ai/docs/concept-entities|API.AI Documentation]].
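Continuing the clothing example, the filled slots come back as parameters on the response's Result object. The snippet below is a sketch under the same SDK assumptions as above; the clothing_type, color, and size names belong to the hypothetical example agent, not to the SDK:

<code java>
import ai.api.model.AIResponse;
import ai.api.model.Result;

public class SlotFillingDemo {
    // Pull the slot-filled entity values out of a completed response.
    static void printSlots(AIResponse response) {
        Result result = response.getResult();

        // Each parameter name matches an entity defined on the example agent.
        String clothingType = result.getStringParameter("clothing_type"); // e.g. "shirt"
        String color        = result.getStringParameter("color");         // e.g. "red"
        String size         = result.getStringParameter("size");          // e.g. "large"

        System.out.println("Searching for " + color + " " + clothingType + "s in " + size);
    }
}
</code>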
  
Finally, Actions are what the agent executes when an intent is triggered. This can be done with webhooks, SDK integration, a speech response, or a combination of the three, which gives the developer a lot of flexibility when creating dialogs and actions with their users. Actions have parameters, which are the entities used by an action. You can choose to require certain parameters for an action, and the agent will prompt the user to fill any that are missing, which ensures that everything goes smoothly during conversation. For more info on Actions and parameters, visit the [[https://docs.api.ai/docs/concept-actions|API.AI Documentation]].
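On the client side, the triggered action arrives alongside the agent's speech reply. A short sketch, again assuming the API.AI Java SDK accessors recalled above (getAction() and getFulfillment().getSpeech()); the action name shown is hypothetical:

<code java>
import ai.api.model.AIResponse;
import ai.api.model.Result;

public class ActionDemo {
    // React to whichever action the matched intent triggered.
    static void handle(AIResponse response) {
        Result result = response.getResult();

        // The action string is whatever you named the action in the web GUI,
        // e.g. a hypothetical "find_clothes".
        String action = result.getAction();

        // The agent's speech response, ready to speak or display.
        String speech = result.getFulfillment().getSpeech();

        System.out.println("Action: " + action);
        System.out.println("Agent says: " + speech);
    }
}
</code>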
  
===== Downloads =====
  
Once you have imported the project, do a clean build to ensure no errors occur. If they do, fix them by installing the necessary libraries/platforms for the SDK.
\\ \\
{{ dylanw:apiaitoken.jpg?800 }}
\\

Finally, in order to interface the agent we created with the SDK, we need to provide the Client Access Token, which can be found under the Agent Settings. Copy this token and add it to the ACCESS_TOKEN field of the Config.java file. Save your changes and rebuild the project. If there are no errors, you should be ready to deploy.
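For reference, the edited field in Config.java should look something like the sketch below (the exact layout of the sample project's Config class may differ slightly; the token value is a placeholder, not a real credential):

<code java>
public class Config {
    // Client Access Token copied from the Agent Settings page in the API.AI web GUI.
    // Placeholder value; paste your own token here.
    public static final String ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN";
}
</code>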
===== Deployment =====

For deployment, we will follow the same steps as outlined in the [[nest_tutorial#deployment|Nest Control Tutorial]]. Make sure that both Developer Mode and USB Debugging are enabled so the app can deploy. Install the app and open it up for testing.
===== Testing =====

To test, simply use the Button Sample in the app. This takes in your speech, sends it to Google's NLU service, and then on to API.AI for processing through your intents. The agent then returns the speech response to the phone and also displays the returned JSON text from the response. This JSON text lets a custom app act on intents, which could allow us to integrate things such as the Nest API and the Hue API to develop a Smart Home Integration app.
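If you want your own code to act on that JSON rather than just display it, one approach is to serialize the SDK's response object and branch on the triggered action. This is a sketch under the same SDK assumptions as the earlier snippets; Gson is assumed to be on the classpath, and the set_thermostat and set_lights action names are hypothetical stand-ins for whatever your agent defines:

<code java>
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

import ai.api.model.AIResponse;

public class ResponseDemo {
    // Pretty-print the full response as JSON, similar to what the sample app displays.
    static String toJson(AIResponse response) {
        Gson gson = new GsonBuilder().setPrettyPrinting().create();
        return gson.toJson(response);
    }

    // Route the response to a device API based on the triggered action.
    static void route(AIResponse response) {
        String action = response.getResult().getAction();
        switch (action) {
            case "set_thermostat": /* call into the Nest API here */ break;
            case "set_lights":     /* call into the Hue API here  */ break;
            default: System.out.println("Unhandled action: " + action);
        }
    }
}
</code>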
  
For questions, clarifications, etc., email: <wallad3@unlv.nevada.edu>