using_alexa_skills_kit
<fs large>Interaction Model</fs>

In the Alexa architecture, an interaction model is the link between the user talking to the Alexa-enabled device (for example, an Amazon Echo) and the Lambda function we set up earlier. In a way, it is the user interface definition for our piece of voice-driven software. The model consists of an Intent Schema that defines the set of Intents used by the skill; intent parameters, known as Slots; and Utterances, phrases that the user will say to execute one of the intents defined in the Intent Schema.

For our "Hello World" example, we want to be able to ask Alexa to say "Hello World" to the user when requested. For this we use intents.

//If you are following the "Hello World" tutorial, we use one intent consisting of no slots:

  * Hello: say "Hello World" to the user//

Additionally, we have Alexa listen for a few standard Intents that Custom Skills should implement:

  * AMAZON.HelpIntent: triggered when the user asks for help
  * AMAZON.StopIntent: triggered when the user asks to stop
  * AMAZON.CancelIntent: triggered when the user asks to cancel the interaction
Our completed Intent Schema is a JSON document that lists the custom Hello intent alongside the three built-in intents:

<code JSON>
{
  "intents": [
    {
      "intent": "Hello"
    },
    {
      "intent": "AMAZON.HelpIntent"
    },
    {
      "intent": "AMAZON.StopIntent"
    },
    {
      "intent": "AMAZON.CancelIntent"
    }
  ]
}
</code>
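The "Hello" example needs no slots, but the Intent Schema is also where slot arguments would be declared. As a hypothetical illustration (the GetWeather intent and its City slot are invented for this sketch; AMAZON.US_CITY is one of Amazon's built-in slot types), an intent with a slot might look like:

<code JSON>
{
  "intents": [
    {
      "intent": "GetWeather",
      "slots": [
        {
          "name": "City",
          "type": "AMAZON.US_CITY"
        }
      ]
    }
  ]
}
</code>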
+ | |||
<fs large>Sample Utterances</fs>

"Sample Utterances" are where we define the phrases that Alexa needs to hear in order to invoke each of our Custom Skill's Intents. This is done using a simple text list of the format:

<code>
<Intent Name> <sample utterance>
</code>

The sample utterance can optionally contain references to Slots (arguments).
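As a hypothetical illustration (this GetWeather intent and its City slot are not part of the tutorial), a slot is referenced in an utterance with curly braces:

<code>
GetWeather what is the weather in {City}
</code>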
+ | |||
For the "Hello World" skill, we map three phrases to the Hello intent:

<code>
Hello say hello
Hello speak hello
Hello hello
</code>

Note also that we don't need to provide sample utterances for the stop, cancel, and help intents. Alexa understands appropriate phrases to invoke each of these out of the box.

Once the interaction model is complete, click "Next" and continue on to the Configuration tab to link your skill with your function.
+ | |||
<fs large>Configuration</fs>

{{:

If you are running your code with AWS Lambda, choose "AWS Lambda ARN"; otherwise choose HTTPS.

Depending on the region you are in, choose either North America or Europe and insert the Amazon Resource Name (ARN) listed in the top right corner of your Lambda function in your AWS console.

{{:

Also make sure to change the application ID in the Lambda function that handles the event to the ID listed under your skill name:

{{:

Click "Next" to save the configuration and move on to the Test tab.
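The application ID check mentioned above can be sketched as follows. This is a minimal illustration, assuming the standard Alexa request envelope; the APP_ID value is a placeholder for the ID shown under your skill name in the developer portal:

<code python>
# Minimal sketch of guarding a Lambda handler with the skill's application ID.
# APP_ID is a placeholder; use the ID listed under your skill name.
APP_ID = "amzn1.ask.skill.00000000-0000-0000-0000-000000000000"

def lambda_handler(event, context):
    # Every Alexa request envelope includes the target application ID.
    incoming = event["session"]["application"]["applicationId"]
    if incoming != APP_ID:
        # Reject requests that were not sent to this skill.
        raise ValueError("Invalid application ID")
    return {"version": "1.0", "response": {}}
</code>

Rejecting mismatched IDs ensures your function only answers requests that Amazon routed to your own skill.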
+ | |||
<fs large>Testing</fs>

{{ youtube>

----

The Custom Skill can be tested using the Service Simulator within the Amazon Developer portal, without the need for an Echo hardware device. We can type in sample sentences that the user might speak, view the JSON message that is sent to the Lambda function and its response JSON, as well as listen to how the response would sound when played back on an Echo.

Here we can see that Alexa determined that this request related to the "Hello" intent (because we used a phrase from our sample utterances that is associated with that intent).

The Lambda function read "Hello" as the intent and called the say_hello Python function that we wrote.

If an Echo device is associated with our Amazon developer account, we can go ahead and directly test this with that device. If not, we can hear what the response would sound like simply by pressing the "Listen" button in the Service Simulator.
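The flow described above, with Alexa resolving the intent name and the Lambda function dispatching to say_hello, can be sketched as a minimal handler. This is an illustration, not the tutorial's exact code: it assumes an IntentRequest envelope, and the help and goodbye texts are invented:

<code python>
# Minimal sketch of dispatching on the intent name in an Alexa IntentRequest.
def build_response(text):
    # Wrap plain text in the standard Alexa response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def say_hello():
    return build_response("Hello World")

def lambda_handler(event, context):
    intent = event["request"]["intent"]["name"]
    if intent == "Hello":
        return say_hello()
    if intent == "AMAZON.HelpIntent":
        return build_response("Ask me to say hello.")
    # AMAZON.StopIntent and AMAZON.CancelIntent both end the session.
    return build_response("Goodbye.")
</code>

Typing "say hello" in the Service Simulator produces an IntentRequest naming "Hello", which this handler answers with the "Hello World" speech response.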
+ | |||
<fs large>Next Steps</fs>

If you are following the "Hello World" tutorial, you are now finished.

Using this base knowledge, go on to create custom skills such as a [[echo_stock_price|stock price skill]].
using_alexa_skills_kit.1483061635.txt.gz · Last modified: by tbrodeur