Misha Volf's Portfolio Project
Misha Volf comes from a diverse background of architecture, psychology, and marketing. He saw taking the VUI course at CareerFoundry as an exciting opportunity because it brought together his interests in UX, language, and sound.
Recipe Reader Skill: an Alexa skill which helps navigate recipes by voice
Space: Food, Productivity
Roles: Conversation (VUI) Design, User Testing
Challenge: Understanding how to guide users through the recipe-selection process.
Finding: Users preferred to ask for the ingredients list as needed, rather than have it told at predetermined points in the conversation.
Methods: Personas, User Stories, Sample Dialogs, User Flows, Scripts
This case study presents a voice user interface (VUI) project I developed over five weeks for the voice interface class at CareerFoundry. A demo of the skill is shown in the video below.
The aim of this recipe reader project was to design a skill for Alexa that lets the user navigate and interact with a recipe primarily through voice commands. The skill was ultimately certified and published by Amazon as Quik Bites (which you can enable and try yourself, here).
The brief for the project was as follows: allow users to select easy-to-make dishes from several options and to follow step-by-step directions to prepare the meal. The skill should have the following features:
- Breakfast, lunch, dinner, and snack recipe buckets.
- The ability to ask for a different recipe if the user doesn’t like a suggestion.
- A strategy for dealing with users who miss a preparation step or need something repeated.
- A way to check whether your user is ready to move on to the next step.
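The four features in the brief can be sketched as a small state machine. This is a hypothetical plain-Python simplification I've written to illustrate the required behaviors, not the published skill's actual code; the recipe names, class, and method names are placeholders.

```python
# Hypothetical sketch of the brief's core behaviors: meal-type buckets,
# "suggest another" rotation, repeat-safe step reading, and a readiness
# check before advancing. Placeholder data, not Quik Bites content.

RECIPES = {
    "breakfast": ["oatmeal", "shakshuka"],
    "lunch": ["grilled cheese"],
    "dinner": ["pasta primavera"],
    "snack": ["trail mix"],
}

STEPS = {
    "oatmeal": [
        "Boil two cups of water.",
        "Stir in one cup of oats.",
        "Simmer for five minutes.",
    ],
}

class RecipeSession:
    def __init__(self):
        self.bucket = None
        self.index = 0   # which suggestion within the bucket
        self.step = 0    # which preparation step

    def suggest(self, bucket):
        # Offer the first recipe from the requested meal-type bucket.
        self.bucket, self.index, self.step = bucket, 0, 0
        return RECIPES[bucket][self.index]

    def another(self):
        # User didn't like the suggestion: rotate to the next option.
        self.index = (self.index + 1) % len(RECIPES[self.bucket])
        self.step = 0
        return RECIPES[self.bucket][self.index]

    def read_step(self):
        # Repeating a step never advances the position.
        return STEPS[RECIPES[self.bucket][self.index]][self.step]

    def next_step(self, ready=True):
        # Advance only when the user confirms they're ready;
        # otherwise repeat the current step.
        if ready:
            last = len(STEPS[RECIPES[self.bucket][self.index]]) - 1
            self.step = min(self.step + 1, last)
        return self.read_step()
```

In the real skill these decisions live in the interaction model and backend, but the same logic applies: suggestion rotation and step position are session state, and advancing is gated on an explicit user confirmation.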
What follows is a step-by-step account of how I developed the project and the thinking that went into each of the steps.
A persona card for “Liz,” a synthesis of preliminary user interviews I conducted as part of research.
Following these preliminary definitions of personas, I developed a collection of user stories: short statements which, based on background research, sketch a situation where a successful interaction between a user and the product takes place. User stories are typically written in the first person. They characterize the user in some relevant way, present that user’s specific need, and propose a product feature that might resolve that need.
User stories segue nicely into the sample dialogs method. A sample dialog is an imagined scripted conversation between the user and the system, which elaborates on the scenario presented in the user story. Put differently, sample dialogs are speculations on what a user might want to say and hear in response during an exchange with a system.
Sample dialogs are an important step in this design process because it is here that the tone and manner of the interactions really begin to take shape.
Information architecture and user flows
After the sample dialogs are sufficiently developed and honed, the next step is to describe them in logical terms, which will in turn help build the actual interaction model. I found it helpful to first develop the information architecture of how different recipes and the relevant pieces of information about them would be related (see diagram above).
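An information architecture like this maps naturally onto a nested data structure: meal-type buckets pointing to recipes, each carrying its ingredients and steps. The sketch below is my own illustration with placeholder content, not the skill's actual data.

```python
# Hypothetical sketch of the information architecture:
# meal-type buckets -> recipes -> ingredients and steps.
# Recipe names and contents are placeholders, not Quik Bites data.

information_architecture = {
    "breakfast": {
        "veggie omelette": {
            "ingredients": ["2 eggs", "1/4 cup diced peppers", "salt"],
            "steps": [
                "Whisk the eggs with a pinch of salt.",
                "Saute the peppers for two minutes.",
                "Pour in the eggs and cook until set.",
            ],
        },
    },
    "lunch": {},   # remaining buckets follow the same shape
    "dinner": {},
    "snack": {},
}

def ingredients_for(bucket, recipe):
    # Because ingredients hang off each recipe node, the user can ask
    # for the ingredient list at any point in the conversation.
    return information_architecture[bucket][recipe]["ingredients"]
```

Structuring the content this way is what made the testing finding actionable: since ingredients are attached to the recipe node rather than to a fixed point in the dialog, they can be surfaced whenever the user asks.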
Below is a screenshot of a script section showing the “Welcome” response that is part of the Pick Recipe Intent.
Having worked out the user flows, the next step is developing scripts. In the context of VUIs, the script is a different type of document than conventionally understood. Here, a script is an organized database of system prompts and responses, along with sample user utterances. At the highest level, the script document is organized by intents, or states, each of which is represented as a separate tab in a spreadsheet.
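To make the script-as-database idea concrete, here is a minimal sketch of how one spreadsheet tab per intent might translate into structured data, pairing sample utterances with named prompt templates. The intent names, prompt keys, and wording are illustrative assumptions, not the skill's actual script.

```python
# Hypothetical sketch of a VUI script as structured data: one entry per
# intent (one spreadsheet tab), each pairing sample user utterances with
# named system prompts. All names and wording are illustrative.

script = {
    "PickRecipeIntent": {
        "utterances": [
            "give me a breakfast recipe",
            "what's for dinner",
            "suggest a snack",
        ],
        "prompts": {
            "Welcome": "Welcome to the recipe reader. "
                       "Would you like breakfast, lunch, dinner, or a snack?",
            "Suggest": "How about {recipe}? Want to hear another option?",
        },
    },
    "NextStepIntent": {
        "utterances": ["next step", "what's next", "I'm ready"],
        "prompts": {"Step": "Step {number}: {instruction}"},
    },
}

def prompt(intent, key, **slots):
    # Fill slot values into a prompt template for the given intent.
    return script[intent]["prompts"][key].format(**slots)
```

Keeping prompts keyed by intent and response name mirrors the spreadsheet's tab-and-row organization, which makes it straightforward to carry the script over into an interaction model later.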
A test of the skill in the Alexa Skill Kit environment.
With the scripts finalized, it was time to produce the next level of prototype and to test the system. While I could test the functionality of the interaction model with developer tools (see above), the most important test would come from actual users who were not involved in the development of the skill. Specifically, I was interested in the following research questions:
- Can users effectively select a recipe from the available options, with navigation set up by meal type?
- Can users advance through the recipe with verbal commands?
- Do users have all the necessary information when they are in the instructions state?
Conclusion and next steps
With this round of research complete, I made the appropriate updates and published the skill on Amazon’s skill store. Still, with only one round of initial testing, follow-up tests remain outstanding. Refining the interaction model through additional, repeated testing would be among the most immediate next steps. In summary, by working through the problems typical of a VUI design process, I further developed my skills as a UX designer and added new ones specific to voice.