Edwards.ai

Edwards.ai asked us to design a mobile app that leverages their artificial intelligence platform to solve human problems in the ride-share industry.

Methods Used: Stakeholder Interview, Secondary Research, Competitor Analysis, Survey, Directed Storytelling, Interview, Affinity Diagramming, Mind Mapping, Personas, Low-Fidelity Prototyping, User Flow Diagramming, Kano Analysis, Wireframing, Wizard of Oz Method, Usability Testing, High-Fidelity Prototyping


Tools Used: Otter, Trello, Post-It Notes, Google Forms, Pen & Paper, Whimsical, Google Hangouts, Sketch, Adobe XD, InVision, Google Slides

Key Themes: Voice User Interface, Artificial Intelligence, Research, Interaction Design


This whiteboard representation of findings from the stakeholder interview shows some of the many mind maps and lists we created throughout the project

Designing a proof-of-concept application

Edwards.ai was developing a platform as a service that people without coding experience could use to make their own mobile applications equipped with artificial intelligence capabilities. They needed a proof-of-concept to show investors, and their founder was a former Uber driver, so they thought the ride-share industry would be the perfect space. Tasked with designing this proof-of-concept application, we scoped out our project, then started with a stakeholder interview, secondary research, and competitor analysis to better understand the space. My focus was on research and the voice user interface, but I ended up making the prototypes for several features, and I also served as the point of contact for the client.


Understanding user needs

We used a combination of surveys and interviews to discover pain points that drivers and riders faced. Recruiting riders was easy, but finding enough drivers was a challenge we addressed by recruiting through online forums for Uber and Lyft drivers. My philosophy background informed my follow-up questions while I moderated our interviews, and it was especially helpful for understanding how people balanced values like privacy and security. Once we understood users’ goals and values, we made personas to build empathy with them.

On the left is our team calendar, with research sessions, planned milestones, and other key dates written on post-its. The middle consists of notes on guiding ideas and short-term goals, which we also tracked on a Kanban board. On the right are some of our feature clusters, which we formed via affinity diagramming


While we were sketching ideas for the financial tracking pages, I suggested combining some graphs into one to make it easier to compare them and find the total. These kinds of changes are less costly to make early in the process

Using low-fidelity sketches to explore ideas

After ideating features that would meet user needs and serve client goals, we started sketching our ideas for these features. I also made a low-fidelity interactive prototype in Adobe XD to explore how our features could come together, and to demonstrate to my team the kind of voice user interface I was imagining. Voice controls were important for this project, both because they demonstrated key parts of our client’s platform and because a new hands-free law in Minnesota prohibits drivers from handling their phones while driving.


Prioritizing features with Kano analysis

We had a long list of features that we had to narrow down, so I worked with a teammate to make a Kano survey — a special kind of survey that asks users how they would feel if certain features were or were not present. Our results were puzzling at first because many features that seemed desirable based on previous research scored poorly in the Kano survey. I suggested breaking our aggregate graph of average values down into individual feature graphs of actual values to see what was going on. Doing so revealed that many users felt neutral about most features, and that these neutral scores were bringing the averages down. I hypothesized that the neutral scores resulted from familiarity with similar features in competitor apps. If correct, this would suggest that (i) our features fared better than they seemed to, and (ii) our app had to distinguish itself from competitors more than we previously thought. We cautiously moved forward on the assumption that my hypothesis was correct, and adapted our remaining user interviews to test it.
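
Breaking the analysis down by feature is simple to sketch. The Python snippet below is a minimal illustration, not our actual tooling: it uses the standard Kano evaluation table to tally categories per feature, and the responses are hypothetical. It shows how a pile of neutral answers can drag an average down without reflecting any strong negative sentiment.

```python
from collections import Counter

# Standard Kano evaluation table: (functional, dysfunctional) answer pair -> category.
# Answers: 1 = like, 2 = expect, 3 = neutral, 4 = can tolerate, 5 = dislike.
KANO_TABLE = {
    (1, 1): "Questionable",  (1, 2): "Attractive",  (1, 3): "Attractive",
    (1, 4): "Attractive",    (1, 5): "One-dimensional",
    (2, 1): "Reverse",       (2, 2): "Indifferent", (2, 3): "Indifferent",
    (2, 4): "Indifferent",   (2, 5): "Must-be",
    (3, 1): "Reverse",       (3, 2): "Indifferent", (3, 3): "Indifferent",
    (3, 4): "Indifferent",   (3, 5): "Must-be",
    (4, 1): "Reverse",       (4, 2): "Indifferent", (4, 3): "Indifferent",
    (4, 4): "Indifferent",   (4, 5): "Must-be",
    (5, 1): "Reverse",       (5, 2): "Reverse",     (5, 3): "Reverse",
    (5, 4): "Reverse",       (5, 5): "Questionable",
}

def categorize(responses):
    """Tally Kano categories for one feature from (functional, dysfunctional) pairs."""
    return Counter(KANO_TABLE[pair] for pair in responses)

# Hypothetical responses for one feature: most respondents are neutral on both
# questions, which averages out to a "poor" score even though nobody objects.
trip_queuing = [(1, 5), (3, 3), (2, 3), (3, 4), (1, 4), (3, 3)]
print(categorize(trip_queuing))
# Counter({'Indifferent': 4, 'One-dimensional': 1, 'Attractive': 1})
```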

We listed the features we had come up with, along with some unifying themes, so that we could select some features for further testing


Here I am asking some follow-up questions after conducting a usability test with a former Uber driver

“Pay no attention to that man behind the curtain!”

The Wizard of Oz method is a great way to conduct usability testing for voice user interfaces. For in-person testing, I had a teammate control an InVision prototype of our app while I played computer voice clips in response to user interactions. For remote testing, I controlled both the visuals and the audio for our prototype, but I only screen-shared the visual part with users. This gave the illusion of talking to our app before we had actually programmed the voice interactions.

On the left is the InVision prototype I shared with drivers. On the right is the software I used to control the computer voice clips
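
The wizard’s side of the setup can be as simple as a script that speaks a canned response when the moderator presses a key. Here is a hypothetical sketch of that idea using the pyttsx3 text-to-speech library (an assumption for illustration; our actual setup played pre-recorded clips):

```python
import pyttsx3  # offline text-to-speech; pre-recorded audio clips would work the same way

# Hypothetical wizard console: each key maps to a response the "app" can speak.
RESPONSES = {
    "1": "Sure, I've queued up your next trip.",
    "2": "Okay, cancelling that ride.",
    "3": "Sorry, I didn't catch that. Could you repeat it?",
}

engine = pyttsx3.init()
while True:
    key = input("Response key (q to quit): ").strip()
    if key == "q":
        break
    phrase = RESPONSES.get(key)
    if phrase:
        engine.say(phrase)   # queue the utterance
        engine.runAndWait()  # speak it while the participant uses the prototype
```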


Developing user flows for voice and gesture commands

With a voice interface, it is important for users to be able to control the app in a variety of ways. I made user flows in order to (i) plan out how key features should operate, (ii) identify alternative voice commands that should work in addition to the recommended voice commands, (iii) show the client how the pages of our prototype can interact, and (iv) map hand gestures onto corresponding voice commands. Some drivers were concerned about voice commands interrupting conversations with passengers, so I came up with some gestures that phone cameras could sense as commands. I tested which gestures drivers would use for which commands to make sure that their mental models matched mine, and I also color-coded our visual prompts for voice commands so that users could see which gesture to use (green for affirmative gestures like a thumbs up or upwards swipe; red for negative gestures like a thumbs down or downwards swipe).
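
The gesture-to-command mapping itself is straightforward to represent. The sketch below is purely illustrative (the names are hypothetical, not production code): each recognized gesture routes to the same handler as its spoken equivalent, tagged with the prompt color the user sees.

```python
# Hypothetical mapping from camera-sensed gestures to the voice commands they mirror.
# Green prompts signal affirmative gestures; red prompts signal negative ones.
GESTURE_COMMANDS = {
    "thumbs_up":   {"command": "accept",  "prompt_color": "green"},
    "swipe_up":    {"command": "accept",  "prompt_color": "green"},
    "thumbs_down": {"command": "decline", "prompt_color": "red"},
    "swipe_down":  {"command": "decline", "prompt_color": "red"},
}

def handle_gesture(gesture):
    """Route a recognized gesture to the same handler its voice command would use."""
    entry = GESTURE_COMMANDS.get(gesture)
    return entry["command"] if entry else None

print(handle_gesture("thumbs_up"))  # accept
```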

A collection of user flows I made for our app


Some of the prototypes I made for the driver-side phone app.

Iterative prototyping

The features I was primarily responsible for prototyping on the driver’s side included calls/texts, trip queuing (which automatically searches both Uber and Lyft to line up optimal passengers based on the driver’s settings), and object recognition (which scans the back seat for forgotten belongings after passengers leave the car). In addition to designing an app for drivers to use on their phones, we also designed an app for tablets that could be secured to the back of a seat for passenger use. I designed the home page, a locations tab, and the music pages for the tablet app. Our prototypes changed a lot as we gathered more research and made refinements, so it was important to have a system that kept our designs consistent. Symbols and masters helped in this respect, as did our sharing conventions: if someone made changes to their prototypes, they would add them to the most recent shared file and document the changes.
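
To make the trip-queuing behavior concrete, here is a hypothetical sketch of its ranking logic. Neither Uber nor Lyft exposes an API like this, and the weights stand in for the driver’s settings:

```python
from dataclasses import dataclass

@dataclass
class RideOffer:
    service: str           # "uber" or "lyft"
    fare: float            # estimated payout in dollars
    pickup_minutes: float  # time to reach the passenger

def score(offer, fare_weight=1.0, time_weight=0.5):
    """Rank offers by payout against pickup time; weights reflect driver settings."""
    return fare_weight * offer.fare - time_weight * offer.pickup_minutes

offers = [
    RideOffer("uber", fare=14.0, pickup_minutes=8),
    RideOffer("lyft", fare=12.5, pickup_minutes=3),
]
best = max(offers, key=score)  # the trip the app would queue next
print(best.service)  # lyft: 12.5 - 0.5*3 = 11.0 beats 14.0 - 0.5*8 = 10.0
```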


Bringing it all together for the client

We presented our work to an audience that included our stakeholders, speaking to our artificial intelligence personality “Ed” in real time. The video below goes through a part of our script that I wrote for this presentation, with one of my teammates (Erin) playing the role of Penny, our driver persona. Our final deliverables included a research summary report, annotated wireframes, and a voice-controlled interactive prototype.

Our final features on the driver-facing mobile app included calls/texts, trip queuing, navigation (including the option to scan alternate routes for dirt roads and potholes registered with an accelerometer), financial tracking, maintenance reminders, object recognition, and a feature that suggests the optimal times and places to get gas. On the passenger-facing tablet app, we moved forward with locations (which displays interesting landmarks near the user’s destination), events (which helps users find upcoming local events), music (which gives passengers control over the stereo subject to certain constraints set by drivers), and topics (which offers a customizable conversation menu to get the conversation rolling, or in the case of the “nothing” option, to keep things quiet). A version of the tablet app would also be available via a QR code, for passengers who want to use their phones or drivers who don’t want to invest in a tablet.

Me narrating a story I wrote to demonstrate our app’s capabilities