Design by Lai Wei © 2020

LOOMO

“Hi, I’m Loomo,

I’m sent from the Future.”

Type: 

Personal Robot Product

Role: 

UX Designer

Team:

30+ person department at Segway Robotics

Time: 
18 Months

The story of me and LOOMO

LOOMO is a Mobile Robot Sidekick, both a self-balancing vehicle and a personal assistant. LOOMO went on sale in the US market in April 2018. Since the launch, LOOMO has become part of the lives of 1,000+ satisfied customers.

Enjoying sunlight with LOOMO
Serving at a party
Interacting with kids

I joined the LOOMO team in May 2017 as a UX designer and went through the whole design and go-to-market process. I worked on both the robot product and the robot-controlling app.

When designing each feature, I paid attention to multi-device design (robot system, iOS, Android) and drove a smooth experience between the app and the robot itself.

I worked on many features and on the overall product experience design, but here I'd like to highlight the features I led as the UX designer on the team.

Robot Status System
Gesture Interaction
Voice Auto-shot
Interactive Tutorial
Over-the-Air (OTA) Update
User testing &
Product testing

On this page I'd like to focus on the complete design process of the Gesture Interaction feature. Design details for the other features can be found above.

 

Gesture Interaction

 

Quick view

Gesture interaction is a feature that lets users quickly activate specific functions with gestures.

Why I Designed Gesture Interaction

The gesture function is not just an exploration of a more natural interaction method; it also serves practical scenarios. LOOMO is used both indoors and outdoors, and in noisy outdoor environments voice interaction often cannot convey valid commands.

In these situations, gesture interaction is a more effective and accurate way for users to give urgent commands.

Besides, after LOOMO launched, our team also received feedback on our support website and in user reviews saying people were looking forward to a cool gesture feature. The gesture feature was also driven by user demand.

Why not other ways?

Is gesture the best way? I considered this question at the beginning of the process. A controller or our app could also perform the same functions. After consideration, however, I concluded that gesture was the right direction for quickly activating several core functions in an outdoor environment; a controller or app would be less effective.

Controller
LOOMO APP
Goals for Gesture

After analyzing the function's positioning, target users, and implementation methods, and holding design decision meetings with the team, I set the goals of the gesture function:

1. An easy operation flow that quickly activates specific functions

2. High accuracy of gesture recognition 

3. Smooth and natural user experience

Ideation
Define Gesture Poses

To design gesture poses for potential commands, I researched poses from daily life, especially thinking about what people do when they interact with another human. In the end, the chosen poses followed two rules:

1. The pose's semantics match the meaning of the command

2. The pose should feel natural and comfortable

From Algorithm to Experience

Next step: after discussion and having coworkers test the poses (to find out whether they felt comfortable performing them), I made adjustments and delivered the designed poses to the algorithm team, who spent a couple of weeks training and testing recognition accuracy for the poses. Then I adjusted once more based on their reports.

 

This was the gesture recognition demo I got from the algorithm team. During the gesture design process, I cooperated closely with algorithm and software engineers; I learned a lot and gained a deeper understanding of what UX design means:

Algorithm demo

Transforming technical (algorithmic) language into a user-friendly experience.

The work of a UX designer is not only making things look good; more importantly, I transform technology into practical functions and bring it into users' lives.

Iterations
An Excellent Experience Is Born from Iteration

Compared with interface-only interaction, gesture is more easily influenced by Multiple Factors. The UX depends on the combination of the interaction flow, visual & sound feedback, precise algorithmic recognition, and software logic details.

Before the function was finally released, I went through 3 Key Iterations and multiple optimizations for this feature.

1
Initial Version: Test in Demo

Select the gesture functions

 

To avoid overly complicated operations, the first version selected several core, useful functions:

Auto-shot
Start Follow
Stop Follow

Interaction Flow 

For the initial version, I added gesture response in Idle status to test the initial effect.

 

However,

recognition was highly sensitive to environmental conditions, causing False Triggering and Recognition Failure, both of which led to a negative user experience.

Safety Risks

The Start Follow gesture (putting the palm on the chest) is easily triggered by similar actions. A sudden start of following caused by false triggering is dangerous for users.

False Triggering

2
2nd Version: Solving the Problems

Goals for 2nd version:  

1. Safety guarantee: eliminate false triggering

2. Experience enhancement: adjust feedback for gesture recognition

I researched innovative interaction products, including Alexa and Google Home, and drew inspiration from voice interaction products. To avoid smart assistants mistakenly thinking that people are talking to them, they are usually "woken up" by a wake word, such as "Hey Siri".

Why not add a wake gesture to solve the false triggering problem?

Alexa
How's the weather?

I also researched and tested other products with gesture interaction, such as Xbox, to see how I could make gesture feedback clearer.

After research and a couple of rounds of design thinking, I delivered Wireframe V2.0.

Wake Gesture

The first essential design was adding a wake gesture. The wake gesture was chosen to be a "Hi" wave, which is commonly used in daily life.

After recognizing the wake gesture, the robot activates recognition mode for all gesture commands.

Recognition Countdown Bar

After gesture mode is activated, the robot can receive the next command. At this stage, I added visual feedback on the robot's interface to show how long the gesture should be held.

It's also a protection mechanism that ensures the robot receives the user's intended command, not a random pose.
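To make this flow concrete, below is a minimal sketch of the wake-then-command logic described above. It is not LOOMO's production code: the gesture labels, command names, hold time, and timeout are hypothetical placeholders standing in for the real recognizer's outputs and the values we tuned.

```python
import time
from enum import Enum, auto

# Hypothetical gesture labels and command mapping (placeholders, not LOOMO's identifiers).
WAKE_GESTURE = "hi_wave"
COMMAND_GESTURES = {
    "palm_on_chest": "start_follow",
    "arms_crossed": "stop_follow",
    "v_sign": "auto_shot",
}

HOLD_SECONDS = 1.5      # how long a command gesture must be held (the countdown bar)
LISTEN_TIMEOUT = 8.0    # drop back to Idle if no command follows the wake gesture

class Mode(Enum):
    IDLE = auto()        # only the wake gesture is listened for
    LISTENING = auto()   # wake gesture recognized; waiting for a command gesture

class GestureFlow:
    def __init__(self):
        self.mode = Mode.IDLE
        self.listen_since = 0.0
        self.current = None
        self.hold_start = 0.0

    def on_frame(self, gesture):
        """Feed one recognized gesture label per camera frame (or None).
        Returns a command name once a gesture has been held long enough."""
        now = time.monotonic()

        if self.mode is Mode.IDLE:
            if gesture == WAKE_GESTURE:
                self.mode = Mode.LISTENING   # robot shows "listening" feedback here
                self.listen_since = now
            return None

        # LISTENING mode: time out back to Idle if no command arrives
        if now - self.listen_since > LISTEN_TIMEOUT:
            self._reset()
            return None

        if gesture in COMMAND_GESTURES:
            if gesture != self.current:
                # new command gesture: restart the countdown bar
                self.current, self.hold_start = gesture, now
            elif now - self.hold_start >= HOLD_SECONDS:
                command = COMMAND_GESTURES[gesture]   # held for the full countdown: fire it
                self._reset()
                return command
        else:
            self.current = None    # gesture dropped or unrecognized; the bar resets
        return None

    def _reset(self):
        self.mode, self.current = Mode.IDLE, None
```

The key point of this design is that command gestures can only fire from the listening state, and only after being held for the full countdown, which is what suppresses accidental triggers.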

Measure the Design: Testing

After the second version of the design, testing was essential to measure how the adjustments worked. With the developed version, I tested the whole operation flow, the recognition rate, and especially the rate of false triggering.

According to the testing results, the false triggering rate was almost zero. But just as I was glad the false triggering was solved, another problem appeared:

With so many safeguards against false triggering, the recognition rate went down, which made the robot seem not so smart and caused a negative experience.

3
3rd Version: Balancing Recognition Rate & False Triggering

In the 3rd version of the design, to enhance recognition, I went back and adjusted the balance for a better, easier user experience.

 

1. More responsive feedback

2. Flow adjustment: easier to wake

I worked closely with the software and algorithm engineers, digging into details such as the required gesture recognition time, how many consecutive frames need to be recognized, and the fault tolerance in the algorithm layer.
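As a rough illustration of those tuning knobs, here is a small, hypothetical per-frame confirmation sketch: a gesture only counts once it has been seen in enough consecutive frames, and a small number of missed frames is tolerated before the streak resets. The frame counts are illustrative assumptions, not the values shipped in LOOMO.

```python
class GestureConfirmer:
    """Confirm a gesture only after enough consecutive detections, while
    tolerating a few dropped or misclassified frames in between.
    The thresholds below are illustrative, not LOOMO's tuned values."""

    def __init__(self, required_frames=12, tolerated_misses=2):
        self.required_frames = required_frames      # e.g. ~0.4 s of footage at 30 fps
        self.tolerated_misses = tolerated_misses    # fault tolerance in the algorithm layer
        self._reset()

    def update(self, detected):
        """detected: the gesture label for this frame, or None.
        Returns the confirmed gesture label, or None if not yet confirmed."""
        if detected is not None and detected == self.candidate:
            self.hits += 1
            self.misses = 0
        elif detected is None and self.candidate is not None:
            self.misses += 1                        # brief dropout: keep the streak alive
            if self.misses > self.tolerated_misses:
                self._reset()
        else:
            # a different gesture (or nothing at all) starts a new streak
            self.candidate = detected
            self.hits = 1 if detected is not None else 0
            self.misses = 0

        if self.candidate is not None and self.hits >= self.required_frames:
            confirmed = self.candidate
            self._reset()
            return confirmed
        return None

    def _reset(self):
        self.candidate, self.hits, self.misses = None, 0, 0
```

Raising `required_frames` pushes false triggering down but makes recognition feel slower; allowing a few `tolerated_misses` keeps brief recognition dropouts from resetting the user's progress, which is exactly the balance the 3rd version was tuning.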

After these adjustments, I delivered Wireframe V3.4.

Delivery: How Does Gesture Work?

Start Follow

Stop Follow

Auto-shot

Gesture Tutorial

Without a tutorial, guidance, and notifications, a new function is NOT complete.

Notification & tutorial - both in the app and on the robot

Considering that gesture control is based on the CAMERA and FOLLOW features, the user's permission to use gesture control should only come after unlocking and handling CAMERA and FOLLOW.
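A minimal sketch of that gating rule, with hypothetical flag and function names rather than the actual app's code: the gesture notification and tutorial are only offered once both prerequisite features are unlocked.

```python
from dataclasses import dataclass

@dataclass
class RobotPermissions:
    camera_unlocked: bool = False   # user has unlocked/authorized the camera
    follow_unlocked: bool = False   # user has unlocked/authorized the follow feature

def can_offer_gesture_control(perms: RobotPermissions) -> bool:
    """Gesture control depends on both the camera and follow features, so the
    gesture notification/tutorial is only surfaced once both are unlocked."""
    return perms.camera_unlocked and perms.follow_unlocked

# Example: camera unlocked but follow not yet handled -> gesture stays hidden
perms = RobotPermissions(camera_unlocked=True, follow_unlocked=False)
assert can_offer_gesture_control(perms) is False
```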

The most efficient way of teaching is to let the robot teach itself.

Robot Interactive Tutorial
Release

After multiple iterations and rounds of testing, the gesture feature was released in October 2018. I received positive reviews and sincere suggestions on the LOOMO support website.

Screenshot from support.loomo.com
The President of the Republic of Tatarstan praised LOOMO and the gesture function
at the China Hi-Tech Fair 2018