
Hi there!

This is where I document some of the questions I've answered as a designer and researcher.
You can also find my resume here.

How can you interact with a VR world using your voice?

"IBM Speech Sandbox illustrates the use of Watson speech services to enable a speech interface within Virtual Reality."

A whole new wooooorld

Becca Shuman (UX researcher) and I started working with Kyle Craig and Harrison Saylor (developers) on our team to create a voice interface for virtual reality. This was a fun project because there weren't any guidelines or established interaction patterns for voice in VR, so it was up to us to define them. For this project we decided to use the HTC Vive.

We had to focus our research and experimentation on a few things:

1. How would users interact with the HTC Vive controllers? What would they use the different buttons for?
2. Would users be interested in a voice interface for VR?
3. What type of verbal commands would they like as part of the experience?
4. What would they like to create in this sandbox?
5. What type of technical constraints would we have to work with?

The developers had started exploring this new technology before we were engaged in the project, and they had already defined the type of experience they wanted: a sandbox. This would allow users to explore, play around with the technology, and create a world of their own. We were able to test with a very basic prototype from early on, which really helped us gather great user feedback and, in turn, heavily shape the experience.

[GIF: a man using a VR headset]

"Create a ball. Create a ball. Cre-ate a baaall."

We began testing with very simple tasks. Users had to make a ball appear in the VR environment. Surprisingly, their first reaction was to say "create a ball" or "make a ball". I was expecting them to use the buttons on the controllers to try to accomplish this task, but they chose a voice command instead. So far, so good.
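To give a rough sense of what happens between the microphone and the sandbox, here's a minimal sketch in Python of matching a transcription like "create a ball" to a spawn intent. This isn't our actual engine code; the verbs, vocabulary, and function names are all illustrative.

```python
import re

# A minimal sketch, assuming a speech transcription arrives as plain text.
# The spawnable items and pattern here are made up for the example.
SPAWNABLE = {"ball", "cube", "tree", "dragon"}

CREATE_PATTERN = re.compile(r"\b(create|make|spawn)\s+(?:an?\s+)?(\w+)")

def parse_command(transcript: str):
    """Return a ('create', item) intent if the transcript asks to spawn something."""
    match = CREATE_PATTERN.search(transcript.lower())
    if match and match.group(2) in SPAWNABLE:
        return ("create", match.group(2))
    return None

print(parse_command("Create a ball."))  # -> ('create', 'ball')
print(parse_command("Make a dragon"))   # -> ('create', 'dragon')
```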

During this early stage, we also learned that:

1. The Watson speech services were limited:
a) They would easily understand deep voices, but would struggle with medium- and high-pitched voices.
b) They needed to be trained to understand different pronunciations due to people's accents (for example, "ball" was often transcribed as "bawl"; see the sketch after this list).

2. Users preferred a laser pointer coming out of the controller to show them what they were selecting in the environment, rather than a reticle.

3. Users enjoyed hitting, throwing, dropping, and insert-destructive-verb-here-ing things in the environment.

4. Users demanded real-world physics in the environment.

5. Users expected verbal cues and guidance.
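As a concrete example of the workaround in point 1(b), here's a hypothetical normalization step that runs before intent matching. Watson Speech to Text can also be trained with custom vocabularies; a simple alias map is just the easiest way to illustrate the idea. "Bawl" is the real case we hit; the other entries are invented.

```python
# Hypothetical alias table mapping common mis-transcriptions to the words we
# expect. Only "bawl" comes from our actual testing; the rest are examples.
ALIASES = {
    "bawl": "ball",
    "bal": "ball",
    "kyoob": "cube",
}

def normalize(transcript: str) -> str:
    """Replace known mis-heard words before running intent matching."""
    return " ".join(ALIASES.get(word, word) for word in transcript.lower().split())

print(normalize("Create a bawl"))  # -> "create a ball"
```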

"Alright, now create a dragon."

Little by little, we started to create a more complete experience. We added more items users could spawn and increased the level of interaction: for example, users could now change the material, color, size, etc., of an item they created using voice commands. The experience didn't feel complete until the designers joined the project; they really made it all come together and brought the VR environment to life. We included a tutorial at the beginning, which users quite enjoyed and which helped set expectations about what they were about to experience.
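Building on the earlier sketch, follow-up commands like these can be parsed into an (attribute, value) pair and applied to the last item the user created. Again, the vocabularies and names below are invented for illustration; state in the real sandbox lived inside the game engine.

```python
# Illustrative vocabularies; the real sandbox supported its own set of
# materials, colors, and size commands.
COLORS = {"red", "blue", "green", "purple"}
MATERIALS = {"wood", "metal", "glass"}

def parse_modifier(transcript: str):
    """Return an (attribute, value) pair for a recognized modification command."""
    words = set(transcript.lower().split())
    for color in COLORS & words:
        return ("color", color)
    for material in MATERIALS & words:
        return ("material", material)
    if "bigger" in words:
        return ("size", "increase")
    if "smaller" in words:
        return ("size", "decrease")
    return None

print(parse_modifier("Change the color to red"))  # -> ('color', 'red')
print(parse_modifier("Make it bigger"))           # -> ('size', 'increase')
```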

[GIF: the in-experience tutorial]

For more detail on our findings, feel free to read our case study.

The IBM Speech Sandbox experience was later shown at SXSW 2017 and was also a key contributor to the Ubisoft-IBM collaboration for the upcoming Star Trek: Bridge Crew game.

How do you build a UX Research team from scratch?

How do you create a successful SXSW experience?