Google Glass and the Future of Customer Service and Usability Testing

Google Glass is getting a lot of buzz lately because its “Explorer” beta program is starting shortly. Most pundits who have been given a demo of the device talk about its usefulness in capturing point-of-view (POV) video of your kids and similar personal moments. In today’s column we’ll explore some business uses for the device.

Usability Testing

Usability testing can be done in many different ways. The old-fashioned method entails giving a user a specific task to complete and then watching her try to complete it. Recording her narrating what she is doing, along with a recording of the screen, gives us a good glimpse into where our design succeeds or fails in helping her complete the task. Newer methods let us do this remotely via websites that record mouse movements and audio and capture video of the screen.

But while we can certainly gather a lot of data points that hint at what the user does or doesn’t see, we rarely get to view things from her perspective and know where she was actually looking on the screen (especially when her gaze and her mouse pointer weren’t in the same place).

A product like Google Glass would come in extremely handy in usability testing. With its ability to record POV video, it is easy to imagine augmenting current techniques with a synchronized POV recording of the user. Not only could this lead to better usability testing, it also opens up testing in areas that are hard to capture today. For example, real-world usability tests, such as the layout of a grocery store, the design of a kiosk, or the instructions for IKEA furniture, could all benefit from POV video synced with the other data being collected.
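The syncing described above is mostly a matter of putting the POV footage and the logged test events on a shared clock. Here is a minimal Python sketch of that idea; the event names, timestamps, and video start time are all invented for illustration:

```python
from datetime import datetime, timedelta

def to_video_offset(event_time: datetime, video_start: datetime) -> timedelta:
    """Convert an absolute event timestamp to an offset into the POV video."""
    return event_time - video_start

# Assumed: the POV recording and the event logger share a common clock.
video_start = datetime(2013, 4, 1, 10, 0, 0)
events = [
    ("task_started",   datetime(2013, 4, 1, 10, 0, 5)),
    ("clicked_search", datetime(2013, 4, 1, 10, 0, 42)),
    ("task_completed", datetime(2013, 4, 1, 10, 1, 30)),
]

# Build (seconds-into-video, event) pairs so a reviewer can jump straight
# to the matching moment in the POV footage.
timeline = [(to_video_offset(t, video_start).total_seconds(), name)
            for name, t in events]

for offset, name in timeline:
    print(f"{offset:7.1f}s  {name}")
```

With a timeline like this, a reviewer could scrub directly to the moment the participant stalled, seeing both the screen recording and what she was actually looking at.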

Customer Service

Customer service is another area that could benefit from Google Glass. With the ability to “share” your view with someone, customer service could benefit in at least two ways.

If the customer service representative (CSR) has Google Glass, she could walk through whatever the user is supposed to do and share her feed, so the user can mimic exactly what the CSR is telling (and showing) her to do. Conversely, if the user has Google Glass, she could show the CSR exactly what happens when she attempts the task, giving the CSR immediate insight into the problem, as if she were sitting right next to the user.

Beyond simple video applications, augmented reality opens up a wealth of further possibilities. Imagine the user looking at her screen while the CSR tries to explain which button to push: “It’s the third icon in the second navigation. It kind of looks like a sun.” Instead, the CSR could highlight the button directly, drawing on the user’s Glass display as if annotating the view the user is seeing.
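One way such an annotation feature might work is for the CSR’s console to send the wearer’s device a small message describing a region to highlight. The sketch below is purely hypothetical: the message fields and the “highlight” type are invented, and no real Glass API is assumed.

```python
import json

def make_highlight(x: float, y: float, width: float, height: float, label: str) -> str:
    """Serialize a hypothetical highlight annotation as JSON.

    Coordinates are normalized (0.0-1.0) relative to the shared view, so the
    wearer's device can place the overlay regardless of display resolution.
    """
    message = {
        "type": "highlight",
        "region": {"x": x, "y": y, "width": width, "height": height},
        "label": label,
    }
    return json.dumps(message)

# The CSR highlights the button she was struggling to describe in words.
payload = make_highlight(0.62, 0.10, 0.05, 0.05, "Click this icon")

# On the wearer's side, the device would parse the message and draw the overlay.
received = json.loads(payload)
print(received["label"])
```

Normalized coordinates are one plausible design choice here, since the CSR’s console and the wearer’s display would rarely share the same resolution.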

Store/Museum/Commercial Extra Information

QR codes already let users get more information about something they are seeing; store windows with QR codes, for example, provide data about the products on display. With Google Glass, overlaying information on what the user is seeing could be done far more effectively.

A store, museum, or even a TV show could easily embed a signal that tells Google Glass to display more information about what the user is seeing. Imagine a Best Buy commercial for a new computer: a viewer watching it while wearing Google Glass might see inventory information for that product pop up for her local Best Buy store, or be able to buy it online with one click.
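The lookup step in that scenario could be as simple as mapping a detected signal (say, an audio watermark in the commercial) to a product, then checking the viewer’s local store. The following Python sketch illustrates the flow; the signal IDs, product data, and inventory are all made up:

```python
from typing import Optional

# Hypothetical mapping from a detected broadcast signal to a product.
SIGNAL_TO_PRODUCT = {
    "bby-2013-0417": {"sku": "1234567", "name": "UltraBook X"},
}

# Hypothetical inventory for the viewer's local store.
LOCAL_INVENTORY = {
    "1234567": {"in_stock": 3, "store": "Best Buy #112"},
}

def resolve_overlay(signal_id: str) -> Optional[str]:
    """Turn a detected commercial signal into an overlay message, if known."""
    product = SIGNAL_TO_PRODUCT.get(signal_id)
    if product is None:
        return None  # unrecognized signal: show nothing
    stock = LOCAL_INVENTORY.get(product["sku"])
    if stock is None or stock["in_stock"] == 0:
        return f"{product['name']}: available online"
    return f"{product['name']}: {stock['in_stock']} in stock at {stock['store']}"

print(resolve_overlay("bby-2013-0417"))
```

The same resolve-then-look-up pattern would apply whether the trigger came from a TV commercial, a store window, or a museum exhibit.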

The possibilities for Google Glass are limitless, assuming Google provides an API that makes programming these types of applications easy.

What would you use Google Glass for? Leave me a note below and let me know.

Until next time…
