Google Glass: Half Empty or Half Full?

Back in March I wrote an article about Google Glass positing the kinds of applications for which one could envision Glass being useful. That was before Google had actually released Glass into the hands of their “Explorers” (the term they use for “beta testers who pay us a lot of money to try out the product”).

Last week, I finally got my invitation to buy Glass. I’ve been using it since and will be writing about Glass — and how it pertains to marketing, user experience and eCommerce — occasionally in my column.

My first impression of Glass is that it’s not at all what I thought it would be. I really thought that Glass was a lens one looked through directly. In fact, they position Glass on your face such that you need to look up to see it.

The screen isn’t meant to be looked through; it sits just outside your central vision, in your upper periphery. That makes looking at it a bit of a strain, frankly. It also means that any sense of Augmented Reality seems impractical, because you’d be staring up at a screen rather than straight ahead.

When you buy into the Explorer program, Google does a whole one-on-one fitting with you, so I know that Glass is sitting correctly on my head. Google’s philosophy, my trainer told me when I went in for the fitting, is that Glass gives you the information you need in the moment, but doesn’t get “in the way” of your life.

For example, Glass has very few “applications” as we know them. The Glass philosophy means you don’t often sit inside a program and do something. So you wouldn’t load up Microsoft Word and write a paper, and you wouldn’t open up Outlook (or Gmail, I guess) and write a long email. In fact, you can’t read a long email, either.

Instead, if you get an important email through Gmail, the first line or so (or maybe just the subject) shows up, and you can opt to have Glass read it to you — but you can’t scroll through the message. I think Google realizes that placing the glass above your line of sight means you would really be straining to look at it for more than a few seconds at a time.

But Glass can’t just be about taking photos and videos. There are too many nicer cameras that can do this without requiring you to wear a very visible thing on your head all the time.

The only actually useful application Glass has so far is Navigation. I have tried it out a few times, but the placement of the glass means you have to look up (and away from the road) to see the directions. For that matter, why not just look down at the GPS on your dashboard? In both cases, your eyes leave the road for a few seconds, so what’s the point of having the navigation on your Google Glass instead of on a GPS? Plus, you don’t have to strain your eyes looking at the dashboard, whereas looking up at Google Glass is a little uncomfortable.

There are, however, a lot of useful applications I can think of (beyond what I already wrote in the previous article) that have to do with eCommerce.

Amazon could create a barcode scanner to check its prices for an item you find in a store… but it has already done this on the smartphone. What is the reason to need something like Google Glass if it just emulates applications available on any smartphone?

That’s not to say there won’t be a time when applications come out that require the form of Glass to work. But history has shown that every new medium emulates the medium that came before it, until it finally becomes its own thing. This happened when TV started with radio-style variety shows, when the web launched looking like pamphlets, and when smartphones launched with WAP-enabled websites instead of apps. Glass is in that very early phase right now: it doesn’t yet have software that makes you say, “this could only happen on Glass.”

I am sure it will. But for now, Glass is a novelty waiting for a purpose.

Until next time…

Image credit: Joe Seer/
