Google Glass–using, hacking, loving

Motivation

I’ve been obsessed with gadgetry since I was a little girl, long before I got into hacking and making, which was the inevitable progression. I got a pager before I made more than five friends (I recall primarily paging myself and ignoring my mother), and was on my fourth Palm Pilot, second laptop, and fifth or sixth cellphone when I was admitted to MIT, not to mention a smattering of futuristic-at-the-time tech of questionable utility: translators, handheld scanners, foldable keyboards, anything with a circuit board.

Since the beginning I’ve thought about the possibilities of a heads-up display (HUD), and wished that a major company would take on the baggage of making one that is user friendly, production ready, and configurable. Google is making the first stab I’m convinced may stick as market validation (I’ve tried everything from SkyMall products to research prototypes from the Media Lab and Microsoft Research), though I also know others, Apple primarily, must follow. Business-wise, with smartphone sales eventually leveling off (some data shows they are already starting to), wearable tech is how these companies keep their growth and how mobile developers keep their financial edge. So the age of the HUD will soon be upon us.

When I heard rumors Google was making a HUD, I knew I had to have it, as soon as possible. Is Glass, as it stands, worth the money? Absolutely not. It’s prohibitively expensive, limited in capabilities, and not quite consumer ready. But it gives me a view of the future I hope to partially share in this post, and that view is priceless to me. I’ve been preparing for a day I’m now convinced will soon come: studying computer science and electrical engineering, and building production applications and devices in mobile development, artificial intelligence, embedded real-time machine translation, and more. It’s one of the Life Projects I habitually collect Skills for, doing other important work en route but always thinking in the back of my head, “this will be useful when I get my hands on my ultimate HUD”.

A device like the one Google Glass channels is absolutely transformational for the human experience: externalizing memory and reshaping the acquisition of skills into something much more symbiotic with an external intelligence. The new age of AI isn’t Artificial Intelligence at all but Artificially Augmented Intelligence, AAI, and for that you need seamless integration, of which the HUD is the first logical step toward an even more fantastical future, as yet still living in the realm of science fiction.

Obtaining

It was a production to get my Glass. I was supposed to be in the initial Google I/O batch, but that turned out not to be the case. Next, a pair earmarked for me was instead subjected to a teardown and ended up as much-needed knowledge for the hacker community instead of wrapping my perhaps too philosophical brain. Finally, I was accepted into the Explorers program, but due to a family tragedy was unable to make the pickup in San Francisco.

Luckily, my friend Otavio, producer of the futuristic WordLens, stepped in, and just days before his wedding committed to picking them up and overnighting them to me, just in time to be intercepted hours before I had to be on a transatlantic flight. The pickup was likewise dramatic: he showed up to a specified pier in San Francisco and was taken on a cruise to a party in an abandoned air traffic control tower, where he did his best impersonation of a girl a foot shorter and did his best to enjoy the view and the champagne. I was assured it wasn’t the worst of experiences, and I’m hoping his brief view of Glass will inspire some of the future of WordLens. Otavio, pictured with my Glass in orange, with a smart-looking crew.

IMG_4773

IMG_4761

Using

Using Glass is straightforward; I’ll let you see through my eyes. Glass has WiFi but no SIM card, so a connection is reliably obtained by tethering to the phone in your pocket or bag. It has a bone conduction speaker, enabling decent audio that mostly only you can hear, a front-facing camera, and a few other sensors detailed in Star and Scott’s teardown. The home screen, with a clock and command prompt, is pulled up by tapping the trackpad on the side, or by tilting your head back in a “What’s up?” head nod.

startscreen

You can interact either through a voice-controlled menu, pulled up from the home screen by the command “OK Glass”

menu

or through the trackpad: swipe horizontally to navigate, vertically to “go back” or “exit”, and tap to wake up or select.

record

You cannot type, but the voice recognition is at the industry’s leading edge.

droid@screen-27

More complicated input, such as WiFi passwords, is entered on another device, e.g. a computer, and picked up by Glass via a QR code. Just glance at it, and it’s input.

Screen Shot 2013-08-03 at 9.51.02 PM
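If you’re curious what’s inside those codes, they just pack the network details into a QR payload. Here’s a rough sketch of generating a similar code yourself with the Python `qrcode` library, using the common WIFI: payload convention; I can’t promise the MyGlass setup page uses exactly this format, so treat it as illustrative.

```python
# Illustrative only: pack WiFi credentials into the widely used "WIFI:" QR
# payload convention. Whether Glass's setup flow expects exactly this format
# is an assumption; the MyGlass page generates its own codes.
import qrcode  # pip install qrcode[pil]

ssid, password = 'MyHomeNetwork', 'correct horse battery staple'
payload = 'WIFI:T:WPA;S:%s;P:%s;;' % (ssid, password)

img = qrcode.make(payload)   # returns a PIL image
img.save('wifi_setup_qr.png')
```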

The standard actions are Google, Take a picture, Record a video, Get directions*, Message, Call*, and Video Call (*requires phone tethering via Bluetooth). External apps currently available, all of which use the Mirror REST API only, include Facebook, Twitter, CNN, Google+, Google Now, NYT, Evernote, Path, Tumblr, and Elle,

Screen Shot 2013-08-03 at 10.05.20 PM

as well as custom server-based applications approved by the user (here’s the login to a Reminders app I created).

Screen Shot 2013-08-03 at 11.23.52 AM

Besides actions, communication with the user is currently done by posting cards (a few examples given)

droid@screen-18

droid@screen-17

droid@screen-16

to a user’s timeline. The timeline is a horizontal row of cards, in receipt order, which can be navigated by swiping on the trackpad.

timeline
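For developers, that same timeline is just data: the Mirror API exposes it as a list of card resources you can page through. A tiny sketch, assuming `service` is an authorized Mirror API client handle (built the way I show in the Hacking section below):

```python
# List the most recent cards on the wearer's timeline and print their content.
# `service` is assumed to be an authorized google-api-python-client handle
# created with build('mirror', 'v1', http=...), as shown later in this post.
result = service.timeline().list(maxResults=5).execute()
for card in result.get('items', []):
    print card.get('text') or card.get('html'), '| updated:', card.get('updated')
```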

The internal apps let you use the phone, video, and camera hardware seamlessly, without taking your smartphone out of your pocket or interrupting the flow of the conversation. It will be socially transformational, in the way that passing around an iPad at a party doesn’t interrupt the flow but opening up a laptop pretty much signals its death. I was once told at dinner by a Google engineer, who spent more time interacting with his phone than with the dinner party, that “he didn’t see the point of Glass”. That’s the point, well, one of them anyway.

share

I’ve been able to take some cool photos and videos (Muay Thai boxing, doing pushups, playing ping pong) that would otherwise never have happened. Do this at your own, and your Glass’s own, risk. I’m pretty excited to take them for a spin on the Roller Derby track during the contactless drills.

firstpersonpingpong

Reading your email on the fly is pretty neat, and surprisingly useful

email

Googling on the fly is an even quicker way to settle a bet or discreetly educate

googlesearch

Perhaps terrifyingly, it enables a whole new level of the bathroom selfie

20130803_100805_342

Mostly the external apps currently enable simple posting to the user’s timeline

twitter

some limited replies

appwithreplies

and sharing a picture or video with the appropriate subset of friends on the appropriate social network.

share2

Hacking

I’m at a weekend hackathon for the course I’m co-lecturing this summer, Technology Entrepreneurship, hosted at the University of the Philippines, Diliman. We’re visiting another UP campus, the beautiful Los Baños, for two nights while the students work on their startups. In between mentoring, I’ve had the chance today to compile the above notes and create a few applications for Glass, via the Mirror API and the ADK.

Mirror API

For ease of use, you currently have one option: the Google Mirror API. Getting set up is straightforward: enable the Mirror API in the API console, configure a few parameters for OAuth, and set up the playground for posting, which lets you quickly see what’s possible within the constraints of the API while getting the delight of posting to your timeline without a single line of code.
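Once you graduate from the playground to code, the same card post is only a few lines. A minimal sketch with the 2013-era google-api-python-client; `authorized_http` is a stand-in for an httplib2.Http object wrapped with the user’s stored OAuth2 credentials:

```python
# Post a simple text card to the authorized user's timeline via the Mirror API.
# `authorized_http` is a placeholder for credentials.authorize(httplib2.Http()),
# i.e. whatever OAuth2 plumbing your app already has (the GAE sample handles it).
from apiclient.discovery import build

service = build('mirror', 'v1', http=authorized_http)
card = {
    'text': 'Hello from the Mirror API',
    'notification': {'level': 'DEFAULT'},  # chime/nudge the wearer on arrival
    'menuItems': [
        {'action': 'READ_ALOUD'},          # built-in "read aloud" menu item
        {'action': 'DELETE'},
    ],
}
service.timeline().insert(body=card).execute()
```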

Once you’re ready to go further, clone a sample project in the available language of your choice. I chose Python, and that project builds on GAE with the webapp2 framework. Deploying to GAE can be done from the command line or via GAE’s GUI.

The sample app is comprehensive, and I made two tweaks to create simple yet useful demos. The first: have someone introduce themselves while I take a picture, then say their name once more; their picture is echoed back to my Glass overlaid with their name.

introduction

introduction2
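Under the hood, the tweak boils down to attaching the captured photo to a new timeline card and overlaying the transcribed name on it. A rough sketch; the `photo` article class and the `attachment:0` image reference follow the Glass card HTML conventions as I remember them, and `service`, `photo_bytes`, and `name` are assumed to arrive from the app’s notification handling:

```python
# Sketch of the "introductions" tweak: post the captured photo back to my own
# timeline with the spoken name overlaid. service/photo_bytes/name are assumed
# to come from the app's subscription callback; error handling omitted.
import io
from apiclient.http import MediaIoBaseUpload

def post_introduction(service, photo_bytes, name):
    card = {
        'html': (
            '<article class="photo">'
            '<img src="attachment:0" width="100%" height="100%">'
            '<div class="photo-overlay"></div>'
            '<section><p class="text-auto-size">' + name + '</p></section>'
            '</article>'
        ),
        'menuItems': [{'action': 'SHARE'}, {'action': 'DELETE'}],
    }
    media = MediaIoBaseUpload(io.BytesIO(photo_bytes),
                              mimetype='image/jpeg', resumable=True)
    return service.timeline().insert(body=card, media_body=media).execute()
```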

The second app, only slightly more involved, had both a frontend and a backend, and let me send reminder notes to my Glass timeline

reminders1

reminders

via a web application frontend

Screen Shot 2013-08-03 at 10.36.56 PM

you can check it out here if you have Glass yourself.
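For the curious, the backend is little more than a webapp2 handler that turns a form post into a timeline card. A simplified sketch; the names are illustrative, and `get_authorized_http()` stands in for the credential storage the GAE sample project already provides:

```python
# Simplified sketch of the reminders backend: a webapp2 handler that takes a
# note from the web frontend and posts it as a card to my Glass timeline.
# get_authorized_http() is a stand-in for loading the stored OAuth2 credentials
# (the GAE sample keeps them in the datastore).
import webapp2
from apiclient.discovery import build

class ReminderHandler(webapp2.RequestHandler):
    def post(self):
        note = self.request.get('note')
        service = build('mirror', 'v1', http=get_authorized_http())
        service.timeline().insert(body={
            'text': 'Reminder: ' + note,
            'notification': {'level': 'DEFAULT'},  # nudge me when it lands
            'menuItems': [{'action': 'DELETE'}],   # let me dismiss it from Glass
        }).execute()
        self.response.write('Reminder sent to Glass.')

app = webapp2.WSGIApplication([('/reminder', ReminderHandler)])
```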

ADK and the coming GDK

Glass runs a modified version of Android, and the Android Development Kit in fact produces apps that are partially compatible with Glass. They run, but they don’t necessarily work as intended with the hardware/software. Native Glass features will become more accessible with the coming Glass Developer Kit, but for now it’s best to make apps that require no keyboard or touch interaction on the part of the user.

There are a few demo applications written with the ADK that are cloneable on GitHub. A compass, a level, and a stopwatch all showcase the power of the native Glass apps to come. Simply enable debugging on your Glass device, connect it to your computer, and build and run the Android app as usual, choosing Glass as the target. Make sure the screen is active.

compass

leve1

stopwatch

Using the compass was especially neat; I’m hoping to soon have a preternatural sense of direction. During my Advanced Open Water PADI diving certification, I recently had to navigate pitch-black waters 10 m down with a compass. It’s tough managing a dive watch and compass at the same time while swimming, and it will be cool when HUDs come to the undersea world as well, priced for the average consumer.

Privacy

I won’t go into privacy issues in detail here. There are many. They are terrifying in several dystopian future branches. I am nonetheless optimistic about pushing this technology cautiously to the limits.

Just for kicks, I checked whether my anti-violence app, Circle of 6, compiled and ran on the device. It did, though it was disoriented and couldn’t be interacted with properly. It was gratifying to see that all my hard work on relative layouts largely succeeded, even on so small a screen. We’ll have to do some work to make the concept compatible, but in principle being able to call for help with a head nod or record an assault with a wink is world changing for rape prosecution (imagine some futuristic, undetectable, contact-lens-embedded Glass).

circleof6googleglass

This sort of stuff opens a can of worms, but every tool that could potentially be abused (by totalitarian or democratic governments alike, or by individuals) can also be used to effectively combat that abuse. With such a paradigm shift in what’s possible, we’ll have to work harder than ever to make sure technology is on the side of increasing human development and protecting human rights. I’m cautiously optimistic that we are up to the task and that the rewards are worth the challenge.
