I’ve been obsessed with gadgetry since I was a little girl, long before I got into hacking and making, which were the inevitable progression. I got a pager before I made more than five friends (I recall primarily paging myself and ignoring my mother), and was on my fourth Palm Pilot, second laptop, and fifth or sixth cellphone when I was admitted to MIT, not to mention a smattering of futuristic-at-the-time tech of questionable utility: translators, handheld scanners, foldable keyboards, anything with a circuit board.
Since the beginning I’ve thought about the possibilities of a heads-up display (HUD), and wished that a major company would take on the baggage of making one user friendly, production ready, and configurable. I’ve tried everything from SkyMall products to research prototypes from the Media Lab and Microsoft Research, and Google is making the first stab I’m convinced may stick as market validation, though I also know others (Apple, primarily) must follow. Business-wise, with smartphone sales eventually leveling off (some data shows they already are), wearable tech is how these companies will keep their growth, and how mobile developers will keep their financial edge. So the age of the HUD will soon be upon us.
When I heard rumors Google was making a HUD, I knew I had to have it, and as soon as possible. Is Glass, as it is today, worth the money? Absolutely not. It’s prohibitively expensive, limited in capabilities, and not quite consumer ready. But it gives me a view of the future I hope to partially share in this post, and that view is priceless to me. I’ve been preparing for a day which I’m now convinced will soon come: studying computer science and electrical engineering, and building production applications and devices in mobile app development, artificial intelligence, embedded real-time machine translation, and more. It’s one of the Life Projects I habitually collect Skills for, doing other important work en route but always thinking in the back of my head, “this will be useful when I get my hands on my ultimate HUD.”
A device like the one Google Glass channels is absolutely transformational for the human experience: it externalizes memory and reshapes the acquisition of skills into something much more symbiotic with an external intelligence. The new age of AI isn’t Artificial Intelligence at all but Artificially Augmented Intelligence, AAI, and for that you need seamless integration, of which the HUD is the first logical step toward an even more fantastical future still living in the realm of science fiction.
It was a production to get my Glass. I was supposed to be in the initial Google I/O batch, but that turned out not to be the case. Next, a pair earmarked for me was instead subjected to a teardown, and ended up as much-needed knowledge for the hacker community instead of wrapping my perhaps too philosophical brain. Finally, I was accepted into the Explorers program, but due to a family tragedy I was unable to make the pickup in San Francisco.
Luckily, my friend Otavio, creator of the futuristic WordLens, stepped in, and just days before his wedding committed to picking up the Glass and overnighting it to me, just in time to be intercepted hours before I had to be on a transatlantic flight. The pickup was likewise dramatic: he showed up at a specified pier in San Francisco and was taken on a cruise to a party in an abandoned air traffic control tower, where he did his best impersonation of a girl a foot shorter, and did his best to enjoy the view and the champagne. I was assured it wasn’t the worst of experiences, and I’m hoping his brief view of Glass will inspire some of the future of WordLens. That’s Otavio, pictured with my Glass in orange, with a smart-looking crew.
Using the Glass is straightforward; I’ll let you see through my eyes. Glass has WiFi but no SIM card, so a connection is most reliably obtained by tethering to the phone in your pocket or bag. It has a bone conduction speaker, enabling decent audio that mostly only you can hear, a front-facing camera, and a few other sensors detailed in Star and Scott’s teardown. The home screen, with a clock and command prompt, is pulled up by tapping the trackpad on the side, or by tilting your head back in a “What’s up?” head nod.
You can interact either through a voice-controlled menu, pulled up from the home screen by the command “OK Glass,”
or by swiping horizontally on the trackpad to navigate, vertically to go back or exit, and tapping to wake up or select.
You cannot type, but the voice recognition is industry-leading.
More complicated input, such as WiFi passwords, is entered on another machine, e.g. a computer, and picked up by Glass via a QR code. Just glance at it, and it’s input.
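Glass’s own setup codes are generated for you by Google’s setup page, and I won’t claim to know their exact scheme, but the general idea of config-by-QR is easy to illustrate. Here’s a minimal sketch in Python using the widely adopted ZXing-style `WIFI:` payload convention (the format, escaping rules, and example credentials are my assumptions for illustration, not what Glass necessarily uses):

```python
# Illustrative sketch: encoding Wi-Fi credentials as a QR payload.
# This uses the common ZXing-style WIFI: convention; Glass's actual
# setup codes come from Google's setup page and may differ.

def wifi_qr_payload(ssid, password, auth="WPA"):
    """Build a ZXing-style Wi-Fi configuration string for a QR code."""
    def esc(s):
        # Backslash-escape characters the format treats specially.
        for ch in ('\\', ';', ',', ':', '"'):
            s = s.replace(ch, '\\' + ch)
        return s
    return "WIFI:T:%s;S:%s;P:%s;;" % (auth, esc(ssid), esc(password))

print(wifi_qr_payload("HomeNet", "s3cret;pass"))
# WIFI:T:WPA;S:HomeNet;P:s3cret\;pass;;
```

A library such as `qrcode` can then render the string into an image to glance at.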
The standard actions are Google, Take a picture, Record a video, Get directions*, Message, Call*, and Video Call. External apps currently available, all built on the Mirror REST API only, include: Facebook, Twitter, CNN, Google+, Google Now, NYT, Evernote, Path, Tumblr, and Elle. (*Requires phone tethering via Bluetooth.)
and custom server-based applications approved by the user (here’s the login for a Reminders app I created).
Besides actions, communication with the user is currently done by posting cards, a few examples given
to a user’s timeline. The timeline is a horizontal row of cards, in order of receipt, which can be navigated by swiping on the trackpad.
The internal apps let you use the phone, video, and camera hardware seamlessly, without taking your smartphone out of your pocket or interrupting the flow of the conversation. It will be socially transformational, in the way that passing around an iPad at a party doesn’t interrupt the flow but opening up a laptop pretty much signals its death. I was once told at dinner by a Google engineer, who spent more time interacting with his phone than with the dinner party, that he “didn’t see the point of Glass.” That’s the point. Well, one of them, anyway.
I’ve been able to take some cool photos and videos (Muay Thai boxing, doing pushups, playing ping pong) that would otherwise never have happened. Do this at your own, and your Glass’s own, risk. I’m pretty excited to take it for a spin on the Roller Derby track during the contactless drills.
Reading your email on the fly is pretty neat, and surprisingly useful
Googling on the fly is an even quicker way to settle a bet or discreetly educate
Perhaps terrifyingly, it enables a whole new level of the bathroom selfie
Mostly the external apps currently enable simple posting to the user’s timeline
some limited replies
and the user being able to share a picture or video with the appropriate subset of friends on the appropriate social network.
I’m at a weekend hackathon for the course I’m a co-lecturer for this summer, Technology Entrepreneurship, hosted at the University of the Philippines, Diliman. We’re visiting another UP campus, the beautiful Los Baños, for two nights while the students work on their startups. In between mentoring sessions, I’ve had the chance today to compile the above notes and create a few applications for Glass, via the Mirror API and the ADK.
For ease of use, you currently have one option: the Google Mirror API. Getting set up is straightforward. Enable the Mirror API in the API console, configure a few parameters for OAuth, and set up the playground for posting, which helps you quickly see what’s possible within the constraints of the API while getting the delight of posting to your timeline without a single line of code.
Once you’re ready to go further, clone a sample project in the available language of your choice. I chose Python, and the project builds on Google App Engine (GAE) with the webapp2 framework. Deploying to GAE can be done from the command line or via GAE’s GUI.
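Before the sample app can post anything for a user, it sends them through Google’s OAuth consent flow, scoped to the Glass timeline. A minimal stdlib sketch of building that authorization URL (the `glass.timeline` scope is the Mirror API’s; the client ID and redirect URI are placeholders you’d get from your own API console project):

```python
# Sketch: building the OAuth2 consent URL the sample app sends users
# through before it can post to their timeline. client_id and
# redirect_uri are placeholders from your API console project.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode       # Python 2, as on GAE at the time

GLASS_TIMELINE_SCOPE = "https://www.googleapis.com/auth/glass.timeline"

def auth_url(client_id, redirect_uri):
    """Build the Google OAuth2 authorization URL for the Mirror scope."""
    params = [
        ("response_type", "code"),
        ("client_id", client_id),
        ("redirect_uri", redirect_uri),
        ("scope", GLASS_TIMELINE_SCOPE),
        ("access_type", "offline"),  # ask for a refresh token too
    ]
    return "https://accounts.google.com/o/oauth2/auth?" + urlencode(params)

print(auth_url("my-client-id", "https://example.com/oauth2callback"))
```

The sample project wires this up for you; I show it only because seeing the scope and redirect spelled out made the console configuration click for me.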
The sample app is comprehensive, and I made two tweaks to create simple yet useful demos. The first: someone introduces themselves while I take a picture, then says their name once more. Their picture is echoed back to me on my Glass, overlaid with their name.
The second app, only slightly more involved, had both a frontend and a backend, letting me send reminder notes to my Glass timeline
via a web application frontend
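Conceptually, the reminders app is just a web form whose submissions become timeline inserts. The real app ran on GAE with webapp2; here’s a stdlib-only sketch of the server side, with the form field name and “Reminder:” prefix as illustrative choices of mine (the `notification` level is a real Mirror timeline-item field that makes Glass chime):

```python
# Stdlib sketch of the reminders flow: a web form posts a note, the
# server wraps it as a Mirror API timeline item body. The real app
# used GAE/webapp2; field names here are illustrative.
try:
    from urllib.parse import parse_qs  # Python 3
except ImportError:
    from urlparse import parse_qs      # Python 2

def reminder_item(note):
    """Wrap a reminder note as a Mirror API timeline item body."""
    return {
        "text": "Reminder: " + note,
        "notification": {"level": "DEFAULT"},  # chime on delivery
    }

def handle_post(form_body):
    """Turn a url-encoded form submission into a timeline item body."""
    fields = parse_qs(form_body)
    note = fields.get("note", [""])[0]
    return reminder_item(note)

print(handle_post("note=buy+milk"))
```

From there, the resulting body goes into the same `timeline().insert()` call the sample app already makes.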
ADK and the coming GDK
Glass runs a modified version of Android, and apps built with the Android Development Kit are in fact partially compatible with Glass: they run, but they don’t necessarily work as intended with the hardware and software. Native Glass features will be more accessible with the coming Glass Developer Kit, but for now it’s best to make apps that require no keyboard or touch interaction from the user.
There are a few demo applications written with the ADK that are cloneable on GitHub. A compass, a level, and a stopwatch all showcase the power of the native Glass apps to come. Simply enable debugging on your Glass device, connect it to your computer, and build and run the Android app as usual, choosing Glass as the target. Make sure the screen is active.
Using the compass was especially neat. I’m hoping to soon have a preternatural sense of direction. During my Advanced Open Water PADI diving certification, I recently had to navigate pitch-black waters 10m down with a compass. It’s tough managing a dive watch and a compass at the same time while swimming, and it will be cool when HUDs come to the undersea world as well, priced for the average consumer.
I won’t go into privacy issues in detail here. There are many. They are terrifying in several dystopian future branches. I am nonetheless optimistic about pushing this technology cautiously to its limits.
Just for kicks, I checked whether my anti-violence app, Circle of 6, compiled and ran on Glass. It did, though it was disoriented and couldn’t be interacted with properly. It was gratifying to see that all my hard work on relative layouts largely succeeded, even on so small a screen. We’ll have to do some work to make the concept compatible, but in principle, being able to call for help with a head nod or record an assault with a wink is world changing for rape prosecution (imagine some futuristic, undetectable, contact-lens-embedded Glass).
This sort of thing opens a can of worms, but every tool that could potentially be abused (by totalitarian and democratic governments alike, or by individuals) can also be used to effectively combat that abuse. With such a paradigm shift in what’s possible, we’ll have to work harder than ever to make sure technology is on the side of increasing human development and protecting human rights. I’m cautiously optimistic that we are up to the task, and that the rewards are worth the challenge.