
Ten years from now, your everyday environment will feel completely different, even though little will have changed physically beyond your access to some new hardware and some extremely subtle (possibly invisible) changes to the appearance of the manufactured objects around you. But you’ll be able to walk up to a restaurant, point a device at it, and instantly see commentary and reviews about it. You’ll be able to point the same device at a famous monument whose name you don’t know and pull up its entire history. And you’ll be able to play a game where you chase invisible ghosts through the streets of your city, trying to trap them by catching them in your crosshairs.

Actually, you can do all that stuff now with a smart phone and the right app.

But in ten years, it’ll be much cooler. You’ll be able to look at a person and run facial recognition software almost instantly, other people’s personal area social networks will be perceptible, and walking directions will show up in your field of vision as a red line extending outward from your feet. You’ll likely be doing this through spectacles, a really slick tablet computer, or a combination of the two. People who don’t have smart phones by this time will operate at a disadvantage.

All of this becomes true because of augmented reality, an emerging technology that overlays our perceptions of the real world with information from computers. AR does this through intermediating devices, most commonly smart phones. Smart phones are the preferred hardware because they bring together all of the capabilities AR apps commonly rely on: locative technologies like GPS, cameras, high-resolution displays, enough computing power to analyze what the sensors and camera capture, and internet connectivity.
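
To make that list of capabilities concrete, here is a minimal sketch in Python of the loop a typical AR app runs. Every device and service name in it (camera, sensors, poi_service, display, and the project callback) is a hypothetical stand-in rather than any real platform API.

```python
from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class PointOfInterest:
    name: str
    lat: float
    lon: float


def ar_frame(camera, sensors, poi_service, display,
             project: Callable[[PointOfInterest, float, float, float], Tuple[int, int]]):
    """One pass of a typical smart phone AR overlay loop.

    `camera`, `sensors`, `poi_service`, and `display` are hypothetical
    stand-ins for the phone's camera, its GPS/compass, a web directory
    API, and the screen; `project` maps a point of interest to pixel
    coordinates (one way to write it is sketched in the next section).
    """
    frame = camera.capture()                                 # camera input
    lat, lon = sensors.gps_position()                        # locative data (GPS)
    heading = sensors.compass_heading()                      # direction the phone faces
    for poi in poi_service.query(lat, lon, radius_m=500):    # internet connectivity
        x, y = project(poi, lat, lon, heading)               # computing power: label placement
        display.draw_label(frame, poi.name, x, y)
    display.show(frame)
```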

As a result, augmented reality is suddenly making the leap from science fiction to the arsenals of ad agencies and smart phone developers. When I started blogging about AR, about nine months ago, there were still only a handful of standout apps available. By the time I sat down to research this piece, there’d been an explosion of apps with AR features — some delivering real value, others merely gimmicky.

With awareness of AR comes a new way of thinking about what we’re building online. We’re not leaving for cyberspace; we’re building out augspace. Augspace is the totality of data about the real world (and virtual worlds, too) available for access by users of AR-capable devices — particularly mobile ones. It’s virtual reality turned inside out, and it’s growing deeper and richer as we create more connections between data on the Internet and the physical world. Let’s explore five ways that Augmented Reality is making daily life more shareable…

1. Directory Apps

In the realm of directory apps, Layar and Superpages are two of the more impressive current offerings. At first glance, both resemble familiar web-based map applications. You search for a location, and you get back a list of hits on a two-dimensional map – as long as you’re holding the device roughly level to the ground. Point it at the street in front of you, though, and your view changes. You’re now looking through your camera at the scene ahead. A minimap shows which way you’re pointing, along with the direction you need to face to see your chosen search term. The application uses your position and the direction you’re facing to label the buildings around you in this view.
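
The placement step is mostly spherical trigonometry. The sketch below is my own illustration rather than code from either app: it computes the compass bearing from the user to a point of interest and converts the gap between that bearing and the phone's heading into a horizontal screen position, assuming a fixed horizontal field of view (fov_deg).

```python
import math


def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from (lat1, lon1) toward (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360


def label_x(poi_bearing, heading, frame_width, fov_deg=60):
    """Horizontal pixel position for a label, or None if the target is off-camera.

    `fov_deg` is an assumed horizontal field of view; a real app would
    read it from the camera's specifications."""
    offset = (poi_bearing - heading + 540) % 360 - 180   # signed angle, -180..180
    if abs(offset) > fov_deg / 2:
        return None                                      # not in the current view
    return int(frame_width * (0.5 + offset / fov_deg))


# Example: facing roughly northeast with a 640-pixel-wide camera frame
b = bearing_deg(42.3601, -71.0589, 42.3611, -71.0575)    # a point of interest to the northeast
print(label_x(b, heading=45.0, frame_width=640))         # lands near the middle of the frame
```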



While Superpages was built solely as an AR interface to a directory application, Layar, with the release of Layar 3.0, has begun to offer additional, user-created augspace content such as 3D art and information about local architecture. Their site includes a tutorial on how to share 3D art and user-created Google maps through the application. How far Layar will move from promoting itself primarily as a directory app toward becoming a public augspace remains to be seen, however.

2. Travel Guides

Wikitude World Browser uses a similar process of gathering positional information and then showing a little window with information on what you’re facing, but World Browser’s function is to be a travel guide. Point your phone at the Taj Mahal or Big Ben, and you’ll get back a little blurb about it.

Wikitude looks at London. Credit: Francis Storr

3. Recognize & Search

Google Goggles doesn’t have quite the same quality of instantly overlaying your surroundings with labels, but it illustrates another capability to which good AR apps must aspire: it runs image recognition on things at which you point it. So far, I’ve mostly used Goggles on wine bottles. I don’t know wine terribly well, but Goggles can look at a wine bottle for me and run a search on what it sees. The algorithm still needs some work, as it tends to get the vintner right, but not the specific variety and year. Still, pretty cool, and it’s much more accurate if you point it at a book cover.

In a similar vein, TAT Augmented ID mashes up Polar Rose’s facial recognition technology (originally implemented as an add-on to Flickr) to identify the faces in photos. True AR facial recognition — the kind that will enable us to never forget the name that goes with a face — is still in the future.

4. Gaming

AR has also produced some impressive results in gaming. Ghost-hunting games, in which an AR ghost is overlaid on your camera view for you to chase down and capture (by actually running after it!), have popped up for the Nintendo DSi (Ghostwire) and Android (SpecTrek). My first AR-related accident occurred during a spirited session of SpecTrek on my Android phone. I was chasing down an AR spectre on the Rose Kennedy Greenway when I stepped in a pile of something awful. Not everyone in Boston is terribly conscientious about cleaning up after their dogs, I’m afraid.

A recent game for the iPhone, Sky Siege, is a gestural shooter that fills the air around you with little attack helicopters. Holding your iPhone in front of you, you have to twist and turn to take aim at the bogies before blasting them with an antiaircraft cannon. Sky Siege also has a mode in which the backdrop is a cloudscape. I love it as an example of what I’ll call “hallucinatory AR,” but it has one major limitation: the app isn’t actually performing any analysis on your environment to decide where to position things.

Another class of AR games lets you lay out a game level using black edge markers or fiducial markers on an actual surface. People have created versions of Pac-Man, a marble maze game, and Super Mario-style jump-and-dodge games that can be played this way. One of them, Edge Bomber, is projected onto a wall (with floors and obstacles defined by strips of tape you’ve laid out). Players can later share the levels they create with codes.
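
As a rough illustration of how a tape-defined level could become game geometry (a sketch of the general idea, not any of these games’ actual code), the snippet below takes tape strips reported by a hypothetical detector as pairs of endpoint pixels and turns them into platforms a game loop could test against:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[int, int]


@dataclass
class Platform:
    left: Point
    right: Point

    def spans(self, x: int) -> bool:
        """True if the platform covers horizontal position x."""
        return self.left[0] <= x <= self.right[0]

    def surface_y(self, x: int) -> float:
        """Height of the strip at horizontal position x (linear interpolation)."""
        (x1, y1), (x2, y2) = self.left, self.right
        t = 0.0 if x2 == x1 else (x - x1) / (x2 - x1)
        return y1 + t * (y2 - y1)


def platforms_from_strips(strips: List[Tuple[Point, Point]]) -> List[Platform]:
    """Order each detected strip's endpoints left to right and wrap it as a platform."""
    return [Platform(min(a, b), max(a, b)) for a, b in strips]


# Example: two strips a (hypothetical) detector found on the wall
level = platforms_from_strips([((40, 300), (200, 298)), ((260, 220), (420, 225))])
print(level[0].surface_y(120))   # floor height under a player standing at x=120
```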

There are some frivolous apps out there, too, but even some of the silly ones point to interesting possibilities down the road. For example, the only function of the iPhone app Nude It is to superimpose a picture of someone in their underwear on a real person. However, the app performs some fairly well-implemented image analysis to produce this view.

5. Social Mashups

TwittARound enables users to spot other people’s tweets by location, populating an AR window with tweets in little boxes, complete with user icons. But it raises the question: what type of AR content is really useful? Is it helpful to know who is tweeting in my immediate area when most of the tweets carry no actual information about the location itself? Probably not. While it might be useful as proof of concept for the idea of personal area networks, geotagged tweets, unless they carry useful info about the location where they originated, contribute to AR fog — the cluttering of augspace with information of interest only to the originating party. Still, this demo of TwittARound (via Gizmodo) makes it look like a lot of fun to play around with:



Layar, too, now offers some Twitter integration through a partner service called Tweetmondo. Twitter integration looks somewhat more promising with Layar involved, as Layar’s other augspace content opens the door to some interesting mashups.
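
Thinking about AR fog also suggests an obvious first line of defense on the client side. The sketch below is purely illustrative; the tweet feed and the relevance test are hypothetical inputs, not any real Twitter or Layar API. It keeps only geotagged posts that are nearby and, optionally, that some relevance test judges to be about the place itself.

```python
import math


def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def visible_tweets(tweets, user_lat, user_lon, max_m=250, is_about_place=None):
    """Filter an iterable of geotagged tweets (dicts with 'lat', 'lon', 'text').

    Drops anything farther than `max_m` metres away and, if a relevance
    predicate `is_about_place` is supplied, anything it rejects."""
    kept = []
    for t in tweets:
        if distance_m(t["lat"], t["lon"], user_lat, user_lon) > max_m:
            continue
        if is_about_place is not None and not is_about_place(t["text"]):
            continue
        kept.append(t)
    return kept
```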

Lynchpin Technologies

AR builds on ideas that have been with us for a long time. The crosshairs on a telescopic gun sight or the distance information shown by a rangefinder are arguably primitive AR applications. ARToolKit, the first SDK to address the needs of AR developers, has been around since 1999. But rich AR apps demand a technology stack that has only recently come together in portable devices like smart phones, PDAs, and handheld game consoles. For example:

Cameras and Displays: Most AR apps use a camera to capture the scene in front of the device and then draw graphics over that scene on its display. In the more advanced apps, the software analyzes the camera input as well. Today, this works best when the scene contains fiducial markers (see below), because few devices have the processing power to do complex image recognition in real time. On-device displays are the prevailing mode of presentation, but one project, MIT’s SixthSense, uses a hybrid device to project augspace data onto the person or object to which it pertains.

GPS, Positional, and Locative Technologies: Today, most apps get around the problem of figuring out where graphics should be positioned relative to where the camera is pointed by using orientation tracking. The camera is used only to show the user the scene, with the device’s other sensors (the accelerometer and compass are key here) providing the app with the positional input it needs to put graphics in the right place. On a larger scale, some of the most useful AR apps are locative, feeding the user data about her surroundings once her position is established. This can be done using GPS, wireless networks, or a combination of the two. AR apps like Layar and SpecTrek use both kinds of input: GPS to figure out where you’re standing, the compass and accelerometer to establish which direction you’re looking.

Image and Facial Recognition: Image recognition technology is key to apps like Google Goggles and TAT Augmented ID. Current smart phone processors can’t do recognition seamlessly, but efforts to improve the situation are underway. A marriage of faster processors with research like that being done at Cambridge University to create better outdoor image-recognition-based tracking will ultimately mean that AR apps can recognize what they see, vastly improving their functionality.

Fiducial Markers: Where image recognition technology falls short, AR developers have turned to fiducial markers: blocky, geometric markings on objects or surfaces designed to be machine-readable. Once an app recognizes a fiducial marker, it can use that data to correctly draw graphics on a device’s display. Fiducial markers are relatives of other machine-readable markings like barcodes and QR codes, but they contain fewer bits of data relative to their size, so they can be scanned at longer range.

Credit: centralasian
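
To get a feel for why markers can be read at longer range, consider how little data one has to carry. The toy decoder below uses an invented format for illustration (not how any particular toolkit encodes its markers): it turns a 4x4 grid of black and white cells, already sampled from a detected marker, into a 16-bit ID. Sixteen bits is a far lighter load than the hundreds or thousands of bits a QR code packs into a similar area, which is what lets a camera resolve a marker from across a room.

```python
from typing import List


def decode_marker(cells: List[List[int]]) -> int:
    """Decode a 4x4 grid of cells (1 = black, 0 = white) into a 16-bit ID.

    A toy format: real marker systems add a solid border and redundant
    bits so the grid can also be read in any of its four rotations."""
    marker_id = 0
    for row in cells:
        for bit in row:
            marker_id = (marker_id << 1) | bit    # append each cell as one bit
    return marker_id


grid = [[1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 0],
        [0, 1, 0, 1]]
print(decode_marker(grid))   # 45765, one of only 65,536 possible IDs
```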

This is the first in a two-part series. Next: In “Everything is Clickable,” I talk about where AR is headed in the near future and take a look at Aure, one of the first apps to make augspace a truly shareable experience.


ABOUT THE AUTHOR

Jack Graham

Jack Graham is an interactive producer and software business analyst. His clients and employers have been primarily in the travel, publishing, financial and entertainment industries. In his spare time, he


Things I share: stories, games, hikes & bikes