In "5 Ways Augmented Reality is Making Your Life More Shareable," Jack Graham explains new technologies that are now allowing the Internet to intersect with the physical world. In this sequel, he explores how these trends might shape the future…
It's 2031, and I'm sitting on Chicago's lakefront. Because it's a Saturday morning, I have most of the usual information feeds that I leave running in my field of vision turned off, but a mini-map in the top right corner shows the lakefront with my position and, about ten meters behind me, a constantly moving pair of green blips representing my daughter and our dog playing an improvised game of girl/dog ultimate frisbee.
My AR display is a pair of specs — a fairly normal-looking pair of glasses that draws AR graphics into my field of vision. Thanks to advances in neural interfacing, I control the GUI through a combination of thought, eye movements, and gestures made where my specs' cameras can see them.
I look over toward a field house on the shore to my left. My specs recognize it based on my position and the shape of the building, allowing me to pull up a schedule of events going on there. There's a kung fu class that I might be interested in, but the unofficial notes people have left on the building suggest that it isn't very good. I go to collect my daughter and the dog.
"Look, Dad!" My daughter points back toward the lake and shares an AR tag — #architeuthis — with me as we walk toward home. I de-filter the tag to see what she's seeing, and a pink giant squid appears in the sky over Lake Michigan, its tentacles drifting lazily in an augspace breeze modeled using data from nearby weather stations and, where possible, crowdsourced from boats on the lake's surface.
Augspace has turned the city into an onion — if every layer of the onion had a distinct flavor. The divisions between the layers are mediated by user tagging — an ever-changing folksonomy of human experience that defines channels in public augspace.
"Gibson," I say.
"Huh?" She stops for a second, looking down at her chest and gesturing. She's cleared the graphic she was wearing on her shirt earlier and is now stretching a smaller version of the squid that the artist made available to fit in its place.
I chuckle. "That was in a William Gibson novel. Spook Country." Gibson flirted a bit with augspace, but I'm not sure he imagined how heavily we'd rely on it — or abuse it.
Duncan, our dog, busily noses at the corner of a lamp post as we stop in front of the market. I briefly wonder what it would be like if dogs could leave locational tags on objects for each other the way we can. "#oddsmell, 2031.05.24. Intriguing lamp post. Pee, with notes of banana peel and energy drink yielding gradually to biodiesel exhaust particulate."
Instead, the outline of this lamp post — a heavily annotated one, being outside the neighborhood market — is almost obscured by a mass of augspace fliers, notes, and video windows. Almost reflexively, I flag five of the notes I see for abuse. Augmented reality fog — or just "faug" — can be a real nuisance, even with careful filtering. Right now I'm seeing a very narrow set of tags: #crankyaginghipster, #scifi, #60657, and #carbon (the latter only because I'm about to go shopping for produce). On a commercial street like this one, this still generates such a thick faug that if I were bicycling, I'd turn it all off to avoid crashing. Some of the annotations have audio, too, although I usually keep my speakers turned off — especially here.
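None of this filtering machinery exists, of course, but the tag channels described above amount to a simple publish/subscribe model. As a rough sketch (every name here is invented for illustration), filtering a stream of augspace annotations against a user's subscribed tags might look like:

```python
# Hypothetical sketch of tag-based augspace filtering (a simple
# publish/subscribe model); none of these names come from a real API.

def visible_annotations(annotations, subscribed_tags):
    """Return only the annotations carrying at least one subscribed tag.

    Each annotation is a dict with a 'tags' set and an arbitrary payload.
    """
    subscribed = set(subscribed_tags)
    return [a for a in annotations if a["tags"] & subscribed]

# A heavily annotated lamp post, as a list of notes left by passersby.
lamp_post = [
    {"text": "Great kung fu class nearby", "tags": {"#martialarts"}},
    {"text": "Spook Country discussion group", "tags": {"#scifi", "#60657"}},
    {"text": "BUY SHINY THINGS", "tags": {"#ads"}},
]

for note in visible_annotations(lamp_post, {"#scifi", "#60657", "#carbon"}):
    print(note["text"])  # only the matching note survives the filter
```

Everything that doesn't match a subscribed channel simply never gets drawn, which is why turning tags off thins the faug.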
"Can you watch Duncan while I run inside?" I ask my daughter.
"Okay." She takes the leash, leans against the lamp post (causing the graphics to shift around, looking for space in my new field of view), and pulls open an AR window of her own to play a game while she waits.
As for who's watching my daughter — I am. If she starts moving away on my mini-map, I'll know something is up and can open a window to a public video feed from one of the many cameras on the street. And if anything did happen, cars, bicycles, and ten-year-olds can all blink red and emit loud, wailing sounds on #alarm — a civil tag that cuts through all filters. (Luckily, they made it so that cars flip over to #caralarm, which only police have to listen to, after the first few minutes.)
Inside, the market is a riot of faug. I pull a window with my shopping list open over my end of the cart. I filter out everything the market, growers, and manufacturers have tagged onto their wares. Now the graphics floating around the piles of fruit, boxes of cereal, and fish in the coolers are all coming from #carbon and #foodwiki (both of which are sufficiently policed by augspace vigilantes to be trustworthy). Unlike the big, flashy commercial annotations, user-generated notes on products normally show up as a small tab that you can expand if you want to see more.
Twenty minutes later, I'm leaving with an armload of groceries and a reasonably good balance on our carbon account. Along the way, I successfully avoided buying some herring from an over-fished area and found a bottle of wine that was a steal at $30 (ah, inflation), thanks to notes left by other users of this market and a hundred others.
"Pipe wrench." My wife sticks her hand out from under the sink, and I pass her the tool.
She's adding a composter kit onto the garbage disposal in our sink. Water will drain as usual, but most of the slurry of veggies and other stuff that goes in our drain will now go out a chute to the back yard. The kit uses standard plumbing parts, all of which bear fiducial markers. Someone who bought the kit before us recorded and distributed an AR walkthrough, which I can see in a window she's sharing with me. The fiducial markers allow the walkthrough to identify and highlight each part, showing her which one from the pile she needs next and where to attach it.
Our daughter is sitting on the edge of the kitchen counter, peering out the window. "Mr. Melendez is in our yard," she says.
"Phillips head." I pass my wife the next tool.
I look up toward my daughter. "Check #gatoperdido," I tell her.
"Oh, yeah," my daughter says. I stand up, looking out in the direction of her gaze and unfiltering #gatoperdido. Sure enough, Melendez has a graphic on his personal area network of his cat with a big question mark over it, indicating that he's searching for it and not peeping or looking to pilfer lawn gnomes. Even when you can get them to wear a GPS collar, cats are harder to track than dogs; their tendency to hide under bushes and porches amounts to a natural instinct for evading satellite surveillance.
Another of our neighbors jogs past and stops to chat with him as he abandons his search of the perimeter of our porch. Her personal area network is showing a bit more info than is usual for someone her age; she's recently divorced.
My daughter says, "She knows he's gay, right?"
My wife stands up next to us, washing her hands in the sink. "Just because you're a big personal area network whore doesn't mean you bother reading other people's profiles."
My daughter giggles while I shoot my wife a dirty look. The other neighbor and Melendez are tracing lines on a plane in thin air in front of them; it looks like she spotted the cat and is showing him on a map of the neighborhood.
"Are you done?" I ask. "I need to get to practice soon."
"Why so late in the day?" my wife asks.
"I've got them practicing against Willie's kids in Washington today — figured I'd let them sleep in."
She turns back to the sink. "Get going, coach; I can finish here."
My son and his girlfriend meet me on the practice field and help me attach cameras to some nearby trees, filling in a few holes where public video feeds don't cover the whole playing field. It's biking weather, but just as well I drove; even with the cameras set up, the back of my wagon is packed with LARPing gear: padded swords, armor, and assorted weaponry, all equipped with fiducial markers, rumble packs, and edges boldly delineated with black tape or other markings. I've also got a box full of heavy-duty athletic AR specs with earphones, still warm from the charger. The rest of the team shows up, and they start buckling on their equipment.
I put them through about half an hour of warm-ups, and then it's time to log in. The game world's AR skin overlays my team of high schoolers with graphics of a party of fantasy adventurers. The graphics sync perfectly to their movements — almost. Positional data come from a combination of cameras watching the fiducial markers on their equipment and accelerometers in the gear itself. A central game server coordinates the output back to their specs.
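The story doesn't say how the fusion works, but blending absolute-but-jittery camera fixes with smooth-but-drifting inertial data is classically done with a complementary filter (a real tracker would more likely use a Kalman filter in 3D). A toy one-dimensional sketch, with all names and the blend constant assumed:

```python
# Toy complementary filter: fuse occasional camera fixes (absolute but
# jittery) with accelerometer dead reckoning (smooth but drifts).
# Hypothetical illustration only, simplified to one axis.

class ComplementaryTracker:
    def __init__(self, alpha=0.9):
        self.alpha = alpha  # weight on the inertial prediction
        self.pos = 0.0      # estimated position (meters)
        self.vel = 0.0      # estimated velocity (m/s)

    def update(self, accel, dt, camera_pos=None):
        # Dead-reckon forward from the accelerometer reading...
        self.vel += accel * dt
        predicted = self.pos + self.vel * dt
        # ...then pull gently toward the camera fix when one arrives.
        if camera_pos is not None:
            self.pos = self.alpha * predicted + (1 - self.alpha) * camera_pos
        else:
            self.pos = predicted
        return self.pos
```

With the inertial path dominating between camera frames, the overlaid character moves smoothly instead of snapping to each camera fix; a drifted accelerometer is what makes the skin stutter until it's reset.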
"PJ, reset the accelerometer on your shield." He does so, and the big orcish warrior covering his real body starts moving more smoothly.
The kids are standing at one of two entrances to a cavernous dungeon maze. A friend of mine designed it when he was still coaching, but my team has never seen it before. Their objective is to navigate the illusory maze, fighting monsters and avoiding traps. They'll be racing against Willie's team, three time zones away. I can cut away pieces of the maze for a clear view of my kids or watch the action from above on one of the AR windows floating in front of me.
I message Willie: <your team ready?>
My kids advance cautiously on the cave mouth, using a formation that protects their lightly-armored wizard while not bunching them up too tightly. Within moments, they're fighting a tribe of goblins. Hits from their real weapons make their augspace opponents take damage and reel back. When an augspace goblin hits one of them, it triggers the rumble packs in their armor, and the health readout in their heads-up display drops. Healing spells from their priest bathe the team member being healed in a golden light, and fireballs from their wizard whiz through the air.
To an observer not watching their AR channel, it looks like a bunch of kids in hockey pads threading their way through an invisible maze while swinging foam-covered sticks and gesturing at thin air.
The game goes on for almost three hours, culminating in the two teams having to unexpectedly join forces to beat back a many-headed, fire-breathing hydra that rises out of a pit in the center of the maze. The combined team prevails, and there's a friendly round of augspace shoulder clapping and exchanging of social network links.
My son's girlfriend is mostly unscathed, but he's looking slightly charred and still has several goblin arrows sticking out of him. I drop the AR skin, and they change back from elves into a couple of sweaty high school kids.
"All right, good practice, guys. Now let's go over where you screwed up." The team gathers in a circle, and I open a big AR window of the whole maze flat in front of them. In Seattle, Willie is probably doing the same thing with his team; we can watch their debriefing later. I give a few pointers on spell targeting, remind one of the strikers that she keeps forgetting to flank enemies, and we call it a day.
My wife and I are taking the El a few stops south to meet some friends for dinner. At some point she pokes me and says, "Hey, turn on your audio and unfilter #orexic." I put on my specs and turn audio on. From a background of scratchy, incomprehensible whispers, a creepy female voice says, "I'm soooooo hungry," and then says something else that fades into the background.
"Yipes, where's that one coming from?" I ask.
She points at an ad for a fashion blog featuring an impossibly thin model on the other side of the train. When my specs recognize it, the whispers become more distinct; it's someone reading a dinner menu in spooky, digitally altered whispers.
My wife is good at spotting stuff like this. As we walk to the restaurant, we leave #orexic on. Most of the fashion ads in our path, some of which are AR graphics themselves, trigger audio that sounds like the ghost of a starved supermodel longing for a good meal.
"I wonder where that menu's from," I say. "Sounds good."
"You're terrible," she says. But you have to have a sense of humor about these things, and she knows it. On top of all the other information with which our world is saturated, constant social commentary is now embedded in our physical surroundings.
Every object has a story, every place an opinion. Everything is clickable.
Postscript: Aure & Building Out Augspace
The vision presented here of what augspace could become — helpful advice, spam, faug, and all — is based on the idea that it will be a public space. As with the rest of the Internet, open standards for augmented reality content will appear, allowing us to annotate our environment in a way that can be shared across many platforms and applications.
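No such standard exists yet, but the minimum an open augspace format would need is a location anchor, tags for channel filtering, and a payload to render. A hypothetical record, with every field name invented for illustration, might look like:

```python
# Hypothetical minimal schema for a shareable augspace annotation.
# No such standard exists; field names are invented for illustration.
import json

annotation = {
    "anchor": {                       # where the note is pinned
        "lat": 41.9399,
        "lon": -87.6385,
        "alt_m": 180.0,
    },
    "tags": ["#scifi", "#60657"],     # folksonomy channels for filtering
    "author": "example_user",
    "created": "2031-05-24T09:30:00Z",
    "payload": {                      # what to render at the anchor
        "type": "text/plain",
        "body": "Intriguing lamp post.",
    },
}

# Serializing to a common wire format is what would let different
# AR apps and platforms share the same annotation.
wire_format = json.dumps(annotation)
```

The point isn't the particular fields; it's that anything this simple and app-neutral would be enough to make annotations portable across platforms.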
Other than Layar, which is explicitly a commercial directory app, Aure is the first app I've seen that allows users to build out their own customized augspace, populating it with "aures" that annotate physical objects or locations. Even better, Aure offers sharing. This vision of augspace has a long way to go, but it seems worth doing. Augmented reality will only become viable as a public communications medium when users are no longer tied to a single way of adding to it.
In the first article in this series, I stated that AR was the opposite of VR, but this map of the Second Life virtual world overlaid on San Francisco Bay offers an interesting twist. If we can see virtual worlds overlaid on the real world around us, there's nothing to prevent them becoming coterminous with reality. Beyond just annotating real objects, augmented reality allows for populating the real world with quite a lot of stuff that was never there in the first place — including meeting points between virtual worlds and the real. Imagine taking a boat out into San Francisco Bay and finding yourself in the middle of all this!
Why do I think we'll end up with goggles or specs of some kind?
Because it's just so damned useful, far more than VR (the other goggle-demanding technology) might have been. At a talk I gave on AR to Social Media Club Boston, a woman asked me whether I thought it likely that we'd see dedicated, single-use AR devices. Early on, for some specialized applications, we might. For example, the military is already experimenting with an AR device that guides mechanics through assembling and maintaining equipment. Applications like these will use wearables, and eventually the wearables will become good enough to be spun off for the private sector.
The only other likely candidates for everyday AR are picoprojectors, tiny projectors that can show you information about what you're looking at by projecting it onto the object. MIT's Sixth Sense project mashed up a cell phone with a picoprojector to good effect. Sixth Sense put the projector on a lanyard around users' necks, but my guess would be that even if we're not using the lenses of our glasses as the display, the most obvious place to mount a picoprojector down the road is on the frame of a pair of glasses. This way, the projector's output will always be directed into the user's field of view. Picoprojectors are already getting so small that this doesn't seem like a crazy idea. Projectors have one big problem, though, which is that they're not so great for outdoor use during the day, so for now, I'm betting on display specs.
AR specs have a style barrier to overcome, too. Even the best-designed units now available (Vuzix, I'm looking at you) look incredibly dorky. No one's going to walk around with these things on their faces until they get a serious makeover.
The only really good alternative to specs is the data tablet. My disappointment with the much-maligned iPad stemmed from the lack of a camera, which makes it useless as an AR device. By omitting a camera, Apple missed out on being an important platform in a burgeoning field. But the unavailability of a live video layer for iPhone developers has already demonstrated that Apple isn't much interested in AR.
But Apple isn't the only company in the tablet field, and other hardware makers have seized the opportunity Apple missed. Until we have specs, tablets look like the primary venue for second-generation AR apps.
As hardware gets smaller and less obtrusive and common standards for building augspace emerge, we'll see richer applications that mash up AR techniques with technologies that make people smarter (like locative technologies and search) and technologies that make our software smarter (like natural language processing). AR will finish the work started by the Internet, a process that will substantially alter our experience of reality, almost seamlessly injecting a layer of data into everyday experience.