Peter Diamandis continues our AR journey – “The best way to predict the future is to create it yourself”

How do you want to see the world? As an on-going game? A constant shopping extravaganza? A classroom that spans the planet and never stops teaching? How the Earth appeared 100 years ago?

Augmented Reality (AR) will allow everyone to view the ultra-dynamic and highly immersive digital world of their choice.

Turn on Game mode, and when you’re walking down the street, everything you look at becomes part of a game… Star Wars, Pokémon or Harry Potter. Entertainment information layers permeate all physical environments.

In Shopping mode, online merges with offline environments. Every storefront shows you only the things you desire; your AI knows exactly what you’re shopping for, what you need, what colors and fabrics you prefer. Specifications from price and color, to carbon footprint and designer, are preferentially displayed by your AR headset. Even the advertisements you see (or lack thereof) are filtered by your AI.

Turn on Education mode, and wherever you look, your AI-enabled AR headset can deliver you physics lessons, historical rundowns, or linguistics courses. Virtual educators appear on demand, answering nearly any question you could possibly think of. (Every action, from an arching football to an ocean wave, can teach you about physics.) Every moment is filled with constant learning and growth.

Or switch on Tourism mode: Explore any location with a personalized, expert guide. See a site as it appeared during any time period—rewind 600 years or 6,000. And don’t miss any of the most renowned local food spots, particularly those your AI knows best match your palate.

The AR Cloud

As AR hardware advances within its deceptive growth phase, the business opportunity for AR content creators is now—whether building virtual universes or digitizing our physical one.

But to create multi-player games, social media communities, and messaging platforms linked to the same physical space, a centralized AR Cloud must first unify all headsets within a synced virtual overlay.

Just as search engines like Google serve multiple operating systems, the AR Cloud will serve every headset. Yet unlike today’s Cloud computing infrastructure, the AR Cloud will need to churn constant input-output loops in real-time, crunching and serving up far more data than we can currently comprehend.

While most AR apps available today offer one-time wonders like furniture try-outs or human anatomy lessons, AR-native apps linked to daily tasks in the physical world will change the way we do everything.

“A real-time 3D (or spatial) map of the world, the AR Cloud, will be the single most important software infrastructure in computing,” believes Ori Inbar, co-founder of Augmented World Expo. “In a nutshell, with the AR Cloud, the entire world becomes a shared spatial screen, enabling multi-user engagement and collaboration.”

But the AR Cloud is also set to transform how information is organized. Currently, we actively input our questions and find answers through 2D mediums. But the AR Cloud will soon enable a smart environment that feeds us what is relevant, when relevant.

Local businesses that are pertinent to you and your problems will auto-populate individualized data in your AR interface. Individuals’ backgrounds will pop up at networking events, particularly those of people who share your industry or interests, or who might be great partners for your next joint venture. The computing system you’ve just been shipped will guide you interactively through the assembly process—just give it a gaze and activate instructions with a blink.

Technological Requirements

But how do we actually build the AR Cloud?

As I’ve mentioned in previous blogs, the closest we have come to a widespread communal AR experience was Pokémon Go. To function, the game’s servers store geolocation, player activity, and specific location data. But even in the case of this sophisticated online-merge-offline AR experience, there is no shared memory of activities occurring in each location.

In tomorrow’s AR Cloud, a centralized AR backend would incorporate shared memory data, enabling both individualized gamification and seamless shared experiences.
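To make the idea concrete, here is a minimal, hypothetical sketch of such a shared-memory backend (the class, cell-quantization scheme, and all names are illustrative assumptions, not any real game's architecture): every player's activity at a location is written to a log keyed by a coarse geolocation cell, so each visitor sees the same shared history of that place.

```python
from collections import defaultdict
from dataclasses import dataclass
import time

@dataclass
class LocationEvent:
    player_id: str
    action: str
    timestamp: float

class SharedARBackend:
    """Hypothetical backend that, unlike today's AR games,
    keeps a shared per-location memory of player activity."""

    def __init__(self):
        # Events keyed by a coarse geolocation cell (rounded lat/lon).
        self._events = defaultdict(list)

    def _cell(self, lat, lon, precision=3):
        # ~100 m cells at 3 decimal places of latitude/longitude.
        return (round(lat, precision), round(lon, precision))

    def record(self, lat, lon, player_id, action):
        self._events[self._cell(lat, lon)].append(
            LocationEvent(player_id, action, time.time()))

    def history(self, lat, lon):
        """Every visitor to this cell sees the same shared memory."""
        return list(self._events.get(self._cell(lat, lon), []))
```

A real backend would of course shard this across servers and handle consistency, but the core difference from today's apps is the shared, persistent per-place log.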

But to do so, the AR Cloud requires us to perfect point cloud capture, a method of capturing and reconstructing 3D areas. Several techniques—laser scanners like LiDAR, depth sensors like Kinect, or drone and satellite camera footage—will together enable a universal, high-integrity point cloud.
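One reason a common capture standard matters: scans from different sensors overlap, and fusing them requires deduplicating points against a shared grid. Here is a toy fusion routine (the function names and the 5 cm voxel size are assumptions for illustration) that quantizes points from multiple scans to a common voxel grid, keeping one representative point per cell.

```python
def voxel_key(point, voxel_size=0.05):
    """Quantize a 3D point to a voxel grid cell (5 cm default)."""
    x, y, z = point
    return (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))

def merge_point_clouds(clouds, voxel_size=0.05):
    """Fuse scans from different sources (LiDAR, depth cameras, drones)
    into one deduplicated cloud: one representative point per voxel."""
    voxels = {}
    for cloud in clouds:
        for p in cloud:
            voxels.setdefault(voxel_key(p, voxel_size), p)
    return list(voxels.values())
```

Production systems (e.g. Open3D's voxel downsampling) average points within a cell and carry color and normal data, but the grid-quantization idea is the same.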

In a similar vein, a tremendous upcoming business challenge involves ingesting scans from countless hardware devices and outputting data accessible to a range of platforms. In other words, it is the process of digitizing and updating every square foot of physical space as user-worn sensors collect data.

To achieve this, we might think of solutions similar to (but far more sophisticated than) Google Tango’s “area learning,” wherein devices use camera footage and location data to recognize places they’ve seen before. Depth sensing and motion tracking will also play a critical role in environment creation.
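A toy analogue of such area learning, under heavy simplifying assumptions (a place is represented as a set of visual feature IDs, and matching is plain Jaccard similarity; every name here is hypothetical): the device accumulates a "fingerprint" per place and matches new observations against stored ones.

```python
def jaccard(a, b):
    """Similarity of two feature sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

class AreaLearner:
    """Toy sketch of area learning: store a feature fingerprint per
    place and recognize places seen before by fingerprint overlap."""

    def __init__(self, threshold=0.5):
        self.places = {}          # place_id -> set of feature ids
        self.threshold = threshold

    def learn(self, place_id, features):
        self.places.setdefault(place_id, set()).update(features)

    def recognize(self, features):
        """Return the best-matching known place, or None if unseen."""
        best_id, best_score = None, 0.0
        for pid, fingerprint in self.places.items():
            score = jaccard(fingerprint, set(features))
            if score > best_score:
                best_id, best_score = pid, score
        return best_id if best_score >= self.threshold else None
```

Real systems extract these features from camera frames (e.g. local image descriptors) and fuse them with motion and depth data, but the learn-then-recognize loop is the core pattern.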

And in terms of AR self-orientation, companies will need to develop universal localizers that give devices ultra-fast positional awareness. In this instance, crowdsourced 3D mesh stitching might be employed to stitch together all data generated by AR users, thereby recreating digital versions of shared physical environments.
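To make the stitching idea concrete, here is a heavily simplified sketch: it assumes both users observed the same anchor points and that their coordinate frames differ only by a translation. Real systems must also solve rotation, scale, and noise, typically with least-squares alignment methods such as the Kabsch algorithm or ICP; all names below are illustrative.

```python
def estimate_translation(src, dst):
    """Estimate the offset mapping src anchor points onto dst
    (simplifying assumption: translation only, no rotation/scale)."""
    n = len(src)
    return tuple(
        sum(d[i] - s[i] for s, d in zip(src, dst)) / n
        for i in range(3))

def stitch(cloud_a, cloud_b, anchors_a, anchors_b):
    """Bring user B's point cloud into user A's coordinate frame
    using anchors both users observed, then concatenate the clouds."""
    dx, dy, dz = estimate_translation(anchors_b, anchors_a)
    shifted = [(x + dx, y + dy, z + dz) for x, y, z in cloud_b]
    return cloud_a + shifted
```

Repeated pairwise stitching across many users is, in miniature, how a crowdsourced digital copy of a shared physical space could accumulate.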

Finally, the AR Cloud will ride on massive surges in connectivity. As 5G, balloons and satellite networks proliferate worldwide, latency (i.e. the delay in data transfer) will vastly improve across AR devices, allowing constant real-time updates to the cloud.

Even today, network giants like Cisco, Microsoft, and IBM are already starting to tackle the AR Cloud’s infrastructural components.

Take Cisco, which now innovates across various IoT platform solutions—think: Cisco Kinetic, Cisco Jasper, and Cisco DNA (Digital Network Architecture)—supporting the ever-increasing bandwidth needs of smart, connected devices.

Or global non-profit Open AR Cloud (OARC), which spans projects from spatial indexing, to edge-computing and 5G, to security and privacy.

The Implications

So what does it all mean?

Instant skills training: Anyone capable of following decent audiovisual explanations can become an expert on anything, whether in the middle of NYC or in rural Bangladesh, on-demand.

Screens go away: Your AR headset can project your watch, phone screen, health metrics, and entertainment anywhere, at whatever scale you desire. We first dematerialized radios, calculators, measuring tapes, and almost every computing tool into a handheld device. But now, we are dematerializing screens themselves—seeing through interfaces, not looking into them.

Control what you see: Eliminate what you don’t want to see and populate ordinary environments with your desired reality. Your office floor becomes a calm pond, your windows a mountain view. Your kids might be surrounded by open canvases, how-it-works rundowns on any tool, or written vocabulary as you speak to them. Imagine telling your AI, “every time you see a coffee cup in the world, fill it with flowers.”

Never forget anyone’s name or birthday: The combination of facial recognition, AR, and AI will let you recognize anyone by name. You’ll immediately know a familiar face when you see one, remember how you know that person, and receive the relevant information at the right time.

Instantly recognize any “thing”: Look at any tool, piece of art, or product (you name it!) and know exactly who made it, what it costs, what it does, how it might be assembled or disassembled, and the supply chain that brought it about.

Advent of digital fashion: Digital garments are overlaid seamlessly on your body in the AR Cloud, and digital copies of yourself might model new styles or innovative fashion ideas at whim. You can control who sees you in what clothing. Your colleagues can see you wearing one outfit, pedestrians another, your family members a third.

Training your AI: AR headwear will know where you’re looking, tracking your facial expressions, eye dilation and focus—all working with your personal AI to learn what you love, how you think, and what catches your imagination most.

Final Thoughts

Consider how companies, governments, artists and leaders will vie for priority in presenting AR-delivered data to your visual cortex.

Or ponder how you (or your AI) will curate your digital world. How you might maintain privacy (of which information and how much?). Do you want people looking at you to know your name? Your profession or birthdate?

AR will not only transform our world. It will fundamentally redefine it. Your combined AR/AI system can help you focus on what’s important, block out distractions, or help lift your mood when required.

The convergence of AR, gigabit/low-latency networks (such as 5G), IoT (i.e. sensors), AI and Blockchain is about to change almost every industry in the decade ahead, and create more opportunity for wealth creation than was possible in the past century!

Entrepreneurs pay attention! Consider these two economic predictions to understand the magnitude of what is coming.

  • First, McKinsey predicts that IoT will create $6.2 TRILLION of new economic value by 2025.
  • Second, Gartner predicts that AI augmentation will create $2.9 TRILLION of business value and 6.2 billion hours of worker productivity globally by 2021.

AR will play heavily in both.

My advice to everyone… DON’T BLINK!