“At FRL, our research teams are starting to build the core infrastructure that will underpin tomorrow’s AR experiences.
How does this work? Think of the 3D spaces we showed above, and now picture this capability at planet scale. Using machine vision alongside localization and mapping technology, LiveMaps will create a shared virtual map. To deliver this technology at that scale, we envision that LiveMaps will rely on crowd-sourced information captured by tomorrow’s smart devices. To populate the first generation of maps, our researchers are exploring mapping our own campuses and using small sets of geotagged public images to generate point clouds — a common technique in navigation mapping technology today.
Rather than reconstructing your surroundings in real time, AR glasses will one day tap into these 3D maps. That means your glasses will need drastically less compute power, enabling them to run on a mobile chipset. With these 3D spaces, your avatar could teleport anywhere in the world.”
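The point-cloud step mentioned above rests on a standard geometric idea: if the same landmark appears in two photos taken from known camera poses, its 3D position can be triangulated, and repeating this over many matched features yields a point cloud. The sketch below shows that core triangulation step with a toy calibrated two-camera setup; the camera poses, intrinsics, and landmark values are illustrative assumptions, not a description of FRL's actual pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D observations."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X; stack them and solve A @ X = 0 via SVD.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to 3D coordinates

def project(P, X):
    """Project a 3D point into pixel coordinates with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup (assumed values): identity intrinsics, first camera at the
# origin, second camera translated 1 m to the right along the x-axis.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])  # a landmark 4 m in front of camera 1
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 3))  # → [ 0.3 -0.2  4. ]
```

A real mapping pipeline would first estimate the camera poses themselves from feature matches (structure from motion), then run this triangulation across thousands of matched features per image pair to densify the cloud.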