

Google’s Project Tango – ‘Seeing’ in 3D

10 December, 2015 - Source: Knit

In February 2014, Google announced ‘Project Tango’, a powerful Android tablet device that tracks its own 3D motion and generates a 3D model of the surrounding environment.

The Knit team have developed applications for both virtual and augmented reality, and we were eager to see Google’s take on this emerging technology. So when developer kits went on sale in August, we made sure to get our hands on one. Over the last couple of months we’ve been exploring a range of use cases for Project Tango as part of our internal R&D programme. This has given us an understanding of the mechanics of the device, its strengths and weaknesses, and some ideas about where this technology could have a real impact for both businesses and users.

The technology

Project Tango’s core technologies are motion tracking, depth perception, and area learning. Together these let the device perceive depth, shape and form, and ‘remember’ pre-mapped areas, much as human eyes do. Motion tracking, delivered through custom sensors, gives the device an understanding of its position and provides real-time information about its 3D motion.

Depth sensors measure the shape of the areas being scanned, allowing the virtual world to interact with the real one. Project Tango devices can also use visual cues to help recognise the world around them, self-correcting errors in motion tracking and re-localising in areas they’ve seen before. The tablet’s built-in sensors include an accelerometer, gyroscope and compass, plus a motion-tracking 4-megapixel camera, an ambient light sensor, GPS, a barometer and a 3D depth sensor. These sensors are paired with an on-device software library that correlates all of this information to give the device an understanding of its surroundings.
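
For developers, this sensor fusion is exposed through the Tango client libraries for Android. The sketch below reflects our reading of the Java samples that ship with the developer kit: the app connects to the Tango service with motion tracking and depth enabled, then receives pose and point-cloud updates through callbacks. Class and callback names varied between SDK releases, so treat this as an illustrative outline rather than a definitive listing.

```java
import java.util.ArrayList;

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoEvent;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

public class TangoSensorDemo {

    private Tango tango;

    // Typically called from an Activity's onResume(), passing the Activity as context.
    public void startTango(android.content.Context context) {
        tango = new Tango(context);

        // Enable motion tracking and depth perception for this session.
        TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
        tango.connect(config);

        // Ask for device poses relative to where the Tango service started.
        ArrayList<TangoCoordinateFramePair> framePairs = new ArrayList<TangoCoordinateFramePair>();
        framePairs.add(new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE));

        tango.connectListener(framePairs, new Tango.OnTangoUpdateListener() {
            @Override
            public void onPoseAvailable(TangoPoseData pose) {
                // Real-time 3D motion: translation in metres, rotation as a quaternion.
                double x = pose.translation[0];
                double y = pose.translation[1];
                double z = pose.translation[2];
                android.util.Log.d("TangoDemo", "Device at (" + x + ", " + y + ", " + z + ")");
            }

            @Override
            public void onXyzIjAvailable(TangoXyzIjData pointCloud) {
                // Depth perception: a buffer of (x, y, z) points from the depth sensor.
                android.util.Log.d("TangoDemo", "Depth points: " + pointCloud.xyzCount);
            }

            @Override
            public void onTangoEvent(TangoEvent event) {
                // Service notifications, e.g. when tracking is lost.
            }

            @Override
            public void onFrameAvailable(int cameraId) {
                // A new image from one of the cameras (unused in this sketch).
            }
        });
    }
}
```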

How it works

When scanning a localised area, the sensors first cast light patterns onto objects in the space, creating an invisible grid from which to map the area. A camera synced to the light-casting sensor captures the grid as it is laid out, and that data is fed into an algorithm that builds an accurate map of the area on the device. Generating 3D maps this way on a consumer tablet hasn’t been seen before, and it represents an exciting development in technology, and specifically in augmented reality.
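
To make the mapping step more concrete, here is a deliberately simplified, Tango-independent sketch of the underlying idea: each depth point, once transformed into a shared world frame using the device’s pose, is dropped into a voxel grid, and the set of occupied voxels becomes a rough 3D map of the space. The VoxelMap class and its methods are our own invented names for illustration; the on-device pipeline is considerably more sophisticated.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative only: a crude occupancy map built from depth points.
// VoxelMap and its helpers are hypothetical names, not part of any Tango SDK.
public class VoxelMap {

    private final double voxelSize;                      // edge length of one voxel, in metres
    private final Set<Long> occupied = new HashSet<Long>();

    public VoxelMap(double voxelSizeMetres) {
        this.voxelSize = voxelSizeMetres;
    }

    // Add one depth point that has already been transformed into world coordinates
    // (i.e. the device pose has been applied to the raw sensor reading).
    public void addWorldPoint(double x, double y, double z) {
        long ix = (long) Math.floor(x / voxelSize);
        long iy = (long) Math.floor(y / voxelSize);
        long iz = (long) Math.floor(z / voxelSize);
        occupied.add(key(ix, iy, iz));
    }

    public int occupiedVoxelCount() {
        return occupied.size();
    }

    // Pack the three voxel indices into one key (21 bits each, ample for a room-sized map),
    // so the map stays sparse and only stores voxels the sensor has actually seen.
    private static long key(long ix, long iy, long iz) {
        return ((ix & 0x1FFFFFL) << 42) | ((iy & 0x1FFFFFL) << 21) | (iz & 0x1FFFFFL);
    }
}
```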

What we think…

Our team is impressed with how powerful the hardware is for a tablet, especially one that costs just £250. We found that its strengths lie in very precise position and orientation sensing, along with accurate distance sensing to surrounding objects. The weaknesses we’ve identified are the relatively low resolution of the texture mapping and the short battery life.

The texture-mapping issue means that although the generated 3D environments are architecturally accurate, the textures laid over them have a slightly cartoon-like appearance. However, we expect these shortcomings to be addressed as the technology evolves.

Project Tango in practice

One of the best commercial applications of Project Tango we’ve seen is a pilot conducted by the US pharmacy chain Walgreens. Its ‘augmented shopping’ experience allows users to easily locate the items they need, with personalised, relevant offers popping out from the shelves, making it a great tool for retailers to market products. There is also the potential to add a gaming element to the shopping experience, whereby shoppers collect loyalty points simply by walking down an aisle.

Project Tango also has great potential in the property market. The device’s ability to rapidly generate an accurate 3D map means it can give potential buyers an immersive remote tour of a property, rather than simply a 2D panoramic image.

We can also see benefits for visually impaired users. The indoor navigation technology could allow them to find their way around a space using the three-dimensional maps, with the device reading out the names of the various rooms.

The gaming industry is particularly excited by the opportunities Project Tango creates for world-sensing games, in which a player’s physical movement in real time influences their position within the virtual space.

Artistic expression is a use case that has so far been underexplored, and the team are excited to see how the technology could be applied here. Allowing an artist to draw in the air with a tablet to create impressive 3D shapes is an exciting prospect. It wouldn’t even have to be a person deliberately drawing in the air: the device could be attached to a creative performer, with their movements visualised in 3D.
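
To show how simple the core of such an ‘air drawing’ app could be, the sketch below just records the device’s position each time a pose update arrives and accumulates the points into a 3D polyline that a renderer could later sweep into a ribbon or tube. The AirStroke class is a hypothetical illustration of the idea, not part of any SDK.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: accumulate device positions into a 3D stroke.
public class AirStroke {

    // One point of the stroke, in world coordinates (metres).
    public static class Point3 {
        public final double x, y, z;
        public Point3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    private final List<Point3> points = new ArrayList<Point3>();
    private final double minSpacing; // skip updates closer than this, to thin the stroke

    public AirStroke(double minSpacingMetres) {
        this.minSpacing = minSpacingMetres;
    }

    // Call this from the pose callback with the device's current translation.
    public void onPose(double x, double y, double z) {
        if (!points.isEmpty()) {
            Point3 last = points.get(points.size() - 1);
            double dx = x - last.x, dy = y - last.y, dz = z - last.z;
            if (Math.sqrt(dx * dx + dy * dy + dz * dz) < minSpacing) {
                return; // too close to the previous sample
            }
        }
        points.add(new Point3(x, y, z));
    }

    public List<Point3> getPoints() {
        return points;
    }
}
```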

There is also scope for people to ‘visit’ physical spaces through Project Tango. Using a headset or tablet, we could control a robot taking a tour around a museum, and the depth perception would make the exhibits far more realistic when viewed through a 3D viewer. This could open up whole new environments for people with limited mobility, letting them experience otherwise inaccessible locations through augmented reality.

In the long term, Google will likely turn Tango into a wearable device, and with Google Glass back in development we expect the two to converge at some point. We’re excited to see how this technology develops and see huge potential across numerous industries.
