Microsoft Research project turns a smartphone camera into a cheap Kinect

Microsoft's been awfully busy at this year's SIGGRAPH conference: members of the company's research division have already shown how they can recover speech from the vibrations of a potato chip bag and turn shaky camera footage into an experience that feels like flying. Look at the list of projects Microsofties have been working on long enough, though, and something of a theme appears: these folks are really into capturing motion, depth and object deformation with the help of some slightly specialized hardware.

Consider the work of researchers from Microsoft Research's Redmond and Cambridge outposts -- they figured out a way to turn a run-of-the-mill 2D camera like the one embedded in your phone or perched atop your monitor into an infrared camera usable for capturing depth data, sort of like a Kinect. The team made working depth sensors out of a tweaked Android smartphone and a Microsoft webcam, and both were able to track a user's hands and face with aplomb, making them awfully interesting (and relatively cheap) hacks for tinkerers who want to create and test gesture-centric projects without much hassle.
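If you're one of those tinkerers and want a feel for what "gesture-centric" code built on depth data looks like, here's a minimal illustrative sketch (our own, not Microsoft's). It assumes you already have a per-pixel depth map in millimetres from whatever sensor or camera hack you're using, and it simply isolates the nearest surface, which is usually the hand held in front of the lens.

```python
# Illustrative only, not Microsoft's code. Assumes you already have a per-pixel
# depth map in millimetres from whatever depth sensor or camera hack you're using.
import numpy as np

def segment_nearest_object(depth_mm: np.ndarray, band_mm: int = 120) -> np.ndarray:
    """Return a boolean mask of pixels within band_mm of the closest valid reading.

    A hand held toward the camera is usually the nearest surface, so keeping
    only a thin depth band around the minimum isolates it cheaply.
    """
    valid = depth_mm > 0                              # zero usually means "no reading"
    if not valid.any():
        return np.zeros(depth_mm.shape, dtype=bool)
    nearest = depth_mm[valid].min()
    return valid & (depth_mm <= nearest + band_mm)

# Fake 240x320 frame: background ~1.5 m away, an 80x80 blob (the "hand") at ~0.5 m.
frame = np.full((240, 320), 1500, dtype=np.uint16)
frame[80:160, 120:200] = 500
mask = segment_nearest_object(frame)
print("hand pixels:", int(mask.sum()))                # 6400
```

Swap the fake frame for live frames from your own rig and you've got the first stage of a bare-bones gesture tracker.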

Yet another project saw a team of researchers develop their own RGB-depth camera out of off-the-shelf parts. Why? So they could pair it with software that captures 3D models of people and objects as they deform, shift and shimmy in real time. Imagine holding an inflatable ball in the palm of your hand -- it'd be a piece of cake for an RGB-depth camera to capture it and for modeling software to render it as a sphere. Now imagine squeezing that ball; imagine the bulges and depressions that take shape as your grip tightens. Between their camera and their software, these researchers have managed to create deformable models much faster than before; it might not be long before such objects wind up in your next frag session.
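To make that a little more concrete, here's a small hypothetical sketch of the first step any such pipeline needs: back-projecting a depth frame into a 3D point cloud using assumed pinhole camera intrinsics (the fx, fy, cx and cy values below are placeholders, not the researchers' calibration). A cloud like this is the raw material a deformable-model fitter chews on as the ball squishes from frame to frame.

```python
# Illustrative sketch, not the researchers' pipeline. Back-projects a depth map
# into a 3D point cloud using assumed pinhole intrinsics (fx, fy, cx, cy).
import numpy as np

def depth_to_points(depth_mm, fx=525.0, fy=525.0, cx=None, cy=None):
    """Convert an (H, W) depth map in millimetres to an (N, 3) point cloud in metres."""
    h, w = depth_mm.shape
    cx = w / 2.0 if cx is None else cx                # assume the principal point is
    cy = h / 2.0 if cy is None else cy                # the image centre if unknown
    u, v = np.meshgrid(np.arange(w), np.arange(h))    # pixel coordinates
    z = depth_mm.astype(np.float64) / 1000.0          # mm -> m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                   # drop pixels with no reading

# Toy frame: a flat surface 1 m away with a 20 cm bulge, like a ball being squeezed.
frame = np.full((120, 160), 1000, dtype=np.uint16)
frame[40:80, 60:100] = 800
cloud = depth_to_points(frame)
print(cloud.shape)                                    # (19200, 3)
```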