- Very recent (submitted on 7 Feb 2011).
- They describe it as:
"We extract SURF features from the camera image and localize them in 3D space. We match these features between every pair of acquired images, and use RANSAC to robustly estimate the 3D transformation between them. To achieve real-time processing, we match the current image only versus a subset of the previous images with decreasing frequency. Subsequently, we construct a graph whose nodes correspond to camera views and whose edges correspond to the estimated 3D transformations. The graph is then optimized to reduce the accumulated pose errors."
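The pairwise registration step described above (match features between two views, then robustly estimate the 3D transformation with RANSAC) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes the SURF matching has already produced corresponding 3D point sets `src` and `dst`, and it uses a Kabsch least-squares rigid fit inside a simple RANSAC loop with hypothetical parameter choices (200 iterations, 5 cm inlier threshold).

```python
import numpy as np

def rigid_fit(src, dst):
    """Kabsch algorithm: least-squares R, t mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC over minimal 3-point samples; refit on the best inlier set.

    iters/thresh are illustrative defaults, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_fit(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refinement on all inliers of the best hypothesis.
    return rigid_fit(src[best_inliers], dst[best_inliers])
```

The recovered per-pair transformations would then become the edges of the pose graph the authors mention, which a graph optimizer adjusts to reduce accumulated drift.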
This work is by Juergen Hess and Felix Endres of the University of Freiburg.
Got the Next Pointer
Felix Endres pointed me to two great resources:
1) The source code for the above-mentioned algorithm is available here.
2) Dieter Fox, a professor at the University of Washington, has published work on 3D modelling with the Kinect. His paper is titled "RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments". I will go through it shortly.
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
What is the paper about?
Is it about/applicable to Kinect?
How does it relate to my work?