I was browsing the Internet for a good, straightforward description of how to use Fakenect, but couldn't find one. After I got it working, I thought I would write one myself :)
What is Fakenect?
In essence, it's a way to get Kinect applications working without an actual physical Kinect available. You fake the Kinect by replaying a pre-recorded Kinect capture.
(1) Build Library
Assuming that you have the OpenKinect source code, first build the fakenect library: just run "cmake ." and then "make" in the OpenKinect directory (the one I have is "OpenKinect-libfreenect-2ea3ebb"). You should now have a libfreenect.so at OpenKinect-libfreenect-2ea3ebb/fakenect/lib/fakenect/.
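The steps above, as a sketch (assuming the directory layout of my source tree; your release's paths may differ):

```shell
cd OpenKinect-libfreenect-2ea3ebb
cmake .
make
# The fake driver is a drop-in libfreenect.so replacement:
ls fakenect/lib/fakenect/libfreenect.so
```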
(2) Record a Capture
Now, to record, just go to OpenKinect-libfreenect-2ea3ebb/utils/ and run "./record some_directory_name". A new directory named some_directory_name will be created as "/utils/some_directory_name", and all the recorded data will be dumped there. While the record program is running, you can stop recording by hitting Ctrl+C.
Now, you should have a record-dump in the specified directory.
(3) Run Your Recording
Say you want to run the glview program now: go to the OpenKinect-libfreenect-2ea3ebb/examples folder and run the following:
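A sketch of the invocation (the exact paths are assumptions based on my tree layout): you point LD_LIBRARY_PATH at the fakenect build of libfreenect.so so glview loads the fake driver, and set FAKENECT_PATH to the directory holding your recorded dump.

```shell
cd OpenKinect-libfreenect-2ea3ebb/examples
# Load the fake libfreenect.so and tell it where the recording lives.
LD_LIBRARY_PATH=../fakenect/lib/fakenect \
FAKENECT_PATH=../utils/some_directory_name \
./glview
```

glview should then play back your recorded depth and RGB streams exactly as if a Kinect were plugged in.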
"We extract SURF features from the camera image and localize them in 3D space. We match these features between every pair of acquired images, and use RANSAC to robustly estimate the 3D transformation between them. To achieve real-time processing, we match the current image only versus a subset of the previous images with decreasing frequency. Subsequently, we construct a graph whose nodes correspond to camera views and whose edges correspond to the estimated 3D transformations. The graph is then optimized to reduce the accumulated pose errors."
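To make the RANSAC step above concrete, here is a minimal sketch (my own, not the Freiburg code) of robustly estimating the rigid 3D transformation between two sets of matched feature points, using the SVD-based Kabsch method for each candidate fit:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC over minimal 3-point samples; keep the hypothesis with
    the most inliers, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid(src[best_inliers], dst[best_inliers])
```

The minimal sample size is three, since three non-collinear point correspondences pin down a rigid transform; bad SURF matches simply end up outside the inlier threshold and never contaminate the final fit.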
This work is by Juergen Hess (firstname.lastname@example.org) and Felix Endres (email@example.com) from the University of Freiburg.
1) The source code for the above-mentioned algorithm is available here.
2) Dieter Fox, a professor at the University of Washington, has a publication and some work on modelling with the Kinect. His publication is titled "RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments". I will go through it shortly.
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
(1) RGB Camera Calibration
If you are already familiar with the usual camera calibration techniques, then calibrating the Kinect RGB camera is no different.
(2) RGB-Depth Cross Calibration
There is no hardware registration performed on the images. In other words, the pixels in the two buffers do not correspond to each other one-to-one at the hardware level by default. So unless you explicitly come up with a transformation that does the correspondence for your Kinect (it differs from one Kinect to another), the depths and colors are going to look inconsistent.
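Once calibrated, the correspondence amounts to back-projecting each depth pixel to a 3D point, moving it into the RGB camera's frame, and re-projecting it. A sketch below; the intrinsic and extrinsic numbers are made-up placeholders, since your own Kinect's values come out of the calibration step:

```python
import numpy as np

# Hypothetical intrinsics: (fx, fy, cx, cy) for each camera.
DEPTH_K = (580.0, 580.0, 320.0, 240.0)
RGB_K   = (520.0, 520.0, 320.0, 240.0)
# Hypothetical extrinsics: rotation R and translation T (metres) taking
# depth-camera coordinates into RGB-camera coordinates.
R = np.eye(3)
T = np.array([0.025, 0.0, 0.0])   # ~2.5 cm baseline between the sensors

def depth_pixel_to_rgb(u, v, z):
    """Map depth pixel (u, v) with metric depth z to RGB pixel coords."""
    fx, fy, cx, cy = DEPTH_K
    # Back-project the depth pixel to a 3D point in depth-camera space.
    p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    # Move it into the RGB camera's coordinate frame.
    q = R @ p + T
    # Project into the RGB image.
    fx2, fy2, cx2, cy2 = RGB_K
    return fx2 * q[0] / q[2] + cx2, fy2 * q[1] / q[2] + cy2
```

With this mapping you can look up, for every depth sample, the color pixel it should be painted with, which is exactly what makes the fused point cloud look consistent.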
Nicolas Burrus's 'RGBDemo' helps you calibrate your Kinect. More information here. It can be downloaded and compiled easily.