Interactive video journal, week 2: 3D depth-capture video with RGBD Toolkit

This week I focused my efforts on getting the RGBD Toolkit working. The RGBD system uses a Kinect and a DSLR, calibrated together, to capture a subject's depth simultaneously with the video. The Kinect captures depth maps with an infrared sensor that measures the subject's distance from the camera. Once it is calibrated with the DSLR, which captures the video, the two streams are synced and essentially composited into a single image by the RGBD software. So this week I assembled the mount that holds the Kinect in place with the DSLR. The two must be mounted together on a tripod when shooting, and cannot be adjusted once they have been calibrated or the footage will not capture correctly. I then calibrated the cameras through a careful process: shooting video of a checkerboard with the DSLR while simultaneously capturing stills of the checkerboard in the RGBD software. By doing this at various distances from the camera rig, I was able to show the capture software the area the Kinect would cover, so it could then map the two video streams together.
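What the checkerboard process ultimately recovers is each camera's internal geometry plus the fixed offset between the two bodies on the mount, and that is what lets the software lay depth pixels onto the DSLR image. As a rough illustration only (this is not the Toolkit's actual code, and every camera number below is made up), the underlying pinhole-camera mapping looks something like:

```python
import numpy as np

# Hypothetical intrinsics for the depth (Kinect) and color (DSLR) cameras.
# In practice these come out of the checkerboard calibration step.
K_depth = np.array([[580.0,   0.0, 320.0],
                    [  0.0, 580.0, 240.0],
                    [  0.0,   0.0,   1.0]])
K_color = np.array([[1000.0,    0.0, 960.0],
                    [   0.0, 1000.0, 540.0],
                    [   0.0,    0.0,   1.0]])

# Rigid transform from the depth camera to the color camera
# (also a calibration output; values here are invented).
R = np.eye(3)                   # pretend the cameras point the same way
t = np.array([0.0, 0.1, 0.0])   # DSLR mounted 10 cm from the Kinect

def depth_pixel_to_color_pixel(u, v, z):
    """Map a depth-image pixel (u, v) with measured depth z (meters)
    to the matching pixel in the color image."""
    # Back-project the depth pixel to a 3D point in the Kinect's frame.
    xyz = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Move the point into the DSLR's frame, then project it.
    p = K_color @ (R @ xyz + t)
    return p[:2] / p[2]

# A point 2 m away at the center of the depth image lands here in the color image:
uv = depth_pixel_to_color_pixel(320, 240, 2.0)  # → approximately (960, 590)
```

With this mapping in hand, every depth reading can be painted with the color the DSLR saw at the corresponding pixel, which is the "compositing" the Toolkit performs.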

After the first run of calibration, I was able to capture footage of myself and manipulate it in the RGBD software. This also revealed some of the issues with calibration: you need an immense amount of light to calibrate correctly, and once I got my captured footage into the software I could see where calibration had failed, because video data was missing from the image. The footage was blotchy in appearance, with large areas of black pixelation where the calibration process had been unable to map the range of movement. I also learned that outputting video from the software requires a lot of rendering, followed by exporting frame by frame into video editing software like Premiere, so I decided to re-calibrate and reshoot under better conditions to make the effort of outputting my first test sample worthwhile. I re-calibrated the cameras, with better results the second time. I set up lighting and a backdrop at home, and had the dancer who will be in the final video perform for the camera to the song we will use. The shoot went well: we captured footage successfully and got it back into the RGBD software for editing.
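One common cause of those black, blotchy regions is the Kinect simply returning no reading for some pixels (dark or IR-absorbing surfaces, poor lighting), which depth streams typically encode as zero. Purely as a sketch (the frame below is synthetic, not real Toolkit data), a quick sanity check on how much of a captured depth frame is missing might look like:

```python
import numpy as np

# Synthetic 480x640 depth frame in millimeters; 0 means "no depth reading".
depth = np.full((480, 640), 1500, dtype=np.uint16)  # subject ~1.5 m away
depth[100:200, 100:300] = 0                         # a blotchy hole, 100x200 px

holes = (depth == 0)
hole_fraction = holes.mean()   # share of the frame with no data, ~6.5% here
```

A check like this on a few frames right after a test capture could flag a bad calibration or lighting setup before committing to a long frame-by-frame export.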

I spent a lot of time in the RGBD software working with the footage. It has been quite time-consuming, as there is a steep learning curve with this software and few resources out there for assistance. I'm uploading the sample footage now to the link below, along with photos documenting this week's set-up process. I'm really excited about the results with the RGBD software, and I realize that I need to keep putting a significant amount of time into working with the footage to get optimal results. This week I will continue capturing video samples and learning what I can about using this software. I'm also going to return to my Max patches and start considering how I can integrate the Max footage with the RGBD footage for the final video.
