First test of using two Flip Video Ultra cameras to record stereo (3D) video. The cameras were gaffer-taped to a hard metal ruler with the lenses separated by three inches, more than a realistic interocular distance, but I figured the wider baseline might usefully exaggerate the 3D separation. The videos were then synced and combined in After Effects (above) by assigning the left camera to the red channel of the image and the right camera to the green and blue channels (creating a cyan look).
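For the curious, the channel assignment above can be sketched in a few lines of NumPy (my own illustration; the actual combining was done in After Effects, and the function name here is made up):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine left/right RGB frames (H x W x 3 uint8 arrays) into a
    red/cyan anaglyph: red channel from the left-eye camera, green and
    blue channels from the right-eye camera."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # swap in the left camera's red channel
    return out
```

Running this per frame on the two synced clips gives the same red-on-left, cyan-on-right image the After Effects channel assignment produces.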
This is meant to be viewed with old-style red/cyan 3D glasses (red on the left eye). If you don’t have a pair lying around, you can get a free pair by mail from [Rainbow Symphony](http://rainbowsymphony.com/freestuff.html).
The cameras were not aligned exactly, but the result seemed pretty decent nonetheless. The only major issue is that the Flip cams tend to smear badly when panning, so it probably works best if you move *really* slowly. Optimizing the alignment and interocular spacing will also take some work. Still, it looks promising for cheap 3D video.
Next up, color anaglyphs. 😉
I was going to post a video of it here, but no matter what web-suitable format I tried (even AVI and WMV9), they all compressed the color in a way that damaged the red/cyan separation, and the 3D no longer looked very good. Surprisingly, in the case of H.264, it smeared the colors together into a double image no matter how high a data rate I allowed. Given how aggressively these codecs reduce color information (most of them subsample the chroma channels to a quarter of the luma resolution), I’m now wondering if a color anaglyph will actually fare better than an old-school black-and-white one.
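A rough NumPy simulation suggests why chroma subsampling is so hard on anaglyphs: a red/cyan boundary lives almost entirely in the chroma channels, so averaging chroma across the boundary turns it gray. This sketch (my own, using the standard BT.601 RGB/YCbCr conversion and a crude 2×2 block average standing in for 4:2:0 subsampling) shows a pure-cyan pixel next to a red edge picking up a large red component:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # BT.601 full-range conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def subsample_chroma(frame):
    """Simulate 4:2:0-style subsampling: keep luma at full resolution,
    but average each 2x2 block of Cb/Cr and repeat it back up."""
    y, cb, cr = rgb_to_ycbcr(frame.astype(float))
    for c in (cb, cr):
        h, w = c.shape
        blocks = c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        c[:] = blocks.repeat(2, axis=0).repeat(2, axis=1)
    return ycbcr_to_rgb(y, cb, cr)

# A hard red/cyan edge: column 0 pure red, the rest pure cyan.
frame = np.zeros((4, 4, 3))
frame[:, :1] = [255, 0, 0]
frame[:, 1:] = [0, 255, 255]
out = subsample_chroma(frame)
```

With opposite chromas averaged inside one block, the pixels on either side of the edge come back neutral gray, exactly the kind of smearing I was seeing, while pixels a block away survive untouched.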
I will test that soon, but in the meantime, the morbidly curious can [download a 243 MB Apple Pixlet version here](http://www.vimeo.com/download/video:59656443?e=1209623015&h=9ae97c5cd09f0e257ef7a348d7b30ee2); that’s as small as I could get it.