This week I had more fun finding new ways to display the still frames exported from RGBD Visualizer. I continued working with the composited images that I created in Photoshop, producing stills from the other takes (see below), as well as making prints of the images. I made full-sized (approx. 11” x 25”) prints, with 6 of the 12 from the Reach series printed half-sized on a single large sheet. I also printed versions of each of the images below, again with a set of smaller prints from the series. I’m pleased with the results, and if nothing else it has definitely given me plenty of ideas about other variations of images I can print. I’ll be showing these in critique on Monday, and am interested to get some ideas and feedback from everyone about what I should try next.
The new composite images include a variation on the Guns composite from last week. While working with the Guns frames in Photoshop, I noted that the most visually appealing part of the imagery was the layering of the dots viewed close-up. So I re-exported a long segment of the Guns capture from RGBD Visualizer with the camera zoomed in on my torso region so as to fill the frame. With 1,110 frames, this image contains by far the most frames of any of the composites.
The result is a beautifully abstract, dimensional starburst pattern, exactly what I was hoping for when looking at the layered stills close-up in Photoshop. I noted that the total number of frames exported, 1,110, divides evenly by 5 into groups of 222, so I decided to produce another series of stills by sequentially removing groups of frames, as I did with the series from week 8. In this case I saved a still every time I removed 222 frames from the end of the sequence. Below are the results, which I’ve also made a print of.
I also made composites of the Reach 3 video frames, as well as the Plant frames.
While creating the composited images from each video, and watching the automated process of Bridge loading the frames into individual layers in Photoshop, I noticed how the compilation of the sequential images made for an interesting live animation of sorts. I started speculating as to how I could recreate this experience for viewers/users, and possibly create a truly interactive presentation of this material. I really like the idea of user control over the animation of the image stills, like a contemporary zoetrope.
I used Processing to make a basic app that loads 50 stills in sequence, and with some help from Peter Elsea, got it operating as an infinite animation loop. There is some basic interactivity: the user can move the animation around inside the display window using the arrow keys. Ultimately users will be able to record their manipulated animations as either a sequence of still images or as a QuickTime video, but currently neither of those functions is working in Processing. I will show this in class on Monday as well.
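For anyone curious about the basic structure, a looping still-frame animation like this can be sketched in a few lines of Processing. This is a minimal illustration rather than my exact app; the filenames, window size, frame rate, and key step are assumptions for the example.

```processing
// Minimal sketch: loop 50 stills as an animation, moved with the arrow keys.
// Assumes stills named "frame-00.png" ... "frame-49.png" in the data folder.
int numFrames = 50;
PImage[] frames = new PImage[numFrames];
int current = 0;          // index of the frame being shown
int xOffset = 0, yOffset = 0;  // position of the animation in the window

void setup() {
  size(800, 600);
  for (int i = 0; i < numFrames; i++) {
    frames[i] = loadImage("frame-" + nf(i, 2) + ".png");
  }
  frameRate(12);
}

void draw() {
  background(0);
  image(frames[current], xOffset, yOffset);
  current = (current + 1) % numFrames;  // wrap around for an infinite loop
}

void keyPressed() {
  // Arrow keys nudge the animation around inside the display window.
  if (keyCode == LEFT)  xOffset -= 10;
  if (keyCode == RIGHT) xOffset += 10;
  if (keyCode == UP)    yOffset -= 10;
  if (keyCode == DOWN)  yOffset += 10;
}
```

The modulo in `draw()` is what makes the loop endless, and because `draw()` runs continuously, the arrow-key offsets take effect on the very next frame.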
I’m thrilled with the direction this work is going, as I feel like there is a lot of opportunity for experimentation and more expansive related works. The concept of interactive animation is really driving my interests, and I am inspired to pursue other ways of displaying the frames in an interactive format. I will be spending more time investigating traditional forms of the zoetrope, praxinoscope, and related early cinematic technologies, which are definitely inspiring the conceptual side of this project.