Here’s a rather interesting photography hack that involves putting together an unusually large array of cameras to take 4D pictures. I’m not sure what 4D really means here, but one thing is for sure: you get a bunch of great angles and multiple focus points for your photos.
Perhaps our future digital cameras will come equipped with at least four lenses, not just one or two (for 3D).
So, what does this thing do? The primary function of the array is to capture the light field: a four-dimensional function that describes all the light rays in a scene. Surrounding you, now and always, is a reverberating volume of light. Just as sound echoes around a room in complex ways, bouncing off every surface, so does light, creating a structured volume. A traditional single-lens camera projects this three-dimensional world of reflected light onto a two-dimensional sensor, discarding the depth information in the process and capturing only a faint, sheared sliver of the full light field. By taking many captures at slightly shifted locations, you can record a crude representation of the light field; the number of slices determines the resolution of the capture, and our 12 captures at 7 cm separation are a bare minimum.

What can you do with a light field? The lowest-hanging fruit is computational refocusing: focusing the image after it has been captured.
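To make the refocusing idea concrete, here is a minimal shift-and-add sketch, not the array's actual pipeline. The function name `refocus` and the use of whole-pixel `np.roll` shifts are my own simplifications (real systems interpolate sub-pixel shifts), but the principle is the same: shift each camera's view in proportion to its position on the array, then average, so that objects at the chosen depth line up and everything else blurs out.

```python
import numpy as np

def refocus(images, positions, alpha):
    """Synthetic-aperture refocusing by shift-and-add.

    images    : list of equally sized arrays from the camera array
    positions : list of (u, v) camera coordinates on the array plane
    alpha     : shear parameter selecting the virtual focal plane
    """
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        # Shift each view in proportion to its camera position,
        # so rays from the chosen depth line up before averaging.
        dx = int(round(alpha * u))
        dy = int(round(alpha * v))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / len(images)

# Toy scene: one bright point seen with a disparity of 2 px
# per unit of camera baseline (cameras at u = -1, 0, +1).
positions = [(-1, 0), (0, 0), (1, 0)]
images = []
for u, _ in positions:
    img = np.zeros((1, 16))
    img[0, 8 + 2 * u] = 1.0
    images.append(img)

# alpha = -2 cancels that disparity, so the point comes into
# focus at column 8 at full brightness.
result = refocus(images, positions, alpha=-2)
```

Sweeping `alpha` over a range of values refocuses the same captured data at different depths, which is exactly the "focus after the shot" trick described above.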