Here's how it works. First, create a description of the world in a Rayshade description file:

    #include "/afs/cs/misc/rayshade/common/omega/include/colors.h"
    fov 45                    /* Field of view 45 degrees */
    light 1 point 5 -8 3      /* Two light sources */
    light .4 point -3 -8 3
    box diffuse RED 0 0 0 2 2 2 rotate 0 0 1 -15 translate -2 -3 -2
    sphere diffuse YELLOW 2.5 2 2 .5
    plane 0 0 -2 0 0 1

Run it through Rayshade to get the left and right images and depth maps. Make sure to use the -parallel switch (only implemented at CMU) if you intend to generate a disparity map. In this example the baseline between the cameras is .3, and the final images are 150x150 pixels.

    rayshade -z left.hf -E .3 -l -parallel -R 150 150 easy.ray > left.rle
    rayshade -z right.hf -E .3 -r -parallel -R 150 150 easy.ray > right.rle
The depth maps, stored in the Rayshade heightfield-format files
left.hf and right.hf, can be hard to display if
the depth variation is too great. In this example, the tabletop surface
extends off to infinity, so a simple linear map from depth to intensity
would make it impossible to make out the nearby details. These images use
just the lower-order depth bits to bring out some details, but as a result
the tabletop depths fluctuate too much (and the background at infinity has
different colors). The images were generated using these commands:
    hf2gil -preserve left.hf | gil2rle mult:10:- | rletogif > ldepth.gif
    hf2gil -preserve right.hf | gil2rle mult:10:- | rletogif > rdepth.gif

But of course, if you want to perform computations with the actual depth values, you'll have to deal with either the CMU-GIL or Rayshade heightfield floating-point formats. Once you convert to an 8-bit format like RLE or GIF, you've given up all your precision.
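To see concretely why a linear depth-to-intensity map fails here, and what the low-order-bits trick (the mult:10 stage above) buys you, here is a small numpy sketch. The sample depth values are made up for illustration; only the two mapping strategies come from the text.

```python
import numpy as np

# Made-up sample depths: three nearby surface points, plus the tabletop
# receding toward infinity.
depths = np.array([2.0, 2.3, 2.6, 1e6])

# A linear map to 0..255 crushes every nearby depth to intensity 0,
# because the far plane dominates the range.
linear = np.round(255 * depths / depths.max()).astype(np.uint8)

# The low-order-bits trick (scale by 10, keep the bottom 8 bits, as the
# mult:10 stage does) spreads the nearby depths apart -- at the cost of
# distant surfaces wrapping around to arbitrary intensities.
low_bits = (depths * 10).astype(np.int64) % 256
```

After running this, `linear` is 0 for all three nearby points, while `low_bits` gives them three distinct intensities (20, 23, 26), which is exactly the fluctuating-tabletop effect described above.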
The depth2disp program will correctly convert a GIL depth map into a GIL disparity map, provided you ran Rayshade with the -parallel switch. Just add it before the gil2rle stage:

    hf2gil ... | depth2disp -right -baseline .3 -resolution 150 -fov 45 | ...

But this time use gil2rle -scale, since we don't have a problem with infinite depths any more. A note on depth2disp: use -right to get positive disparities, -left to get negative disparities. Don't ask.
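The text doesn't spell out the conversion depth2disp performs, but for a parallel-camera rig the standard relation follows from the parameters it takes: the focal length in pixels is resolution / (2 tan(fov/2)), and disparity = focal_pixels * baseline / depth. A sketch, under that assumption (the function name is mine, not depth2disp's):

```python
import math

def depth_to_disparity(depth, baseline=0.3, fov_deg=45.0, resolution=150):
    """Standard parallel-camera depth-to-disparity relation -- a guess
    at what depth2disp computes, since the text doesn't give a formula.
    Defaults match this tutorial's example (baseline .3, 150x150, fov 45).
    Returns a positive disparity, as with the -right flag."""
    focal_px = resolution / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return focal_px * baseline / depth
```

Note that disparity falls off as 1/depth, which is why the infinite-depth tabletop that plagued the raw depth maps is harmless here: it simply maps to disparity ~0, so a plain linear scaling (gil2rle -scale) works.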
As before, remember that RLE is an 8-bit format. Once you convert the images out of CMU GIL format, you will have lost all the subpixel precision of the disparity measurements.
Now we get some nice-looking disparity maps:
Contrast these maps with one generated by horizontal Sum-of-Absolute-Difference stereo using a 5x5 window on the greyscale image pair, checking integral disparities from 0 to 20:
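For reference, the horizontal SAD method just described can be sketched in a few lines of numpy. This is a teaching implementation of the general technique (fixed window, integral disparities, brute-force search), not the code behind the demo:

```python
import numpy as np

def sad_stereo(left, right, window=5, max_disp=20):
    """Brute-force horizontal Sum-of-Absolute-Differences stereo: for
    each left-image pixel, pick the integer disparity in 0..max_disp
    whose window x window patch in the right image (shifted by d) has
    the smallest SAD.  Borders where no full window fits stay at 0."""
    h, w = left.shape
    half = window // 2
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Left column x corresponds to right column x - d.
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        for y in range(half, h - half):
            for x in range(half, w - d - half):
                cost[d, y, x + d] = diff[y - half:y + half + 1,
                                         x - half:x + half + 1].sum()
    return np.argmin(cost, axis=0)
```

On a synthetic pair where the right image is the left shifted by a constant 3 pixels, the interior of the recovered map is uniformly 3; the interesting (and failure-prone) cases are depth discontinuities and low-texture regions, which is where the variable-window method earns its keep.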
It is left as an exercise for the reader to try running the Kanade-Okutomi variable window stereo method demo on these images. Be forewarned, you might have to wait half an hour or more for the results. HINT: try smaller images instead.
Temporal sequences, i.e. animations, are a little trickier. Rayshade does have some built-in support for animations, but unfortunately that doesn't interact well with the depth map generating code. The problem is that Heightfield format doesn't support multiple images (it's just a raw floating point format). If you have a real need for this you could probably add some code to Rayshade pretty easily (e.g., writing to sequentially-numbered files). But in the short term your best option is to define your world model in terms of some time parameter which you can #define on the command line when you run Rayshade, using the -P switch. If you want really complex animations, check the Rayshade Home Page for user-contributed solutions (you'll have to dig through the rayshade-users mailing list archive).
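Since heightfield output forces one Rayshade run per frame, the parameterized-scene approach boils down to a loop over time values. A sketch of such a driver: the -P switch passing a #define comes from the text above, but the TIME symbol, the anim.ray scene name, the frame file names, and the exact quoting of the preprocessor argument are all my assumptions.

```python
def frame_commands(scene="anim.ray", frames=10, step=0.1, res=150):
    """Build one rayshade invocation per frame, #define-ing a TIME
    symbol through the -P switch.  TIME, the scene/file names, and the
    step size are illustrative assumptions, not from this tutorial."""
    cmds = []
    for i in range(frames):
        cmds.append(
            f'rayshade -P "-DTIME={i * step:g}" -z frame{i:03d}.hf '
            f"-R {res} {res} {scene} > frame{i:03d}.rle"
        )
    return cmds

# Feed the result to os.system / subprocess, or dump it as a shell script.
```

The scene file then uses TIME wherever geometry moves, e.g. `translate TIME 0 0`, and each run leaves behind a sequentially numbered heightfield and RLE image.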
These programs can be found in /usr/local/pkg/img_utils/bin on unfacilitized Suns. Those without man pages will give you some tips if you run them with the -help option.
2 May 95 (firstname.lastname@example.org) Created.