 Notes on perspectives, ray tracing, and all that. 
Perspective:
* first set up a 3D (x,y,z) coord system.
* pick a viewpoint (camera/eyeball)
* pick a projection plane (computer screen)
* set up a model of some objects
* For each point on an object,
draw a line (light ray) to the viewpoint.
* Mark where it crosses the projection plane, and find (u,v) coords.
    (x,y,z)
       *
        \
     ----*----   ray crosses proj. plane at (u,v)
          \
           + eye
* simplest geometry is eyeball at (0,0,-d), a distance d in front of
the (x,y) plane (z=0) used as the screen, with z increasing into the scene
Then (x,y,z) -> ( x/(1+z/d), y/(1+z/d), 0 )
or u = x/(1+z/d), v = y/(1+z/d)
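The formula can be sketched in a few lines of Python (a minimal sketch, assuming the eye sits a distance d in front of the z=0 plane, with z increasing into the scene):

```python
def project(x, y, z, d):
    """Perspective-project the point (x, y, z) onto the z = 0 plane,
    with the eye at (0, 0, -d), i.e. distance d in front of the plane."""
    s = 1.0 + z / d          # similar-triangles scale factor
    return (x / s, y / s)

# A point twice as far from the axis but at depth z = d
# lands at the same spot as a point on the plane:
# project(4, 0, 10, 10) == project(2, 0, 0, 10) == (2.0, 0.0)
```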
* Parallel
* putting d = infinity gives "parallel projection"
* simplest example: just ignore z axis and project down to x,y.
* Two variations
* orthogonal : projection plane perpendicular to view direction
* oblique : view direction hits projection plane at a slant
* various top, side, front and other standard directions
( 1, 0, 0) direction, yz plane and 3 similar choices
( sqrt(2)/2, sqrt(2)/2, 1) and xy plane "cabinet" projection
* see pg 236 in Foley's Computer Graphics text
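The two parallel variations can be sketched as below. The oblique parameterization (shear depth z into the image plane by a factor L along direction alpha, with L=1 the "cavalier" and L=1/2 the "cabinet" projection) is a common textbook convention, not something spelled out in these notes:

```python
import math

def orthographic(x, y, z):
    # Parallel projection straight down the z axis: just drop z.
    return (x, y)

def oblique(x, y, z, L=0.5, alpha=math.radians(45)):
    # Oblique parallel projection: depth z is sheared into the image
    # plane along direction alpha, scaled by L.
    # L = 1 : "cavalier" projection; L = 1/2 : "cabinet" projection.
    return (x + L * z * math.cos(alpha), y + L * z * math.sin(alpha))
```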
* Perspective
* lines parallel to view plane stay parallel in projection
* all other sets of parallel lines converge at a "vanishing point".
* lots of these various "vanishing points" depending on
lines drawn in image
* Given principal model axes x,y,z,
there are 3 typical variations of whether lines in x,y,z
directions head towards a vanishing point,
depending on choice of viewplane
* viewplane parallel to xy plane (intersects Z axis only)
gives a single vanishing point for lines in Z direction
* viewplane that intersects x and z, parallel to y,
gives two vanishing points (left,right horizon)
while vertical (z) axis lines are parallel in projection
* viewplane that intersects all of x,y,z gives vanishing points
for all three x,y,z directions
(something left,right,up on 2D projection)
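The vanishing-point claim is easy to check numerically with the u = x/(1+z/d) projection from above (d = 10 and the direction (1,0,1) are just illustrative choices): walking far out along any line whose direction has a nonzero z component, the projection approaches (a*d/c, b*d/c) for direction (a,b,c), regardless of where the line starts.

```python
def project(x, y, z, d=10.0):
    # Perspective projection onto z = 0, eye a distance d in front.
    s = 1.0 + z / d
    return (x / s, y / s)

def vanishing_point(a, b, c, d=10.0):
    # Limit of project(x0 + t*a, y0 + t*b, z0 + t*c) as t -> infinity;
    # note it is independent of the starting point (x0, y0, z0).
    return (a * d / c, b * d / c)

# Two parallel lines in direction (1, 0, 1) with different starting
# points converge to the same image point (10, 0):
far1 = project(0 + 1e9, 0, 0 + 1e9)
far2 = project(5 + 1e9, 7, 3 + 1e9)
```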
* See http://cs.marlboro.edu/docs/3Dprojectionnotes/3Dprojection.html
for the math; it's all parameterized vector stuff.
* Computer 3D algorithms often use 4D matrices,
which can be used to turn addition (translation) into
a matrix multiplication. Also called "homogeneous coordinates".
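The homogeneous-coordinate trick in miniature: tack a 1 onto each point, and translation (ordinarily an addition) becomes a 4x4 matrix multiply, so it composes with rotations and scales by plain matrix products. A sketch without any matrix library:

```python
def mat_vec(M, v):
    # Multiply a 4x4 matrix (list of rows) by a 4-vector.
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    # Translation as a matrix, acting on homogeneous points (x, y, z, 1).
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

p = [2, 3, 4, 1]                        # the point (2, 3, 4)
q = mat_vec(translation(10, 0, -1), p)  # -> [12, 3, 3, 1]
```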

ray tracing tutorial (google search)
http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtrace0.htm
which includes a Java applet of ray tracing algorithm.
4.2.3 in POVRay documentation "Short intro to raytracing"
http://www.povray.org/documentation/view/112/
Internet Ray Tracing Competition FAQ (uses POV, primarily)
http://www.irtc.org/stills/faq.html
Jan 2004 winner
http://www.irtc.org/ftp/pub/stills/20040229/jhemmyth.jpg
wikipedia
http://en.wikipedia.org/wiki/Ray_tracing

Radiance open source "synthetic imaging system"
http://radsite.lbl.gov/radiance/HOME.html
I installed radiance; for a demo do
ssh -X mahoney@cs;
cd /usr/local/src/radiance/ray/obj/office
make
Radiance tutorial
http://radsite.lbl.gov/radiance/refer/tutorial.html

POVRay also comes in for a fair amount of Google priority.

Ordering of surfaces
* which surface is in front of which one
* first attempt: "painter's algorithm" : draw stuff in back first ...
but this fails if coverings are A over B over C over A.
* more typical is "Z-buffer" : pixel-by-pixel depth data
see http://en.wikipedia.org/wiki/Zbuffering
* Also, if model is made up of a triangular planar mesh,
keep oriented and have a normal vector for each
that gives direction of surface.
* Each normal will point toward or away from view;
can ignore those heading away if not drawing back sides.
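The normal test above can be sketched as follows (assuming a consistent winding convention for the mesh, so the cross product of two edges gives the outward normal):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def facing_viewer(v0, v1, v2, view_dir):
    # Normal of the triangle from the cross product of two edges;
    # if it points against the view direction the triangle faces us,
    # otherwise it's a back face and can be skipped when drawing.
    n = cross(sub(v1, v0), sub(v2, v0))
    return dot(n, view_dir) < 0
```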
Rounding surfaces
* If planes are used, can get nicer results by interpolating to curves.
* not to mention tricks like "bump maps"
http://en.wikipedia.org/wiki/Bump_mapping to give texture
* Or can just store everything as splines (NURBS) first...
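One concrete interpolation trick (smooth a.k.a. "Phong" shading, a standard technique though not spelled out in the notes): store a normal at each mesh vertex, and blend them across the triangle with barycentric weights, so a faceted mesh shades as if it were curved.

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def interpolated_normal(n0, n1, n2, b0, b1, b2):
    # Blend the three vertex normals with the barycentric weights
    # (b0 + b1 + b2 == 1) of a point inside the triangle, then
    # renormalize to get a unit surface normal at that point.
    n = tuple(b0 * a + b1 * b + b2 * c for a, b, c in zip(n0, n1, n2))
    return normalize(n)
```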

Ray tracing basic idea :
* For each direction (pixel resolution) from eyeball,
follow line until it hits something
(loop over all surfaces, solve math equation for intersection).
* To figure out what light will reflect from there,
in principle you'd like to know
* reflection properties of surface,
* light arriving from all possible directions
* Repeat ...
* This quickly becomes impractical if carried to extremes.
* There are a number of variations to make this computable;
all involve various approximations. A few are
(1) Phong lighting uses ambient, specular, diffuse light
(blender, many other packages)
(2) radiosity ("form factors" between various surface "patches")
(3) photon mapping (ray loses "energy" on each bounce to keep it finite)
(4) random (monte carlo, "metropolis light transport") sampling methods
The basic ideas of these approximations are that
(a) "secondary" light after a few bounces doesn't vary quickly
(b) transparent, translucent,
or mirror-like surfaces get special treatment
(c) a discrete number of primary light sources
also get special treatment
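The "solve math equation for intersection" step is simplest for a sphere: substituting the ray origin + t*direction into the sphere equation gives a quadratic in t, and the smallest positive root is the visible hit. A self-contained sketch:

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2, a quadratic
    # a*t^2 + b*t + c = 0; return the nearest t >= 0, or None on a miss.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer root
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None
```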

definitions from www.povray.org (a free quasi-open-source ray tracer)
* radiosity : diffuse interreflection of light, all of the following
* diffuse : what makes the side facing the light brighter
* specular : puts sparkles on shiny things
* reflection : mirror surfaces
* ambient : general all-over light so shadowed stuff isn't black

from the wikipedia
* 3D_computer_graphics
* http://en.wikipedia.org/wiki/3D_computer_graphics
* global illumination
* class of 3D graphics algorithms that takes into account
not only light directly from light source (local illumination)
but also light reflected off other surfaces
* more photo realistic
* can compute ahead of time and then save for walkthroughs
* examples of algorithms are
* radiosity
* cone tracing
* ray tracing
* photon mapping
* etc
* diffuse interreflection
* is an important component of global illumination
* several ways to compute it
* is often done using radiosity
* radiosity
* first and most popular global illumination method
* 3D projection
* http://en.wikipedia.org/wiki/3D_projection
(eh)
* 4x4 matrices for translations and rotations combined
into "homogeneous" coordinates
(like Computer Graphics textbook does)
* NURBS - non-uniform rational B-splines; one way of specifying 3D world
* Phong reflection model
* "for dummies" explanation at
http://www.delphi3d.net/articles/printarticle.php?article=phong.htm
* local illumination model (not complete raytrace)
* commonly used compromise between reality and computability
* three subcomponents: specular, diffuse, ambient
* each light source is given
* Is = intensity (r,g,b) of specular light,
* Id = intensity (r,g,b) of diffuse light,
* Ial = intensity (r,g,b) of ambient light
* Define Ia = global ambient = sum over all light sources of Ial_j
* Ia may also have an additional ad hoc constant
* each material (typically a property of each object in scene) has
* Ks = specular reflection constant
* Kd = diffuse reflection constant
* Ka = ambient reflection constant
* alpha = "shininess", decides how "evenly" light is reflected
* Given a point on a surface, define vector directions
* L_j = from light source j
* N = normal to surface at this point
* V = direction toward viewer
* R_j = reflection of L_j off the surface
* Then the "Phong shading equation" is
Ip = Ka*Ia + sum over lights ( Kd*Id*(L_j.N) + Ks*Is*(R_j.V)^alpha )
where any "A.B" term means the vector dot product
(always set to zero if it goes negative),
and where ^x means "to the x'th power".
* There are also several modifications to this equation
* att_j attenuation factor, often ( 1 - (d/r)^2 ), used for each light term
* glossiness g factor based on material for specular term
* directional light sources (from a far distance, no attenuation)
* spotlight cone light sources, with additional falloff term
depending on distance off beam axis
* Little basis in real physics; for more realism, use radiosity
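The Phong equation above drops straight into code. A single-channel sketch (in practice each of Ia, Id, Is and the K constants is an (r,g,b) triple, and all direction vectors are assumed unit length):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    # Reflection of the unit light direction L about the unit normal N:
    # R = 2*(L.N)*N - L
    d = dot(L, N)
    return tuple(2 * d * n - l for n, l in zip(N, L))

def phong(Ka, Kd, Ks, alpha, Ia, lights, N, V):
    # lights is a list of (L_j, Id_j, Is_j) with L_j the unit direction
    # from the surface point toward light j.  Dot products are clamped
    # at zero, per the notes.
    Ip = Ka * Ia
    for L, Id, Is in lights:
        R = reflect(L, N)
        Ip += Kd * Id * max(0.0, dot(L, N)) \
            + Ks * Is * max(0.0, dot(R, V)) ** alpha
    return Ip
```

For example, with the light, normal, and viewer all aligned, the three terms just add: Ka*Ia + Kd*Id + Ks*Is; a light behind the surface contributes only the ambient term.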