First, let me introduce myself: I'm a French scientist in the field of Virtual Reality. More precisely, for several years I have been working on software rendering inside Virtual Environments (several screens representing the same Virtual World).
Although I'm not the original author, I plan to help with the BlenderCave project (http://www.gmrv.es/Publications/2011/GBEO11/). I think some elements are missing from the BGE API to satisfy our requirements.
Currently, stereoscopy is modelled as a segment between the eyes. Inside the BGE, we can adapt its length (through RAS_OpenGLRasterizer::SetEyeSeparation), but it is "horizontal" and parallel to the screen (see RAS_OpenGLRasterizer::GetFrustumMatrix). In a Virtual Environment, we must be able to "adapt" it to the orientation of the user's head relative to the screen (imagine that you can walk along the screen and turn around on the spot).
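To make the requirement concrete, here is a minimal sketch (plain Python, independent of the BGE) of what "adapting" the interocular segment means: instead of keeping it parallel to the screen, the eye positions are computed by rotating the segment with the head. The function name and the single-yaw-angle model are illustrative assumptions, not existing BGE code.

```python
import math

def eye_positions(head_pos, head_yaw_deg, eye_separation):
    """Return (left, right) eye positions for a head at head_pos,
    with the interocular segment rotated by head_yaw_deg about the
    vertical axis, instead of being forced parallel to the screen."""
    half = eye_separation / 2.0
    yaw = math.radians(head_yaw_deg)
    # Unit vector along the interocular axis after the head rotates.
    axis = (math.cos(yaw), math.sin(yaw), 0.0)
    left = tuple(p - half * a for p, a in zip(head_pos, axis))
    right = tuple(p + half * a for p, a in zip(head_pos, axis))
    return left, right

# Head facing the screen (yaw = 0): the segment stays horizontal.
l, r = eye_positions((0.0, 0.0, 1.7), 0.0, 0.065)
# Head turned 90 degrees: the segment is now perpendicular to the screen,
# which the current fixed-offset rasterizer code cannot represent.
l90, r90 = eye_positions((0.0, 0.0, 1.7), 90.0, 0.065)
```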
The BGE Python API already exposes the projection_matrix attribute. However, it would be useful to add projection_matrix_left and projection_matrix_right to the KX_Camera object, to let the user define his own per-eye matrices from Python scripts. Nothing would change for the "standard" user, but power users (the ones who use BlenderCave) could implement "adaptive" stereoscopy.
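The kind of matrix such scripts would compute is an off-axis (asymmetric) frustum, built per eye from the eye's position relative to the physical screen. Below is a sketch in plain Python; the function name is illustrative, and the proposed KX_Camera.projection_matrix_left/right attributes it would feed do not exist yet.

```python
def off_axis_projection(eye, screen_half_w, screen_half_h, near, far):
    """Asymmetric (off-axis) frustum for an eye at `eye` = (x, y, z),
    with the screen centered at the origin in the plane z = 0 and the
    eye at distance z > 0 in front of it. Returns a 4x4 row-major matrix
    of the kind a per-eye projection_matrix attribute would accept."""
    ex, ey, ez = eye
    # Frustum extents on the near plane, shifted by the eye offset.
    left = (-screen_half_w - ex) * near / ez
    right = (screen_half_w - ex) * near / ez
    bottom = (-screen_half_h - ey) * near / ez
    top = (screen_half_h - ey) * near / ez
    return [
        [2 * near / (right - left), 0.0, (right + left) / (right - left), 0.0],
        [0.0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# An eye centered in front of the screen gives a symmetric frustum;
# an eye shifted along the (rotated) interocular axis gives a skewed one.
m_center = off_axis_projection((0.0, 0.0, 2.0), 1.0, 0.75, 0.1, 100.0)
m_right = off_axis_projection((0.0325, 0.0, 2.0), 1.0, 0.75, 0.1, 100.0)
```

Computing one such matrix per eye every frame, from tracked head data, is exactly the "adaptive" stereoscopy the two proposed attributes would enable.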
I can work on a patch, but I would first like to be sure that this feature would be approved.
I am Jorge Gascon, one of the original authors of the paper you are referring to.
It would be a good idea to make the eye separation and additional parameters accessible from Python. I do not have much experience with the Blender rasterizer internals, but it seems like a fairly small change in the code.
I hope to catch the attention of more Blender developers, so we can hear their thoughts about this feature.