Blender ~ Match Perspective

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

odyssey
Posts: 18
Joined: Wed Oct 23, 2002 5:58 am
Location: 3rd Rock from the Sun

Blender ~ Match Perspective

Post by odyssey » Tue Apr 22, 2003 3:53 pm

Match Perspective.

What I’d like to see is a Blender camera function that would align the spatial 3D geometry of the current model to a static image background. This was a feature adopted long ago in the TOPAS modeler and has been built upon by many apps since.

While you can do some simple perspective match-ups, the precision needed to render an architecturally correct image is lacking.

The method? Well, what you'd need is at least two polygon planes (each with 4 vertices) on the current model that would act as handles that you'd deform to match against the background image. To align properly, 2 vertices of one plane would have to be shared with the 2nd plane; imagine 2 rectangles at right angles to one another, positioned so that 2 vertices of one occupy the same spatial coordinates as 2 vertices of the other.

Using the vertices as handles, you'd deform the planes to perspective references that would exist in the image you’re matching to.

While the model itself remains unchanged, the position, rotation and camera lens would be recalculated to align the model view to the static image in the background.

Sound doable?

Bischofftep
Posts: 0
Joined: Thu Feb 06, 2003 11:01 pm
Location: Richmond, VA
Contact:

A job for Python perhaps?

Post by Bischofftep » Tue Apr 22, 2003 6:21 pm

Hello, Odyssey!

Would this work?

Designate one plane as your "horizontal" reference, and another as your "vertical" reference, sharing an edge as you suggest.

Then rotate the two so that you end up with what appears to be a "match" with your background image.

Launch a Python script that will place the camera at the intersection of the normals of these two faces (i.e. a point perpendicular to both of them), pointing at the midpoint of the shared edge between the two.
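A minimal sketch of that script idea in plain Python (no Blender API; all helper names here are made up for illustration). One wrinkle: two normal lines in 3D rarely intersect exactly, so the "intersection" below is taken as the midpoint of their closest approach:

```python
import math

def vsub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def vadd(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def vscale(v, s): return (v[0] * s, v[1] * s, v[2] * s)
def vdot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def vcross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_center(pts):
    n = float(len(pts))
    return (sum(p[0] for p in pts) / n,
            sum(p[1] for p in pts) / n,
            sum(p[2] for p in pts) / n)

def face_normal(pts):
    n = vcross(vsub(pts[1], pts[0]), vsub(pts[2], pts[0]))
    return vscale(n, 1.0 / math.sqrt(vdot(n, n)))

def camera_from_two_faces(face_a, face_b, shared_edge):
    """Place the camera at the closest approach of the two normal
    lines through the face centers, aimed at the midpoint of the
    edge the faces share."""
    c1, n1 = face_center(face_a), face_normal(face_a)
    c2, n2 = face_center(face_b), face_normal(face_b)
    w0 = vsub(c1, c2)
    a, b, c = vdot(n1, n1), vdot(n1, n2), vdot(n2, n2)
    d, e = vdot(n1, w0), vdot(n2, w0)
    denom = a * c - b * b          # zero if the faces are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = vadd(c1, vscale(n1, s))   # closest point on normal line 1
    p2 = vadd(c2, vscale(n2, t))   # closest point on normal line 2
    camera_pos = vscale(vadd(p1, p2), 0.5)
    look_at = vscale(vadd(shared_edge[0], shared_edge[1]), 0.5)
    return camera_pos, look_at

# Unit "floor" and "wall" quads sharing the edge (0,0,0)-(1,0,0):
floor = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
wall  = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
pos, aim = camera_from_two_faces(floor, wall, ((0, 0, 0), (1, 0, 0)))
print(pos, aim)   # (0.5, 0.5, 0.5) (0.5, 0.0, 0.0)
```

For this right-angle pair the two normal lines happen to meet exactly, so the camera lands on the corner diagonal; in a hand-deformed match they would only pass near each other, which is why the closest-approach midpoint is used.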

Would this work?

-Bischofftep

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen » Tue Apr 22, 2003 8:22 pm

This sounds like a worthy idea.

A quick correction: I believe that to match camera zoom and camera position, you need only three reference points. Of course, more points always help, and thus it shouldn't be limited to three points :)

This feature would also dramatically aid in the process of camera tracking (for purposes of compositing video sequences), so its usefulness would not be limited to still images.

As with all new features, the UI for this will have to be well thought-out, and should mesh as seamlessly as possible with the existing features in Blender.

Bischofftep
Posts: 0
Joined: Thu Feb 06, 2003 11:01 pm
Location: Richmond, VA
Contact:

Icarus

Post by Bischofftep » Tue Apr 22, 2003 8:27 pm

Hello again:

The Icarus program, once freeware and now $3,000-ware, matched camera positions & tracking very well.

There is a Python script that imports this data. Unfortunately, I can't use the script (I'm on OS X...), and Icarus is no longer free.

-Bischofftep

odyssey
Posts: 18
Joined: Wed Oct 23, 2002 5:58 am
Location: 3rd Rock from the Sun

Post by odyssey » Tue Apr 22, 2003 8:42 pm

Hi Bischofftep,

Actually, what you've described as horizontal and vertical references for each plane is exactly what I had in mind, with one minor exception: the areas of the two planes don't necessarily have to be equal. For instance ..

One rectangular polygon (the vertical) could be used to map against the (inside) wall of a sidewalk, while the other is matched to an area of road.

The vertices where the 2 polygons meet would (hopefully) intersect with the vectors created by the vertices at the opposite ends of each plane. Oh gawd .. :shock: .. I already feel an aneurysm unfolding!

This may call for decompiling some assembly; I'd honestly hoped to have this as a standard built-in feature as opposed to a script or plug-in.

odyssey
Posts: 18
Joined: Wed Oct 23, 2002 5:58 am
Location: 3rd Rock from the Sun

Post by odyssey » Wed Apr 23, 2003 4:23 pm

Hi Cessen,

You're correct about the video-sequence possibility; for right now, though, I'm figuring on matching against a single static background. If this works to a good conclusion, then video tracking would be the next logical step.
cessen wrote: A quick correction: I believe that to match camera zoom and camera position, you need only three reference points. Of course, more points always help, and thus it shouldn't be limited to three points :)
I thought about this, but the way I see it evolving (and gawd knows I've been mistaken before :wink: ), it has to be the deformation of the planes that passes the perspective information on to the camera. If I get a few moments I'll try and create a fast render to illustrate my point.
cessen wrote:
As with all new features, the UI for this will have to be well thought-out, and should mesh as seamlessly as possible with the existing features in Blender.
Absolutely correct, and I've already looked into that possibility. Say for instance you did want to execute a Match Perspective function (maybe a background image of a bare office desk, and a model of a desk top with a Bells & Whistles 90 GigaQuad Exoset workstation - complete with hot-tub and refrigerated beverage dispenser); you'd select the camera and click once on the animate icon [F7]. We currently have 2 free panels in which to place the Match Perspective GUI.
In practice, I think what would happen is you'd have to have the model highlighted first, do an F7 and invoke Match Perspective.

At this point a prompt would appear that would have you choose one four-point polygon (maybe the desk top) that also adjoins another four-point polygon (the side edge of that desk). You match the vertices of each polygon to the corners of the desk that reside in your background image.

This deformation information is then passed on to the camera (the desk top's current position and, finally, the desired position). If the heavens are properly aligned and the galactic tilt is only just slightly off perpendicular, everything should Match up 8) .

With this in mind, once the Match Perspective has concluded, you'd no longer need the object model of the desk top (this thingy wouldn't render), since you now have that honkin' workstation properly aligned and positioned to the desk in the background image.

A lot to swallow I know but Hey .. it's a plan :lol:

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

I've got it..

Post by thorax » Tue Apr 29, 2003 11:35 am

I think everyone is saying this. Basically, you have four triangles. You have a choice of functions: either you are trying to match the perspective of the camera with a scene, or you are trying to skew the scene to match the image.


Say you were matching a scene to some pre-perspective art from the 16th century (all those Catholic frescos with the big-eyed Jesuses and improper-looking tables matched in a really bad orthographic view). You wouldn't want to match a camera perspective, because for that material it's impossible: human hands made the view, and it's imperfect by nature. What's better is to use a skew-match-perspective. We do this by constructing dimensions from four triangles (quads are more humanly understandable, but the computer only needs triangles). We take the side of a table and map one triangle to what we think is a proper angle in the vertical direction. Then we take the next triangle and map it horizontally to something like a 90-degree angle. Then we match these to angles on the stage of our scene. Great for a hopping lamp in a lithograph by Dürer?


But if we know the image was produced by a camera lens, we must reorient the camera (rather than skew the scene) to match the perspective.


The difference is in the images and what we can assume about them. Some images we cannot match perspective on, because they were drawn by an artist rather than produced by a camera lens; the others were produced by a lens. So in one case we are matching by perception (skewing the scene to match the image), and in the other we are reconfiguring the camera.

We could reuse the lattice tool in Blender for one (skewing the scene); the other would require some rigid shape-matching algorithm (heavy math??).
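The two modes correspond to two kinds of transform: a skew (which a lattice can approximate) is a shear that distorts angles and lengths, while a camera match is a rigid transform that preserves them. A toy sketch in plain Python, with an arbitrary 0.4 shear factor and 20-degree rotation chosen just for illustration:

```python
import math

def apply(m, p):
    """Multiply a 3x3 matrix (tuple of rows) by a 3D point."""
    return tuple(sum(m[i][j] * p[j] for j in range(3)) for i in range(3))

def length(v):
    return math.sqrt(sum(c * c for c in v))

# Shear: x picks up 0.4 * z, so proportions change --
# the "skew the scene to match the image" mode.
SHEAR = ((1.0, 0.0, 0.4),
         (0.0, 1.0, 0.0),
         (0.0, 0.0, 1.0))

# Rigid rotation about Y: lengths and angles are preserved --
# the "reorient the camera" mode.
a = math.radians(20)
ROT_Y = ((math.cos(a), 0.0, math.sin(a)),
         (0.0, 1.0, 0.0),
         (-math.sin(a), 0.0, math.cos(a)))

p = (1.0, 0.0, 1.0)
print(length(apply(SHEAR, p)) - length(p))  # nonzero: shear distorts
print(length(apply(ROT_Y, p)) - length(p))  # ~0: rotation is rigid
```

This is why the lattice route only "perceptually" matches hand-drawn images, while lens-produced images justify solving for a rigid camera pose instead.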

sumedho
Posts: 0
Joined: Sat May 03, 2003 6:21 am

Post by sumedho » Sat May 03, 2003 6:36 am

It's a little more complicated than you guys think, but the problem has been tackled before. Do a search on Google for camera matching and you will find links to the algorithms that need to be used. The simplest solution for a single frame is something like what is used in Icarus: you need two planes defined, i.e. a z-plane and a y-plane, matched with parallel lines in your picture. With this, a ground-plane point, and a lot of matrix maths, you can obtain an FOV and a position in 3D space for your camera. It shouldn't be too hard to implement if you have a good algorithm and someone who can code well. The lines could be implemented using planes in Blender. Hope this helps
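A sketch of the single-frame maths described above (plain Python, not Icarus's actual algorithm): each pair of scene-parallel lines meets at a vanishing point, and for two perpendicular scene directions the vanishing points v1, v2 satisfy (v1 - p) . (v2 - p) = -f^2, with the principal point p assumed at the image centre, which yields the focal length and hence the FOV. The line coordinates below are made-up pixel values:

```python
import math

def vanishing_point(l1, l2):
    """Intersect two image lines, each given by two (x, y) points.
    Scene-parallel lines that converge in the image meet at the
    vanishing point of their common scene direction."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def focal_from_orthogonal_vps(v1, v2, principal=(0.0, 0.0)):
    """(v1 - p).(v2 - p) = -f^2 when the two scene directions are
    perpendicular; p is the principal point (image centre here)."""
    return math.sqrt(-((v1[0] - principal[0]) * (v2[0] - principal[0]) +
                       (v1[1] - principal[1]) * (v2[1] - principal[1])))

# Synthetic data, coordinates relative to the image centre: two
# line pairs converging at (800, 0) and (-800, 0).
vx = vanishing_point(((0, -100), (400, -50)), ((0, 100), (400, 50)))
vz = vanishing_point(((0, -100), (-400, -50)), ((0, 100), (-400, 50)))
f = focal_from_orthogonal_vps(vx, vz)              # 800.0 pixels
fov = 2 * math.degrees(math.atan(1600 / (2 * f)))  # horizontal FOV for a 1600 px wide image
print(round(f, 1), round(fov, 1))
```

Recovering the camera's position as well takes the ground-plane point and the matrix maths mentioned above; this sketch only covers the FOV half of the problem.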
