Get a camera's view and send it through socket communication

Scripting in Blender with Python, and working on the API

Moderators: jesterKing, stiv

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece


Postby AgentCain » Thu Nov 29, 2012 1:01 pm

I am working on a server-client system which will estimate a person's pose.
I would like to have Blender function as the server, using Python scripting and socket communication. It will manage an articulated human model, take joint positions (or movement in general) as input, and respond with the model's view as both a depth map and a visual image. This will then be compared to the actual scene so the pose can be estimated.

The reason I chose Blender is that I don't want to deal with a bone system and mesh rendering myself. It could be done in OpenGL, but it's such a pain you know where.

So is it possible to get a camera's view and store it somehow in a Python variable? The only thing I can find is rendering the scene and storing it in a file via scripting, but that would be extremely inefficient and slow for my purposes. I don't really need quality rendering. In fact, I would prefer low-quality rendering in order to achieve better response times.

I'm using Windows 7 64-bit and Blender 2.64.
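To make the goal concrete, here is roughly the server loop I have in mind, in plain Python sockets. The port, the 4-byte length framing, and the `render` callback (which would call into Blender) are just placeholders of mine, not anything Blender provides:

```python
import socket
import struct

def pack_frame(payload):
    # length-prefix framing so the client knows where each image ends
    return struct.pack("!I", len(payload)) + payload

def serve_once(host="127.0.0.1", port=5000, render=lambda joints: b""):
    # hypothetical single-request loop: receive joint data,
    # render (a callback into Blender), send the image back
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            joints = conn.recv(4096)  # joint positions from the client
            conn.sendall(pack_frame(render(joints)))
```

The open question is what `render` should do without touching the disk.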

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Thu Nov 29, 2012 9:27 pm

I just had an idea.

Since you can load the result of OpenGL into OpenCV using glReadPixels(), I could probably do the same through Blender's OpenGL wrapper.

Will try and report back. If anyone else has another solution, please don't be shy :P
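Roughly what I mean to try, assuming the 2.6x bgl module exposes glReadPixels like this (untested sketch; `read_depth` and `to_rows` are my own names):

```python
def read_depth(x, y, width, height):
    # only works inside Blender: bgl is Blender's OpenGL wrapper module
    import bgl
    buf = bgl.Buffer(bgl.GL_FLOAT, width * height)
    bgl.glReadPixels(x, y, width, height,
                     bgl.GL_DEPTH_COMPONENT, bgl.GL_FLOAT, buf)
    return list(buf)

def to_rows(flat, width):
    # reshape the flat buffer into rows for easier inspection
    return [flat[i * width:(i + 1) * width] for i in range(len(flat) // width)]
```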

stiv
Posts: 3646
Joined: Tue Aug 05, 2003 7:58 am
Location: 45N 86W

Postby stiv » Fri Nov 30, 2012 1:10 am

The glReadPixels() function is rather slow on consumer graphics cards.

Seems easier to render the image at your chosen resolution and write it to a socket or named pipe.

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Fri Nov 30, 2012 7:19 pm

Could you propose any method?

All I could find is

Code: Select all

 
import bpy

scene = bpy.data.scenes[0]                        # or bpy.context.scene
scene.camera = obj                                # obj: the camera object
scene.render.image_settings.file_format = 'JPEG'
scene.render.filepath = '//camera_' + str(c)      # c: a frame counter


which is useful for saving a rendered scene as an image by calling

Code: Select all

bpy.ops.render.render(write_still=True)


The Blender Python documentation is rather difficult to navigate :oops:

CoDEmanX
Posts: 894
Joined: Sun Apr 05, 2009 7:42 pm
Location: Germany

Postby CoDEmanX » Sat Dec 01, 2012 12:47 pm

how about something like

Code: Select all

bpy.ops.render.opengl()
bpy.data.images['Render Result'].save_render("D:\\viewport_render.png")


?
I'm sitting, waiting, wishing, building Blender in superstition...

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Sat Dec 01, 2012 1:18 pm

CoDEmanX wrote:how about something like

Code: Select all

bpy.ops.render.opengl()
bpy.data.images['Render Result'].save_render("D:\\viewport_render.png")


?


The thing is, I want to avoid writing to a file, because writing, say, 50 PNGs and then opening them again would be extremely slow.
Of course I could use a RAM-disk partition to gain some speed, but I'd like to avoid that and pass the rendered result to another process through socket communication.

It would be ideal to render into a variable and pass it over localhost UDP/TCP.
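For example, something like this on the sending side. The port and chunk size are made up, and UDP datagrams have a size limit, hence the chunking:

```python
import socket

def chunks(data, size):
    # split a frame into datagram-sized pieces
    return [data[i:i + size] for i in range(0, len(data), size)]

def send_frame_udp(data, host="127.0.0.1", port=9999, chunk=60000):
    # fire the chunks at a local listener; UDP gives no delivery guarantee
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for piece in chunks(data, chunk):
            s.sendto(piece, (host, port))
    finally:
        s.close()
```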

CoDEmanX
Posts: 894
Joined: Sun Apr 05, 2009 7:42 pm
Location: Germany

Postby CoDEmanX » Sat Dec 01, 2012 1:31 pm

You could try to access the pixels (Image.pixels) of the render result image; this is currently available to Python for images, movies and UV test textures (but not generated textures).
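Something like this to quantize those floats for sending (my own sketch, not tested inside Blender):

```python
def pixels_to_bytes(pixels):
    # Image.pixels is a flat list of floats in [0.0, 1.0], RGBA order;
    # quantize to one byte per channel for a compact payload
    return bytes(int(round(p * 255)) for p in pixels)
```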
I'm sitting, waiting, wishing, building Blender in superstition...

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Sat Dec 01, 2012 2:01 pm

CoDEmanX wrote:You could try to access the pixels (Image.pixels) of the render result image; this is currently available to Python for images, movies and UV test textures (but not generated textures).


So Image.pixels gives matrix-like access to a rendered scene's pixel data? I guess I could pass that through and use Blender's compositor nodes to make my depth-map render, so I can transmit that as well.

stiv
Posts: 3646
Joined: Tue Aug 05, 2003 7:58 am
Location: 45N 86W

Postby stiv » Sat Dec 01, 2012 3:27 pm

Blender has (or did!) a frameserver, capable of sending rendered output over HTTP. Basically, you send a request for a particular frame and get back the image in PPM format.

http://wiki.blender.org/index.php/Doc:2 ... rameserver
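If that page's URL layout still holds, fetching and inspecting a frame from Python might look like this (untested sketch; `fetch_frame` and `ppm_size` are my own names, and the port assumes the frameserver default):

```python
from urllib.request import urlopen

def fetch_frame(frame, host="localhost", port=8080):
    # ask the running frameserver for one frame; the body is a binary PPM
    url = "http://%s:%d/images/ppm/%d.ppm" % (host, port, frame)
    with urlopen(url) as resp:
        return resp.read()

def ppm_size(data):
    # pull width and height out of a binary PPM (P6) header
    fields = data.split(None, 4)
    return int(fields[1]), int(fields[2])
```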

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Sat Dec 01, 2012 4:07 pm

I found this on blenderartists.org:
http://blenderartists.org/forum/showthr ... it-as-file

Alternatively you could use the z-buffer of the 3d-view. Here's some code to do that:

Code: Select all

# Note: this uses the old Blender 2.4x Python API, not the 2.6x bpy API
import Blender
from Blender import Window
from Blender.BGL import *

# find the 3D view area
xmin = ymin = width = height = 0
for w in Window.GetScreenInfo():
    if w['type'] == Window.Types.VIEW3D:
        xmin, ymin, xmax, ymax = w['vertices']
        width = xmax - xmin
        height = ymax - ymin
        break

# read the depth buffer of the 3D view
zbuf = Buffer(GL_FLOAT, [width * height])
glReadPixels(xmin, ymin, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, zbuf)

# collect rows top-to-bottom (OpenGL's origin is bottom-left)
rows = [str(zbuf[i * width:(i + 1) * width]) for i in range(height)]
rows.reverse()

out = open("C:/test.txt", "w")
for row in rows:
    out.write(row[1:-1] + '\n')
out.close()


This looks to me like a great way to get a fast and simple render, since it actually uses the 3D view's output (no shadows, lighting, ambient occlusion, smoothing, etc.). But I'm really confused by Blender's documentation http://www.blender.org/documentation/bl ... i_2_64_9/# and the way it's organized. Did Blender change so much over its versions?

stiv
Posts: 3646
Joined: Tue Aug 05, 2003 7:58 am
Location: 45N 86W

Postby stiv » Sat Dec 01, 2012 6:28 pm

Did Blender change so much over its versions?


Big time. There are major differences between the 2.4x series and 2.6x, both internally and in the BPy API. Oh, and the user interface, too!

Note that the Z-buffer is only the depth information: a single float per pixel, like a height map or grayscale image.

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Sun Dec 02, 2012 10:03 am

stiv wrote:
Big Time. Major differences between the 2.4x series and 2.6x - both internally and in the BPy API. Oh, and the user interface, too!

I guess I should try an older version then :P

stiv wrote:Note the Z-buffer is only the depth information, a single float for each pixel, like a height map or gray scale image.

Actually, depth information is the most important thing I need; I don't need the visual rendering as much (it would help, but if it's missing it's no deal breaker).

CoDEmanX
Posts: 894
Joined: Sun Apr 05, 2009 7:42 pm
Location: Germany

Postby CoDEmanX » Mon Dec 03, 2012 12:40 am

AgentCain wrote:So Image.pixels gives matrix-like access to a rendered scene's pixel data? I guess I could pass that through and use Blender's compositor nodes to make my depth-map render, so I can transmit that as well.

It's just image data, RGBA:

[0] = red of first pixel
[1] = green
[2] = blue
[3] = alpha
[4] = red of second pixel
etc.
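So fetching one pixel out of the flat list would be something like (sketch; `pixel_at` is my own helper):

```python
def pixel_at(pixels, width, x, y):
    # 4 floats per pixel; Blender stores rows bottom-to-top
    i = 4 * (y * width + x)
    return tuple(pixels[i:i + 4])
```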
I'm sitting, waiting, wishing, building Blender in superstition...

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Fri Dec 07, 2012 10:48 am

OK.
For my first try, I used glReadPixels on GL_DEPTH_COMPONENT,

but it returns a matrix of 1.0 :(
What's the problem?

AgentCain
Posts: 9
Joined: Thu Nov 29, 2012 12:45 pm
Location: Greece

Postby AgentCain » Sun Dec 09, 2012 12:19 pm

Solved the problem.
Apparently a single script execution doesn't work; on the second run you get depth results.
Now it's a matter of remapping the values from [min, 1] to [0, 1] or [0, 255].
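The remap itself is simple enough in plain Python (assuming 1.0 is the far plane, as it usually is for a depth buffer):

```python
def remap_depth(zbuf):
    # stretch [zmin, 1.0] to the full [0, 255] byte range
    zmin = min(zbuf)
    if zmin >= 1.0:
        return [0] * len(zbuf)  # nothing in view: every sample is at the far plane
    scale = 255.0 / (1.0 - zmin)
    return [int((z - zmin) * scale) for z in zbuf]
```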

