Get a camera's view and send it through socket communication
Moderators: jesterKing, stiv
I am working on a server-client system that will estimate a person's pose.
I would like to have Blender function as a server, using Python scripting and socket communication. It will manage an articulated human model, take joint positions (or movement in general) as input, and respond with the model's view as both a depth image and a visual image. This will afterwards be compared against the actual scenery so the pose can be estimated.
The reason I chose Blender is that I don't want to deal with the bone system and mesh rendering myself. It could be done in OpenGL, but that is such a pain, you know where.
So is it possible to get a camera's view and store it somehow in a Python variable? The only thing I can find is rendering the scene and storing it in a file through scripting, but that would be extremely inefficient and slow for my purposes. I don't really need quality rendering; in fact, I would prefer low-quality rendering in order to achieve better response times.
I'm using Windows 7 64-bit and Blender 2.64.
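To illustrate, here is a rough sketch of the loop I have in mind (the armature name, port, and message format are placeholders; also, a blocking loop like this freezes Blender's UI, so in practice it would need a timer or modal operator):
Code: Select all
import socket
import struct
import bpy

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 9998))  # arbitrary local port for illustration

arm = bpy.data.objects['Armature']  # placeholder armature name
while True:
    data, addr = sock.recvfrom(1024)
    # Placeholder message format: bone index + XYZ Euler rotation
    bone_index, x, y, z = struct.unpack('!Ifff', data[:16])
    bone = arm.pose.bones[bone_index]
    bone.rotation_mode = 'XYZ'
    bone.rotation_euler = (x, y, z)
    bpy.ops.render.render()  # then ship the result back somehow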
Could you propose any method?
All I could find is
Code: Select all
import bpy
scene = bpy.data.scenes["Scene"]  # or bpy.context.scene
scene.camera = obj  # obj: the camera object to render from
scene.render.image_settings.file_format = 'JPEG'
scene.render.filepath = '//camera_' + str(c)
which is useful for saving a rendered scene as an image by calling
Code: Select all
bpy.ops.render.render(write_still=True)
The Blender Python API documentation is rather difficult to navigate.
How about something like this?
Code: Select all
bpy.ops.render.opengl()
bpy.data.images['Render Result'].save_render("D:\\viewport_render.png")
I'm sitting, waiting, wishing, building Blender in superstition...
CoDEmanX wrote:How about something like this?
Code: Select all
bpy.ops.render.opengl()
bpy.data.images['Render Result'].save_render("D:\\viewport_render.png")
The thing is, I want to avoid writing to a file, because writing, say, 50 .png files and then opening them again would be extremely slow.
Of course I could use a RAM-disk partition to gain some speed, but I'd like to avoid that and pass the rendered result to another process through socket communication.
It would be ideal to render into a variable and pass it over localhost via UDP/TCP.
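Roughly like this (a sketch; the port number and the length-prefix framing are just placeholder choices):
Code: Select all
import socket
import struct

def send_pixels(pixels, host='127.0.0.1', port=9999):
    # Pack the flat float list into raw bytes (4 bytes per float)
    payload = struct.pack('%df' % len(pixels), *pixels)
    with socket.create_connection((host, port)) as sock:
        # Length prefix so the receiver knows how many bytes to expect
        sock.sendall(struct.pack('!I', len(payload)))
        sock.sendall(payload)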
CoDEmanX wrote:You could try to expose the pixels (Image.pixels) of an image for render results; this is currently available to Python for images, movies and UV test textures (not generated textures).
So Image.pixels gives matrix-like access to a rendered scene's pixel data? I guess I could pass this through, and use Blender's compositing nodes to produce my depth map render so I can transmit that as well.
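Something like this is what I have in mind for the depth map: route the render layer's Z pass into a Viewer node and read the result back (a rough sketch; the node type identifiers and the 'Viewer Node' image name are assumptions that changed across 2.6x releases):
Code: Select all
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# In older 2.6x releases the type names may be 'R_LAYERS' / 'VIEWER' instead
rl = tree.nodes.new('CompositorNodeRLayers')     # render result, including the Z pass
viewer = tree.nodes.new('CompositorNodeViewer')  # exposes its input as an image
tree.links.new(rl.outputs['Z'], viewer.inputs['Image'])

bpy.ops.render.render()
# The viewer's result shows up as an image whose pixels are a flat float list
depth = list(bpy.data.images['Viewer Node'].pixels)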
Blender has (or did!) a frameserver, capable of sending rendered output via the HTTP protocol. Basically, you send a request for a particular frame and get back the image in PPM format.
http://wiki.blender.org/index.php/Doc:2 ... rameserver
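A minimal client sketch, assuming the frameserver's default port 8080 and a /images/ppm/<frame>.ppm request path (check the wiki page above for the exact URLs):
Code: Select all
import urllib.request

def fetch_frame(frame, port=8080):
    # Ask the frameserver for one rendered frame; the reply is a PPM image
    url = 'http://localhost:%d/images/ppm/%d.ppm' % (port, frame)
    with urllib.request.urlopen(url) as response:
        return response.read()  # raw PPM bytes, ready to parse or forward

ppm_bytes = fetch_frame(1)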
I found this on blenderartists.org:
http://blenderartists.org/forum/showthr ... it-as-file
This looks to me like a great way to get a fast and simple render, since it actually uses the 3D View's output (no shadows, lighting, ambient occlusion, smoothing, etc.). But I'm really confused by Blender's documentation http://www.blender.org/documentation/bl ... pi_2_64_9/# and the way it's organized. Did Blender change that much between versions?
The linked thread suggests:Alternatively you could use the z-buffer of the 3d-view. Here's some code to do that:
Code: Select all
import Blender
from Blender import *
from Blender.BGL import *

windows = Window.GetScreenInfo()
for w in windows:
    if w['type'] == Window.Types.VIEW3D:
        xmin, ymin, xmax, ymax = w['vertices']
        width = xmax - xmin
        height = ymax - ymin
        # Read the 3D View's depth buffer into a float buffer
        zbuf = Buffer(GL_FLOAT, [width * height])
        glReadPixels(xmin, ymin, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, zbuf)
        # Collect rows as strings, flipped vertically (OpenGL rows start at the bottom)
        strbuf = []
        for i in range(height):
            strbuf.append(str(zbuf[i * width:i * width + width]))
        strbuf.reverse()
        file = open("C:/test.txt", "w")
        for i in strbuf:
            file.write(i[1:-1] + '\n')
        file.close()
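That code targets the old 2.4x API; a rough, untested port to the 2.6x bgl module might look like this (glReadPixels needs a current OpenGL context, so it may have to run inside a draw callback):
Code: Select all
import bpy
import bgl

for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        width, height = area.width, area.height
        buf = bgl.Buffer(bgl.GL_FLOAT, width * height)
        bgl.glReadPixels(area.x, area.y, width, height,
                         bgl.GL_DEPTH_COMPONENT, bgl.GL_FLOAT, buf)
        depth = list(buf)  # flat list of floats, bottom row first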
stiv wrote:Big time. Major differences between the 2.4x series and 2.6x - both internally and in the BPy API. Oh, and the user interface, too!
I guess I should try an older version then.

stiv wrote:Note the Z-buffer is only the depth information, a single float for each pixel, like a height map or gray scale image.
Actually, depth information is the most important thing I need; I don't need the visual rendering as much (it could help, but if it's missing, it's no deal breaker).
AgentCain wrote:So Image.pixels gives matrix-like access to a rendered scene's pixel data? I guess I could pass this through and use Blender's compositing nodes to produce my depth map render so I can transmit that as well.
It's just image data, RGBA:
[0] = red of first pixel
[1] = green
[2] = blue
[3] = alpha
[4] = red of second pixel
etc.
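A minimal sketch of unpacking that flat list, assuming an image loaded from disk (render results themselves don't expose pixels; the path is a placeholder):
Code: Select all
import bpy

img = bpy.data.images.load('/tmp/frame.png')  # placeholder path
px = img.pixels[:]  # copy once; repeated indexing of .pixels is slow
rgba = [tuple(px[i:i + 4]) for i in range(0, len(px), 4)]  # (r, g, b, a) per pixel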
I'm sitting, waiting, wishing, building Blender in superstition...