"Mount" a Blender camera as system device
Posted: Fri Sep 12, 2008 11:22 am
I'm a scientist in the robotics field, and we're thinking of starting a new simulation tool based on the BGE.
I was wondering whether it would be possible to "mount" a Blender camera on the file system as a standard V4L device (i.e. a device accessible through /dev/video).
Is it currently doable (...I don't think so, but...), or do you see a path to implementing it?
It would be awesome to provide our robot with a video stream similar to "real-world" vision.
Posted: Fri Sep 12, 2008 8:08 pm
You could check out this link, which relates to getting video working within the GE...
Here is a video showing it in action...
(Ash has also worked on getting Augmented Reality running within the GE, using ARToolkit.)
I'm not sure if the code is compatible with the latest version of Blender / GE, but I'm sure that if you contact the author he could give you more info (especially if you are able to fund his time to update it, if it's not compatible).
It's great to see the GE being considered for more scientific projects!
Re: "Mount" a Blender camera as system device
Posted: Fri Sep 12, 2008 8:20 pm
Skadge wrote:I was wondering if it *could* be possible to "mount" a Blender camera on the file system as a standard V4L device (ie a device accessible through /dev/video).
Blender has a built-in frame server you may be able to hook into. I have no idea if it works with the BGE.
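To illustrate the frame-server idea: it serves rendered frames over HTTP as binary PPM images. The URL pattern and port below are assumptions about a default setup (check your render settings); the PPM parsing itself is standard. A minimal sketch:

```python
import re
import urllib.request

# Assumed default address/path for Blender's frame server -- verify
# against your own build's render settings before relying on it.
FRAMESERVER_URL = "http://localhost:8080/images/ppm/%d.ppm"

def parse_ppm(data):
    """Parse a binary PPM (P6) image into (width, height, rgb_bytes)."""
    header = re.match(rb"P6\s+(\d+)\s+(\d+)\s+(\d+)\s", data)
    if header is None:
        raise ValueError("not a binary PPM image")
    width, height, maxval = (int(g) for g in header.groups())
    # maxval is assumed to be 255 (8 bits per channel)
    pixels = data[header.end():header.end() + width * height * 3]
    if len(pixels) != width * height * 3:
        raise ValueError("truncated PPM payload")
    return width, height, pixels

def fetch_frame(frame_number):
    """Fetch one rendered frame from a running frame server."""
    with urllib.request.urlopen(FRAMESERVER_URL % frame_number) as resp:
        return parse_ppm(resp.read())
```

From there the raw RGB bytes could be handed to whatever consumes the "camera" stream.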
Posted: Fri Sep 12, 2008 8:51 pm
I read your message wrong there...
> "mount" a Blender camera on the file system as a standard V4L device (ie a device accessible through /dev/video).
So, you want to have the output of the GE appear as a real-time video, which you can feed into your robot?
I'm not sure whether you can access the OpenGL frame buffer directly from the GE / Python, but it is definitely possible. You could then feed that OpenGL image, a frame at a time, either directly to the visual input of your robot or to an intermediate virtual camera device.
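As a sketch of the grab-and-feed step: assuming the raw pixels come from a glReadPixels call (e.g. via PyOpenGL) inside the GE, they arrive as packed RGB rows stored bottom-up, so they need flipping before most video consumers will accept them. The serialization below to PPM is one hypothetical choice of frame format:

```python
def framebuffer_to_ppm(width, height, raw_rgb, flip=True):
    """Serialize a raw RGB framebuffer grab to a binary PPM frame.

    OpenGL's glReadPixels returns rows bottom-up, so we reverse the
    row order to get a top-down image suitable for video pipelines.
    """
    row = width * 3
    rows = [raw_rgb[i * row:(i + 1) * row] for i in range(height)]
    if flip:
        rows.reverse()
    header = b"P6\n%d %d\n255\n" % (width, height)
    return header + b"".join(rows)
```

Each frame produced this way could then be pushed to the robot's vision input or to a virtual camera device.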
Posted: Sat Sep 13, 2008 12:57 am
Thanks for these ideas. I'll investigate them more in depth!
Posted: Mon Sep 15, 2008 12:05 pm
I've worked on a robot simulator (comparable to a small car video game) for the Eurobot 2006 competition, built with the BGE, with a view to helping our robotics association get some ideas for their AI development (but the project was never used by them, as I was still discovering Blender day after day).
This camera idea is really something I've been looking for, and I think it could help our association as well.
Here's a setup I thought of early on for a Eurobot competition robot development toolkit:
* model & texture the environment and robot in Blender, with everything set up to follow physics (i.e. if there are balls, make them physics-enabled; if your robot has wheels, make it controllable...)
* make something able to control the robot externally and fetch signals (camera images and sensor data) out of Blender.
I wrote this quickly and this is a mess.
Basically, you have Blender as an entry point for: controlling the robot, and reading back data from a fake camera and fake sensors.
And the brain of all that is written outside Blender in Python/C, which can then be ported to microcontrollers/processors.
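The control/sensing loop described above could be sketched as a small UDP exchange. Everything here is made up for illustration (the JSON message format, the "camera_frame" and "odometry" fields); in a real setup the simulator side would run each logic tick inside a BGE Python controller:

```python
import json
import socket

def make_endpoint():
    """Create a UDP socket bound to an ephemeral localhost port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 0))
    return sock

def simulator_step(sock):
    """Blender-side step: read one command, answer with (fake) sensor data.

    In the real setup this would apply the command to the robot object
    in the GE and read back its actual camera image and state.
    """
    data, addr = sock.recvfrom(4096)
    command = json.loads(data)
    sensors = {"ack": command, "camera_frame": 1, "odometry": [0.0, 0.0]}
    sock.sendto(json.dumps(sensors).encode(), addr)
```

The "brain" process would then just send motor commands to the simulator's port and block on the sensor replies, exactly as it would talk to real hardware later.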
Looking your name up on Google shows you're part of Planete Sciences.
If you develop a toolkit similar to the one I've described, would you make it open? I think many engineering schools would be fond of it and would use it, rather than making up yet another development setup each year for Eurobot.
Posted: Mon Sep 15, 2008 12:29 pm
Yes, I'm also part of Planete Sciences (actually, if you attended some of the Eurobot or Coupe de France events over the past years, you probably know me... the one with the pink hair).
Of course, if we start a simulator in Blender, it will be GPL'd, like any software from the LAAS lab where I work.
Posted: Mon Sep 15, 2008 6:22 pm
OK, nice! Thank you.
I've found this:
see also: http://www.cryptomath.com/~doug/2007-eos/V4L.pdf
you (or people...) could write a v4l2 driver (that's what you proposed... but maybe v4l2 is easier/cleaner to write than v4l).
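The driver itself would be kernel C work, but one userspace piece is worth sketching: loopback-style V4L consumers commonly expect frames in a packed YUV 4:2:2 (YUYV) pixel format rather than RGB. A conversion sketch using the integer BT.601 approximation (the conversion constants are standard; assuming an even width and 8-bit RGB input, for which the results stay within the valid [16, 240] range without clamping):

```python
def rgb_to_yuyv(width, height, rgb):
    """Convert packed RGB24 to YUYV (YUV 4:2:2).

    Uses the integer BT.601 approximation; for brevity, the shared
    chroma of each pixel pair is taken from the first pixel only.
    """
    out = bytearray()
    for i in range(0, width * height * 3, 6):
        r0, g0, b0 = rgb[i], rgb[i + 1], rgb[i + 2]
        r1, g1, b1 = rgb[i + 3], rgb[i + 4], rgb[i + 5]
        y0 = (66 * r0 + 129 * g0 + 25 * b0 + 128 >> 8) + 16
        y1 = (66 * r1 + 129 * g1 + 25 * b1 + 128 >> 8) + 16
        u = (-38 * r0 - 74 * g0 + 112 * b0 + 128 >> 8) + 128
        v = (112 * r0 - 94 * g0 - 18 * b0 + 128 >> 8) + 128
        out += bytes((y0, u, y1, v))
    return bytes(out)
```

Frames in this format could then be written to whatever loopback device ends up exposing the stream under /dev/video.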
OpenCV does support both v4l2 and v4l, as far as I've seen: http://osdir.com/ml/lib.opencv/2006-01/msg00699.html
Maybe you use something else for image processing, though.