Sound-Sensor


idefix
Posts: 5
Joined: Sun Oct 13, 2002 7:56 pm

Sound-Sensor

Postby idefix » Thu Nov 07, 2002 1:14 pm

Hi

Why not have a new sound sensor that can read the sound card input or a file from disk, detect beats, for example, and send signals on each beat? This would enable Blender to be used as a kind of VJ (Visual Jockey) software, where one could attach, say, the rotation of an object to the output of such a sound sensor.
Has anybody ever had similar thoughts or ideas they would like to share here? Or is anybody willing to help with ideas, time, knowledge, code...
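
To make the idea more concrete, here is a rough sketch (plain Python, nothing in it is real Blender API) of the kind of energy-based beat detection such a sensor could run internally:

[code]
# Hypothetical sketch: flag a beat whenever the short-term energy of
# the signal jumps well above its recent average. The sensor would
# then fire its signal at each yielded time.

def detect_beats(samples, rate, window=1024, threshold=1.5):
    history = []                            # recent window energies
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window]
        energy = sum(s * s for s in chunk) / float(window)
        if history:
            average = sum(history) / len(history)
            if average > 0 and energy > threshold * average:
                yield start / float(rate)   # beat time in seconds
        history.append(energy)
        history = history[-43:]             # ~1 s of history at 44.1 kHz
[/code]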

comments?
Renzo

Saluk
Posts: 166
Joined: Wed Oct 16, 2002 6:52 am

Postby Saluk » Thu Nov 07, 2002 9:18 pm

I think there are some Python modules that could accomplish this; you could probably analyze a sound stream, although getting the timing right might be difficult. Good idea though! Maybe someone should write a Winamp plugin?

jeotero
Posts: 107
Joined: Wed Oct 16, 2002 5:31 am

Postby jeotero » Fri Nov 08, 2002 12:55 am

Analyzing different frequencies... hmmm, Blender Winamp visualization plugins.

idefix
Posts: 5
Joined: Sun Oct 13, 2002 7:56 pm

Postby idefix » Fri Nov 08, 2002 10:16 am

Hi

It might be possible with Python; I've already tried it with Snack, a sound library for Python. It works in pure Python, but not in Blender, because Blender-Python lacks Tkinter support...

Another problem might well be the processing speed of the incoming sound. That is one more reason why I am leaning towards integrating a sound sensor directly into Blender instead of writing a Python script.

Any other input?
Renzo

Bandoler
Posts: 53
Joined: Mon Oct 14, 2002 3:16 pm
Location: Somewhere between the 1 and the 0

Re: Sound-Sensor

Postby Bandoler » Fri Nov 08, 2002 11:32 am

idefix wrote:comments?


It seems like a very good idea. I was at a project presentation last year where they had developed a system to attach graphic effects to music events. They argued that the current tendency to analyze frequencies to extract the events is a great limitation. I agree, after taking a look at some Winamp plugins which try to do this kind of thing and comparing them with the system presented in that project, which intercepted MIDI events instead of frequencies. The match between the effects and the MIDI events (with electronic music played live on one of those synthesizers or whatever) was great, and the artists had the freedom to attach effects to the scene, which leaves room for creativity.

Maybe an approach like this would be more difficult, as I think all the sound support in Blender currently uses wave formats. But developing sensors for MIDI input shouldn't be so difficult... or even for sound tracker / MOD files, because they in fact contain events.

Can Blender play .MOD files or similar?

Let's do it.

(Bandoler)

Saluk
Posts: 166
Joined: Wed Oct 16, 2002 6:52 am

Postby Saluk » Fri Nov 08, 2002 7:02 pm

As far as I know it doesn't support MODs at the moment, and it certainly doesn't support MIDI. These would be nice additions in and of themselves. Giving Python control of the sounds in Blender's memory would be a good addition too; at the moment you can't even access the file. I could probably take my old music demo that plays sounds at different frequencies and add an event that makes things happen when it hits a certain note, but I couldn't get the frequencies quite right. Anyway, hardcoded would be best, as long as there are Python hooks to it as well :)
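
For reference, the equal-temperament math my demo would have needed to get the pitches right (A4 = MIDI note 69 = 440 Hz):

[code]
import math

def note_to_freq(note):
    # Each semitone up multiplies the frequency by 2**(1/12).
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def freq_to_note(freq):
    return int(round(69 + 12 * math.log(freq / 440.0) / math.log(2)))

print(note_to_freq(60))      # middle C, ~261.63 Hz
print(freq_to_note(261.63))  # 60
[/code]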

It's disappointing that there really aren't any GOOD sound libraries for playing sounds in Python, although there are many for analyzing them.

Goofster
Posts: 137
Joined: Mon Oct 14, 2002 12:26 pm

sound

Postby Goofster » Mon Dec 02, 2002 4:25 pm

b@rt made a demo last year that did exactly that: analyze the sound and have a sort of graphic respond to it (don't know what you call it). I have the demo at home. I'll post a link as soon as possible.

Roel

ps. very cool stuff it is :)

snowy_duck
Posts: 44
Joined: Mon Oct 21, 2002 3:22 pm
Location: In an igloo right next to a penguin fleet

Postby snowy_duck » Tue Dec 03, 2002 10:56 pm

What about a movement sensor? One that detects movement on an axis you choose (or all axes) and also has a speed setting. So, let's say, a door opens when your character moves on the Y axis at 15.00. That could also be used for skidding a car, or for making a running character do a skid stop, by combining an inverted movement key with a movement sensor: when it detects he's moving and you want to stop, he skids. That would be useful for a lot of projects; so far we've just had to fake detecting movement.
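
Something like this, maybe (get_velocity() is just a hypothetical stand-in for whatever the engine would expose):

[code]
AXIS_X, AXIS_Y, AXIS_Z = 0, 1, 2

def movement_sensor_fires(obj, axis=AXIS_Y, speed=15.0, invert=False):
    # Fires when the object's speed along `axis` reaches `speed`;
    # invert=True flips the result, like the inverted keys/sensors
    # mentioned above for the skid-stop setup.
    moving = abs(get_velocity(obj)[axis]) >= speed
    if invert:
        return not moving
    return moving
[/code]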

Recall
Posts: 37
Joined: Fri Feb 21, 2003 5:57 pm
Location: The Netherlands...

Postby Recall » Fri Feb 21, 2003 6:38 pm

Yeah, I have a comment;
I haven't read all the other comments, but Blender is a 3D-engine-based program... why waste it on such a low-brain idea?
It is perfect the way it is... if you guys keep going this way, Blender will be known as a Windows:
always disconnected, always malfunctioning...

SAD, SAD, SAD :x

matt_e
Posts: 898
Joined: Mon Oct 14, 2002 4:32 am
Location: Sydney, Australia

Postby matt_e » Sat Feb 22, 2003 12:45 am

Are you kidding? The integration and interaction between audio and 3D graphics is an innovative idea that has hardly been pursued. Having graphics that can react to sound, and vice versa, can help a lot in the 3D engine. Have you ever played 'Rez' for the PlayStation 2? That's a great example of what can be done by combining video with audio, and it's a very original and artistic gameplay concept. Not to mention, like others have said, the possibilities for VJ-ing. Blender (although not the realtime engine) has already been used to display VJ graphics for dance parties and club nights in the past; having the animations react in different ways to the music would give this an extra layer of depth that other 'linear' animations can't compete with.

Perhaps you should read all the other comments before replying. You might just open your mind to a new idea.

And Goofster: yeah, I remember b@rt's demo too. I wish I had a copy here that I could upload but I can't find it.


rorschach
Posts: 7
Joined: Wed Dec 18, 2002 1:19 am

Postby rorschach » Sun Mar 30, 2003 1:16 pm

Hi All,

I've already programmed such tools in Python, but they are still only a first usability study.
I implemented them using the Python ecasound wrappers and a very simple beat detection written in Python. Object modification is accomplished through IPO frame input, by copying the values from broadcasting null objects.
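
In pseudo-Python the pattern looks roughly like this (set_ipo_frame() is a stand-in, not a real Blender call):

[code]
# The analyser writes its current value onto one null object;
# listeners copy it into their own IPO frame position each tick.

def broadcast(null_object, listeners, max_frame=100.0):
    value = null_object["beat_level"]      # 0.0 .. 1.0, from the analyser
    for obj in listeners:
        set_ipo_frame(obj, value * max_frame)
[/code]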

I was, until today, searching for a better Python audio wrapper that runs on all the operating systems Blender supports.
It seems I have found one: PortAudio.
So before I start working on it, I would like to get some experience reports for it under MacOS, Solaris and BSD; Windows and Linux I will test and post myself.

This is going to be posted as a separate thread too.

cu folkx

brantc
Posts: 2
Joined: Tue Apr 08, 2003 7:41 am

Postby brantc » Tue Apr 08, 2003 8:12 am

:D This is great! I've been thinking a lot about what MIDI/MTC/MMC control could do for live musical performance/composing and lip syncing (or any kind of realtime simulation, for that matter). A couple of questions, and bear with me, because I haven't read too much about the plugin technology.

1. Would it be possible to render medium-res realtime animation IPOs with Blender, or is this beyond the scope of, say, current laptops? Wireframe would be OK for compositional work, something better for live?

2. How easy would it be to register different MIDI messages to different IPO curves that are executed instantaneously? I'm thinking specifically of realtime execution of RVKs, without faking out the clock.

I think if this could be gracefully added to Blender, it would also allow for the easy addition of any kind of beat detection code: just convert the audio "beats" to MTC or MIDI note-on messages. Even cooler on the audio end would be automatic lip sync generation with some speech recognition plugin.
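
The conversion itself is tiny; a MIDI note-on message is just three bytes (send_midi() and detected_beats below are hypothetical stand-ins):

[code]
def beat_to_note_on(channel=0, note=36, velocity=127):
    status = 0x90 | (channel & 0x0F)       # 0x90 = note-on, channels 0-15
    return (status, note & 0x7F, velocity & 0x7F)

for beat in detected_beats:                # e.g. output of a beat detector
    send_midi(beat_to_note_on(note=36))    # one "kick drum" note per beat
[/code]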

Does this make sense? Is anyone thinking of, or already doing, a subproject?

B


rorschach
Posts: 7
Joined: Wed Dec 18, 2002 1:19 am

Some Updates here ...

Postby rorschach » Wed Apr 09, 2003 1:41 pm

Hi Folx,

I started another thread, where there is still some discussion going on too.

I would really like some more input like that from brantc, so I can sum up some use cases that will guide the design stage.

To brantc:
I'm currently thinking about the design issues and the integration into Blender. From what I've seen so far, with respect to the Blender SoundDevice design and some research on libraries to base the work on, there is a good chance that we can create an interface that offers low latency (how near to realtime this can get will depend...), can process files (from WAV to MIDI, maybe even pd/Max-MSP files and libs) or device inputs, has an interface for building custom analysis plugins, and can attach to single objects and be played/analysed in parallel.
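
To give an impression, the plugin interface could look something like this; purely illustrative, nothing here is a committed design:

[code]
class AnalysisPlugin:
    """Receives raw sample blocks and publishes named values that
    sensors attached to individual objects could read."""
    def process(self, samples, rate):
        raise NotImplementedError

class EnergyPlugin(AnalysisPlugin):
    def process(self, samples, rate):
        energy = sum(s * s for s in samples) / float(len(samples))
        return {"energy": energy}
[/code]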

Currently I'm investigating the following library as the base for the development:

http://www.may.ie/academic/music/musict ... /main.html

But I'm still investigating ...

I was also thinking about the idea of using notations or a statistics table into which the sound can be recorded, and reusing the once-generated statistics/notation for as long as the filters or the sound don't change.
Besides reducing realtime memory requirements, that would be a way to prefilter the raw sound data into something that translates more easily into function curves (IPOs).
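
A minimal sketch of such a table, assuming one loudness value per animation frame (how it gets fed into an IPO is left open):

[code]
import math

def build_rms_table(samples, sample_rate, fps=25):
    # Computed once; reusable as long as sound and filters are unchanged.
    step = sample_rate // fps              # samples per animation frame
    table = []
    for start in range(0, len(samples) - step + 1, step):
        chunk = samples[start:start + step]
        table.append(math.sqrt(sum(s * s for s in chunk) / float(step)))
    return table                           # table[frame] -> loudness
[/code]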
But I'm more interested in the realtime part first; that's more like a new set of sensors for objects in the game engine.

I will keep testing the SndObj library, and I will also investigate the possibility of interfacing Max/MSP or pd with Blender directly, for testing and development purposes.

I'll keep you posted on the progress.

cu

brantc
Posts: 2
Joined: Tue Apr 08, 2003 7:41 am

Re: Some Updates here ...

Postby brantc » Wed Apr 09, 2003 8:07 pm

Cool, I like this:

I'm not super familiar with the game engine or Blender at this point, but I see a couple of modes for integrating external control with Blender.

1. Composition mode: the user would be able to link their favorite sequencing program (like Cubase, Logic or Ableton Live) to a running copy of Blender. Blender's "now" frame would be synchronized via MTC with the current SMPTE time in the host sequencer. Prima facie this appears a bit difficult, since Blender doesn't seem to have a notion of SMPTE time, just frames (correct me if I'm wrong).
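
The SMPTE-to-frame mapping itself is simple arithmetic once the project frame rate is fixed (25 fps assumed here); the hard part would be keeping Blender's "now" frame chasing the incoming MTC in real time:

[code]
def smpte_to_frame(hours, minutes, seconds, frames, fps=25):
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

print(smpte_to_frame(0, 1, 30, 12))   # 00:01:30:12 -> frame 2262
[/code]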

2. Live mode: here Blender just becomes an "instrument". Objects, IPOs, RVKs and the like can be mapped to arbitrary incoming MIDI messages. The percentage of an IPO curve executed could be mapped to these control messages. For instance, imagine an articulated hand that has IPOs for the motion of the hand as a whole, as well as for each individual finger. Using my PC1600X

http://www.peavey.com/products/amps_mi/midi/pc1600x.cfm

I could control the articulation of each individual finger with a slider. Moving a slider 50% up would take that finger through 50% of its IPO curve execution time. Or you could map this percentage to note-on velocity, etc. The possibilities are wild.
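
The mapping I have in mind is just this (evaluate_ipo() is a hypothetical stand-in for however the curve would actually be sampled):

[code]
def apply_cc(obj, cc_value, curve_start, curve_end):
    # MIDI controller values run 0..127; map them onto the curve's
    # frame range, so a slider at 50% sits halfway through the curve.
    fraction = cc_value / 127.0
    frame = curve_start + fraction * (curve_end - curve_start)
    evaluate_ipo(obj, frame)
[/code]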

Again, I think sound filtering or beat detection would fit nicely ON TOP of such a live system, but a solid framework for control mapping and real time IPO execution is the best place to start.

Ideally, a system where we could seamlessly combine these two modes of control (live and composition) would be best. Some elements of the Blender model would proceed according to global IPO settings, but particular IPOs could be arbitrarily executed by MIDI messages.

Thoughts?
B


