Give me a reason to use blender instead of truespace 7.6

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

BlindSight
Posts: 0
Joined: Wed Nov 26, 2008 5:59 pm


Post by BlindSight » Thu Nov 27, 2008 6:42 pm

Alright,

So, I'm coming from 3ds Max. I want to make an animated movie, but I can't afford the ridiculously overpriced Max commercial licence, so I have to find the best free alternative.

Right now I am stuck between Blender and trueSpace 7.6; both have things I like and things I don't like.

For example, I don't much care for Blender's UI, but I like its more robust modeling package.

But

trueSpace seems easier for character creation and has facial expression and animation tools, which would be really nice; it also seems to have better speech sync capabilities.

So, I need a reason to choose Blender. What say you?

pinhead_66
Posts: 17
Joined: Wed Oct 16, 2002 10:09 am
Location: Belgium

Post by pinhead_66 » Thu Nov 27, 2008 7:42 pm

Hi,

I found the way you asked your question quite aggressive, but I'll answer it anyway...

Since both apps are free, try them both and make up your own mind. I would advise you to take your time getting to know both programs; first impressions can be deceiving.

Happy blending, or not

greets

BlindSight
Posts: 0
Joined: Wed Nov 26, 2008 5:59 pm

Post by BlindSight » Thu Nov 27, 2008 8:17 pm

I apologise if it appeared aggressive; it wasn't intended to be. I'm just a very direct person.

I'm looking more for the benefits of Blender; the feature overview on the website is very generalized.

I don't want to take the time to learn two programs (it's a long journey), since I'm only going to use one.

What are the advantages of Blender over trueSpace?

Does Blender have any facial expression tools, i.e. the ability to create fluid facial expression transitions from separate expressions?

Does Blender have a good speech sync tool that will sync lip movement with actual speech?

And anything else you'd like to share.

thanks!

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Thu Nov 27, 2008 9:25 pm

BlindSight wrote:What are the advantages of Blender over trueSpace?
Well, "advantages" are sometimes just personal preferences. But for me, one advantage would be that modelling "flows" better in Blender. Another would be that screen and scene management in Blender is very flexible.
Drawback: Blender is harder to learn. At least most people say so, and I agree to some extent.
Does Blender have any facial expression tools, i.e. the ability to create fluid facial expression transitions from separate expressions?
You've got shape keys, which can be mixed. But you'd have to create the expressions yourself first. There isn't anything ready-made, and there are no wizards to ease your life. In exchange, you've got quite a lot of freedom in what you do and how you do it.

http://wiki.blender.org/index.php/Manual/Shape_Keys
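To make the "mixing" concrete, here is a minimal pure-Python sketch of how relative shape keys combine (this is not Blender's actual API; the function, shape names, and data are all invented for illustration). Each key stores per-vertex positions, and the final mesh is the basis plus the weighted sum of each key's offset from the basis:

```python
def mix_shape_keys(basis, keys, weights):
    """basis: list of (x, y, z) vertex positions.
    keys: dict of shape name -> list of (x, y, z) positions.
    weights: dict of shape name -> influence in [0.0, 1.0]."""
    result = []
    for i, (bx, by, bz) in enumerate(basis):
        x, y, z = bx, by, bz
        for name, shape in keys.items():
            w = weights.get(name, 0.0)
            sx, sy, sz = shape[i]
            # add this key's offset from the basis, scaled by its weight
            x += w * (sx - bx)
            y += w * (sy - by)
            z += w * (sz - bz)
        result.append((x, y, z))
    return result

# Mixing a half-strength "smile" with a full "jaw_open" on a 2-vertex mesh:
basis = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
keys = {
    "smile":    [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)],
    "jaw_open": [(0.0, 0.0, -1.0), (1.0, 0.0, -1.0)],
}
mixed = mix_shape_keys(basis, keys, {"smile": 0.5, "jaw_open": 1.0})
```

Because the offsets are relative, partial expressions blend additively, which is what makes mixing a few base shapes into many in-between faces practical.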
Does Blender have a good speech sync tool that will sync lip movement with actual speech?
I don't know what special features you'd like to have, but with Blender's built-in tools you can get quite far, though there aren't any specialized tools for lip syncing. Check the following link for a quick glimpse:
http://en.wikibooks.org/wiki/Blender_3D ... Shape/Sync
And anything else you'd like to share.
Just as was said before: try them both and take what fits your needs and taste. That's the only reasonable thing to do! You know better than we do what you need and what you'd like to have, and especially how you want it. A tool is a tool, and you should be undogmatic in picking them.

BlindSight
Posts: 0
Joined: Wed Nov 26, 2008 5:59 pm

Post by BlindSight » Fri Nov 28, 2008 2:32 am

Thanks for the reply, kAinStein.

So you have to manually animate the lips to go with what is being said? That would take forever. Darn, I was really hoping Blender would do this automatically: basically, you make your visemes, specify what sound they are for, and then input your audio file; the software should then automatically create the lip sync based on the speech in the file.

Sigh. I was really hoping to use Blender, but if this is the only way to achieve lip sync, then it won't work for me; it would take years to do it this way.

Well, thank you anyway for the input.

tsgfilmwerks
Posts: 0
Joined: Thu Mar 29, 2007 4:31 am

Post by tsgfilmwerks » Fri Nov 28, 2008 6:53 am

BlindSight wrote:Thanks for the reply, kAinStein.

So you have to manually animate the lips to go with what is being said? That would take forever. Darn, I was really hoping Blender would do this automatically: basically, you make your visemes, specify what sound they are for, and then input your audio file; the software should then automatically create the lip sync based on the speech in the file.

Sigh. I was really hoping to use Blender, but if this is the only way to achieve lip sync, then it won't work for me; it would take years to do it this way.

Well, thank you anyway for the input.
That would be a little difficult, wouldn't it? I mean, the most you could do would be what Poser (and other programs I'm not familiar with) does: create poses for every unique sound the mouth can make. From a library of these poses, you would still have to manually determine when each pose would be used (like at what time the mouth would be closed, making a certain vowel sound, etc.).

I mean, I guess you could automatically derive the tone of the audio file at specific times, with various tones referring to their respective lip poses, but I'm not sure whether that would be effective enough...
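The simplest version of that idea drives only the jaw from loudness rather than tone. Here is a rough pure-Python sketch (the function and its parameters are invented for illustration): slice the audio into one window per animation frame, take the RMS loudness of each window, and map it to a jaw-open value between 0 and 1.

```python
def jaw_curve(samples, sample_rate, fps, peak):
    """samples: raw mono amplitudes; returns one jaw-open value per frame."""
    window = max(1, sample_rate // fps)   # audio samples per animation frame
    values = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        # RMS loudness of this frame's slice of audio
        rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
        values.append(min(1.0, rms / peak))  # normalize and clamp to [0, 1]
    return values

# Silence keeps the jaw closed; a loud window opens it fully:
quiet = [0.0] * 800
loud = [0.5, -0.5] * 400
curve = jaw_curve(quiet + loud, sample_rate=800, fps=1, peak=0.5)
```

This gives a flapping-jaw effect rather than real visemes, which is exactly the "mechanical" quality discussed in the following posts.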

BlindSight
Posts: 0
Joined: Wed Nov 26, 2008 5:59 pm

Post by BlindSight » Fri Nov 28, 2008 3:07 pm

Well, it's really not that far-fetched; there's speech recognition software out there that will even type for you. It's really old technology now; you just have to implement it properly, and it should be quite accurate.

There are only so many different visemes that the mouth makes, as there are only so many different sounds, so the software should be able to animate them at least fairly accurately based on an audio file, provided the audio is clean speech without anything else in it.

pildanovak
Posts: 18
Joined: Fri Oct 25, 2002 9:32 am
Contact:

Post by pildanovak » Sun Nov 30, 2008 3:26 pm

BlindSight wrote:Well, it's really not that far-fetched; there's speech recognition software out there that will even type for you. It's really old technology now; you just have to implement it properly, and it should be quite accurate.

There are only so many different visemes that the mouth makes, as there are only so many different sounds, so the software should be able to animate them at least fairly accurately based on an audio file, provided the audio is clean speech without anything else in it.
Well, there are lip-syncing solutions for Blender where you assign letters to shapes; some of them are scripted. If you look around the wiki, especially in the script catalog, you'll find a lot of additional tools for extended functionality.
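The "assign letters to shapes" approach can be sketched in a few lines of plain Python (the letter groups and shape names below are invented for illustration, not taken from any particular script): each letter of the dialogue picks a mouth shape, producing a sequence that a script would then key onto the character frame by frame.

```python
# Build a letter -> mouth-shape lookup table from a few letter groups.
VISEME_FOR = {}
for letters, shape in [
    ("aei", "open"),
    ("ou",  "round"),
    ("bmp", "closed"),
    ("fv",  "lip_bite"),
]:
    for ch in letters:
        VISEME_FOR[ch] = shape

def text_to_visemes(text):
    """Map each recognized letter to a mouth shape, skipping the rest."""
    return [VISEME_FOR[c] for c in text.lower() if c in VISEME_FOR]

shapes = text_to_visemes("Boo!")
```

It works from the typed dialogue rather than the audio, so the timing still has to be lined up with the recording by hand.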

What is the advantage of Blender over trueSpace?

I'd just note one of many: Blender currently develops much faster, and it's open source, so if you learn how to use it, you'll never hit the moment when you realize that "development has been stopped" or that the company has been bought, etc.
Another thing is that Blender probably has a bigger and more active community (though I'm not sure about this), which is always ready to help.

sausages
Posts: 0
Joined: Sun Nov 30, 2008 11:02 pm

Post by sausages » Sun Nov 30, 2008 11:30 pm

I'm sure some automatic solutions for lip sync would be passable as "decent" animation, but in reality, due to the nature of acting and emoting, people usually don't use the same 8 or 10 mouth shapes when talking: there is asymmetry involved, plus other facial expressions (smiles, frowns), not to mention changes in volume and tone. I guess automatic sync solutions would be great for robots, though.

jamin3d
Posts: 0
Joined: Tue May 06, 2008 6:08 pm
Location: GSP

Post by jamin3d » Mon Dec 01, 2008 8:17 am

Yeah, the "automatic" method described would look very mechanical and unrealistic. Perhaps it could give you a base to start with, which you could then tweak from there.

keffertjuh
Posts: 0
Joined: Sat Oct 25, 2008 11:54 am

Lip-sync

Post by keffertjuh » Sat Dec 06, 2008 5:38 pm

When using Blender, you create your own characters and all, so of course it would probably be impossible to ship ready-made lip sync for your type of character. For example, there's a great difference between the lip movements of a human, a dog, and a glorgmugrrflorgpf from some other planet.

If you want lip sync in your animation, I'd recommend creating the shape the mouth makes for a certain letter as an action, for every letter (it might take less than an hour, for it isn't that hard with shape keys); then, when you animate, you can just import those actions into your IPO to sequence them into a word...

(Replying won't help, for I won't be watching anymore, and I can't say I know much about shape keys and actions, because I'm just getting started with those myself (I have seen them mentioned a lot, though, so they're probably easy to find))
May the LOL be with you!!!!

BlindSight
Posts: 0
Joined: Wed Nov 26, 2008 5:59 pm

Post by BlindSight » Sun Dec 07, 2008 3:18 pm

It really is a simple concept: you create your own visemes for the character (the lip positions/facial expressions), then you tie those to the specific sounds you want them related to, so lip sync would work no matter what your character was, because you made all the visemes.

The lip sync wouldn't be perfect, but it would be fairly good. If you ever watch yourself talk in the mirror, you'll notice you look pretty clinical 80% of the time too, and you've only got so many different facial expressions as well, all of which could easily be made into visemes and implemented.

Emotion could simply be handled with parameters that let you specify which emotion's visemes are to be used between which frames: so between frames 1000-1500 use happy visemes, between 3000-4000 angry visemes. (Again, all visemes would be created by you.)

Then you just run your audio file in, and the lip sync will be created for you based on the audio file.

Then, once the lip sync is done, you quickly go through it, add a blink here, a lifted brow there, change some of the expressions a hair, and it would turn out great.
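The pipeline described above could be sketched roughly like this in plain Python (every name here is invented; it assumes a speech recognizer has already turned the audio into timed sounds, which is the genuinely hard part): pick the matching viseme for each sound, preferring an emotion-specific variant when the frame falls inside a user-given emotion range.

```python
def build_lipsync(timings, visemes, emotion_ranges):
    """timings: list of (frame, sound) pairs from speech recognition.
    visemes: dict mapping "sound" or "emotion/sound" to a shape name.
    emotion_ranges: list of (start, end, emotion) frame ranges."""
    keyframes = []
    for frame, sound in timings:
        shape = visemes[sound]  # neutral viseme as the default
        for start, end, emotion in emotion_ranges:
            variant = f"{emotion}/{sound}"
            # swap in the emotion-specific viseme if the user made one
            if start <= frame <= end and variant in visemes:
                shape = visemes[variant]
        keyframes.append((frame, shape))
    return keyframes

visemes = {"ah": "ah_neutral", "oo": "oo_neutral", "happy/ah": "ah_smile"}
timings = [(10, "ah"), (1200, "ah"), (1210, "oo")]
keys = build_lipsync(timings, visemes, [(1000, 1500, "happy")])
```

The keyframe selection really is the easy half; getting accurate (frame, sound) timings out of arbitrary recorded speech is where such a tool would live or die.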

I don't understand why everyone says it wouldn't work correctly, because it would; it's so simple, and it would save you so much time. I would have already made it if I knew Python; it wouldn't even take long to make if you know Blender well enough.
