
I'll keep testing Verse as soon as I finish a design project that I have to present tomorrow. In the meantime, you can read some opinions from non-Blender users about Verse here: http://forums.cgsociety.org/showthread.php?t=286029
To summarize, a few quotes from the CGTalk thread:
That's pretty cool. I don't even want to think of all of the potential problems that this could cause in the app and in any workflow, but it's definitely nice to see it at least being attempted.
Really, I think this is the future. Imagine the improvement in workflow. A modeler creates a character's head... begins working on the torso. And at the same time, texturing begins on the head. Next, modeling is completed. The animator begins to animate. Mid-animation the texturing is completed and added. A belt is modeled on the character and updated in realtime. If it's done correctly and smoothly this could be very useful. But things like UV mapping, weight mapping, etc. would have to blend in smoothly when the model is updated. I can see a lot of problems with it, along with a lot of potential.
Models take a long time to make, at least quality models do. You'd be better off investing in some other asset management software IMO. I applaud their efforts, but I think they're misdirected.
It'd be nice for 'real-life' learning, though: you could give lessons across the net and show people in real time what they should do here and there. Imagine the teacher making something and telling the students: "Good, now make your own, here are your cubes."