I work at a biology research institute in Japan. One of the lab heads has asked me to look into helping him build a low cost render farm to use with Blender in his lab. At first he will simply be rendering standard blend files, but at some point in the future he is planning on using the BioBlender tool-set.
That is just a bit of background. My questions are more about what kinds of components people are having success with in render farm nodes these days.
My tentative plan is something like:
(Prices are from newegg but I will probably buy locally in Japan due to shipping costs)
CPU= AMD Llano 2.6 GHz Socket FM1 quad-core with built-in Radeon HD 6530D GPU
MOBO= Biostar A75 FM1 Micro ATX
HDD= Kingston SSDNow 16GB SATAII SSD
Discrete Graphics= Gigabyte Radeon HD 6450 1GB Low Profile
RAM= 2x 8GB CORSAIR XMS3 DDR3 1333 (Total 16GB)
Power Supply= Diablotek 350W ATX12V
Total is $387 per node.
Obviously I will need to either build or buy racks to put this stuff in and a cabinet, router, and external storage etc. But this is the basis of what I'm thinking.
The OS will be Linux (obviously), probably Ubuntu 12.04.
So the point is that with this setup I will get four CPU cores and two GPUs in each node, assuming the CPU and GPUs play nicely together; if they don't, I may drop the discrete card.
Given the current state of GPU rendering, is a hybrid CPU/GPU system workable? Also, what software do you suggest for the rendering? Can Cycles run on a system like this yet? I am assuming that with 16GB SSDs I can get Linux and the rendering software installed on each node, but I might need to bump that up.
I'm thinking of doing this in a 4 drawer cabinet with 3 nodes to a drawer.
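To make the 12-node scale concrete, here is a minimal sketch of splitting an animation's frame range evenly across the nodes (the scene name, frame count, and static split are my own assumptions for illustration; a real farm would hand out frames dynamically via netrender or a queue manager):

```python
# Sketch: statically split a frame range across render nodes.
# All names and numbers here are hypothetical examples.

def split_frames(start, end, num_nodes):
    """Return a list of (first, last) frame ranges, one per node."""
    total = end - start + 1
    base, extra = divmod(total, num_nodes)
    ranges = []
    frame = start
    for i in range(num_nodes):
        count = base + (1 if i < extra else 0)
        if count == 0:
            continue  # more nodes than frames
        ranges.append((frame, frame + count - 1))
        frame += count
    return ranges

# Example: 250 frames spread over the planned 12 nodes,
# rendered headless with Blender's command line.
for node, (first, last) in enumerate(split_frames(1, 250, 12)):
    print(f"node {node:2d}: blender -b scene.blend -s {first} -e {last} -a")
```

The `-b` (background), `-s`/`-e` (start/end frame), and `-a` (render animation) flags are standard Blender command-line options, so each node only needs the blend file and its assigned range.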
Any thoughts? Advice?
My first impression, from a software point of view, is that ATI/AMD cards are a poor choice for GPU rendering. The ~$50 price point only reinforces that impression.
This is only an impression from listening to the GPU discussions, so if you really care about GPU rendering for your farm (I don't think I would), you should get some actual facts about what works with Blender.
Well, that is exactly what I am trying to do with this thread
Also, since I will be building 12 of these nodes, anything more than $50 per card would be out of my budget range, I think. Since the AMD APU chips come with built-in GPUs, by using CrossFire I can get two GPUs per node without breaking the bank. If there is an Intel/Nvidia option in the same price range, of course I would consider it.
One other thing that made me look toward AMD was that I had heard Nvidia has somewhat gimped OpenGL in their consumer-level cards, and that this can lead to problems with GTX 260 cards and above. But if I remember correctly, that was only in the 3D viewport and might not affect rendering.
It seems like building the farm now adds in the special problem of not knowing how far GPU rendering is going vs CPU. That was another reason I was hoping to put together a hybrid system.
If this is not the right place to ask these questions can anyone point me to other forums where there may be more people with this kind of experience?
This project was put on the back burner, but has now been given the go-ahead and funding is secure.
I still need to decide on a final hardware list though.
The above information is a bit outdated, and the GPU rendering situation seems to be picking up speed.
Does anyone with Linux or render farm experience have any advice for us going forward? Any help would be much appreciated.
colincbn wrote:
Woah, Blender never ceases to amaze me...
Anyway, can I ask: if this project ever gets a go, can you make a video log of it?
I recently set up a Blender farm with a workstation (8 cores, 16GB RAM = master, client, and slave) and 4 older, used Core 2 workstations (2 cores, 2GB RAM = pure slaves), all running Ubuntu 12.04.
Why would you put an SSD in every one of your nodes? I assume picking up a single 60GB SSD for the server and enabling PXE so the clients boot over the network would cost much less. PXE over gigabit (with a fast disk in the server) is a dream. If you are less lazy than I was when setting up my node directories, you can even get by with 16GB (system disks only); in my setup I share only the initrd files and the rest is duplicated per client node, which means 4x 3.2GB, so there is plenty of room for optimization here. :)
Anyway, I bought a 64GB SSD a few weeks ago for about 60EUR, so optimizing disk usage isn't a must from a money point of view.
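For anyone curious what the PXE side involves, this is a minimal sketch of a dnsmasq configuration for network-booting diskless nodes (the interface name, address range, and paths are made-up examples, not from the poster's setup):

```
# /etc/dnsmasq.conf -- minimal PXE boot sketch; all values are examples.

# Listen on the farm-facing interface only.
interface=eth0

# Hand out addresses to the render nodes.
dhcp-range=192.168.10.50,192.168.10.100,12h

# Bootloader filename sent to PXE clients.
dhcp-boot=pxelinux.0

# Serve the bootloader, kernel, and initrd over TFTP.
enable-tftp
tftp-root=/srv/tftp
```

The clients then mount their root filesystem over NFS (nfs-root), which is why only the server needs a fast disk.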
I got a bonus: when I bought my client nodes (used, refurbished systems) there was an 80GB SATA drive in each of them, so I am able to boot via PXE/nfs-root AND keep a local swap partition. None of my clients has ever needed one while processing a job, but it's good to know they *could*. ^^
Another big advantage can be at the administration level: why maintain each node on its own (even with cssh it can be a pain in the... you know)? I was lazy, though, so I missed this advantage in my setup (see above); the only difference for me is that I administer the nodes' files on the local disk of the master server rather than over the network. ^^
Work and temp directories for Blender jobs are located on an NFS-shared partition on the master (a 15k SAS disk in my setup, because I had the parts here; you can safely choose slower drives). Blender's netrender does a really great job in this setup.
On your last point I can't give an answer, because I never thought about GPU rendering. In my setup I've got 16 CPU cores for rendering (which is, by the way, pure luxury for a hobbyist and way oversized for most of my projects) and I'm happy with it.
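For reference, the NFS share described here is only a couple of lines of configuration; this sketch uses invented paths and a /24 subnet as placeholders:

```
# /etc/exports on the master -- share the farm work directory (example values).
/srv/farm  192.168.10.0/24(rw,sync,no_subtree_check)

# On each node, mount it manually or via /etc/fstab:
#   mount -t nfs master:/srv/farm /mnt/farm
```

With the work and temp directories on that mount, every node sees the same blend files and writes its rendered frames back to the same place.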
So for the disk question I'd say: the more nodes you have, the more you should think about PXE clients. Many professional mobos feature 2 NICs, so you can set up 2 separate net ranges (PXE/boot and farm network) so that disk access and farm communication do not collide (which only makes sense when you are on a 100Mbit network or slower, or you have a really large number of nodes).
There are better ways of building a Blender farm, for sure... or maybe they are just different rather than better, who knows (DrQueue, MOSIX, ...). I just wanted to give you a short overview of my attempt at building a render farm (which, to be honest, was more of a technical exercise... but I wouldn't want to miss the result. :) ).
I am investigating this as well. I am at a crossroads, deciding whether I should invest in my own gear for the rack in my basement or go with cloud-hosted processing. I landed on this thread while searching for more info on GPU rendering as a third option. I really haven't seen the benefits of GPU rendering yet for animations, though I admit to being in a cave somewhat. I too am searching for the truth here.
I was looking at setting up cloud servers at Rackspace and budgeting, per project, how much render time I needed across x amount of servers. They have some lower-end 2GB dual-processor server instances for $0.12 USD per hour at the time of writing. The idea is that you run one server to get benchmarks and then scale out as you begin to see how much power you need / can consume.
You just set up one server image, then you can clone as many servers as you want off of it. It can get pricey, though, if you're going to run them all the time. But for me, I also have to figure in the power-consumption costs of running my own gear, so it balances out.
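To make that per-project budgeting concrete, here is a back-of-envelope sketch. The $0.12/hour rate is from the post; the frame count and per-frame render time are invented example numbers you would replace with your own benchmarks:

```python
# Back-of-envelope cloud render cost estimate.
# Rate is from the thread; frame count and minutes/frame are made up.

def cloud_render_cost(frames, minutes_per_frame, servers, rate_per_hour=0.12):
    """Return (wall-clock hours, total cost) for an even spread over servers.

    Real providers bill in whole server-hours, so treat this as a lower bound.
    """
    total_hours = frames * minutes_per_frame / 60.0
    wall_hours = total_hours / servers
    # You pay per server-hour, so the server count cancels out of the cost.
    cost = total_hours * rate_per_hour
    return wall_hours, cost

wall, cost = cloud_render_cost(frames=250, minutes_per_frame=10, servers=10)
print(f"{wall:.1f} h wall-clock, ${cost:.2f}")
```

Running one benchmark server first pins down `minutes_per_frame`; after that, adding servers shortens the wall-clock time without changing the total bill (modulo per-hour rounding).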
I already have my own queue management software in the form of a Python package. Each node runs a node.py script that connects to a MySQL database server in my basement to get info about the next frame that needs rendering; then, over an sshfs mount to a file server on my end, the blend file is opened for rendering and the results are saved back through the mount. The node reports back to MySQL with progress, status, etc. So I only need enough drive space on my nodes to cover the software and swap space for RAM.
Has anyone had success with production animations on the GPU for Blender?