I have seen a lot of discussion all over the web on the whole OpenCL debacle and Blender, but it still leaves me wondering...
I love Blender thus far, it is a fantastic programme. However, I am working on a Mac and I am feeling a bit left out. It isn't possible to buy a new Mac without it having an AMD graphics card in it (notably the 27" iMac [my computer] or the Mac Pro [my future upgrade for 3D modelling]).
My computer is powerful in terms of RAM and CPU, and my graphics card has 1GB of VRAM. However, when I try to test render even a brief animation of fire at high resolution, I might as well just go to sleep. Not happening.
The other thing is this whole OpenCL thing... to get it working on the Mac I apparently need to know how to code C++. Talk about isolating a huge chunk of the world. I am clueless about C++, nor do I have time to learn it.
Basically, what kind of timeline are we talking about for full GPU rendering support (like CUDA) on an AMD graphics card? How long will I need to wait? It is really frustrating to be so left out while everybody on a PC with nVidia can cruise around rendering with glee, while I have to sap my CPU resources just to do a test render...
Sorry if I sound a little frustrated; I just spent 4 hours of my day trying to get some semblance of render capability, but to no avail.
Anyway, could somebody please kindly let me know the progress and a rough timeline? It would really help my sanity!
Thanks to all the developers for an otherwise truly awesome 3D application. Really great!
I'm trying to deal with the same issue.
AMD Radeon HD 6970M 2GB
on a 27" iMac, 3.4GHz i7
Did the latest nVidia CUDA update and downloaded the 64-bit Blender 2.66. Still no chance of getting CUDA support?
I'm also very interested in knowing if you have a deadline for OpenCL. Most of the information related to it in the forums is from two or three years ago, and I would love to know if there is an estimated date or version where OpenCL will be enabled.
I also find Blender an incredible utility, congrats.
I am also very interested to know this. I have a Windows desktop that I built myself with dual AMD HD 6950s. They are great cards, but they don't do anything for me in Blender.
I see the wiki says that support for OpenCL is currently On Hold, but the reasoning is very unclear. Does it have to do with AMD driver support, or with the current state of Blender's code?
Do the Blender devs plan on supporting AMD cards in the future, or is this something that will never happen with the software? Is it something they plan on doing in the next year or two?
Also, thank you all so much for all the work that has gone into Blender. It is amazing to be able to use something like this.
As an original 2006 Mac Pro owner who recently decided to take advantage of the Nvidia driver support in 10.7/10.8 by purchasing a GTX 570 specifically for CUDA Cycles rendering, I am also concerned about the lack of AMD/OpenCL support, simply because it will be time for me to upgrade soon.
I've held out with a 7 year old computer this long, and I am extremely curious about the just announced next-gen Mac Pro. But of course, the dual GPUs in the machine are AMD.
I'm just starting to learn Blender, but I fell in love with it from day one. I hope whatever the issue is surrounding full AMD support gets resolved in the near future.
PS. Just to give you an idea... the GTX 570 in the 2006 Mac Pro is rendering Cycles frames as quickly as my 2011 2.93 GHz 24-thread Mac Pro at work!
File this under gossip since I have not been paying close attention to the problem...
There are two competing GPU compute frameworks: CUDA (nVidia-only) and OpenCL (cross-vendor, backed by AMD). Right now CUDA is better supported since it basically just works. OpenCL has some issues with excessive memory use and a buggy compiler.
People *are* working on OpenCL support. There are no deadlines. When it works, it will be incorporated in a release. You may be able to find test builds on graphicall.org .
It is unfortunate your hardware is not well supported, but that has always been the case for certain chipsets and drivers.
Unfortunately, when it comes to new technology and features, people often get carried away with using vendor-specific extensions once they get rolling, especially if other manufacturers are behind the curve that year.
Let this be a lesson: if you are going to write software that takes advantage of new GPU functionality, it has to be exhaustively tested on many different types of machines.
Unfortunately, nVidia was out of the gate more than a year ahead of AMD/ATI when it comes to transform-feedback-style features. Because of this, many people have nVidia-specific extensions deep within their code, and it's going to be a while before people adopt a uniform, multi-vendor way of doing things.
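The defensive pattern that avoids this trap is simple: query the driver's extension list and check for a whole-token match before touching a vendor-specific feature. Here's a sketch of that check in C (the space-separated list format follows the OpenGL `glGetString(GL_EXTENSIONS)` convention; the function name is my own):

```c
#include <stdbool.h>
#include <string.h>

/* Check whether `name` appears as a whole token in a space-separated
 * extension list, e.g. the string returned by glGetString(GL_EXTENSIONS).
 * A plain strstr() is not enough: "GL_NV_transform_feedback" would also
 * match inside "GL_NV_transform_feedback2". */
bool has_extension(const char *ext_list, const char *name)
{
    if (!ext_list || !name || !*name)
        return false;

    size_t len = strlen(name);
    const char *p = ext_list;
    while ((p = strstr(p, name)) != NULL) {
        bool starts_token = (p == ext_list) || (p[-1] == ' ');
        bool ends_token = (p[len] == ' ') || (p[len] == '\0');
        if (starts_token && ends_token)
            return true;
        p += len; /* partial match; keep scanning */
    }
    return false;
}
```

Code that gates every vendor extension behind a check like this degrades gracefully on AMD/ATI instead of breaking outright, which is exactly the multi-vendor discipline the thread is asking for.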
If you want to help speed this along, send Brecht van Lommel an email and volunteer to help him bug-test experimental Cycles builds on your AMD/ATI machines. He doesn't have a lot of help with this yet. If you ask him which extensions and functions he is having issues with, you can do research for him by digging up open-source code examples that he may be able to use to get his work fully supporting all possible GPUs.
I would like to see a Cycles texture baker, so I started roughing one out in GLSL and C. If it can then be ported to OSL and Python, Brecht will have had a chunk of his work taken care of, and he may be willing to tie up whatever loose ends remain without having to put off all his other work for an indefinite period.