Nvidia’s CUDA (Compute Unified Device Architecture) is one of the company’s most prized pieces of IP, making it easy to write parallel workloads and farm them out to the graphics card. Unfortunately for both developers and AMD, CUDA is also vendor-specific.
There are certainly other solutions, such as OpenCL, whose code will run on a wide variety of hardware. But CUDA’s benefits include an exhaustive library of high-quality documentation and resources, which means many developers prefer to stay with the easier-to-use platform.
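For a sense of what that vendor lock-in looks like in practice, here is a minimal vector-add sketch (an illustrative example, not code from Otoy or Nvidia) – the <<<grid, block>>> launch syntax and the cuda* runtime calls are what tie it to Nvidia’s toolchain:

#include <cuda_runtime.h>
#include <cstdio>

// Each thread adds one element of a and b into c.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Nvidia-specific launch syntax: <<<grid, block>>>
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[0] = %f\n", hc[0]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}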
But this could change: Otoy, a startup in the graphics industry, claims to have revealed a CUDA cross-compiler capable of running native CUDA code not just on any GPU, but on any CPU as well, and Chief Executive Jules Urbach says his company put it together in just nine weeks.
That isn’t to say Otoy’s solution is without drawbacks – it requires access to the original source code of the CUDA application. In other words, if I write a CUDA application and send you the uncompiled source code, you can convert it to OpenCL fairly easily, but if I only give you the compiled application (once again, built specifically for Nvidia’s CUDA) without the source, you won’t be able to do much with it.
The second hitch is that the company doesn’t currently plan to release it as a stand-alone piece of software, choosing instead to bundle it with its Octane Rendering Engine from version 3.1 onwards. The good news is that performance when running CUDA on an AMD card won’t be crippled; Urbach stated that the application “runs on the other card at the same speed it runs on Nvidia cards.”
Unsurprisingly, Nvidia has been pretty quiet on the issue, and this isn’t the only pressure the company is facing against its CUDA monopoly. AMD has been pushing the Boltzmann Initiative, which includes new compilers aimed at this very same problem.
The Boltzmann Initiative includes a new tool called HIP (Heterogeneous-compute Interface for Portability), which lets you convert your CUDA code into portable C++. Once converted, the same code can be compiled with Nvidia’s NVCC or AMD’s HCC compiler.
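As a rough sketch of what that conversion looks like (modelled on AMD’s public HIP examples rather than the Boltzmann tooling’s actual output), the CUDA vector-add above would come out roughly as follows – the kernel body is unchanged, while the cuda* runtime calls and the <<<>>> launch become their hip* counterparts:

#include <hip/hip_runtime.h>
#include <cstdio>

// Kernel body is identical to the CUDA version.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    hipMalloc((void **)&da, bytes);                   // cudaMalloc -> hipMalloc
    hipMalloc((void **)&db, bytes);
    hipMalloc((void **)&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);  // cudaMemcpy -> hipMemcpy
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    // The <<<grid, block>>> launch becomes a portable call.
    hipLaunchKernelGGL(vecAdd, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);
    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);

    printf("hc[0] = %f\n", hc[0]);
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

The point is that the port is largely a mechanical renaming exercise, which is why AMD can plausibly offer an automated tool to do it.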
HIP is quite different from OpenCL (see a PDF comparison of the technologies here), but crucially C++ is a closer match to the environment CUDA developers are used to, making the prospect of developing in a cross-vendor environment more appealing.
Nvidia is typically fiercely protective of its IP, be it hardware or software, and CUDA is just one example – GameWorks is another. GameWorks has been accused of causing performance issues in a myriad of titles, and because of the black-box nature of the software (you can’t delve in and tweak or modify it; it simply plugs in), getting it to run well across vendors is next to impossible. Performance woes in titles such as Gears of War Ultimate Edition on PC and Batman: Arkham Knight are said to be caused primarily by the inclusion of GameWorks.
Thanks VentureBeat