Oxide Games have unleashed their eagerly anticipated Ashes of the Singularity, a real-time strategy title where tens of thousands of units can battle each other across huge battlefields. Ashes of the Singularity also represents the first chance for gamers to see how DirectX 12 compares to DirectX 11 in an actual game environment; previously, we've only had tools such as 3DMark's API Overhead draw-call test to compare the performance.
Here at RedGamingTech, we're planning a lot of coverage of AOTS – and not just because of its technical merits. From the small amount of gameplay time I've had with the game so far, it reminds me of Supreme Commander, one of my favorite RTS titles of all time. But I digress – for our early testing, we're sticking with an AMD graphics card and an Intel processor. Nvidia's performance in Ashes of the Singularity is currently the subject of much debate, but we're told both Oxide and Nvidia are hard at work improving it, particularly where MSAA is involved.
For our test rig, we’re running 16GB DDR3 and a Sapphire Radeon R9 390 which we reviewed rather favorably about a month or so back. The drivers are the latest WHQL release currently available for Windows 10 – Catalyst 15.7.1 64-bit.
It's vital to remember that Ashes of the Singularity is still in pre-beta, which means the code is still early – leaving a lot of room for performance improvements in both DirectX 11 and DX12. To ensure that the GPU isn't the limiting factor, we've decided to bench the title with the 'medium' preset at 1080p. When we've a little more time we'll throw in a few other graphical settings for comparison, but AOTS' medium settings help push most of the load onto the system's CPU and drastically reduce the chance of a frame being GPU-bound (CPU-bound means the GPU is waiting for the CPU to send it data; GPU-bound is the reverse, meaning the CPU is waiting for the GPU to catch up with the instructions it's already been sent).
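To make that distinction concrete, here's a minimal sketch (ours, not Oxide's) of how you'd classify a frame if a profiler handed you per-frame CPU and GPU timings – the timing values and the `bound_by` helper are hypothetical illustrations, not anything from the game's code:

```cpp
#include <cstdio>

// Hypothetical helper: a frame is CPU-bound when the CPU took longer
// than the GPU (the GPU sat idle waiting for more commands), and
// GPU-bound in the opposite case.
const char* bound_by(double cpu_ms, double gpu_ms) {
    return (cpu_ms > gpu_ms) ? "CPU-bound" : "GPU-bound";
}

int main() {
    // At medium settings/1080p the GPU's workload shrinks, so the CPU
    // side dominates and frames tend to come out CPU-bound.
    printf("%s\n", bound_by(27.9, 12.0)); // hypothetical timings -> "CPU-bound"
}
```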
You'll notice frame rates are dramatically increased with DirectX 12 – in fact, DX12 wins rather handily even when DX11 has more processor cores to work with. For example, 2 cores + Hyper-Threading (think an i3) scores an average framerate of 47.3 FPS under "All Batches" while running DirectX 12, taking 21.2 ms per frame. 4 cores + Hyper-Threading under DirectX 11 manages just 35.9 FPS, taking an average of 27.9 ms per frame. That's a considerable difference in performance – and one that's reflected throughout our testing. We'd like to point out that AMD have been criticized for their high driver overhead in DirectX 11 (which they've significantly improved over recent Catalyst releases), which contributes to the rather stark differences between the two APIs, but the gap isn't that much larger than Nvidia's, and it certainly takes very little away from the astounding DX12 performance.
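As a quick sanity check of those numbers, frame time in milliseconds is simply 1000 divided by the framerate – a trivial sketch using the article's own figures:

```cpp
#include <cstdio>

int main() {
    const double fps_dx12 = 47.3; // 2 cores + HT under DirectX 12
    const double fps_dx11 = 35.9; // 4 cores + HT under DirectX 11
    // ~21.1 ms – close to the 21.2 ms reported; averaging FPS and
    // averaging frame times over a run don't invert exactly.
    printf("DX12: %.1f ms/frame\n", 1000.0 / fps_dx12);
    // ~27.9 ms, matching the reported DX11 average.
    printf("DX11: %.1f ms/frame\n", 1000.0 / fps_dx11);
}
```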
Driver overhead also sees a significant drop in DirectX 12, with the lower overhead leaving extra CPU performance per thread available to run actual game code. These results of DX12 beating DX11 even when the former has considerably fewer processor cores are mirrored in our 3DMark API Overhead testing, and serve to demonstrate just how poorly DirectX 11 handles multiple graphics threads. With DirectX 12, not only do you have lower driver overhead, but because each CPU core is able to send data to the GPU, the number of draw calls (instructions the CPU sends to the GPU telling it to 'draw' an object) increases significantly, greatly reducing latency and increasing frame rate.
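To illustrate the idea, here's a purely conceptual sketch of that parallel submission model – the `CommandList` type is a stand-in of our own invention, not the real Direct3D 12 interface, but the shape is the point: under DX12 every core records its own command list in parallel, whereas DX11 funnels draw calls through a single immediate context:

```cpp
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Stand-in for a per-thread command list (not the actual D3D12 type).
struct CommandList { std::vector<std::string> draws; };

// Each worker thread records its share of draw calls independently,
// with no contention on a shared context.
void record_draws(CommandList& list, int first, int count) {
    for (int i = first; i < first + count; ++i)
        list.draws.push_back("draw unit " + std::to_string(i));
}

int main() {
    const int threads = 4, draws_per_thread = 2500; // 10,000 units total
    std::vector<CommandList> lists(threads);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back(record_draws, std::ref(lists[t]),
                             t * draws_per_thread, draws_per_thread);
    for (auto& w : workers) w.join();
    // In real DX12 the finished lists would now be handed to the GPU
    // queue together; here they simply hold the recorded work.
}
```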
Ultimately, it's pretty clear that DX11 versus DX12 is a one-sided battle; to be more accurate, DirectX 12 totally dominates its older brother. As we said earlier in this article, we'll have a full comparison (at least of what's possible considering the early code) over the next week or two!