There are two groups of gamers eager to get their hands on any DirectX 12 information: PC gamers and Xbox One owners. Microsoft’s newest API isn’t being released until next year, but it’s showing a lot of promise – and judging by the latest demos (run on a rather modest PC setup, too), both Xbox and PC owners will be in for a real treat.
The demo was created by Intel, and in it around 50,000 unique asteroids were drawn in space, each with its own trajectory and characteristics. If you’re wondering why those two points need emphasis, it’s because each unique object requires its own draw calls, and those in turn require lots of work from both the CPU and GPU. It’s a clear example of where Microsoft (and others, such as AMD, who are pushing their own Mantle API) believe we’ll see a nice performance gain.
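To picture what that workload looks like, here’s a rough sketch of the per-object submission pattern such a demo stresses under DirectX 11. The struct and function names are my own illustration rather than Intel’s actual demo code – only the D3D11 calls themselves are the real API:

```cpp
#include <d3d11.h>
#include <vector>

// Illustrative per-asteroid data; Intel's real demo will differ.
struct AsteroidInstance {
    ID3D11Buffer* constantBuffer; // per-object transform/material data
    UINT          indexCount;
};

// One draw call per asteroid: every iteration pays driver and
// state-validation cost on the CPU - exactly the overhead that
// piles up with ~50,000 unique objects.
void DrawAsteroids(ID3D11DeviceContext* ctx,
                   const std::vector<AsteroidInstance>& asteroids)
{
    for (const AsteroidInstance& a : asteroids) {
        ctx->VSSetConstantBuffers(0, 1, &a.constantBuffer);
        ctx->DrawIndexed(a.indexCount, 0, 0);
    }
}
```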
Intel’s hardware wasn’t what you might be expecting, either – you may be thinking it’d be some super high-end Haswell i7 with a billion CPU cores, 16GB of RAM and a high-end AMD or Nvidia GPU showing off that 70-plus percent performance increase with DX12… but you’d be wrong.
Instead, Intel used the Surface Pro 3, which is powered by an i5 with Intel HD 4400 graphics. Neither component is particularly high end, and to be blunt, a modest gaming PC or the Xbox One stomps all over the HD 4400 in terms of raw horsepower. Regardless, the demo’s purpose was to show just what might be possible with DirectX 12. It featured a frame rate counter and the ability to switch between the two rendering APIs (the traditional DirectX 11 and the still-in-development DirectX 12, which is being coded as you read this). Finally, there was a graph showing not just the CPU’s power usage, but the GPU’s too.
While we’ve been through the issues of DX11 (and other high-level APIs) before in great detail, in a nutshell their problem is simple: they weren’t designed from the ground up for modern multi-core CPUs. Yes, technically a game engine can run across, say, half a dozen cores (and some, such as BF4, do just that), but you’ll find one CPU core typically shows significantly higher usage than the others. That’s because DirectX 11’s renderer runs solely on that one core – meaning it bottlenecks performance. If that core stalls for any reason – because it’s busy, because it’s waiting on the result of a calculation (from GPGPU), because your anti-virus decides now is the perfect time to run that scan you’ve been putting off for the last three weeks, or simply because another program runs in the background – frame rates will drop.
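For the curious, here’s a minimal sketch of how DX12 is expected to spread that renderer work out: command lists recorded on several worker threads, then handed to the GPU queue in one cheap submission. The thread split and function names are my own illustration, not code from Intel or Microsoft – and since DX12 isn’t finished, treat the exact interfaces as an assumption that may differ from the final API:

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>

// Each worker records the draws for its slice of the scene into its own
// command list, with its own allocator (allocators aren't thread-safe).
void RecordChunk(ID3D12GraphicsCommandList* cmdList,
                 ID3D12CommandAllocator* allocator,
                 ID3D12PipelineState* pso)
{
    cmdList->Reset(allocator, pso); // assumes list was closed, allocator reset
    // ... record this thread's share of the draw calls here ...
    cmdList->Close();
}

void SubmitFrame(ID3D12CommandQueue* queue,
                 std::vector<ID3D12GraphicsCommandList*>& lists,
                 std::vector<ID3D12CommandAllocator*>& allocators,
                 ID3D12PipelineState* pso)
{
    // Record in parallel - no single core is left doing all the work.
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i)
        workers.emplace_back(RecordChunk, lists[i], allocators[i], pso);
    for (std::thread& t : workers)
        t.join();

    // One submission of everything the worker threads recorded.
    queue->ExecuteCommandLists(
        static_cast<UINT>(lists.size()),
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}
```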
A side benefit (and one I’ve a feeling most people will start rolling their eyes at, whether they’re Xbox One gamers or PC gamers) is a massive reduction in CPU power consumption. DX12 achieves this by being a much more efficient API, and that in turn means greater battery life for mobile devices. Okay, I admit it – I care little for the plight of mobile devices for the most part. But it’s good news for PCs too. Why? Less CPU heat and power draw potentially means better / more stable overclocks. In fact, an almost 50 percent reduction in power is equivalent to a large die shrink (22nm to 14nm), which is very impressive – particularly when die shrinks for both AMD and Intel are around the corner.
What about Xbox One and DirectX 12?
There are numerous crucial factors in play when discussing DirectX 12’s benefits for the Xbox One’s performance. We know from Microsoft that a few DirectX 12 features are already available on the Xbox One, and Phil Spencer has said DX12 won’t be a massive change in performance for the console. But there will likely be other tangible reasons for DirectX 12 to be on the Xbox One, including easier game ports.
Now that we understand all of that, why does the Xbox One benefit less from DirectX 12? While some might already be about to drone on about hardware issues such as eSRAM or the GPU being slower than a high-spec PC’s, the truth is actually a lot simpler: lower-level access is already available via an extension to the Xbox One’s API. Some DX12 features are already in the Xbox One – not the full feature set by any means (and it’s debated whether the Xbox One’s GPU can even support the full feature set) – but let’s set that aside for a moment and focus on low-level access. The ‘full’ DX12 low-level API will likely bring a few performance improvements, but nothing like what’s available on PC.
PC gamers have long suffered the issues of DX11 – during my review and preview of Mantle on AMD’s R9 280, Thief went from 62 FPS on DX11 to over 80 with Mantle. Battlefield 4 was similar: 1080P on a mid-range GPU suddenly became a non-issue. My point is that it’s clear how much of a problem CPU overhead is. The R9 280 in the test rig isn’t in the same ‘league’ as the i7 – that CPU is traditionally paired with a much faster GPU (say an Nvidia GTX 780 or an AMD R9 290) – yet even this super beefy CPU had issues. Think of it this way: Intel’s HD 4400 is an IGP (Integrated Graphics Processor) on the CPU, slower than GPUs like Nvidia’s GT 730. And yet DX12 makes the frame rate jump from 19 FPS to 33 FPS.
I’ve said this before, and I’ll say it again. Some will swear the Xbox One will receive no benefit from DX12 aside from PR and hype. Others believe its release will suddenly unlock an integrated on-die GPU that adds another 5 TFLOPS of performance to the machine (I’m being serious here… that’s the sad thing). The reality is somewhere in the middle: improved performance, but it won’t make the console ‘faster’ than the PS4. There’s too much of a gap in raw GPU performance between the two machines for software to solve – particularly when you consider the PS4’s API is pretty damn low level too.
But even IF there’s no performance increase (which I find unlikely), it’ll likely be easier to make games on the machine and a little simpler to port them from PC. That’s good news for indie developers in particular, and even better news for both PC and Xbox One owners.
Image Source: Microsoft’s DX12 blog