If you’ve been following along through the past few pages, you should have a good understanding of why D3D 12’s low-level access is important, some of the ways Microsoft plan to achieve it, and some of its other features. But that leaves the $64,000 question: how exactly are they going to achieve it? As we’ve already discussed, developing a low-level API for a console is a fairly easy thing due to the fixed hardware. On PCs, however, it’s a little trickier. I might have an Intel, AMD or Nvidia GPU. Let’s say I have an Nvidia graphics card – which one exactly? A GTX 670? A 680? Perhaps a 760?
Abstraction is what allows Direct3D 11 to run on different pieces of hardware, but it also takes its pound of flesh from performance. Microsoft aren’t too willing to go into details on how the low-level access to the graphics hardware works quite yet. But if we don our Sherlock Holmes outfit for a second time in this article, we can probably piece together a few of the clues – even if we’re not left with the entire picture. Currently, the three major GPU vendors (AMD, Nvidia and Intel) have announced which of their architectures will support D3D 12: GCN 1.0 from AMD, which means the 7000 range onwards; Fermi from Nvidia, meaning the GTX 4xx series onwards; and finally Intel’s Gen 7.5, meaning Haswell or above. This means a large portion of hardware already in gamers’ hands will support DX12.
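To make that “pound of flesh” concrete, here’s a toy sketch of the difference between a D3D11-style immediate context, where the driver re-validates state on every draw call, and a D3D12-style pre-recorded command list, where that validation cost is paid once up front. This is purely illustrative – the class and method names are invented for this example and are not real Direct3D APIs:

```python
# Illustrative model only: these are invented names, not real Direct3D APIs.

class ImmediateContext:
    """D3D11-style: the driver does hidden validation work on every draw."""
    def __init__(self):
        self.validations = 0

    def draw(self, state):
        self.validations += 1  # per-call driver overhead, paid every frame
        # ...submit the draw to the GPU...

class CommandList:
    """D3D12-style: state is validated once, when the list is recorded."""
    def __init__(self):
        self.validations = 0
        self.commands = []

    def record_draw(self, state):
        self.validations += 1  # overhead paid once, at record time
        self.commands.append(state)

    def execute(self):
        # Replaying a pre-validated list costs no extra validation.
        pass

d3d11 = ImmediateContext()
for frame in range(100):
    d3d11.draw("opaque_pass")

d3d12 = CommandList()
d3d12.record_draw("opaque_pass")
for frame in range(100):
    d3d12.execute()

print(d3d11.validations)  # 100 – overhead scales with draw calls
print(d3d12.validations)  # 1 – overhead paid once up front
```

The real APIs are vastly more involved, but the shape of the saving is the same: work the driver used to repeat every frame gets hoisted out of the hot loop.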
AMD’s previous architectures were based on VLIW5 and VLIW4; the company has since switched over to the GCN architecture (which is SIMD-based). It’s fair to say that with this move to a more Nvidia-like GPU structure, there’s far less separating how the three vendors’ GPUs operate. Compared with any previous period in modern GPU history, they’re the closest they’ve ever been. Of course there are still some architectural differences, but there is certainly a high level of parity.
Of course, there’s still going to have to be some level of abstraction between the three GPU vendors – we’re not quite in an “all the designs work exactly the same” situation, after all. This has led to some speculation that, in theory at least, AMD’s Mantle (on AMD’s own hardware) will provide slightly higher performance. This is purely speculation, though – and it would also depend heavily on the level of optimization that game developers themselves put into their titles.
It’s also worth noting that Microsoft clearly have had several motivations to push this technology. For one, it was obvious that OpenGL would eventually push low-level optimizations of its own. Indeed, there’d already been murmurs to that effect before the GDC announcement had even been made. This would be bad for Microsoft – especially considering Valve’s push towards Linux gaming. With OpenGL running perfectly fine on Linux (since it’s multi-platform), it’s arguable that if Microsoft had simply sat on DX11 for too long, a low-level OpenGL / Linux route would have become somewhat appealing to many a gamer.
Another argument is that with the inclusion of a low level API and some level of parity between games development on both the Xbox One and the PC, it will only encourage developers to push ports on to the Xbox One console.
Of course, DX12 remains backwards compatible with DX11 and earlier games, which is important from the perspective of gamers. In theory it means a future where current (and older) games are playable on DX12 software and hardware, while DX12 titles run with the higher performance and improved feature set that the new API allows.
The PC market is going to be changing a lot with the introduction of DX12 – and I for one welcome it. In the next part we’ll be taking a look at what DX12’s introduction will mean for consoles, mobile devices and the gaming industry as a whole.
Sources:
PCPer (a few images)
Nvidia Blog
Microsoft