I think it’s a great time to talk about AMD’s GPU plans, given that the lower-end Navi 14 cards will soon be on the market (AMD have announced them, but they’re not out as of the time of writing), and we’re starting to get a better understanding of the higher-end Navi 12 GPUs.
Let me know your opinion here – but I’d argue that while there was plenty of excitement around AMD’s Zen 2 architecture and its various product lines (particularly Ryzen 3000), AMD’s Navi architecture and its range of GPUs was even more anticipated.
The pricing of Nvidia’s GeForce RTX 20 series frustrated many people, who were looking to Team Red for an alternative. Fortunately, AMD was countering Nvidia well in the mainstream segment with Polaris price cuts and game bundles, which kept them competitive in the $200 USD region.
The Radeon RX 5700 and RX 5700 XT launched in July and forced Nvidia’s hand into launching the GeForce Super series, essentially improving the price/performance ratio of Nvidia’s GeForce 20 series versus the launch products. Back when I first leaked that a Turing refresh was coming, I had heard that Nvidia was planning to simply crank up the memory clock speeds for the GeForce 20 refresh, but in the end we also saw higher CUDA core counts and GPU clocks too… a potential sign that Nvidia felt a bit of extra bandwidth wouldn’t be enough to fend off Navi 10.
Let’s get back to Navi, though, with the two highest-performing graphics cards currently in the RDNA lineup: both the Radeon RX 5700 and RX 5700 XT use the Navi 10 die, a 251mm2 chip with 2560 RDNA shaders (40 compute units) in the fully enabled core found in the XT SKU, sitting on a 256-bit GDDR6 bus, with its 8GB of memory running at 14Gbps for 448GB/s of bandwidth.
Given AMD’s decision not to use a chiplet design for their GPUs (at least for now), starting with a die of more modest size makes a great deal of sense for a new architecture too (something a source backed up – we’ll get to that in just a moment).
Indeed, the company did much the same with Polaris. The RX 470 / 480 core measured just 232mm2 on the then-new 14nm LPP process – not far at all from the 251mm2 Navi 10 chip found in the RX 5700 series. With Polaris, we also saw a new node and architecture (moving on from the 28nm GCN parts of the R9 300 series and Fiji). Now, in the case of Polaris, we didn’t get the bigger die variant in the shape of the long-rumoured RX 490, despite plenty of evidence that the company had planned to release it.
Several of AMD’s AIB partners even listed the card at one point, and even AMD got in on the act, showing the card in a dropdown on their own website. We also saw the card listed in the Ashes of the Singularity benchmarking database (among others), so we know the silicon existed but just didn’t make it into mass production.
Supposedly, the card was held back and all the resources (including marketing) instead went into releasing and hyping the RX Vega 56 and 64, which is why we never saw a larger-die version of Polaris launch (at least, that’s what the whispers say).
AMD’s smaller-GPU-first approach was also backed up by one of my sources (we’ll refer to them as Source 1), who told me that the company had a lot of problems during Navi’s tape-out, and that because of the complexities of designing a new architecture while also shifting to the 7nm process, AMD decided they needed to start with a more modestly sized chip.
Actually, prior to Navi’s launch earlier this year, another source had told me Navi had been a “nightmare”, and indeed I had conversations with Jim over at AdoredTV where he told me his sources had said much the same.
And while this is getting slightly off-topic, it’s worth remembering that the Radeon VII uses a Vega 20 die that’s 331mm2, far larger than the RX 5700 XT’s 251mm2 Navi 10 die. But also remember that the Radeon VII had just 60 CUs (compared to the 64 of Vega 64), the same count as the Radeon Instinct MI 50 – albeit with the Radeon VII limited to ¼-rate double precision (3.5 TFLOPS) versus the MI 50’s 6.7 TFLOPS (½ its single-precision rate).
AMD’s decision to release the Radeon VII with only 60 CUs likely tells us that Vega 20 yields weren’t the best at that point. At a guess, the 331mm2 die of Vega 20 probably isn’t too far off what we’ll see for the Navi 12 core, given the obvious bump in shaders (we’ll get back to this).
What we know about Navi 14, based upon driver entries, is that it appears to use the same architecture as Navi 10, albeit smaller and less powerful (thanks to fewer SPs). The official info is that the chip measures 158mm2, contains 1408 shaders (22 CUs), and has a 128-bit memory bus – with either 4GB or 8GB of RAM running at 14Gbps (so a total of 224GB/s of bandwidth).
Navi 12, on the other hand, is definitely different from Navi 10. For one, the driver code we’ve seen so far for Navi 12 is slightly different from Navi 10’s (whereas Navi 14’s code is pretty much identical to Navi 10’s), which indicates the architecture itself has been tweaked and tuned.
Also, there’s the question of the performance targets of AMD’s Radeon RX 5800 and RX 5800 XT – likely the RTX 2080 Super or greater. AMD’s own marketing slides compare the RX 5700 series several times against Vega. For die size and efficiency, AMD specifically compares it against Vega 10 (the 251mm2 Navi 10 versus the 495mm2 of 14nm Vega… and that’s without counting Vega’s HBM2). In another slide, it touts the Radeon RX 5700 XT as “a worthy upgrade” and compares it against the RX Vega 56.
This makes sense – the MSRP of Vega 56 at launch was $400, and the RX 5700 and 5700 XT are $350 and $400 USD respectively (though there are currently some great deals online). We can logically assume that AMD is attempting to replace and better the performance of its entire current lineup, including the Radeon VII.
Let’s have some more context – AMD is keen to compare the RX 5500 series against their Polaris-based Radeon RX 480 and Nvidia’s GTX 1650 4GB. The RX 5500 achieves 92 FPS in Gears 5 versus the RX 480’s 79 FPS, for example. In a slide, they state absolute performance is about 12 percent better than the RX 480, which puts it right around RX 580/590 level (depending on model clock speeds, game engine, and workload).
That’s actually extremely impressive, given that the RX 480’s die was built on 14nm and measured 232mm2, with Polaris 10 squeezing in 5.7 billion transistors, compared to the 158mm2 Navi 14 and its 6.4 billion transistors.
So, if we use the performance figures of the RX 580/590 – they’re about 30 to 40 percent behind the Vega 56 (once again, depending upon workload and model).
And again – remember – the Radeon RX 5700 XT is between 30 and 40 percent faster than the Vega 56 (the card AMD is so keen to compare Navi 10 against… or Nvidia’s RTX 2070).
We can therefore say there’s a pretty good chance that the Radeon RX 5800 XT will be 30 to 40 percent faster than the RX 5700 XT (currently the fastest RDNA GPU), while the ‘vanilla’ RX 5800 will be about 20 to 30 percent faster (we’ll talk about that more in a moment).
Although some of you might rightly point out that there’s a bit of a flaw with this theory – if there are 40 CUs in the RX 5700 XT, but, let’s say, only 56 CUs in the RX 5800 XT (a 40 percent increase), why is the gap so much bigger between the RX 5500 XT and its 22 compute units and the 40 compute units of the RX 5700 XT?
Well, at a pure guess, I’d say that AMD will eventually launch a new series (the Radeon RX 5600) with, let’s say, between 28 and 32 CUs. This would slot in between the RX 5500 XT’s 22 compute units and the RX 5700’s 36 CUs. Obviously, this would be a great use of the Navi 10 die too, once again squeezing it for all it’s worth (chips with fewer than 36 functional compute units could be used for the RX 5600 series).
We can also get an idea of the difference between the RX 5800 and RX 5800 XT. If you look at AMD’s GPUs of the same tier, there’s usually about a 10 to 15 percent difference in compute units. For example, the RX 480 contained 2304 shaders versus the 2048 of the Radeon RX 470. Vega 64 had… well, 64 compute units. RX Vega 56 had… err, 56. The Navi 10 core in the RX 5700 XT has 2560 shaders, and the RX 5700 cuts that number to 2304.
| Model | RX 5500 | RX 5600 | RX 5700 | RX 5700 XT | RX 5800 | RX 5800 XT |
|---|---|---|---|---|---|---|
| GPU Core | Navi 14 | Navi 10? | Navi 10 | Navi 10 | Navi 12 | Navi 12 |
| Shader Count | 1408 | 1792 | 2304 | 2560 | 3072 – 3328 | 3328 – 3840 |
| TMUs | 88 | ? | 144 | 160 | Up to 208 | Up to 240 |
| ROPs | 32 | ? | 64 | 64 | ? | ? |
| Transistors | 6,400M | 10,300M? | 10,300M | 10,300M | Lots | Lots |
| Base Clock | ? | 1460 MHz? | 1465 MHz | 1605 MHz | 1500 MHz? | 1600 MHz? |
| Game Clock | 1717 MHz | 1630 MHz? | 1625 MHz | 1755 MHz | 1650 MHz? | 1730 MHz? |
| Boost Clock | 1845 MHz | 1750 MHz? | 1725 MHz | 1905 MHz | 1700 MHz? | 1830 MHz? |
| Memory Type | GDDR6 | GDDR6? | GDDR6 | GDDR6 | GDDR6 | GDDR6 |
| Memory Amount | 4GB / 8GB | 6GB / 8GB | 8GB | 8GB | 8GB? | 8GB / 16GB? |
| Bus Width | 128-bit | 192 – 256-bit | 256-bit | 256-bit | 256-bit | 256-bit |
| Memory Speed (effective) | 14Gbps | Up to 14Gbps | 14Gbps | 14Gbps | 16 – 18Gbps | 16 – 18Gbps |
| Memory Bandwidth | 224GB/s | 336 – 448GB/s | 448GB/s | 448GB/s | 512 – 576GB/s | 512 – 576GB/s |
This is obviously done to squeeze as much as possible out of a piece of silicon. Take the Vega series as a great example: AMD had the Vega 56, Vega 64, and Vega 64 Liquid. The liquid-cooled variant was identical to the Radeon RX Vega 64, albeit with higher clock speeds – 1247MHz base and 1546MHz boost for the air-cooled card, versus 1406MHz and 1677MHz for the Liquid.
AMD would then figure out which dies met the different criteria for clock speeds and the number of enabled CUs, and the lower frequencies also help separate the products a little more (although you can generally overclock the lower-tier cards anyway).
We can, therefore, assume that the RX 5800 series will follow the same pattern in terms of the difference between the XT and ‘vanilla’ models. And if there’s a 30 to 40 percent performance difference between the RX 5700 XT and RX 5800 XT, we can predict that the RX 5800 will likely sport between 3072 shaders (48 CUs) and 3328 shaders (52 CUs), while the RX 5800 XT will have between 3328 shaders (52 CUs) and 3840 shaders (60 CUs), likely with a clock speed difference between the XT and vanilla cards too.
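For what it’s worth, here’s the napkin math behind those shader counts – a minimal sketch (in Python) comparing raw shader ratios against the RX 5700 XT, with clocks held equal. The configurations are my guesses rather than confirmed specs, and real-world scaling is sub-linear, so treat the raw figures as upper bounds:

```python
# Raw shader-count uplift versus the RX 5700 XT's 2560 shaders, with
# clocks held equal. Speculative configurations - not confirmed specs.
RX_5700_XT_SHADERS = 2560

rumoured = {
    "RX 5800 (48 CU)":    3072,
    "RX 5800 (52 CU)":    3328,
    "RX 5800 XT (60 CU)": 3840,
}

for name, shaders in rumoured.items():
    uplift = shaders / RX_5700_XT_SHADERS - 1
    print(f"{name}: +{uplift:.0%} raw shaders")

# Output:
# RX 5800 (48 CU): +20% raw shaders
# RX 5800 (52 CU): +30% raw shaders
# RX 5800 XT (60 CU): +50% raw shaders
```

A 60 CU part at +50 percent raw shaders, losing a chunk to imperfect scaling, lands neatly in that 30 to 40 percent window.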
In the driver code, there is a reference which reads ‘NUM_SDP_INTERFACES’. AMD labels this as the “number of Synchronous Data Port Interfaces to Memory”, and it’s directly related to the width of the memory bus.
In the case of Navi 10, NUM_SDP_INTERFACES = 16, whereas for Navi 14, NUM_SDP_INTERFACES = 8. This ties in with the information that Navi 14 uses a 128-bit bus versus the 256-bit bus of Navi 10.
This means Navi 12 will use a 256-bit bus with GDDR6 then, right? Well, let’s investigate that for a moment. Don’t forget, we’ve already seen faster memory – Nvidia’s RTX 2080 Super upped memory clock frequencies from the 14Gbps of the original GeForce RTX 2080 to 15.5Gbps.
We know, however, that there’s faster memory out there, with SK Hynix and Samsung hitting mass production of 18Gbps GDDR6, and we even saw reports back in 2018 of Micron’s memory hitting 20Gbps.
So, quite easily, we could see 512GB/s of bandwidth if they used 16Gbps memory, or 576GB/s with 18Gbps, while maintaining a 256-bit bus. There are also memory modules of both 1GB and 2GB capacity, so in theory AMD could opt to outfit the RX 5800 series with either 8GB or 16GB of memory – although given they’re likely targeting the RTX 2080 Super, 8GB is ‘probably enough’.
Below is a calculation of the bandwidth at different memory speeds, assuming a 256-bit memory bus.
256-bit × 14Gbps = 448GB/s
256-bit × 16Gbps = 512GB/s
256-bit × 18Gbps = 576GB/s
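If you want to play with other combinations, the arithmetic is simply bus width × per-pin data rate ÷ 8 (to convert bits to bytes). A minimal Python sketch of it:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: every pin on the bus moves
    data_rate_gbps gigabits per second, and 8 bits make a byte."""
    return bus_width_bits * data_rate_gbps / 8

for rate in (14, 16, 18):
    print(f"256-bit @ {rate}Gbps = {peak_bandwidth_gbs(256, rate):.0f}GB/s")

# Output:
# 256-bit @ 14Gbps = 448GB/s
# 256-bit @ 16Gbps = 512GB/s
# 256-bit @ 18Gbps = 576GB/s
```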
So that’s the Navi 12 / RX 5800 memory mystery solved then, right? Well… no. There’s another option – HBM. You see, Vega 10 has a 2048-bit HBM2 interface, and its drivers also report NUM_SDP_INTERFACES = 16. So it’s quite possible that we could see some type of HBM2 configuration.
Vega10 (2048 bit HBM2) – pInfo->gfx9.numSdpInterfaces = 16;
Vega20 (4096 bit HBM2) – pInfo->gfx9.numSdpInterfaces = 32;
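To make the pattern explicit, here’s a small sketch of the relationship these four driver values suggest – each SDP interface appears to correspond to a 16-bit GDDR6 channel or a 128-bit HBM2 slice. To be clear, the per-interface widths are my inference from these data points, not something AMD documents:

```python
# Bus width contributed by each SDP interface, inferred from the known
# Navi/Vega driver values above (an assumption, not official AMD info).
BITS_PER_SDP = {"GDDR6": 16, "HBM2": 128}

def bus_width_bits(num_sdp_interfaces: int, mem_type: str) -> int:
    return num_sdp_interfaces * BITS_PER_SDP[mem_type]

print(bus_width_bits(16, "GDDR6"))  # Navi 10 -> 256
print(bus_width_bits(8,  "GDDR6"))  # Navi 14 -> 128
print(bus_width_bits(16, "HBM2"))   # Vega 10 -> 2048
print(bus_width_bits(32, "HBM2"))   # Vega 20 -> 4096
```

The catch, of course, is that NUM_SDP_INTERFACES = 16 on its own can’t tell us whether Navi 12 carries a 256-bit GDDR6 bus or a 2048-bit HBM2 stack.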
A number of you wrote in to tell me I missed this out when I put out the original news piece regarding the Linux drivers, and you were definitely right. The biggest reason I didn’t discuss it in that video is that I don’t think it’s very likely for an RX 5800-class device – mostly because of cost.
Don’t get me wrong, we certainly know that Navi supports HBM – David Wang from AMD actually confirmed it in a PCGamesN interview – but HBM is also more expensive than GDDR6, and so while it’s ‘technically’ possible, I suspect a Navi 12 die in an RX 5800 will be equipped with faster GDDR6 memory.
I had heard that AMD was planning to bring 7nm Vega to gamers prior to my actually unveiling it to the public. A source (I’ll refer to them as Source 2), who’d told me many of the details of Ryzen 3000 and Zen (plus some Comet Lake stuff), had also mentioned 7nm Vega coming to gamers offhand. Another source (Source 3) came along later and blew the Radeon VII details wide open, including providing me with renders of the cards. He also told me that the GPUs would most likely be a limited production run. Source 1 actually told me more about this later on in a separate conversation – and, crucially, much more Navi 20 info – but we’ll get to that.
Fast forward to the Radeon RX 5700 XT’s launch, and we see that the card isn’t far behind the Radeon VII. The extra memory and compute grunt of the Radeon VII certainly have their place for prosumers, but for gamers there’s no doubt about it: Navi 10 trounces it, making the Radeon VII very poor value for money.
I suspect AMD’s goal with the RX 5800 series will be to beat Nvidia’s RTX 2080 Super cards on performance while undercutting them on price.
So, what about future graphics architectures from Team Radeon?
There was a fascinating quote from AMD’s David Wang back in June of last year: “I think the important thing was that Scott [Herkelman, senior vice president and GM of AMD Radeon Gaming] mentioned about having some sort of consistency, delivering something to our customers so that you keep stimulating the excitement,” Wang said. “I think that’s how you make this business so exciting, so interesting. That’s how it makes the gamers so excited about [it], every year.”
He essentially pledged that we would see more consistent product launches from AMD, with tweaks to the architecture, and said that in the past “AMD had lost momentum”.
What we know officially is that AMD is planning to launch a new range of RDNA cards next year, confirmed in two separate slides. One is the ray tracing vision slide; the other is a roadmap showing the cards will be based on TSMC’s 7nm+ node.
If you’re a regular viewer, you might recall that a source (Source 4) back in March confirmed AMD was creating a card with hardware ray tracing, and that it would be Navi 2X. They also told me that the first-gen cards were Raja Koduri’s major project while at AMD, essentially fixing the ‘weaknesses’ within the GCN architecture and designing a card that would be much stronger in graphics prowess. This makes a great deal of sense with hindsight. These cards will power the next-generation consoles too, and given Nvidia’s push for graphical efficiency with GeForce… well, yeah.
https://www.youtube.com/watch?v=s6dO73K8M1c
In the ray tracing vision slide, AMD specifically lays out how they plan to evolve their handling of these visual effects. Under “Shader”, it states – “ProRender for Creators, Radeon Rays for Developers”. If you’re confused, this is a software-only solution that basically uses the standard shaders to do the work; there’s not much more to say about that.
The “hardware” part is more interesting and specifically references next-gen Navi: “select lighting effects for real time gaming”. It then states that this is for next-gen RDNA, which of course means Navi 2X.
We’ve discussed AMD’s hybrid ray tracing patent multiple times before, but as a quick reminder: the patent describes using a mixture of dedicated hardware (newly added for RDNA 2) and existing hardware to run the ray tracing calculations. Essentially, it leverages the texture processor to make the most of the hardware already there. It’s important to remember that a patent doesn’t necessarily indicate the technique will make it into production hardware.
Interestingly, we recently saw a leak which claimed that Sony and Microsoft would be adopting different methods of implementing ray tracing on their respective consoles, and this very same leak also said that “Navi V is late”. It’s hard to know the accuracy of this particular information, because while the tipster speaking to Gizmodo allegedly provided them with photos of a PS5 development kit prior to the public drawings we’ve seen, Microsoft has already refuted one element of the story (the 4K camera).
Perhaps I’m being a bit too analytical here, but I keep being drawn back to the ray tracing vision slide. I’m very much intrigued by “SELECT lighting effects for real-time gaming.” I do wonder if this implementation might be faster or more efficient than Nvidia’s RTX technology, but perhaps capable of less? We’ll have to wait and see.
Also, there have been a lot of rumors floating about regarding AMD enabling ray tracing on the first generation of Navi cards (the Radeon RX 5700, for example) via a driver update at the end of this year (2019). Thinking about this and peeking back at the ray tracing vision slide, the “Shader” step in the slide potentially backs this up.
Perhaps we’ll see a software implementation of ray tracing for the first-gen RDNA hardware, with the second generation receiving it in hardware. Given that there’s no hardware RT solution in first-generation Navi, this is plausible. Nvidia did enable ray tracing on Pascal, for example (albeit with less-than-stellar performance) – if you’d like to run Quake II RTX on your GTX 1080 Ti, you can at least see how it looks.
Now, you might recall Source 1 told me that a medium-sized die had been selected by AMD to maximize yields (no chiplets after all). But in the very same phone call, he also confirmed that Navi 1X had originally been intended to launch much earlier in 2019 (hence AMD opting to launch the Radeon VII, as a way to get some positive press for their GPU division in the meanwhile), but that AMD now feels they’ve got things ironed out.
He then went on to drop another bombshell – Navi 21 and Navi 23. These GPUs will launch next year, and he told me Navi 23 was the more interesting of the two (he largely sidelined discussion of Navi 21). He said that internally, AMD was calling Navi 23 an “Nvidia killer”, and that the engineering teams were extremely happy with how the GPU was shaping up.
A few takeaways here – AMD’s marketing teams, like Nvidia’s or Intel’s (or any other PR team), are typically pretty confident to the outside world, so such strong language wouldn’t be so interesting if a PR person had said it. But instead, it was the engineers working on the next-generation Navi cards, and engineers at AMD are typically much more cautious in their language.
He also reaffirmed that ray tracing would be a goal for the gaming cards (although I think this is pretty well established and accepted by now).
Apparently, the performance targets of Navi 23 were set by AMD CEO Lisa Su, as she was ‘frustrated’ by Nvidia’s GPU dominance in the high-end market.
If I had to guess, it’ll be these next-gen RDNA cards that ramp up the CU count significantly, likely breaking past the 64 CU ‘barrier’ and probably landing in the low 70s to 80 CUs for the RX 6000 series, especially given that AMD’s rivals will be hitting hard next year.
What we do know, thanks to Igor’s Lab, is that Ampere should launch in the first half of 2020, although we know precious few details of the architecture itself. We’re fairly certain it will use Samsung’s 7nm process (reports tell us Nvidia got a great deal on it), and given the constrained manufacturing and long lead times of TSMC’s 7nm, that’s probably for the best.
Given we know so little about Ampere, we don’t know for certain that it’s coming to gamers and the GeForce line of cards. Nvidia has certainly launched architectures in the past designed squarely for HPC (Volta), with gamers getting several Pascal variants (along with a slight Pascal refresh) to keep us sustained until Turing finally launched. It’s possible, therefore, that Ampere is purely for HPC, and gamers will get a different architecture or a Turing refresh on 7nm. I think Nvidia’s plan is pretty obvious though – ramp up performance and reduce the toll hardware ray tracing takes.
Intel is also a bit of a mystery, with Gen 12 graphics (Xe) launching next year – another reason I suspect AMD will want to have a great selection of products on hand. I’ll be doing a much deeper dive into Intel (since there’s been a lot of movement in the land of Xe since I last covered it), but the gist is that Raja Koduri recently hinted on Twitter that something big will be coming in June of next year. This is certainly earlier than the expected launch of the discrete cards (I’d heard it was later in the year), but it’s possible that Raja and his team are ahead of the curve on release, or that they’re instead going to make some kind of announcement of what the team is working on.
I’ve also been told that there are professional Navi cards on the way too, designed around workstation use (Source 4 again), although he wasn’t certain of the time frame for their release. It’s possible these could be based on either the first- or second-generation RDNA architecture.
Nvidia, Intel, and AMD have a rather interesting fourth contender next year – games consoles. I’ve gone deeper into this in my Xbox Scarlett E3 analysis, but if Microsoft and Sony launch consoles between $400 and $500, it’ll put a lot of pressure on the companies’ mid-range offerings. Without question, PC will continue to dominate graphically, but a games console which costs the same as (or less than) a high-end PC graphics card alone is very appealing for the first few years of the console’s life.
Another interesting factor – AMD is the hardware provider for not just Sony, but also Microsoft. Microsoft could end up being more important than Sony here for PC gamers, because of their influence over DirectX and DXR (DirectX Raytracing). I assume that if a console game uses ray tracing of some description on the new Xbox Scarlett, it shouldn’t be super difficult to get it up and running in some form on an AMD GPU in your PC.
As always, all we can do is watch and see how it all unfolds!