A few days ago, the internet was ablaze with excitement when Intel disclosed slides hinting at the direction it's taking with its GPU efforts. The slides showed a dedicated GPU based on Intel's Gen9 technology, built on a 14nm process.
The design concepts were shown off at the IEEE International Solid-State Circuits Conference, and the prototype packs a rather impressive 1.54 billion transistors. Discussion and speculation immediately followed that this would be the building blocks of Arctic Sound, the first GPU design spearheaded by former RTG head Raja Koduri, which would in turn form the building blocks of its successor, Jupiter Sound.
“While we intend to compete in graphics products in the future, this research paper is unrelated,” said Intel, quashing those theories in an instant. “Our goal with this research is to explore possible, future circuit techniques that may improve the power and performance of Intel products.”
In essence, what we saw at the Solid-State Circuits Conference (which primarily concerned power distribution and other such non-performance subjects) isn't an indicator of the final design.
Given the wording, it's likely that Intel is simply optimizing power distribution on its GPUs right now, with each execution unit able to adjust its voltage and clocks based on its own workload. In essence, Intel has applied a lot of lessons from its modern CPU designs, where per-core power management is already standard.
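To make that idea concrete, here's a minimal toy sketch of what per-unit dynamic voltage and frequency scaling (DVFS) logic can look like. To be clear, this reflects nothing about Intel's actual hardware or firmware: the operating points, thresholds, and names (`ExecutionUnit`, `step_dvfs`, `P_STATES`) are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical (voltage, frequency) operating points, lowest to highest.
# Real hardware exposes similar tables; these values are made up.
P_STATES = [(0.65, 300), (0.75, 600), (0.85, 900), (1.00, 1200)]  # (volts, MHz)

@dataclass
class ExecutionUnit:
    """Toy model of one GPU execution unit managing its own power state."""
    state: int = 0  # index into P_STATES

    def step_dvfs(self, utilization: float) -> tuple[float, int]:
        """Raise or lower this unit's operating point based on its own load.

        utilization: fraction of the last interval this unit was busy (0.0 to 1.0).
        Returns the new (voltage, frequency) pair.
        """
        if utilization > 0.90 and self.state < len(P_STATES) - 1:
            self.state += 1   # busy: step up voltage and frequency
        elif utilization < 0.30 and self.state > 0:
            self.state -= 1   # mostly idle: step down to save power
        return P_STATES[self.state]

# Each unit reacts only to its own workload, independent of its neighbours.
units = [ExecutionUnit() for _ in range(4)]
loads = [0.95, 0.10, 0.50, 0.99]
for eu, load in zip(units, loads):
    volts, mhz = eu.step_dvfs(load)
    print(f"load={load:.2f} -> {volts:.2f} V @ {mhz} MHz")
```

The point of the sketch is simply that each unit makes its own decision from its own utilization counter, rather than the whole chip sharing one voltage and clock.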
But as I pointed out in my recent video on this subject, if this Gen9-based design were tweaked and eventually used, Arctic Sound would effectively be the 'tester' for Intel: a vehicle for scaling up and getting used to producing something as complex as a GPU. We could then presume Jupiter Sound would be a drastically improved design.
Instead, this is likely telling us that Intel will simply take lessons from what it's fiddling around with internally, and that the eventual GPU design might be very different, perhaps more akin to how AMD or Nvidia produce their own GPUs.
The chip being coupled with an FPGA suggested we'd likely see this used primarily for servers and HPC, but of course, given it was an early research design, that didn't mean much; it could easily have been tweaked for gaming.
So what do Raja Koduri and Intel have planned for dedicated GPUs? Well, we'll likely not know for some time. But it'll be fascinating to see how the market reacts; I'm sure both Nvidia and AMD are watching extremely closely.