I know there must be some reason, but I can't put my finger on why AMD are continuing to use Vega for their Mobile/APU range. Vega seems decent within the power range, but I would have assumed Navi was more than mature enough by now; potentially it's just economics, and any performance upside from a Navi GPU isn't worth the extra cost.
A few helpful excerpts from different parts of the article (the rest of each paragraph is also useful):
> Users may be upset that the new processor range only features Vega 8 graphics, the same as last year’s design, however part of the silicon re-use comes in here enabling AMD to come to market in a timely manner.
> As mentioned on the previous page, one of the criticisms leveled at this new generation of processors is that we again get Vega 8 integrated graphics, rather than something RDNA based. The main reason for this is AMD’s re-use of design in order to enable a faster time-to-market with Zen 3. The previous generation Renoir design with Zen 2 and Vega 8 was built in conjunction with Cezanne to the point that the first samples of Cezanne were back from the fab only two months after Renoir was launched.
> With AMD’s recent supply issues as well, we’re of the opinion that AMD has been stockpiling these Ryzen 5000 Mobile processors in order to enable a strong Q1 and Q2 launch of the platform with stock for all OEMs.
One reason for deprioritizing it might be that they realized iGPU performance isn't as relevant as they had hoped. AMD spent years betting heavily on concepts like the APU, HSA, tight coupling with coherent virtual memory access from the GPU and CPU side, etc.
They tried a "build it and they will come" approach, but the software didn't come, and Intel ate their lunch with low-effort iGPUs. Advances in GPU programming slowed industry-wide, especially on the software, developer-experience and programming-language side, and the GPU world remained far from CPU programming productivity. (And you can still get two orders of magnitude of parallelism staying on the CPU, from SIMD, multicore and SMT.)
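For a rough sense of that "two orders of magnitude" claim, here's some back-of-envelope arithmetic. The part numbers are illustrative assumptions of mine (a mainstream 8-core AVX2 chip), not from the thread:

```python
# Rough upper bound on fp32 data parallelism on a mainstream CPU,
# relative to naive scalar code. Illustrative numbers only.
cores = 8            # assumed 8-core mainstream part
simd_lanes = 8       # 256-bit AVX2 holds 8 x 32-bit floats
fma_pipes = 2        # typical: two FMA execution ports per core

speedup = cores * simd_lanes * fma_pipes
print(speedup)  # -> 128, i.e. roughly two orders of magnitude
```

SMT adds latency-hiding on top of this rather than extra raw FLOPs, which is why it's left out of the multiplication.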
They learned their lesson, and this time around came back with a product that can beat Intel at its own game.
Memory bandwidth, as spqr0a1 said, is probably a good reason, but the design pipeline also probably made it hard to do Zen 2 with Navi. Both of those came out in 2019; if Navi had been delayed, it would have delayed Renoir as well.
Now for the Zen 3 APUs, AMD is emphasizing pin-for-pin compatibility; possibly switching the GPU would have required different pins, or they just wanted to make sure they got Zen 3 APUs out quickly to keep making inroads into the laptop market.
There's a leaked AMD roadmap floating around that has a Van Gogh chip with Zen 2 + Navi coming out sometime this year. But that roadmap didn't show the Lucienne Zen 2 + Vega chips AMD recently announced (Ryzen 5{3,5,7}00U), to ensure model numbers stay confusing.
The article touches on this. AnandTech assumes it's because it was faster to go to market with Vega, something about the modular design of the chip. They also mention that AMD would likely upgrade the GPU in a next version without upgrading the CPU; maybe a minor MHz bump on the CPU together with a new GPU.
Memory bandwidth is but one aspect of GPU performance, and in any case it is substantially upgraded in this APU: it supports LPDDR4X-4267 (68.2 GB/s), up from DDR4-3200's 51.2 GB/s.
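Those figures fall straight out of transfer rate times bus width. A quick sketch, assuming the usual dual-channel (128-bit total) laptop memory configuration:

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width (bytes).
# Assumes a dual-channel, 128-bit-wide laptop memory setup.

def peak_bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    """Peak bandwidth in GB/s (decimal GB, as memory vendors quote it)."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(peak_bandwidth_gbs(4267, 128))  # LPDDR4X-4267 -> ~68.3 GB/s
print(peak_bandwidth_gbs(3200, 128))  # DDR4-3200    -> 51.2 GB/s
```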
But the Vega 8 in this is definitely not the peak you can squeeze out of DDR4; if it were, we wouldn't have had things like the Vega 11 in the 3400G. Also, RDNA2's "Infinity Cache" helps reduce memory bandwidth requirements, which would also be a relevant upgrade.
This was just a time-to-market strategy to reduce risk. Not an optimal engineering decision. This let them avoid trying to make a power-optimized version of RDNA2 at the same time they were trying to release any version of RDNA2.
Navi improves delta compression for memory, which improves effective memory bandwidth for a given "raw" bandwidth. So switching to Navi would have alleviated the memory bandwidth bottleneck to some degree and improved performance.
It is a puzzling decision, and perhaps the "wasn't ready when they taped out" explanation is the correct one. Cezanne seems to have been ready for a while now, just waiting for fab capacity, meaning it would have had to be taped out in parallel with the RDNA2 architecture chips. So a design flaw in RDNA2 might have blown the Cezanne launch.
They could have used RDNA1 though and it still would have improved compression over GCN/Vega. Or ported over just the delta compression. I guess maybe they just wanted to port Zen3 over straight, use the memory controller they'd already proven with Renoir, and not take any risks?
It's definitely puzzling and I haven't heard an explanation I would consider 100% satisfactory.
Citation needed. Doesn't seem to be borne out by benchmarks comparing chips with varying iGPU resources and same memory setup.
Sure, they're going to be memory-bound some of the time, and it depends on what kind of bandwidth/compute balance the GPU code is tuned for... but we didn't stop putting more cores and wider SIMD on chips just because the then-current software didn't fully utilize them.
Also, the article clearly mentions that time to market using Vega was much shorter than reinventing the wheel, which I'm guessing was important since AMD is still small compared to Intel in the mobile space and has struggled in this segment in the past.
Yes, though I'm pretty sure the same rationale was used for the 4000 series APUs as well.
I have a feeling that, for the particular market these chips are aimed at, improving the iGPU by even something like 50% is not that meaningful: it still wouldn't be competitive with a discrete GPU, and will nearly always be paired with one if gaming is an option on the particular laptop.
And so I suppose the rationale might be that it's just easier to stick with Vega, and the current power draw for the iGPU is acceptable for the required graphics horsepower. Maybe one day we'll see an APU powerful enough to remove the need for a discrete GPU in laptops, but not for a while yet :).
Unlike DDR, GDDR is optimised for high bandwidth (over latency), which is why dedicated GPUs can be so much faster; HBM even more so. DDR5 does 6.4 Gbps per pin, whereas HBM2E does around 2.5 Tbps per stack thanks to its 1024-bit interface.
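The gap is mostly bus width, not per-pin speed. A rough sketch using ballpark published figures (DDR5 channels are 32 bits wide; an HBM2E stack exposes 1024 bits):

```python
# Why HBM-style memory wins on bandwidth: the interface is far wider,
# even though the per-pin signaling rate is actually lower than DDR5's.
# Per-pin rates and widths are rough, published ballpark figures.

def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a given per-pin rate and bus width."""
    return gbps_per_pin * bus_bits / 8  # Gbit/s across the bus -> GB/s

# DDR5-6400: fast pins (6.4 Gbps), but a channel is only 32 bits wide.
print(bandwidth_gbs(6.4, 32))    # -> 25.6 GB/s per channel
# HBM2E: slower pins (~2.4 Gbps), but 1024 bits per stack.
print(bandwidth_gbs(2.4, 1024))  # -> ~307 GB/s per stack
```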
Console APUs don't need to be particularly power-optimized. If the RDNA 2 GPU in the PS5 can't idle below 10w, nobody will care. If the RDNA 2 GPU in your laptop can't idle below 10w, that's unshippable. A decent laptop doesn't draw 10w total at idle including screen & all that.
And I'm sure at some point there will be an RDNA-something APU from AMD too, but there also aren't any existing power-optimized RDNA cores sitting around to point at and say "but why didn't you use that instead?"