<davidlt[m]>
There are rumours that AMD is going after Apple (i.e. finally a mega APU).
<thefossguy>
“Those tiny 1 to 2 percent boosts in energy efficiency and/or performance add up”
<thefossguy>
Like an M2 Max/Ultra?
<davidlt[m]>
Probably not chiplets, just one piece of silicon, but who knows.
<davidlt[m]>
But yeah, basically a way larger iGPU and maybe something more.
<davidlt[m]>
Intel has its own ADM; the base layer can have a large pool of cache/memory (eDRAM-like?).
<davidlt[m]>
AMD plans to place 3D-stacked cache below the compute die too.
<thefossguy>
From what I know, Apple isn’t better just because of a fast OoO core. They heavily use ASICs for common “professional” tasks like hw enc/dec
<davidlt[m]>
But Intel has something that's not cache, but also not quite memory, it seems.
<davidlt[m]>
Yeah, but you can get that in any SoC too.
<thefossguy>
And the 1600-ish ProRes card for the Mac Pro was an experiment for this.
<davidlt[m]>
Nothing really stops a vendor from buying that IP and putting in multiple of those.
<davidlt[m]>
What surprises me is that Apple didn't do that much revolutionary stuff. They just did what other vendors were not willing to do.
<davidlt[m]>
Like Intel didn't want to go beyond 4 cores until AMD came up with Zen.
<davidlt[m]>
Just look at that memory bus width that Apple has on their chips.
<davidlt[m]>
We know that existing APUs from AMD are starving for memory bandwidth.
<davidlt[m]>
(but they aren't adding more memory channels)
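(For a rough sense of scale, assuming the commonly cited figures: a 512-bit LPDDR5-6400 interface like the M1/M2 Max's works out to about 400 GB/s, while a dual-channel 128-bit DDR5-5600 setup feeding a desktop APU is closer to ~90 GB/s.)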
<davidlt[m]>
Placing LPDDR5X that close to the compute chip was very smart too.
<davidlt[m]>
But again that's nothing new. Especially if you looked at HPC designs.
<davidlt[m]>
Yet again, look at Intel's Xeon Phi and MCDRAM.
<thefossguy>
Is that one out?
<davidlt[m]>
You have a two-tier memory structure: 16 GB of MCDRAM right next to the CPU, and DDR4 as the second, slower tier with higher capacity.
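As a sketch of how software actually targets that faster tier: on Knights Landing the MCDRAM can be used through memkind's hbwmalloc API, keeping bandwidth-hungry buffers in MCDRAM while everything else stays in DDR4. The buffer size and loop below are purely illustrative:

    /* Minimal sketch: put a bandwidth-critical buffer in MCDRAM via memkind.
       Build (assuming memkind is installed): gcc mcdram_sketch.c -lmemkind */
    #include <stdio.h>
    #include <stdlib.h>
    #include <hbwmalloc.h>

    int main(void)
    {
        /* hbw_check_available() returns 0 when high-bandwidth memory exists. */
        if (hbw_check_available() != 0)
            fprintf(stderr, "No MCDRAM here; hbw_malloc will fall back to DDR\n");

        size_t n = 1u << 20;                         /* illustrative size */
        double *hot  = hbw_malloc(n * sizeof *hot);  /* tier 1: MCDRAM */
        double *cold = malloc(n * sizeof *cold);     /* tier 2: DDR4 */
        if (!hot || !cold)
            return 1;

        for (size_t i = 0; i < n; i++)               /* stand-in for a bandwidth-bound kernel */
            hot[i] = 2.0 * (cold[i] = (double)i);

        printf("hot[n-1] = %f\n", hot[n - 1]);
        hbw_free(hot);
        free(cold);
        return 0;
    }

By default hbw_malloc prefers high-bandwidth memory but falls back to regular memory, which matches the "small fast tier, big slow tier" split above.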
<thefossguy>
davidlt[m]: I don’t understand the second sentence
<davidlt[m]>
Sorry. I am a bit tired after some hiking today :)
<davidlt[m]>
Apple just did what other vendors were not willing to do.
<davidlt[m]>
And Apple makes money from device margin + services, not the chip.
<thefossguy>
Ah gotcha
<davidlt[m]>
Intel and AMD are not fully integrated companies; they make money from the CPUs and software.
<thefossguy>
Hmm, so their stuff has to be more generic and profitable
<davidlt[m]>
Well, AMD is in a very good spot with chiplets and 3D stacking too.
<davidlt[m]>
Intel needs to cook way more designs compared to AMD.
<davidlt[m]>
I am really waiting for Meteor Lake to see how Intel is doing.
<thefossguy>
Yeah. I was shocked at AMD’s speedy recovery in desktop and server. The chiplet design can be used not only in server and desktop chips, but even in laptop and mobile, depending on the yields.
<davidlt[m]>
If Meteor Lake is not extremely impressive then Arrow Lake must be. Otherwise Intel is extremely behind.
<thefossguy>
Pretty smart thing to do imo
<davidlt[m]>
The high-end laptop chips are the same desktop chips, just running at their most efficient point.
<thefossguy>
davidlt[m]: I’m sleepy so can’t remap those names to generations rn but agreed nonetheless :D
<davidlt[m]>
There were tests on the AMD silicon side (Zen 3, I think) showing that you get very close to 100% of the performance with a way smaller power target.
<davidlt[m]>
It's just that on desktop people want more and more, so you basically let the chip go crazy :)
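(Rough back-of-the-envelope for why: dynamic power scales roughly with C·V²·f, and higher frequency needs higher voltage, so power grows roughly with the cube of the clock near the top of the curve. Dropping clocks ~10% with a matching voltage drop cuts dynamic power by roughly 1 − 0.9³ ≈ 27%, for only a high-single-digit performance loss on most workloads.)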
<thefossguy>
Chiplets also gave them an edge in moving supply based on demand in the post-pandemic market
<davidlt[m]>
Yeah, one design, and you can move between desktop/high-perf laptop/workstation/server.
<davidlt[m]>
Yet Intel needs to make multiple designs for those.
<thefossguy>
It’s only the desktop market that’s odd lol
<davidlt[m]>
Also, if AMD managed to ship A0 silicon with RDNA3, that's some impressive work.
<thefossguy>
Everywhere else, customers want an efficient chip
<davidlt[m]>
Yeah, except desktop gamer crowd :)
<thefossguy>
davidlt[m]: Also in contrast to Intel’s design team. Their SR (Sapphire Rapids) was C stepping IIRC
<davidlt[m]>
This is really impressive for A0 silicon, and just think about the time & money savings.
<davidlt[m]>
I think folks don't understand how many things on silicon don't work as expected. There are also experimental features that could be baked in.
<thefossguy>
I thought that was because of the time crunch. But I haven't heard about any crashes (at least none caused just by it being A0), so it's really a triumph.
<davidlt[m]>
Yeah, that means they managed to hit a high mark right on the first attempt.
<davidlt[m]>
Good enough to make some nice margins and still compete with Nvidia.
<davidlt[m]>
Just look at the current market conditions; there is no need to spend more money.
<thefossguy>
Cynical me: how many of those Xilinx engineers worked on Navi 31? 😆
<davidlt[m]>
It would be better to redirect saved money to RDNA4 and enjoy those higher-than-usual margins with A0.
<thefossguy>
Agreed
<davidlt[m]>
Damn, I still cannot believe how much those GPUs cost.
<thefossguy>
As in cheap or expensive?
<davidlt[m]>
Expensive.
<thefossguy>
Same boat then
<davidlt[m]>
3000 USD/EUR during COVID was crazy town.
<davidlt[m]>
1000+ USD/EUR is better, but still crazy.
<thefossguy>
I can just cry
<davidlt[m]>
And lower-end seems to be crap.
<davidlt[m]>
Not to mention that lower-end now costs more than high-end stuff used to.
<davidlt[m]>
The only GPU that I still have is Polaris :D
<davidlt[m]>
I do love iGPUs. I hope that mega-APU battle begins.
<thefossguy>
On one hand, I’m excited for the integration of a better GPU. But I have repair OCD. “What if it breaks? No way to salvage that CPU/GPU without the other component(s).”
<thefossguy>
It’s 22:40 so I’ll just sleep instead of being dumb dumb