dgilmore changed the topic of #fedora-riscv to: Fedora on RISC-V https://fedoraproject.org/wiki/Architectures/RISC-V || Logs: https://libera.irclog.whitequark.org/fedora-riscv || Alt Arch discussions are welcome in #fedora-alt-arches
<thefossguy> Were you talking about this the other day, davidlt?
<davidlt[m]> Yes
<davidlt[m]> rwmjones: nufive seems to have problems
<davidlt[m]> Might have lost some of the GCC builds :/
<rwmjones> one sec .. looking
Kevinsadminaccou has joined #fedora-riscv
<rwmjones> it's running 4 x lto processes
<rwmjones> it seems to be doing a GCC 13.1.1 build
<rwmjones> as it is still running I would let it go for a bit
<rwmjones> nothing interesting in dmesg, I think it's OK
<rwmjones> biab
zsun has joined #fedora-riscv
zsun has quit [Quit: Leaving.]
<thefossguy> I assume the answer is no, but asking nonetheless: does rv64* support addressing registers in 16-bit and/or 32-bit mode like Intel/ARM do?
<thefossguy> Looked at the manual, doesn't look like it
<dtometzki> Pratham Patel: which manual did you check?
<davidlt[m]> You mean old legacy 16-bit and 32-bit registers?
<davidlt[m]> There is no such legacy in riscv land.
<thefossguy> Yes. And I just AND-ed them XD. Felt quicker :shrug: 
<davidlt[m]> Funny thing, actually: there were news articles about Intel proposing x86S, which removes the old <64-bit legacy.
<davidlt[m]> Which would incl. those old 16-bit and 32-bit registers/accesses.
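For context, a minimal C sketch of the masking thefossguy describes above: RV64 general-purpose registers are always 64 bits wide, so a 16-bit or 32-bit value is produced by AND-ing (or via sign-/zero-extending loads and the *W instructions), not by addressing a sub-register as on x86. The helper names here are illustrative, not from the chat:

```c
#include <inttypes.h>
#include <stdio.h>

/* RV64 has no AL/AX/EAX-style sub-register views; to work with a
 * 16-bit or 32-bit quantity you mask the full 64-bit register.
 * (RV64I also has *W instructions such as ADDW that operate on the
 * low 32 bits and sign-extend the result, but no 16-bit forms.) */
static inline uint32_t low32(uint64_t x) { return (uint32_t)(x & 0xFFFFFFFFu); }
static inline uint16_t low16(uint64_t x) { return (uint16_t)(x & 0xFFFFu); }

int main(void) {
    uint64_t reg = 0x1122334455667788ULL;                  /* stand-in for a GPR value */
    printf("low 32 bits: 0x%08" PRIx32 "\n", low32(reg)); /* 0x55667788 */
    printf("low 16 bits: 0x%04" PRIx16 "\n", low16(reg)); /* 0x7788 */
    return 0;
}
```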
<thefossguy> ‘Twas me sir, on SIG/AltArch in Rocky :)
<davidlt[m]> Even newer ARM cores, the ARMv9-compatible ones, don't include AArch32 mode anymore.
<davidlt[m]> For a long time I haven't seen a need for this legacy stuff.
<davidlt[m]> Even quite tiny micro-controllers these days are 64-bit.
<davidlt[m]> I haven't executed anything 32-bit (on PC) for probably 10+ years now.
<davidlt[m]> I recall that on Xeon Phi (KNC probably) using the "legacy ISA" had a huge penalty (i.e. a pipeline flush).
<davidlt[m]> So even using SSE4.X was a big no.
<davidlt[m]> The problem with x86_64 is tons of legacy.
<thefossguy> I thought ARMv8-A and later were strictly 64-bit too? Though that might just be Jon Masters’ tweet that I’m misremembering.
<davidlt[m]> No
<davidlt[m]> It's ARMv9 that does that.
<davidlt[m]> ARMv8 CPUs can support AArch32 and AArch64 execution modes.
<thefossguy> So I was ADHD-ing reading his tweet then. Fair. :)  
<thefossguy> Ah, so I will have to account for that too.
<davidlt[m]> rwmjones: surprisingly the board reconnected after a long time
<davidlt[m]> and GCC build was not lost
<davidlt[m]> rwmjones: any progress on GHC and pandoc? Otherwise at some point I might as well schedule GHC with LLVM backend full rebuild.
<davidlt[m]> We probably should do that for F39 mass rebuild.
<thefossguy> A bit unrelated. I wonder if x86S was a push by Pat to beat Apple and AMD.
<davidlt[m]> Nah, that wouldn't be enough.
<davidlt[m]> What makes Apple win is TSMC and their design team.
<davidlt[m]> And money :)
<davidlt[m]> IIRC these are very wide designs and take a lot of silicon space.
<thefossguy> Well, I’ve heard people say Intel’s design team is the bottleneck compared to their foundry team. So that might be it too.    
<davidlt[m]> Apple co-develops process with TSMC and gets exclusive access to be the 1st customer to a new process.
<davidlt[m]> Again, it takes 3-5 years to produce a response.
<thefossguy> Yeah. Just a thought :) 
<davidlt[m]> Intel cannot really do anything crazy right now to combat Apple or AMD.
<davidlt[m]> Basically Meteor Lake/Arrow Lake is the initial response.
<davidlt[m]> And we will know more after we see the RibbonFET and backside power delivery stuff.
<davidlt[m]> I think that's with Lunar Lake, but there will be a demo at Computex.
<thefossguy> I wondered primarily because I saw this analysis: https://youtube.com/watch?v=llOo10p1ijM  
<davidlt[m]> There are rumours that AMD is going after Apple (i.e. finally a mega APU).
<thefossguy> “Those tiny 1 to 2 percent boosts in energy efficiency and/or performance add up”
<thefossguy> Like an M2 Max/Ultra?
<davidlt[m]> Probably not chiplets, but just one piece of silicon, but who knows.
<davidlt[m]> But yeah, basically a way larger iGPU and maybe something more.
<davidlt[m]> Intel has its own ADM; the base layer can have a large pool of cache/memory (eDRAM-like?).
<davidlt[m]> AMD plans to place 3D-stacked cache below the compute die too.
<thefossguy> From what I know, Apple isn’t better just because of a fast OoO core. They heavily use ASICs for common “professional” tasks like hw enc/dec
<davidlt[m]> But Intel has something that's not cache, but also not memory, it seems.
<davidlt[m]> Yeah, but you can get that in any SoC too.
<thefossguy> And the 1600-ish ProRes card for the Mac Pro was an experiment for this.
<davidlt[m]> Nothing really stops a vendor from buying that IP and putting in multiple of those.
<davidlt[m]> What surprises me is that Apple didn't do that much revolutionary stuff. They just did what other vendors were not willing to do.
<davidlt[m]> Like Intel didn't want to go beyond 4-core until AMD came up with Zen.
<davidlt[m]> Just look at that memory bus width that Apple has on their chips.
<davidlt[m]> We know that existing APUs from AMD are starving for memory bandwidth.
<davidlt[m]> (but they aren't adding more memory channels)
<davidlt[m]> Placing LPDDR5X that close to compute chip was very smart too.
<davidlt[m]> But again that's nothing new. Especially if you looked at HPC designs.
<davidlt[m]> Yet again, look at Intel's Xeon Phi and MCDRAM.
<thefossguy> Is that one out?
<davidlt[m]> You have a two-tier memory structure with 16GB of MCDRAM right next to the CPU, and DDR4 as the slower but higher-capacity 2nd tier.
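For reference, software could target that fast tier explicitly. A minimal sketch using the memkind library's hbwmalloc API, which exposes flat-mode MCDRAM on KNL as allocatable high-bandwidth memory (assumes memkind is installed; link with -lmemkind; the array size is arbitrary):

```c
#include <stdio.h>
#include <hbwmalloc.h>   /* memkind's high-bandwidth-memory API */

int main(void) {
    /* Returns 0 when a high-bandwidth tier (e.g. KNL MCDRAM exposed
     * as a NUMA node in flat mode) is available on this system. */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "no high-bandwidth memory tier found\n");
        return 1;
    }
    size_t n = 1u << 20;
    double *a = hbw_malloc(n * sizeof *a);   /* allocate in the fast tier */
    if (!a)
        return 1;
    for (size_t i = 0; i < n; i++)
        a[i] = (double)i;                    /* bandwidth-heavy traffic now hits MCDRAM */
    printf("endpoints: %g %g\n", a[0], a[n - 1]);
    hbw_free(a);                             /* matching free for hbw_malloc */
    return 0;
}
```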
<thefossguy> davidlt[m]: I don’t understand the second sentence
<davidlt[m]> Sorry. I am bit tired after some hiking today :)
<davidlt[m]> Apple just did what other vendors were not willing to do.
<davidlt[m]> And Apple makes money from device margin + services, not the chip.
<thefossguy> Ah gotcha  
<davidlt[m]> Intel and AMD are not fully integrated companies; they make money from the CPUs and software.
<thefossguy> Hmm, so their stuff has to be more generic and profitable 
<davidlt[m]> Well, AMD is in a very good spot with chiplets and 3D stacking too.
<davidlt[m]> Intel needs to cook way more designs compared to AMD.
<davidlt[m]> I am really waiting for Meteor Lake to see how Intel is doing.
<thefossguy> Yeah. I was shocked at AMD’s speedy recovery in desktop and server. The chiplet design can be used not only in server and desktop chips, but even in laptop and mobile, depending on the yields.
<davidlt[m]> If Meteor Lake is not extremely impressive then Arrow Lake must be. Otherwise Intel is extremely behind.
<thefossguy> Pretty smart thing to do imo
<davidlt[m]> The high-end laptop chips are the same desktop chips, just running at their most efficient mark.
<thefossguy> davidlt[m]: I’m sleepy so can’t remap those names to generations rn but agreed nonetheless :D
<davidlt[m]> There were tests on the AMD silicon side (Zen 3, I think) showing that you get very close to 100% of the performance with a way smaller power target.
<davidlt[m]> It's just that on desktop people want more and more, thus you basically let the chip go crazy :)
<thefossguy> Chiplets also gave them an edge in moving supply based on demand in the post-pandemic market
<davidlt[m]> Yeah, one design, and you can move between desktop/high-perf laptop/workstation/server.
<davidlt[m]> Yet Intel needs to make multiple designs for those.
<thefossguy> It’s only the desktop market that’s odd lol
<davidlt[m]> Also, if AMD managed to ship A0 silicon with RDNA3, that's some impressive work.
<thefossguy> Everywhere else, customers want an efficient chip
<davidlt[m]> Yeah, except desktop gamer crowd :)
<thefossguy> davidlt[m]: Also in contrast to Intel’s design team. Their SR parts were a C stepping, IIRC
<davidlt[m]> This is really impressive for A0 silicon, and just think about time & money savings.
<davidlt[m]> I think folks don't understand how many things on silicon don't work as expected. There are also experimental features that could be baked in.
<thefossguy> I thought that was because of the time crunch. But I haven’t heard about any crashes (just from it being A0), so it’s really a triumph.
<davidlt[m]> Yeah, that means they managed to hit a high mark right on the 1st attempt.
<davidlt[m]> Good enough to make some nice margins and still compete with Nvidia.
<davidlt[m]> Just look at the current market conditions; there is no need to spend more money.
<thefossguy> Cynical me: how many of those Xilinx engineers worked on Navi 31? 😆
<davidlt[m]> It would be better to redirect saved money to RDNA4 and enjoy those higher-than-usual margins with A0.
<thefossguy> Agreed
<davidlt[m]> Damn, I still cannot believe how much those GPUs cost.
<thefossguy> As in cheap or expensive?
<davidlt[m]> Expensive.
<thefossguy> Same boat then 
<davidlt[m]> 3000 USD/EUR during COVID was crazy town.
<davidlt[m]> 1000+ USD/EUR better, but still crazy.
<thefossguy> I can just cry
<davidlt[m]> And lower-end seems to be crap.
<davidlt[m]> Not to mention that lower-end now costs more than high-end stuff used to.
<davidlt[m]> The only GPU that I still have is Polaris :D
<davidlt[m]> I do love iGPUs. I hope that mega-APU battle begins.
<thefossguy> On one hand, I’m excited for the integration of a better GPU. But I have repair OCD. “What if it breaks? No way to salvage that CPU/GPU without the other component(s).”
<thefossguy> It’s 22:40 so I’ll just sleep instead of being dumb dumb
<davidlt[m]> :)