dgilmore changed the topic of #fedora-riscv to: Fedora on RISC-V https://fedoraproject.org/wiki/Architectures/RISC-V || Logs: https://libera.irclog.whitequark.org/fedora-riscv || Alt Arch discussions are welcome in #fedora-alt-arches
iooi has quit [Read error: Connection reset by peer]
iooi has joined #fedora-riscv
tg has quit [Quit: tg]
<cwt[m]> do we have any expected price for Horse Creek yet?
<davidlt[m]> No news to my knowledge.
<cwt[m]> Will it have "V" extension?
<davidlt[m]> No
<davidlt[m]> It has the old P550 cores, which are several years old at this point.
<davidlt[m]> It's the 1st SiFive OoO core.
<davidlt[m]> It got replaced by P650 in 2021 IIRC.
<davidlt[m]> That also got a refresh (minor bump) to P670, with potential RVA22 compatibility.
<davidlt[m]> Vectors become mandatory (probably) in RVA23, which is to be defined later this year.
<davidlt[m]> So that basically means vectors should be broadly available in the next core IP generation, or the one after that.
<cwt[m]> I see.
<davidlt[m]> With SOPHGO Mango Pioneer (64-core SoC, OoO, DIMMs, etc.) Horse Creek is probably a dead horse here.
<davidlt[m]> And there is even a 2S (dual-socket) 128-core server platform too.
<davidlt[m]> Somehow I feel it will be pretty much impossible to beat SOPHGO's value proposition.
<davidlt[m]> The biggest worry here is trade wars.
<cwt[m]> davidlt[m]: Milk-V ?
<davidlt[m]> Yeah, but it's called Mango Pioneer in the device tree, OpenSBI, etc.
<davidlt[m]> Repositories are open to look at.
<davidlt[m]> It has 16 clusters, each with 4 cores (reminds me of SiFive).
<davidlt[m]> So far it looks like a large embedded SBC to me.
<davidlt[m]> Their ZSBL looks a bit strange to me.
<davidlt[m]> Pioneer also supports Kingston 3200 UDIMMs.
<cwt[m]> with 64 cores, should be fun to run alpaca.cpp on it :-P
<davidlt[m]> I also saw commits mentioning CCIX, which is interesting. I do think the world moved on with CXL.
<davidlt[m]> I will try to get one ASAP, but so far I haven't seen it being available.
<davidlt[m]> Based on the activity it's not ready for upstream, still tons of stuff being added/rebased.
<davidlt[m]> But things are open, which is epic.
<davidlt[m]> Their server platform is called "pisces".
<davidlt[m]> DTs are constantly changing, so they're not stable yet.
<davidlt[m]> They also have a bunch of hacks in the Linux tree that need to be resolved for upstreaming.
<davidlt[m]> Surprisingly they did add vector support :)
<davidlt[m]> Anyway, there is a patch to support the T-HEAD vector stuff on the kernel side. Once vector support lands properly in the kernel, this might actually land too.
<davidlt[m]> Interesting bit.
<davidlt[m]> SOPHGO and SiFive Horse Creek both seem to use SiliconMotion SM768.
<davidlt[m]> I am also surprised that they only have Sv39 on this SoC.
<davidlt[m]> Compared to existing server platforms this severely limits the memory capacity.
<davidlt[m]> Yeah, it's plenty of RAM for us, but it still seems kinda low for a 64-core system.
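(A quick back-of-the-envelope on Sv39, assuming the standard RISC-V paging scheme: virtual addresses are 39 bits, so 2^39 B = 512 GiB of virtual address space, conventionally split in half between user space and the kernel. Since the kernel linearly maps physical RAM into its half, that caps usable memory well below what an Sv48 machine, with 2^48 B = 256 TiB of virtual address space, can handle.)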
<davidlt[m]> If we get T-HEAD IFUNCs in the glibc this will be epic.
<davidlt[m]> Most of the extensions (the exception is vectors) are supported by the toolchain.
<davidlt[m]> Thus T-HEAD might "age like a fine wine".
<davidlt[m]> There is room to improve performance with time by adding optimized glibc functions and in-kernel stuff too.
<davidlt[m]> Vectors might be complicated. T-HEAD didn't even add an extension for it for some reason.
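(For readers unfamiliar with IFUNCs, here is a minimal sketch of the GNU IFUNC mechanism glibc uses for this kind of per-CPU function selection. Every name below is hypothetical, and the CPU probe is a stub standing in for a real hwcap/mvendorid check.)

    #include <stddef.h>

    /* Hypothetical probe; real code would inspect hwcaps or mvendorid. */
    static int cpu_is_thead(void) { return 0; }

    /* Portable fallback implementation. */
    static void *memcpy_generic(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }

    /* Stand-in for a vendor-optimized variant. */
    static void *memcpy_thead(void *dst, const void *src, size_t n)
    {
        return memcpy_generic(dst, src, n);
    }

    /* Resolver: runs once at dynamic-link time and picks the variant. */
    static void *(*resolve_memcpy(void))(void *, const void *, size_t)
    {
        return cpu_is_thead() ? memcpy_thead : memcpy_generic;
    }

    /* Calls to fast_memcpy() bind to whatever the resolver returned. */
    void *fast_memcpy(void *dst, const void *src, size_t n)
        __attribute__((ifunc("resolve_memcpy")));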
kwizart has quit [Ping timeout: 264 seconds]
<davidlt[m]> kernel 6.2.8 released with several riscv fixes.
jcajka has joined #fedora-riscv
kwizart has joined #fedora-riscv
davidlt has joined #fedora-riscv
<cwt[m]> I'm trying to find out why my 2 NVMe drives allocate different host memory buffer sizes. Then I just saw this https://lore.kernel.org/all/f94565db-f217-4a56-83c3-c6429807185c@t-8ch.de/t/#mc084149579e2c5e68469e34258fe13dc96cf5e03
<cwt[m]> btw, my 2 NVMe drives use different firmware; that may be the cause.
<cwt[m]> however, do you think the logic in the kernel for allocating host memory is right or wrong? https://elixir.bootlin.com/linux/latest/source/drivers/nvme/host/pci.c#L2047
<davidlt[m]> Not sure. Not my field, thus little knowledge here. Not sure what should be right/wrong.
<davidlt[m]> Hmm.. I wonder why it needs to work its way down from min_chunk.
<davidlt[m]> I don't think preferred will ever get used here (most likely).
<davidlt[m]> Based on the email thread, there is a huge difference between 32 MiB and ~200 MiB (preferred).
<cwt[m]> right, maybe the logic tries to avoid over-allocation in the first place.
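(For reference, the allocation path under discussion looks roughly like this; paraphrased and simplified from nvme_alloc_host_mem() in drivers/nvme/host/pci.c around v6.2, not verbatim.)

    static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
    {
        u64 min_chunk = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
        u64 hmminds = max_t(u32, dev->ctrl.hmminds * 4096, PAGE_SIZE * 2);
        u64 chunk_size;

        /* "start big and work our way down": halve the descriptor chunk
         * size until the buffer can be assembled, accepting the result
         * as soon as at least `min` bytes in total are covered. */
        for (chunk_size = min_chunk; chunk_size >= hmminds; chunk_size /= 2) {
            if (!__nvme_alloc_host_mem(dev, preferred, chunk_size)) {
                if (!min || dev->host_mem_size >= min)
                    return 0;
                nvme_free_host_mem(dev);
            }
        }
        return -ENOMEM;
    }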
<davidlt[m]> On low-memory systems that might be useful, but on today's systems ~200MiB doesn't really mean a lot.
<davidlt[m]> Firefox right now uses ~20GiB on my laptop :)
<cwt[m]> but the comment /* start big and work our way down */ makes it confusing.
<davidlt[m]> I am kinda interested in that 48x2 DDR5 (total 96GB) RAM availability :)
<davidlt[m]> Yeah, because it most likely will start from something smallish (not what NVMe would prefer).
<davidlt[m]> On Unmatched that's 128MiB (max_host_mem_size_mb)
<davidlt[m]> Same as on my laptop.
<davidlt[m]> But I cannot see "Host Memory Buffer" as that requires an NVMe with 2.0a.
<davidlt[m]> or maybe older, but all return not implemented or something.
<cwt[m]> I just tried changing min_t to max_t and rebuilding my kernel. It's just one file, so it should be fast to rebuild.
<davidlt[m]> You might also try loading the nvme module with max_host_mem_size_mb set to more than 128 (for fun).
<davidlt[m]> Do something like 256, 512.
<davidlt[m]> Just to see how far it would go.
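(A minimal way to run that experiment, with the values purely illustrative: reload the driver with modprobe nvme max_host_mem_size_mb=512, or boot with nvme.max_host_mem_size_mb=512 on the kernel command line, then watch dmesg for the driver's host memory buffer allocation message.)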
<cwt[m]> that should be fun
<davidlt[m]> > 5.27.1.10 Host Memory Buffer (Feature Identifier 0Dh), (Optional)
<davidlt[m]> You can query that with nvme get-feature if you want to check.
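(With nvme-cli that query would be something like the following, device path assumed: nvme get-feature /dev/nvme0 -f 0x0d -H, where 0x0d is the Host Memory Buffer feature identifier from the spec section quoted above.)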
<cwt[m]> interesting, in my case it doesn't change anything.
masami has joined #fedora-riscv
masami has quit [Quit: Leaving]
jcajka has quit [Ping timeout: 265 seconds]
jcajka has joined #fedora-riscv
zsun has joined #fedora-riscv
zsun has quit [Quit: Leaving.]
jcajka has quit [Quit: Leaving]
davidlt has quit [Ping timeout: 248 seconds]