ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
sajattack[m] has quit [Quit: Idle timeout reached: 172800s]
Noah[m] has quit [Quit: Idle timeout reached: 172800s]
<thejpster[m]> Really weird swings in performance by changing C compiler. Also, I want to test rebuilding libcore with the bit mask instructions for RISC-V enabled. Do we have a good libcore benchmark suite?
crabbedhaloablut has quit [Ping timeout: 248 seconds]
crabbedhaloablut has joined #rust-embedded
cr1901 has quit [Read error: Connection reset by peer]
cr1901_ has joined #rust-embedded
cyrozap_ has joined #rust-embedded
vanner- has joined #rust-embedded
konkers[m]1 has joined #rust-embedded
ello_ has joined #rust-embedded
thejpster[m]1 has joined #rust-embedded
ello has quit [Quit: ZNC 1.9.1 - https://znc.in]
vanner has quit [Quit: ZNC 1.9.0 - https://znc.in]
cyrozap has quit [Remote host closed the connection]
konkers[m] has quit [Ping timeout: 246 seconds]
adamgreig[m] has quit [Ping timeout: 246 seconds]
thejpster[m] has quit [Ping timeout: 246 seconds]
adamgreig[m]1 has joined #rust-embedded
dne has quit [Remote host closed the connection]
dne has joined #rust-embedded
M9names[m] has quit [Quit: Idle timeout reached: 172800s]
<AlexandrosLiarok> anyone familiar with the cortex-m7 MPU knows what's wrong with my uncached region configuration?... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/FdYuakOJSPTXJaFQejkNpTtf>)
<VaradShinde[m]> is imxrt-boot-gen an alternative to the imxrt1060_evk_fcb ???
mkj[m] has quit [Quit: Idle timeout reached: 172800s]
BenoitLIETAER[m] has quit [Quit: Idle timeout reached: 172800s]
omniscient_[m] has quit [Quit: Idle timeout reached: 172800s]
siho[m] has quit [Quit: Idle timeout reached: 172800s]
jistr has quit [Remote host closed the connection]
adamhott[m] has quit [Quit: Idle timeout reached: 172800s]
jistr has joined #rust-embedded
<JamesMunns[m]> Latest podcast episode just went live, this one is a high level explainer of what DMA is and why we use it: https://sdr-podcast.com/episodes/dma/
<JamesMunns[m]> Lemme know what y'all think, and if you have any corrections :D
M9names[m] has joined #rust-embedded
<M9names[m]> You were surprisingly close on your Mac pro memory bandwidth thumb-suck
<JamesMunns[m]> thumb-suck?
<JamesMunns[m]> (I did look up the memory bandwidth of my specific laptop :D)
<M9names[m]> Estimate
<M9names[m]> the MacBook m3 pro has a listed memory bandwidth of 150GB/s. You said "maybe 400GB/s"
<JamesMunns[m]> M2 Max has 400GB/s
<M9names[m]> Ah cool. Well it's less impressive if you looked it up, but if you asked me to guess I probably would have gotten it wrong. Crazy numbers, really.
<JamesMunns[m]> I think I did envelope math for the RP2040, 32 bits/cycle, 150MHz or something?
<JamesMunns[m]> (at least that's how I got the "hundreds of megabytes per second" number)
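(For reference, that envelope math written out as a sketch: the RP2040's stock system clock is 125 MHz, in the same ballpark as the "150MHz or something" quoted above, and one 32-bit bus transfer per cycle gives the "hundreds of megabytes per second" figure.)

```rust
// Back-of-envelope peak DMA bandwidth: width-per-cycle times clock rate.
// Figures are the ones from the discussion above, not a measured number.
fn peak_bytes_per_sec(bits_per_cycle: u64, clock_hz: u64) -> u64 {
    (bits_per_cycle / 8) * clock_hz
}

fn main() {
    // 4 bytes/cycle * 125 MHz = 500_000_000 bytes/s, i.e. ~500 MB/s.
    let bw = peak_bytes_per_sec(32, 125_000_000);
    println!("~{} MB/s", bw / 1_000_000);
}
```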
<JamesMunns[m]> fwiw, we do make slides for the episodes, we just haven't found a good way to publish them yet. For this episode: https://docs.google.com/presentation/d/1kZa1yGqgmKkCwN1Xh-wuZcxEM-jURRVv3OvOg2NySFM/edit?usp=sharing
<M9names[m]> Like a gtx4090 is ~1000GB/s. Surprising to have that much memory b/w to a cpu
K900 has joined #rust-embedded
<K900> On-package memory does that to you
<M9names[m]> Yep. Short distance, high frequency, many channels.
<vollbrecht[m]> even if you leave the close connection, pcie5 x16 already gives you 128GB/s over a "longer" distance
xnor has quit [Quit: WeeChat 3.4]
<AlexandrosLiarok> Is anyone aware of any queue suitable for shared memory ?
<AlexandrosLiarok> I basically need a mailbox between cores
jakzale has quit [Remote host closed the connection]
jakzale has joined #rust-embedded
TomB[m] has joined #rust-embedded
<TomB[m]> If you can port this to Rust it’d work well I’m sure https://www.1024cores.net/home/lock-free-algorithms/queues/intrusive-mpsc-node-based-queue
xnor has joined #rust-embedded
mrkajetanp has joined #rust-embedded
LucasChiesa[m] has quit [Quit: Idle timeout reached: 172800s]
<thejpster[m]1> heapless has some queues. And there’s bbqueue if you want to send lots of things (like bytes, or samples)
<thejpster[m]1> Rust doesn’t care about shared memory - all static variables are considered shared and must have a type that implements Sync.
<JamesMunns[m]> I'm assuming by shared memory they mean "could be two programs on two cores"?
<JamesMunns[m]> like, I do plan for the next version of bbqueue to be "ipc safe"
<JamesMunns[m]> <TomB[m]> "If you can port this to Rust it..." <- btw I'm aware of a couple impls of this in Rust if you need one, including no-std ones
<JamesMunns[m]> I think std's mpsc (or maybe parking lots? or maybe they are the same these days?) all used that algo
<JamesMunns[m]> oh, maybe crossbeam and not parking lot.
<JamesMunns[m]> idk, a lot of people use a lot of vyukov's algos :D
<TomB[m]> They do, but I don’t recall any in rust supporting intrusive structs like the C version does, it’s really handy when you don’t want to allocate
<JamesMunns[m]> cordyceps is a whole library for intrusive data structures, std optional :)
<TomB[m]> Like intrusive linked list are super super useful on embedded devices
<TomB[m]> There we go, maybe that’s what I was missing from the mental picture of what existed
<JamesMunns[m]> it's also used by maitake-sync for std-optional, intrusive async primitives
<JamesMunns[m]> note that doubly-linked intrusive lists do still require mutexes (not that mpsc impl tho - that's only singly linked), cordyceps and maitake-sync are currently being rewritten to allow user-chosen mutexes (instead of spinlocks) similar to how embassy-sync allows you to choose
<TomB[m]> Looking at cordyceps, it’s what I’d use
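(For the curious, here is a minimal hand-rolled sketch of Vyukov's intrusive node-based MPSC queue linked above; this is an illustration, not code from cordyceps or any crate. Single consumer, callers own the nodes, no blocking, no cleanup on drop, and the stub node is leaked.)

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Intrusive node: the queue link lives inside the user's own struct.
pub struct Node<T> {
    next: AtomicPtr<Node<T>>,
    pub value: T,
}

impl<T> Node<T> {
    pub fn new(value: T) -> Self {
        Node { next: AtomicPtr::new(ptr::null_mut()), value }
    }
}

pub struct Queue<T> {
    head: AtomicPtr<Node<T>>, // producers swap new nodes in here
    tail: *mut Node<T>,       // consumer-only cursor
    stub: *mut Node<T>,       // dummy node so the list is never empty
}

impl<T: Default> Queue<T> {
    pub fn new() -> Self {
        let stub = Box::into_raw(Box::new(Node::new(T::default())));
        Queue { head: AtomicPtr::new(stub), tail: stub, stub }
    }

    // Safe to call from any number of producer threads: one swap, one store.
    pub fn push(&self, node: *mut Node<T>) {
        unsafe { (*node).next.store(ptr::null_mut(), Ordering::Relaxed) };
        let prev = self.head.swap(node, Ordering::AcqRel);
        unsafe { (*prev).next.store(node, Ordering::Release) };
    }

    // Single consumer only; returns ownership of the popped node.
    pub fn pop(&mut self) -> Option<*mut Node<T>> {
        unsafe {
            let mut tail = self.tail;
            let mut next = (*tail).next.load(Ordering::Acquire);
            if tail == self.stub {
                // Skip the stub; if nothing follows it, the queue is empty.
                if next.is_null() {
                    return None;
                }
                self.tail = next;
                tail = next;
                next = (*tail).next.load(Ordering::Acquire);
            }
            if !next.is_null() {
                self.tail = next;
                return Some(tail);
            }
            // tail is the last visible node; re-insert the stub so we can
            // hand the last real node back to the caller.
            if tail != self.head.load(Ordering::Acquire) {
                return None; // a producer is mid-push; try again later
            }
            self.push(self.stub);
            next = (*tail).next.load(Ordering::Acquire);
            if !next.is_null() {
                self.tail = next;
                return Some(tail);
            }
            None
        }
    }
}

fn main() {
    let mut q = Queue::new();
    let n = Box::into_raw(Box::new(Node::new(42u32)));
    q.push(n);
    let popped = q.pop().expect("one node queued");
    println!("popped {}", unsafe { (*popped).value });
    unsafe { drop(Box::from_raw(popped)) }; // caller reclaims the node
}
```

The intrusive part is the point TomB raises: the `next` link is embedded in the caller's node, so push/pop never allocate.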
<JamesMunns[m]> but like I said - be careful with "shared mem" - not all types are `repr(c)` and this matters if you are sharing mem between different targets, or different programs!
<JamesMunns[m]> this is fine in cases like the RP2040 IF they are both running code from the same program that was compiled at the same time
<JamesMunns[m]> less so if you have for example a 64-bit Cortex-A core and a 32-bit Cortex-M core on the same die that are sharing memory.
<TomB[m]> That’s not too terrible tbh
<TomB[m]> Though maybe at that point it’s time for OpenAMP
<JamesMunns[m]> Not sure what you mean by "not too terrible". It's definitely possible to write IPC safe queues! But I also am not aware of any "off the shelf" today.
mrkajetanp has quit [Ping timeout: 248 seconds]
<AlexandrosLiarok> Okay so I just basically copied the bare minimum enqueue/dequeue from heapless::spsc::Queue, added producer/consumer taken flags, added producer/consumer methods that only work once, made everything repr(C) and put inside a MaybeUninit/UnsafeCell wrapper with an init flag and primary_init/secondary_init methods for cell initialization on a custom ipc shared memory region.
<AlexandrosLiarok> And seems to work fine.
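(A rough sketch of that shape, for anyone following along: field and method names here are made up, not taken from heapless, and a real version would also want the init flag and the once-only producer/consumer handles described above. `UnsafeCell<T>` is documented to have the same in-memory representation as `T`, so the struct's layout stays C-compatible.)

```rust
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicU32, Ordering};

// Fixed-capacity SPSC ring with an explicit C layout, so two cores
// mapping the same memory agree on every field offset.
#[repr(C)]
pub struct IpcRing<const N: usize> {
    head: AtomicU32,           // written by the producer core only
    tail: AtomicU32,           // written by the consumer core only
    buf: UnsafeCell<[u32; N]>, // same layout as [u32; N]
}

// SAFETY: the SPSC contract -- exactly one producer calls enqueue and
// exactly one consumer calls dequeue, so head/tail never let the two
// sides touch the same slot concurrently.
unsafe impl<const N: usize> Sync for IpcRing<N> {}

impl<const N: usize> IpcRing<N> {
    pub const fn new() -> Self {
        IpcRing {
            head: AtomicU32::new(0),
            tail: AtomicU32::new(0),
            buf: UnsafeCell::new([0; N]),
        }
    }

    /// Producer side. One slot is kept empty, so capacity is N - 1.
    pub fn enqueue(&self, v: u32) -> Result<(), u32> {
        let head = self.head.load(Ordering::Relaxed);
        let next = (head + 1) % N as u32;
        if next == self.tail.load(Ordering::Acquire) {
            return Err(v); // full
        }
        unsafe { (*self.buf.get())[head as usize] = v };
        self.head.store(next, Ordering::Release); // publish the slot
        Ok(())
    }

    /// Consumer side.
    pub fn dequeue(&self) -> Option<u32> {
        let tail = self.tail.load(Ordering::Relaxed);
        if tail == self.head.load(Ordering::Acquire) {
            return None; // empty
        }
        let v = unsafe { (*self.buf.get())[tail as usize] };
        self.tail.store((tail + 1) % N as u32, Ordering::Release);
        Some(v)
    }
}

fn main() {
    let r: IpcRing<4> = IpcRing::new();
    r.enqueue(7).unwrap();
    println!("{:?}", r.dequeue()); // Some(7)
}
```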
AtleoS has joined #rust-embedded
<thejpster[m]1> <JamesMunns[m]> "Not sure what you mean by "not..." <- The Cortex-A on the Beagleboard X15 talks to the Cortex-M over virtio vrings. Those are kinda designed to have each end in a different address space. I certainly implemented one end. Doing the other is probably doing the same again (but in my case I was talking to a Linux kernel)
dirbaio[m] has quit [Quit: Idle timeout reached: 172800s]