ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
IlPalazzo-ojiisa has quit [Quit: Leaving.]
crabbedhaloablut has quit [Ping timeout: 246 seconds]
crabbedhaloablut has joined #rust-embedded
pbsds has quit [Quit: The Lounge - https://thelounge.chat]
dc740 has joined #rust-embedded
<re_irc> <@sourcebox:matrix.org> So now I'm having a bit of fun with dual core Cortex-A7. Next thing: how to exchange data between the cores? Some kind of IPC message queue?
pbsds has joined #rust-embedded
<re_irc> <@jamesmunns:beeper.com> : That's what I'm planning for mnemos
<re_irc> <@jamesmunns:beeper.com> I have an IPC safe version of bbqueue, kernel messages get serialized to the bbqueue
<re_irc> <@jamesmunns:beeper.com> (mnemos is very far from being multi core ready, but I have some ideas/sketches for it)
<re_irc> <@jamesmunns:beeper.com> https://github.com/tosc-rs/mnemos/tree/main/source/abi/src/bbqueue_ipc if you're interested, I can pull it into a crate if you'd like
<re_irc> <@sourcebox:matrix.org> I have something I wrote in C some time ago that uses shared memory. It can even be used between different architectures. But I'm not keen to port it.
<re_irc> <@jamesmunns:beeper.com> This works if you want a byte queue :)
<re_irc> <@jamesmunns:beeper.com> Just requires shared mem and atomics. I'm not certain the async part is IPC safe, but we have our own repr c vtables, so maybe?
<re_irc> <@jamesmunns:beeper.com> Eliza or Dirbaio will probably tell me why I am wrong, but it could probably be implemented somehow even if it doesn't work today :)
<re_irc> <@sourcebox:matrix.org> Ok, what you're doing is more advanced than what I need now.
<re_irc> <@sourcebox:matrix.org> Basically, I need a ring buffer for messages. No DMA involved. Receiver can just poll for new messages.
<re_irc> <@jamesmunns:beeper.com> bbqueue _can_ be used with DMA, but you don't need to. It's basically just a bipbuffer
<re_irc> <@jamesmunns:beeper.com> It's useful for sending variable length [u8] chunks, or a stream of bytes, basically. But if it's not what you need that's okay too :)
<re_irc> <@sourcebox:matrix.org> Ideally, I want to have a generic T.
<re_irc> <@sourcebox:matrix.org> The bipbuffer has the advantage of not doing any copies.
<re_irc> <@sourcebox:matrix.org> My question is if I just can take a regular ring buffer and make it multicore safe with a critical section or something else.
<re_irc> <@sourcebox:matrix.org> As both cores are started separately, there has to be a static ringbuffer for each queue.
<re_irc> <@sourcebox:matrix.org> The docs from the cortex-m say "On multi-core systems, a CriticalSection is not sufficient to ensure exclusive access." Is that really true?
<re_irc> <@jamesmunns:beeper.com> https://github.com/tosc-rs/mnemos/blob/main/source/spitebuf/src/lib.rs is potentially interesting, it's basically heapless::mpsc but async. Not certain how ipc safe it is
<re_irc> <@jamesmunns:beeper.com> : yes
<re_irc> <@jamesmunns:beeper.com> because you have two "threads" running, core0 + core1, even if all interrupts are disabled
<re_irc> <@jamesmunns:beeper.com> you need something like rp2040's spinlock mutex
<re_irc> <@sourcebox:matrix.org> But my implementation uses a spinlock.
<re_irc> <@jamesmunns:beeper.com> Yeah, if your critical section impl is multicore safe, then you should be fine.
<re_irc> <@jamesmunns:beeper.com> that line is mostly saying that the general cortex-m "disable interrupts, receive mutex access" isn't sound with multiple cores.
<re_irc> <@sourcebox:matrix.org> So it depends on the implementation of the CS, you can't say in general that a CS is not sufficient.
<re_irc> <@jamesmunns:beeper.com> ¯\_(ツ)_/¯ probably just from the old docs, I'm sure they'd love a PR to fix it.
<re_irc> <@jamesmunns:beeper.com> critical section (in cortex-m) used to JUST mean "disable all interrupts"
<re_irc> <@jamesmunns:beeper.com> but yeah, the critical section crate is much fancier and more flexible now :)
<re_irc> <@jamesmunns:beeper.com> : Where do these docs come from? The current cortex-m only talks about the *single core* critical section impl, and says:
<re_irc> > It is unsound to enable it on multi-core targets or for code running in unprivileged mode, and may cause functional problems in systems where some interrupts must not be disabled or critical sections are managed as part of an RTOS. In these cases, you should use a target-specific implementation instead, typically provided by a HAL or RTOS crate.
<re_irc> <@dirbaio:matrix.org> that applies to the "old" "interrupt::free"
<re_irc> <@sourcebox:matrix.org> Docs from here: https://docs.rs/cortex-m/0.7.7/cortex_m/interrupt/struct.Mutex.html
<re_irc> <@dirbaio:matrix.org> in cortex-m 0.7 "interrupt::free()" gives you a bare_metal::CriticalSection token. That's unsound in multicore systems.
<re_irc> <@dirbaio:matrix.org> and "Mutex" uses "CriticalSection" to let you access the interior.
<re_irc> <@jamesmunns:beeper.com> Ah yeah, that's the "old" way of doing things, rather than https://docs.rs/critical-section/latest/critical_section/
<re_irc> <@dirbaio:matrix.org> so it's not "Mutex"'s fault, it's "interrupt::free()"'s fault
<re_irc> <@dirbaio:matrix.org> the new way is the "critical-section" crate
<re_irc> <@dirbaio:matrix.org> where it's your responsibility to enable the right implementation for your target
<re_irc> <@sourcebox:matrix.org> : Yeah, I'm already using that.
<re_irc> <@dirbaio:matrix.org> cortex-m provides an implementation for single core chips, you enable it with the "critical-section-single-core" Cargo feature
<re_irc> <@jamesmunns:beeper.com> : Cool, if you do something multicore safe, then using your critical section will be suitable :)
<re_irc> <@dirbaio:matrix.org> so you can still cause unsoundness if you enable it on multicore systems
<re_irc> <@dirbaio:matrix.org> but now the blame is on you, the user, for incorrectly enabling that feature
<re_irc> <@dirbaio:matrix.org> vs before the blame was on "interrupt::free()", which always gives you a CS token, even in multicore systems, with no way to opt out
<re_irc> <@dirbaio:matrix.org> and in cortex-m 0.8 "interrupt::free" will no longer return a CS token
<re_irc> <@sourcebox:matrix.org> So what I need is a mutex that uses the critical-section crate.
<re_irc> <@dirbaio:matrix.org> "critical_section::Mutex"
<re_irc> <@sourcebox:matrix.org> Ok ;-)
<re_irc> <@dirbaio:matrix.org> that can be used with "cortex-m" with the "critical-section-single-core" feature in single core chips
<re_irc> <@dirbaio:matrix.org> or for multicore chips the arch support crate or the hal should provide a multicore-safe critical section impl
<re_irc> <@dirbaio:matrix.org> for example "embassy-rp", "rp2040-hal" provide one for the rp2040 multicore
<re_irc> <@dirbaio:matrix.org> in GHA how the hell do you add a feature combination that only applies to rust nightly?
<re_irc> <@dirbaio:matrix.org> oh ugh the issue is it's pulling "embedded-io-async" into the workspace even if it's not explicitly listed..?
<re_irc> <@dirbaio:matrix.org> because "embedded-io-adapters" has "embedded-io-async = { version = "0.5", path = "../embedded-io-async", optional = true }"
<re_irc> <@dirbaio:matrix.org> so even if "embedded-io-async" is not enabled
<re_irc> <@dirbaio:matrix.org> just mentioning by path
<re_irc> <@dirbaio:matrix.org> makes Cargo pull it into the workspace
<re_irc> <@dirbaio:matrix.org> and try to build it
<re_irc> <@dirbaio:matrix.org> wtf?
<re_irc> <@dirbaio:matrix.org> why
<re_irc> <@dirbaio:matrix.org> > All path dependencies residing in the workspace directory automatically become members. Additional members can be listed with the members key, which should be an array of strings containing directories with Cargo.toml files.
<re_irc> <@dirbaio:matrix.org> whyyy
<re_irc> <@thejpster:matrix.org> I find workspaces to be a bit of a nightmare to be honest
<re_irc> <@dirbaio:matrix.org> yeah they're very annoying :(
<re_irc> <@dirbaio:matrix.org> so much implicit magic
<re_irc> <@thejpster:matrix.org> especially when you have some stuff that's no_std and other stuff that isn't, and how it likes to harmonise features across all workspace members.
<re_irc> <@thejpster:matrix.org> [workspace]
<re_irc> # Include all the generic library crates
<re_irc> members = [
<re_irc>     "neotron-bmc-protocol",
<re_irc>     "neotron-bmc-commands",
<re_irc> ]
<re_irc> # Exclude the BMC firmwares as they build using different targets/features
<re_irc> exclude = [
<re_irc>     "neotron-bmc-pico",
<re_irc>     "neotron-bmc-nucleo",
<re_irc> ]
<re_irc> <@jamesmunns:beeper.com> yeah, in mnemos we have one top level workspace, and the "platform" dirs (for hardware + wasm) are each separate workspaces for this reason
<re_irc> <@jamesmunns:beeper.com> But generally, I don't really think workspaces ever work well for multiple target archs, it always ends up being painful.
<re_irc> <@dirbaio:matrix.org> in this case I just want a path dep to not be included by default 😭
<re_irc> <@jamesmunns:beeper.com> I don't think workspaces have a "conditional include/exclude"
<re_irc> <@dirbaio:matrix.org> because if it's included as a member, "cargo check" etc will try to build it by default, and it only builds on nightly
<re_irc> <@jamesmunns:beeper.com> you could exclude it completely, and optionally use it as a path dep, outside the workspace
<re_irc> <@dirbaio:matrix.org> : that's exactly what I'm doing
<re_irc> <@dirbaio:matrix.org> the problem is "optionally use it as a path dep" implicitly adds it as a workspace member
<re_irc> <@dirbaio:matrix.org> even if the optional dep is not enabled
<re_irc> <@jamesmunns:beeper.com> ahhhh shit
<re_irc> <@jamesmunns:beeper.com> yeah, it'd have to be outside the workspace top level, that sucks
<re_irc> <@dirbaio:matrix.org> grrrrrr
<re_irc> <@dirbaio:matrix.org> "exclude" works
<re_irc> <@dirbaio:matrix.org> requires having two "Cargo.toml"s, one for stable and one for nightly...
<re_irc> <@dirbaio:matrix.org> oh well
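For reference, the `exclude` workaround discussed above looks roughly like this in the stable `Cargo.toml` (a sketch based on the embedded-io crate names mentioned earlier, not the actual file):

```toml
[workspace]
members = [
    "embedded-io",
    "embedded-io-adapters",
]
# Excluded paths are NOT pulled in as implicit workspace members, even though
# embedded-io-adapters still names embedded-io-async as an optional path dep.
# A second, nightly-only Cargo.toml can list it as a member again.
exclude = [
    "embedded-io-async",
]
```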
<re_irc> <@sourcebox:matrix.org> Hmm. I'm using this "critical_section::Mutex" with a "RefCell" inside, as described in the docs. But now I get a "already borrowed: BorrowMutError" when I access it from different cores.
<re_irc> <@dirbaio:matrix.org> which chip are you using?
<re_irc> <@sourcebox:matrix.org> STM32MP1
<re_irc> <@dirbaio:matrix.org> which critical section impl are you using? you can _not_ use "cortex-m/critical-section-single-core"
<re_irc> <@sourcebox:matrix.org> As I said, I did a spinlock impl.
<re_irc> <@sourcebox:matrix.org> But maybe it's not working as expected.
<re_irc> <@dirbaio:matrix.org> could be, yes :S
<re_irc> <@sourcebox:matrix.org> use core::sync::atomic::{AtomicBool, Ordering};
<re_irc> use critical_section::{set_impl, Impl, RawRestoreState};
<re_irc>
<re_irc> struct MultiCoreCriticalSection;
<re_irc> set_impl!(MultiCoreCriticalSection);
<re_irc>
<re_irc> static LOCK: AtomicBool = AtomicBool::new(false);
<re_irc>
<re_irc> unsafe impl Impl for MultiCoreCriticalSection {
<re_irc>     unsafe fn acquire() -> RawRestoreState {
<re_irc>         core::sync::atomic::compiler_fence(Ordering::SeqCst);
<re_irc>         while LOCK.load(Ordering::Relaxed) {}
<re_irc>         LOCK.store(true, Ordering::Relaxed);
<re_irc>         0
<re_irc>     }
<re_irc>
<re_irc>     unsafe fn release(_: RawRestoreState) {
<re_irc>         LOCK.store(false, Ordering::Relaxed);
<re_irc>         core::sync::atomic::compiler_fence(Ordering::SeqCst);
<re_irc>     }
<re_irc> }
<re_irc> <@dirbaio:matrix.org> another way you can get "BorrowMutError" is if you lock the CS multiple times in a nested way
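A std-only sketch of why that produces this exact error: `RefCell` reports `BorrowMutError` whenever a second mutable borrow is attempted while the first is still live, which is what nested CS locking (or a CS impl that lets two cores in at once) ends up doing:

```rust
// Minimal std demonstration (not the embedded code in question): RefCell
// panics with "already borrowed: BorrowMutError" on borrow_mut() if another
// mutable borrow is live; try_borrow_mut() surfaces the same condition as an Err.
use core::cell::RefCell;

fn main() {
    let cell = RefCell::new(0u32);

    let first = cell.borrow_mut();           // first mutable borrow is fine
    assert!(cell.try_borrow_mut().is_err()); // second one fails: BorrowMutError

    drop(first);                             // release the first borrow
    assert!(cell.try_borrow_mut().is_ok());  // now it works again
}
```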
<re_irc> <@dirbaio:matrix.org> uh yeah you need atomic CAS for it to be sound
<re_irc> <@sourcebox:matrix.org> What does that mean?
<re_irc> <@dirbaio:matrix.org> unsafe fn acquire() -> RawRestoreState {
<re_irc>     while LOCK.compare_and_swap(false, true, Ordering::Acquire) {}
<re_irc>     0
<re_irc> }
<re_irc>
<re_irc> unsafe fn release(_: RawRestoreState) {
<re_irc>     LOCK.store(false, Ordering::Release);
<re_irc> }
<re_irc> <@sourcebox:matrix.org> Ok, I see.
<re_irc> <@dirbaio:matrix.org> otherwise you can have a race condition
<re_irc> <@dirbaio:matrix.org> while LOCK.load(Ordering::Relaxed) {}
<re_irc> // another core acquires the lock HERE. Now you have two cores that think they've acquired it!
<re_irc> LOCK.store(true, Ordering::Relaxed);
<re_irc> <@dirbaio:matrix.org> this ensures that race condition is not possible
<re_irc> <@dirbaio:matrix.org> the impl is still not 100% correct though: "critical-section" mandates the impl must be reentrant
<re_irc> <@dirbaio:matrix.org> ie it must allow nested locking of the CS from the same thread
<re_irc> <@sourcebox:matrix.org> Yes, otherwise it can lead to deadlocks.
<re_irc> <@dirbaio:matrix.org> and you probably want to disable irqs to ensure the current core is not interrupted and then you deadlock if the interrupt also tries to acquire the CS
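A minimal std-only sketch of the CAS-based acquire/release discussed above, using the stable `compare_exchange_weak` API (since `compare_and_swap` is deprecated). As noted, a real multicore critical-section impl additionally needs interrupt masking and reentrancy; this only shows the lock core:

```rust
// The "was it free? then take it" test-and-set as ONE atomic step, closing the
// window where another core could acquire between a load and a store.
use std::sync::atomic::{AtomicBool, Ordering};

static LOCK: AtomicBool = AtomicBool::new(false);

fn acquire() {
    // Spin until we atomically flip the lock from false to true.
    while LOCK
        .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
        .is_err()
    {
        core::hint::spin_loop();
    }
}

fn release() {
    // Release ordering publishes all writes made inside the critical section.
    LOCK.store(false, Ordering::Release);
}

fn main() {
    acquire();
    assert!(LOCK.load(Ordering::Relaxed)); // lock is held
    release();
    assert!(!LOCK.load(Ordering::Relaxed)); // lock is free again
}
```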
<re_irc> <@dirbaio:matrix.org> here's an example of how it looks for rp2040's multicore https://github.com/embassy-rs/embassy/blob/main/embassy-rp/src/critical_section_impl.rs
<re_irc> <@dirbaio:matrix.org> this stuff is very tricky 🥲
<re_irc> <@sourcebox:matrix.org> Using "compare_and_swap" does not change anything. It only throws a deprecation message.
IlPalazzo-ojiisa has joined #rust-embedded
<re_irc> <@sourcebox:matrix.org> What I'm really not sure about is whether my static AtomicBool lock variable is located in a memory region with the correct attributes to make it multicore safe.
<re_irc> <@sourcebox:matrix.org> This whole stuff is really confusing in the docs.
<re_irc> <@sourcebox:matrix.org> LDREX/STREX can't be used with device or strongly ordered memory, so it must be "normal" memory.
<re_irc> <@dirbaio:matrix.org> are you doing it across the cortex-a / cortex-m cores?
<re_irc> <@sourcebox:matrix.org> Yes
<re_irc> <@sourcebox:matrix.org> There's something called SCU, maybe this has to be enabled.
<re_irc> <@sourcebox:matrix.org> : Sorry, no. Only across the A7 cores for now.
<re_irc> <@dirbaio:matrix.org> ahh so same arch
<re_irc> <@sourcebox:matrix.org> This SCU thing seems to be responsible for cache coherency across the cores.
<re_irc> <@pmnxis:matrix.org> I am using embedded Rust for my own production work with embassy-rs (previously I used RTIC, very roughly).
<re_irc> <@pmnxis:matrix.org> This time it's a money-related project, meaning the product communicates with a bill-paper machine and a credit-card system.
<re_irc> Thus I need to do things very carefully.
<re_irc> So this is my feeling while doing this project with rust-embedded:
<re_irc> rust-embedded requires being more familiar with OOP than existing C embedded work.
<re_irc> <@pmnxis:matrix.org> "Needing to understand OOP concepts and be familiar with them" is not a "con" of rust-embedded.
<re_irc> But yeah, I realized that I didn't pay much attention to OOP when writing embedded code in C.
<re_irc> <@jamesmunns:beeper.com> Are there particular OOP concepts you're referring to? Rust actually isn't generally considered an "OOP" language; its main abstraction system, traits, uses a technique called "composition" instead, which is usually a bit different than OOP. With traits, you typically say "X can do Y", instead of "X is a Z (which means it can do Y)".
<re_irc> It's sort of a very small distinction, but if you try to do a lot of "classic OOP" patterns in Rust, they will often be very awkward or not possible to do.
<re_irc> <@jamesmunns:beeper.com> so instead of saying "The "RP2040SerialPort" IS a "SerialPort" (base class)", you say "RP2040SerialPort implements the SerialPort trait (as an interface)".
<re_irc> <@jamesmunns:beeper.com> But there are some other overlaps Rust has with OOP languages, usually things like "types can have methods", though in Rust you can have both "free methods/functions", and "type methods". There are no "class"es in Rust, but you can add methods to other types, like "struct"s or "enum"s.
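That "implements the trait (as an interface)" point can be shown in a few lines; `SerialPort` and `Rp2040SerialPort` here are made-up illustration types, not a real HAL API:

```rust
// A trait is an interface: "Rp2040SerialPort CAN write bytes", not
// "Rp2040SerialPort IS a SerialPort base class".
trait SerialPort {
    fn write_byte(&mut self, byte: u8);
}

// Hypothetical concrete type; a Vec stands in for real hardware.
struct Rp2040SerialPort {
    sent: Vec<u8>,
}

impl SerialPort for Rp2040SerialPort {
    fn write_byte(&mut self, byte: u8) {
        self.sent.push(byte);
    }
}

// Generic code accepts anything that implements the trait.
fn send_all(port: &mut impl SerialPort, data: &[u8]) {
    for &b in data {
        port.write_byte(b);
    }
}

fn main() {
    let mut port = Rp2040SerialPort { sent: Vec::new() };
    send_all(&mut port, b"hi");
    assert_eq!(port.sent, b"hi");
}
```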
<re_irc> <@pmnxis:matrix.org> I think I used the wrong idiom, yeah; "composition" might be the right word.
<re_irc> <@avery71:matrix.org> There isn't really a good OOP alternative to this that I can think of, but one reason Rust is able to keep up with the popularity of OOP languages is that it was built with algebraic data types from the beginning (instead of trying to squeeze them in way after the fact, as all these old languages are doing)
<re_irc> <@jamesmunns:beeper.com> Yeah, not trying to correct you, I've just seen a lot of people come to Rust with some OOP experience, and they often run into problems because Rust looks a _little_ like OOP, with what you can do with the language, but looking deeper Rust doesn't have or doesn't (easily) allow a lot of things that OOP languages use really heavily.
<re_irc> <@jamesmunns:beeper.com> But for sure, Rust has a bit more complexity around things like "trait"s, and other tools we use for portability or abstraction! Especially compared to C, where you typically either just have multiple implementations of the same function (one header file, multiple .c files that implement the same functions), or structs that have a bunch of function pointers for methods (which is basically just "build your own vtable by hand").
<re_irc> <@jamesmunns:beeper.com> it is nice when the language can "understand" your abstraction though, and it can help you from accidentally getting things subtly wrong :)
<re_irc> <@azzentys:matrix.org> I come from C. I'm fascinated and intimidated by the abstractions that can be done in Rust. I wish there was a way to easily print/see all the data AND impls/functions available.
<re_irc> <@diondokter:matrix.org> : This is what rustdoc is for. All public crates have their docs on docs.rs
<re_irc> You can generate it too using "cargo doc --open"
<re_irc> <@jamesmunns:beeper.com> : If you haven't used rustdoc (e.g. "cargo doc --open"), or checked out rust-analyzer before, they help out a ton!
<re_irc> <@diondokter:matrix.org> It's similar to doxygen
<re_irc> <@jamesmunns:beeper.com> https://rust-analyzer.github.io/manual.html
<re_irc> <@jamesmunns:beeper.com> being able to type "something." then getting a pop up of all the methods you can call on that item is really really great for discovery. You can also jump to types and functions when you want to see what they are and what they can do
<re_irc> <@azzentys:matrix.org> Thanks for the comments! I've used rust-analyzer in the past and it's been helpful! Using rustdoc, that's something I can really find helpful too.
starblue3 has quit [Ping timeout: 252 seconds]
<re_irc> <@firefrommoonlight:matrix.org> : Stay away from the abstractions until you understand the use cases that drive them (likely when you run into one yourself). The road to unmanageable complexity is paved with them
<re_irc> <@almindor:matrix.org> what are your guys' thoughts on supporting async in drivers? code duplication or macro hell? Is there a third option?
<re_irc> <@jamesmunns:beeper.com> Personally speaking: I don't think there's a good way to abstract over async or not in the general sense
<re_irc> <@jamesmunns:beeper.com> Like, for some simple stuff you can
<re_irc> <@jamesmunns:beeper.com> but it'll make things a lot more complex, or not work in a lot of edge cases.
<re_irc> <@jamesmunns:beeper.com> IMO the best way is to have the simple building blocks, like how to configure a register, or parse some data or something be shared
<re_irc> <@jamesmunns:beeper.com> then have two separate drivers, if you plan to support both.
<re_irc> <@firefrommoonlight:matrix.org> Of note, to tie the threads of abstractions and rust docs together: rustdoc's utility decreases when trait-based APIs are used. What would be a link to a struct, enum, primitive etc. that shows you how to construct it turns into static text of the required trait, without a hint of how to construct something that impls it. This can be mitigated using examples or handwritten docs
<re_irc> <@almindor:matrix.org> yeah I'm leaning to separate crates as well
<re_irc> <@jamesmunns:beeper.com> But macro-level ops like "send 512 bytes when the inner fifo is only 32 bytes" are going to be solved very differently in async and blocking, especially if you want to make it efficient in either case.
<re_irc> <@almindor:matrix.org> i wished something like `spawn_blocking` was universally available to embedded so we could do things in sync and just add these wrappers on top :D
<re_irc> <@jamesmunns:beeper.com> I mean, if you have a heap and threads, it's very easy :D
<re_irc> <@almindor:matrix.org> honestly, in my opinion async was a mistake in Rust
<re_irc> <@jamesmunns:beeper.com> I super disagree, but you do you :)
<re_irc> <@firefrommoonlight:matrix.org> : That is cheating
<re_irc> <@jamesmunns:beeper.com> like, you can not like it, and choose not to use it
<re_irc> <@almindor:matrix.org> i'd rather dance with kqueue and epoll manually than handle 3 level deep dependency Pin problems or Send not implemented (not to mention the bloat, although that's somewhat out of embedded space level)
<re_irc> <@jamesmunns:beeper.com> but that doesn't make it a _mistake_
<re_irc> <@almindor:matrix.org> that's the issue
<re_irc> <@almindor:matrix.org> you cannot choose not to use it, see postgres crate
<re_irc> <@jamesmunns:beeper.com> I mean
<re_irc> <@jamesmunns:beeper.com> you can't make other people build exactly the thing you want for free, no
<re_irc> <@almindor:matrix.org> it used to be 100% sync, and then they just wrapped tokio-postgres in block_ons and called it a day, bloat notwithstanding
<re_irc> <@jamesmunns:beeper.com> they are going to build it the way _they_ want it to.
<re_irc> <@almindor:matrix.org> my point is that async pushes people to bloatify
<re_irc> <@jamesmunns:beeper.com> again: "bloat" is relative. Nothing about async MAKES it bloaty
<re_irc> <@firefrommoonlight:matrix.org> It's subjective
<re_irc> <@firefrommoonlight:matrix.org> I happen to also not be a fan
<re_irc> <@jamesmunns:beeper.com> Good thing it's optional then :)
<re_irc> <@almindor:matrix.org> forcing tokio into a "sync" crate is bloat :)
<re_irc> <@firefrommoonlight:matrix.org> : Yes you can
<re_irc> <@jamesmunns:beeper.com> I very much have enjoyed it while writing an OS kernel, and have started using it in bare metal stuff as well. It for sure takes some getting used to, but it also makes a lot of very painful state machine things a lot easier for me to work with.
<re_irc> <@dirbaio:matrix.org> for "Send not implemented" issues, you can use the single-threaded tokio runtime
<re_irc> <@azzentys:matrix.org> : 100% true. Took me a year to be able to comfortably maneuver around hal code and pac code.
<re_irc> <@dirbaio:matrix.org> no threads, so no Send/Sync issues
<re_irc> <@almindor:matrix.org> you need threads if you have a high cpu task :) it's possible to isolate of course but it gets hairy at times
<re_irc> <@almindor:matrix.org> > > <@almindor:matrix.org> you cannot choose not to use it, see postgres crate
<re_irc> >
<re_irc> > Yes you can
<re_irc> <@dirbaio:matrix.org> : if you don't care about concurrency you can block in the executor thread, it's fine
<re_irc> <@firefrommoonlight:matrix.org> That's an example. You made a general statement
<re_irc> <@jamesmunns:beeper.com> Anyway, I'mma go write some code. I don't think anyone particularly is looking to have their viewpoint changed here :)
<re_irc> <@almindor:matrix.org> my point is that this pattern of enbloatification is emergent from async
<re_irc> <@dirbaio:matrix.org> async solves problems that are hard/annoying to solve with raw threads
<re_irc> <@avery71:matrix.org> It also allows you to do concurrency with only 1 thread
<re_irc> <@dirbaio:matrix.org> this is why libs adopt it
<re_irc> <@dirbaio:matrix.org> plus for something like postgres, it's likely the user is already using async, so not using async would do little to reduce bloat
<re_irc> <@dirbaio:matrix.org> for example for postgres it needs to manage the lifecycle of connections to the db, pool them and share them across threads, do background keepalives so NATs/routers don't kill idle connections...
<re_irc> <@dirbaio:matrix.org> all these things suck without async
<re_irc> <@almindor:matrix.org> true but I don't think it needs to be a language level feature, at least until CS figures out a better way to solve the color problem (without heap requirements)
<re_irc> <@dirbaio:matrix.org> We know how to solve the color problem
<re_irc> <@dirbaio:matrix.org> Go has solved it
<re_irc> <@dirbaio:matrix.org> Java too with the new "light threads"
<re_irc> <@almindor:matrix.org> I agree practically with how postgres did it, I disagree in principle though
<re_irc> <@dirbaio:matrix.org> doing it requires a runtime
<re_irc> <@firefrommoonlight:matrix.org> It's good we have options
<re_irc> <@dirbaio:matrix.org> to do the M:N scheduling of lightweight tasks into real threads
<re_irc> <@dirbaio:matrix.org> it would be a bad fit for Rust. it'd make it unusable for embedded, for example
<re_irc> <@dirbaio:matrix.org> so Rust instead chose to build async/await into the language
<re_irc> <@almindor:matrix.org> there's a lot of duplication. For example, my original question was about the mipidsi driver. The only thing I really need to do to make it work async is... async versions of anything that calls SPI. It's just useless code duplication, and I can't shake the feeling there's a better way somehow (by which I don't mean fugly macro hacks :D)
<re_irc> <@dirbaio:matrix.org> Yes, that causes function coloring, but that gives you control as a user
emerent has quit [Ping timeout: 245 seconds]
<re_irc> <@dirbaio:matrix.org> you can choose which color to paint each function
emerent has joined #rust-embedded
<re_irc> <@dirbaio:matrix.org> so you can paint functions the "pls go fast no runtime" color when you need to
<re_irc> <@almindor:matrix.org> I guess I could use async in the sync version and provide a blocking executor, but that's just the wrong place to do it in
<re_irc> <@dirbaio:matrix.org> I agree the "share code between async+blocking" story is a disaster though 🥲
<re_irc> <@dirbaio:matrix.org> there's the "keyword generics" initiative, but I'm not very hopeful
<re_irc> <@almindor:matrix.org> haha, one more level `[maybe-async] fn dosomething<'a, G, B>(...) impl [maybe-future] Iterator` :D
<re_irc> <@dirbaio:matrix.org> exactly 💩
<re_irc> <@almindor:matrix.org> which is sort of my second part of this rant (sorry everyone). The mental strain with async can become taxing once you get into more deep stuff such as mixing CPU and IO intensive stuff
<re_irc> <@almindor:matrix.org> i was there when c10k hit and epoll was new (I implemented the thing for free pascal :D) and somehow found it less difficult to use explicitly with thread pools manually than hunting Pin and Send issues
<re_irc> <@mabez:matrix.org> There is also some duplication for supporting nb too. Any shift in approach will always require some "glue" somewhere in the chain
<re_irc> <@almindor:matrix.org> i'm probably just getting old though
<re_irc> <@firefrommoonlight:matrix.org> I am too dumb for Async, so I don't use it
<re_irc> <@firefrommoonlight:matrix.org> Makes brain hurt, but not as much as Haskell does
<re_irc> <@dirbaio:matrix.org> the nonblocking ecosystem is somewhat consolidating around async instead of "nb" though
<re_irc> <@dirbaio:matrix.org> after so many years there isn't even a "nb" i2c trait 🤪
<re_irc> <@firefrommoonlight:matrix.org> I think framing things as Async vice Blocking, or Async vice the nb lib is a source of confusion
<re_irc> <@mabez:matrix.org> : For sure, I guess my point was more that no one really complained about having to implement nb stuff and blocking stuff beforehand, but now implementing blocking approaches and async approaches is too much duplication. I'm glad async is favored now though. I've been thinking about removing the nb stuff from esp-hal entirely tbh, I don't know anyone using it
cr1901_ has joined #rust-embedded
cr1901 has quit [Ping timeout: 246 seconds]
starblue3 has joined #rust-embedded
<re_irc> <@pixelprizm:matrix.org> : Is this problem solved in Rust by Bevy game engine's ECS? It schedules a bunch of repeated tasks on different threads. Theoretically I think you could use Bevy ECS on embedded rust, you can use the ECS without importing the rest of the game engine
<re_irc> <@dngrs:matrix.org> you still have function coloring
<re_irc> <@firefrommoonlight:matrix.org> Bevy is very interesting
<re_irc> <@firefrommoonlight:matrix.org> I used it for a while for my chemistry and protein visualization projects, but ultimately switched to using WGPU directly with a custom engine. I was not a fan of the ECS syntax, but I think if I were using it as intended for a game, it would be worth it
<re_irc> <@firefrommoonlight:matrix.org> I think in my case it wasn't, since I'm just using the graphics
<re_irc> <@firefrommoonlight:matrix.org> Hey unrelated, but more on-topic: I've recently (after reading the NASA code guidelines) noticed my code had a number of potential infinite-loop hangs. In practice, these would usually come up due to a hardware fault, and the program would hang as it spins indefinitely waiting for a bit to be set etc. In HAL and user code, I found every one and added a max-tries timeout (in loop cycles), then return an error if it hits this. How do y'all handle this? Any recommendations vice this approach?
<re_irc> <@firefrommoonlight:matrix.org> Example:
<re_irc> const TIMEOUT_COUNT: u16 = 50_000; // todo: What should this be?
<re_irc> let mut count: u16 = 0;
<re_irc> while !can.is_transmitter_idle() {
<re_irc>     count += 1;
<re_irc>     if count >= TIMEOUT_COUNT {
<re_irc>         return Err(CanError::CanHardware);
<re_irc>     }
<re_irc> }
<re_irc> <@dngrs:matrix.org> watchdog?
<re_irc> <@firefrommoonlight:matrix.org> Doesn't that reboot the whole system?
<re_irc> <@firefrommoonlight:matrix.org> (Although I concur that is relevant and a good idea; I recently coded a simple API for it, but haven't used it yet)
<re_irc> <@firefrommoonlight:matrix.org> Like, here the intent is you can then raise a fault; for example, I tend to use a "SystemStatus" struct consisting of "SensorStatus::Pass" etc fields
<re_irc> <@firefrommoonlight:matrix.org> SO, an error there, instead of hanging or rebooting, might set the status to fault etc
<re_irc> <@dngrs:matrix.org> yeah, probably needs a different approach then. honestly timeouts are really awesome with async
<re_irc> <@firefrommoonlight:matrix.org> I think the WD is probably something I should put in my programs though; can't hurt and is a good failsafe; although I think the idea would be for the program to catch errors upstream and never have to trigger it
<re_irc> <@adamgreig:matrix.org> usually watchdog can be configured to trigger an interrupt before it resets the whole system
<re_irc> <@adamgreig:matrix.org> if you can, it can be nice to design the firmware to quickly survive and handle a hard reset
cr1901_ is now known as cr1901
<re_irc> <@firefrommoonlight:matrix.org> nice!
<re_irc> <@firefrommoonlight:matrix.org> That sounds like a good play
starblue3 has quit [Ping timeout: 246 seconds]
starblue3 has joined #rust-embedded
crabbedhaloablut has quit []
dc740 has quit [Quit: Leaving]
<re_irc> <@jamesmunns:beeper.com> You've said you're not interested in async, but embassy_time has a "timeout" operator, so you can give it a future and a Duration, and if the future doesn't complete in that time, the future terminates and returns an error.
<re_irc> <@jamesmunns:beeper.com> In general, in all safety critical projects we had a "no unbounded loops" rule. Most things ended up with structures like the one you posted, either counting iterations, or in Rust I'd probably do something like:
<re_irc> let start = Instant::now(); // or whatever timer you have
<re_irc> let val = loop {
<re_irc>     if can.is_ready() {
<re_irc>         break Ok(());
<re_irc>     }
<re_irc>     if start.elapsed() > Duration::from_millis(50) {
<re_irc>         break Err(());
<re_irc>     }
<re_irc> };
<re_irc> <@jamesmunns:beeper.com> In non-async rust, you could probably make some kind of function that takes a "FnMut" or something to retry the closure, so you don't have to type that out, but it's less elegant to solve without async, where it's just:
<re_irc> // returns an error if the timeout occurred
<re_irc> let result = embassy_time::timeout(Duration::from_millis(50), wait_for_ready()).await?;
<re_irc> async fn wait_for_ready() { ... }
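The non-async retry helper taking an `FnMut` that James mentions could look something like this sketch; it uses `std::time::Instant` for illustration, whereas on a bare-metal target you'd substitute whatever monotonic timer your HAL provides (the `poll_with_timeout` name is made up here):

```rust
use std::time::{Duration, Instant};

// Poll a readiness closure until it returns true, or give up after `timeout`.
// This bounds the loop (the "no unbounded loops" rule) without async.
fn poll_with_timeout<F: FnMut() -> bool>(mut ready: F, timeout: Duration) -> Result<(), ()> {
    let start = Instant::now();
    loop {
        if ready() {
            return Ok(());
        }
        if start.elapsed() > timeout {
            return Err(());
        }
    }
}

fn main() {
    // Succeeds: the "peripheral" becomes ready on the third poll.
    let mut polls = 0;
    let res = poll_with_timeout(
        || {
            polls += 1;
            polls >= 3
        },
        Duration::from_millis(50),
    );
    assert_eq!(res, Ok(()));

    // Fails: never ready, so we hit the timeout instead of spinning forever.
    let res = poll_with_timeout(|| false, Duration::from_millis(10));
    assert_eq!(res, Err(()));
}
```

In real firmware the closure would wrap the status-register check (e.g. `|| can.is_ready()`), and the caller maps the `Err` to a hardware fault as in the snippet above.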
<re_irc> <@firefrommoonlight:matrix.org> : Using an actual timer like that is probably a better approach than counting loops
<re_irc> <@firefrommoonlight:matrix.org> Easier to reason about the timeout period
<re_irc> <@jamesmunns:beeper.com> The challenge in blocking loops like that is that, well, you're blocking, which isn't great for power or cpu utilization. But it beats getting stuck until the watchdog bites you :)
<re_irc> <@jamesmunns:beeper.com> (whereas PROBABLY in async, you'd just get either a "ready" notification or a "timeout" notification, and you could be sleeping or working on something else during that time)
<re_irc> <@firefrommoonlight:matrix.org> hah I'm not sure if that's how the watchdog metaphor is supposed to work!
<re_irc> <@jamesmunns:beeper.com> well yeah! you pet and feed the dog so it doesn't bite you!
<re_irc> <@9names:matrix.org> sometimes it barks first
<re_irc> <@firefrommoonlight:matrix.org> Yea; that is a good point, further evidence that while loops used for waiting are not a great play in general. I generally have them in init code etc
<re_irc> <@jamesmunns:beeper.com> Yeah, works great for that! Or if you have an RTOS, or higher prio tasks in rticv1, you still can cooperate, but in bare metal code, you're gunna be eating the CPU time.
<re_irc> <@firefrommoonlight:matrix.org> I have a blocking I2C read here for some reason in program runtime; I think the DMA didn't work at first so I took the easy way out
<re_irc> <@jamesmunns:beeper.com> i2c is also notorious for locking up :D
<re_irc> <@firefrommoonlight:matrix.org> Completely agree that eating CPU time waiting is a bad plan (except in init)
<re_irc> <@firefrommoonlight:matrix.org> Yeah it's not my fav periph!
<re_irc> <@jamesmunns:beeper.com> but yeah, these kinds of places are where async shines.
<re_irc> <@jamesmunns:beeper.com> it's good for waiting for events, and turns out embedded is more often than not just waiting for a lot of events, then doing a little work, then waiting some more :)
<re_irc> <@firefrommoonlight:matrix.org> I tried async several times; I can't grok it
<re_irc> <@firefrommoonlight:matrix.org> In general I have a hard time understanding abstractions vice imperative code, at least for time-domain concepts
<re_irc> <@firefrommoonlight:matrix.org> *And it tends to propagate through the code base
<re_irc> <@jamesmunns:beeper.com> totally fair! "await" just means "make this task sleep (without blocking) until the thing is done", but there's certainly some patterns to learn to "think in async", and I agree it doesn't work great if you don't sorta lean into it. That being said, I'd probably say the same for Rust + safety, vs C style ways of doing things
<re_irc> <@jamesmunns:beeper.com> Helping a customer with some async stuff now though, and it was REALLY nice to turn some very tricky and loopy state machines into very linear "Do A, then wait for B, then do C, then wait for D" code, instead of having a huge match statement with an arm for each state, and having to update all the tracking variables between each state, etc.
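To make the contrast concrete, here is an illustrative (host-runnable) version of the hand-written style being described: a match statement with an arm per state, plus tracking variables you must update yourself. The state names and `Machine` struct are invented for the example; the async equivalent of the same flow would just read `do_a(); wait_b().await; do_c(); wait_d().await;`:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum State {
    DoingA,
    WaitingForB,
    DoingC,
    WaitingForD,
    Done,
}

struct Machine {
    state: State,
    b_retries: u8, // bookkeeping you must keep in sync by hand between states
}

impl Machine {
    // Each call advances the machine based on the current state and whether
    // the event it is waiting on has arrived.
    fn step(&mut self, event_ready: bool) {
        self.state = match self.state {
            State::DoingA => State::WaitingForB,
            State::WaitingForB if event_ready => State::DoingC,
            State::WaitingForB => {
                self.b_retries += 1;
                State::WaitingForB
            }
            State::DoingC => State::WaitingForD,
            State::WaitingForD if event_ready => State::Done,
            State::WaitingForD => State::WaitingForD,
            State::Done => State::Done,
        };
    }
}

fn main() {
    let mut m = Machine { state: State::DoingA, b_retries: 0 };
    m.step(false); // A done, now waiting for B
    m.step(false); // B not ready yet: bump the retry counter, stay put
    m.step(true);  // B arrived: do C
    m.step(false); // C done, now waiting for D
    m.step(true);  // D arrived: done
    assert_eq!(m.state, State::Done);
    assert_eq!(m.b_retries, 1);
}
```

Every new wait point means another enum variant, another match arm, and more fields to thread through; async lets the compiler generate all of that from straight-line code.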
<re_irc> <@firefrommoonlight:matrix.org> It is interesting how a common complaint about rust is that it's difficult/verbose etc. I totally get that. I think I had an easy time learning the lang since it (appears to) come down to normal imperative code; diff syntax than Python, C etc, but the borrow checker is just learning where to put "&" and "*", which the compiler will correct you on if you screw it up
<re_irc> <@firefrommoonlight:matrix.org> It's like a game or puzzle where you get immediate hints and feedback; I learn well that way
<re_irc> <@jamesmunns:beeper.com> Yeah, I'd say async is getting better with that (libraries and compiler wise), but "async in 2023" is a lot more like "rust in 2015", in terms of rough edges.
<re_irc> <@jamesmunns:beeper.com> like: absolutely nice when you know how it works, still a challenge to pick up if you're new to it though, especially if you accidentally overextend past your understanding, then can't figure out what corner you've painted yourself into
<re_irc> <@jamesmunns:beeper.com> It took me more than a couple months, with help from two very experienced async folks, to really get my head right on it :D
<re_irc> <@firefrommoonlight:matrix.org> From what I understand, async shines in getting state machines up and running in a concise way
<re_irc> <@firefrommoonlight:matrix.org> *I guess that's what _it is_
<re_irc> <@dngrs:matrix.org> there was a quite nice blog post recently about that ...
<re_irc> <@dngrs:matrix.org> now who wrote it again
<re_irc> <@dngrs:matrix.org> ah, I remember, cliffle
<re_irc> <@dngrs:matrix.org> http://cliffle.com/blog/async-inversion/
<re_irc> <@firefrommoonlight:matrix.org> His articles are outstanding
<re_irc> <@whitequark:matrix.org> : happy to do a trial run (and hopefully the final run as well) of bridging this room to IRC tomorrow, when's good time in UTC?
<re_irc> <@dngrs:matrix.org> : for sure. Got me into Rust in the first place!
<re_irc> <@jamesmunns:beeper.com> : Yep, pretty much! It really is "compiler generated state machines"
<re_irc> <@whitequark:matrix.org> although usually in a state machine you can jump to a state and here you kind of cannot?
<re_irc> <@whitequark:matrix.org> since Rust has no computed goto
<re_irc> <@firefrommoonlight:matrix.org> By the mercy of your deity of choice
<re_irc> <@whitequark:matrix.org> computed goto is very useful and Rust would be better with it / not worse in any way
<re_irc> <@adamgreig:matrix.org> : Currently free any time from 10 onwards or so, anything particularly convenient for you?
<re_irc> <@whitequark:matrix.org> (it's useful for exactly one use case, high performance inner loop of a threaded interpreter)
<re_irc> <@adamgreig:matrix.org> : Yea, this is the one thing that means many of my actual written out state machines couldn't just be async I think
<re_irc> <@whitequark:matrix.org> : happy to do it tomorrow around 10 UTC / 11 localtime
<re_irc> <@adamgreig:matrix.org> Like any state machines where I'm drawing a little fsm diagram and it has more structure than just moving one to next to next and some errors I'm not sure how well it works. I should try it out though...
<re_irc> <@adamgreig:matrix.org> : Great, let's go for that
<re_irc> <@jamesmunns:beeper.com> yeah, trying to find some particularly fun ones in mnemos to share right now, but a lot of it is very boring plumbing :D
<re_irc> <@jamesmunns:beeper.com> https://onevariable.com/blog/phase-locked-state-machines/ sort of gets into it, but half of that post is also based on my somewhat cursed "abstract over the server and client in one state machine" part
<re_irc> <@jamesmunns:beeper.com> I do talk a lot about the FSM -> async translation there, which might be illustrative tho.
<re_irc> <@jamesmunns:beeper.com> but imo it shows that (for situations that involve comms) there's usually one state machine for the macro behavior, with commands being primary events
<re_irc> <@jamesmunns:beeper.com> but then there are a lot of "local events", like waiting for an erase to complete, or timeouts, etc.
<re_irc> <@firefrommoonlight:matrix.org> Most of the embedded programs I do are relatively simple, and have cooperative processes. The one I'm coding now takes readings from a handful of sensors, does some fusion, and broadcasts a handful of message types over CAN; responds to queries and instructions over CAN and USB. It's set up using ISRs (via RTIC) and DMA (or USB/CAN msg ram) reads and writes
<re_irc> I imagine that if I wrote something more complex, an RTOS would help. Maybe something like async would be a middle ground? My understanding is that C codebases generally reach for an RTOS by default
<re_irc> <@jamesmunns:beeper.com> Yeah, async helps with the "juggle multiple cooperative things at the same time" part.
<re_irc> <@firefrommoonlight:matrix.org> Maybe also if you're CPU limited or the processes take a while. I think this works in the current form because I'm not saturating the CPU (nor the CAN bus ideally...)
<re_irc> <@firefrommoonlight:matrix.org> But surprising things may happen if the processes start stepping on each other in time
<re_irc> <@jamesmunns:beeper.com> Yeah, threads are great for FORCING time sharing, and there are some tools in embassy for that (basically: having different priority levels for different tasks, so a higher priority task can pre-empt lower ones), but it works best when you never block unnecessarily
<re_irc> <@jamesmunns:beeper.com> it's harder to do the "strictly enforced temporal partitioning" you expect in safety critical, like having a scheduler to enforce this thread NEVER gets more than x% of CPU time
<re_irc> <@firefrommoonlight:matrix.org> I think RTIC does something similar re pre-empting, but it has some limitations
<re_irc> <@firefrommoonlight:matrix.org> (This is one of the reasons I prefer RTIC over cortex-m lib ISRs and critical sections)
<re_irc> <@jamesmunns:beeper.com> yeah, embassy's interrupt executors are somewhat like how rticv1 worked
<re_irc> <@jamesmunns:beeper.com> but the interrupt executors "yield" when there are no more tasks at that priority to be run.
<re_irc> <@firefrommoonlight:matrix.org> RTIC2 does it differently? I haven't tried
<re_irc> <@firefrommoonlight:matrix.org> (I only use RTIC's ISR functionality; not the Monotonic or software tasks)
<re_irc> <@jamesmunns:beeper.com> I haven't looked at rtic2, but it is now primarily async focused, iiuc?
<re_irc> <@firefrommoonlight:matrix.org> I've heard, although I have also heard you don't _need_ to propagate async through your code with it? Not sure. Should probably try
<re_irc> <@firefrommoonlight:matrix.org> My understanding is its changes are more internal (since RTIC 1's proc-macros are a huge maintenance and inspection obstacle)
<re_irc> <@jamesmunns:beeper.com> haven't looked, honestly. I've been working on mnemos mostly for personal projects, and using embassy for current client project.
<re_irc> <@firefrommoonlight:matrix.org> Ie I had several API changes to make to RTIC, but they were rejected as unfeasible due to the macro system
<re_irc> <@firefrommoonlight:matrix.org> https://mnemos.jamesmunns.com/
<re_irc> <@firefrommoonlight:matrix.org> Interesting
<re_irc> <@jamesmunns:beeper.com> don't read that, that's the old docs, and I should remove it lol
<re_irc> <@jamesmunns:beeper.com> https://onevariable.com/blog/mnemos-moment-1/ is a much better overview lol
<re_irc> <@firefrommoonlight:matrix.org> Tell Google not to put it at the top!
<re_irc> <@firefrommoonlight:matrix.org> People will find and ref it
<re_irc> <@jamesmunns:beeper.com> yeah, I need to swap it out :D
<re_irc> <@jamesmunns:beeper.com> The kernel is async, and the intent is that userspace processes are also mostly async, and instead of having an "interrupt style syscall interface", it only offers an io_uring style syscall queue
<re_irc> <@jamesmunns:beeper.com> so that you can batch syscalls in userspace, and only trigger a syscall when your user program has nothing else to do (so the only interrupt-style syscall is "user scheduler is blocked waiting on feedback from the kernel")
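A host-side sketch of the batched, io_uring-style idea being described: the user side queues requests without trapping, and only "enters the kernel" once per batch. All names here (`SyscallRing`, the `Syscall` variants) are illustrative assumptions, not mnemos's actual ABI:

```rust
use std::collections::VecDeque;

#[derive(Debug, PartialEq)]
enum Syscall {
    Write { fd: u32, len: usize },
    Sleep { micros: u64 },
}

struct SyscallRing {
    submissions: VecDeque<Syscall>,
    kernel_entries: u32, // how many times we actually "trapped" to the kernel
}

impl SyscallRing {
    fn new() -> Self {
        Self { submissions: VecDeque::new(), kernel_entries: 0 }
    }

    // Cheap: just enqueue, no context switch.
    fn submit(&mut self, call: Syscall) {
        self.submissions.push_back(call);
    }

    // Expensive: one kernel entry drains the whole batch; returns how many
    // requests were handled in this pass.
    fn enter_kernel(&mut self) -> usize {
        self.kernel_entries += 1;
        let handled = self.submissions.len();
        self.submissions.clear();
        handled
    }
}

fn main() {
    let mut ring = SyscallRing::new();
    ring.submit(Syscall::Write { fd: 1, len: 64 });
    ring.submit(Syscall::Write { fd: 2, len: 16 });
    ring.submit(Syscall::Sleep { micros: 500 });
    // Three syscalls, but only one kernel entry:
    assert_eq!(ring.enter_kernel(), 3);
    assert_eq!(ring.kernel_entries, 1);
}
```

In the real design the queue would live in shared memory (e.g. the bbqueue-style SPSC buffer mentioned earlier) so the kernel can also drain it opportunistically when userspace gets pre-empted.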
<re_irc> <@firefrommoonlight:matrix.org> OH my god
<re_irc> dma::mux(MAG_DMA_PERIPH, MAG_TX_CH, DmaInput::I2c1Tx);
<re_irc> dma::mux(MAG_DMA_PERIPH, MAG_RX_CH, DmaInput::I2c2Rx);
<re_irc> <@firefrommoonlight:matrix.org> This is why I couldn't get it to work with DMA
<re_irc> <@jamesmunns:beeper.com> and in many cases, if userspace is pre-empted due to interrupts, the kernel has a chance to process requests "for free" before yielding back to userspace
<re_irc> <@firefrommoonlight:matrix.org> : Sounds cool! WIll monitor
<re_irc> <@jamesmunns:beeper.com> : I don't understand, but I'm guessing "swapped input and output" on one of them?
<re_irc> <@firefrommoonlight:matrix.org> I mixed busses
<re_irc> <@jamesmunns:beeper.com> ahhh
<re_irc> <@jamesmunns:beeper.com> 1tx, 2rx, got it
<re_irc> <@jamesmunns:beeper.com> Fun :D
<re_irc> <@firefrommoonlight:matrix.org> Can't fix careless
<re_irc> <@firefrommoonlight:matrix.org> I almost feel like functions that block/poll should be marked with something like "unsafe"
<re_irc> <@firefrommoonlight:matrix.org> So it's obvious you're doing something that will stall the CPU
<re_irc> <@jamesmunns:beeper.com> (unsafe means specifically memory safety, not "is a spicy function")
<re_irc> <@firefrommoonlight:matrix.org> Yea concur on marking them unsafe would be misleading
<re_irc> <@jamesmunns:beeper.com> deadlocks are not unsafe :)
<re_irc> <@firefrommoonlight:matrix.org> lol
<re_irc> <@firefrommoonlight:matrix.org> I think this loose thought is not possible because it would require a lang feature
<re_irc> <@jamesmunns:beeper.com> Maybe someday we'll solve the halting problem :)
<re_irc> <@jamesmunns:beeper.com> mnemos.jamesmunns.com now redirects to that blog post instead :D
<re_irc> <@firefrommoonlight:matrix.org> Nice!
<re_irc> <@jamesmunns:beeper.com> 🔨