ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
Ultrasauce is now known as sauce
takkaryx[m] has joined #rust-embedded
<takkaryx[m]> is there a good place to look into setting up an embedded project that's 2 applications? looking into how to set up a bootloader+application. but I'm not entirely sure how to go about it in rust. normally it's just the linker file that does most of the work. is it as simple as coordinating the memory.x files?
M9names[m] has joined #rust-embedded
<M9names[m]> <kentborg[m]> "Newbie again, and I want to..." <- > <@kentborg:matrix.org> Newbie again, and I want to write to a register.... (full message at <https://catircservices.org/_irc/v1/media/download/AfY2VJ-BcboK1Bcwa02Upfw01Qcbls1Lj1cYQrOKKgcxhu6GZ6webJvUXi5WHmzZUxLYR7WVGLnfY6GdKxXrC-tCeSO32N5AAGNhdGlyY3NlcnZpY2VzLm9yZy95TG9QQVRXak5BQWFwcGtlT3lreXZkd2Q>)
<kentborg[m]> Yes, but I guess I missed things. Since I have been playing it will mean more. Let me do some rereading.
<kentborg[m]> Thanks.
<thejpster[m]> <takkaryx[m]> "is there a good place to look..." <- The Neotron OS is two applications, a BIOS and an OS. That might offer some pointers.
<thejpster[m]> But yes it’s mainly handled in the linker scripts.
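Coordinating the memory.x files really is the heart of it: give the bootloader and the application non-overlapping FLASH regions, then have the bootloader jump to the application. Below is a minimal sketch of the bootloader-side jump, assuming a Cortex-M part and the `cortex-m` crate; `APP_BASE` is a made-up address that must match the FLASH origin in the application's memory.x (this is a common pattern, not code lifted from Neotron).

```rust
const APP_BASE: u32 = 0x0800_8000; // hypothetical address; must match the app's memory.x

unsafe fn jump_to_application() -> ! {
    // Words 0 and 1 of the application's vector table: initial stack pointer
    // and the address of its reset handler.
    let stack_pointer = core::ptr::read_volatile(APP_BASE as *const u32);
    let reset_vector = core::ptr::read_volatile((APP_BASE + 4) as *const u32);

    // Point VTOR at the application's vector table before handing over.
    (*cortex_m::peripheral::SCB::PTR).vtor.write(APP_BASE);

    // Set MSP and branch to the application's reset handler.
    cortex_m::asm::bootstrap(stack_pointer as *const u32, reset_vector as *const u32)
}
```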
cinemaSundays has quit [Quit: Connection closed for inactivity]
chrysn[m] has quit [Quit: Idle timeout reached: 172800s]
emerent has quit [Ping timeout: 252 seconds]
emerent has joined #rust-embedded
JoonaHolkko[m] has quit [Quit: Idle timeout reached: 172800s]
lulf[m] has quit [Quit: Idle timeout reached: 172800s]
ivmarkov[m] has quit [Quit: Idle timeout reached: 172800s]
Kaspar[m] has quit [Quit: Idle timeout reached: 172800s]
mameluc[m] has quit [Quit: Idle timeout reached: 172800s]
AtleoS has quit [Ping timeout: 265 seconds]
AtleoS has joined #rust-embedded
AdamHorden has quit [Ping timeout: 264 seconds]
AdamHorden has joined #rust-embedded
<thejpster[m]> I put my Arm target-feature blog on my website: https://www.thejpster.org.uk/blog/blog-2024-09-29/
Ralph[m] has joined #rust-embedded
<Ralph[m]> <thejpster[m]> "I put my Arm target-feature blog..." <- > Yes, Helium is a lighter version of Neon. The Brits do love a good pun (and this is coming from the company that decided the smaller version of an Arm instruction was a Thumb instruction...)
<Ralph[m]> you know, that arm/thumb thing so far flew completely over my head! am i the only one to have missed that? πŸ‘€
<Ralph[m]> <thejpster[m]> "I put my Arm target-feature blog..." <- this is far from my field of expertise, but: you only looked at the soft float targets. so i briefly gave it a spin with `thumbv7em-none-eabihf` (without specifying a `target-cpu`) and it seems to produce the same instructions as with the cortex-m7 target CPU, though they were ordered differently.
<Ralph[m]> i've never specified a target CPU. is it generally recommended to do this for embedded rust binaries, over just picking the right target?
<thejpster[m]> so eabihf will mean that any functions that take floating point arguments put them in floating point registers. As this example has no floating point arguments (passing an array of floats means passing the pointer in an integer register), all the eabihf target does in this case is turn on baseline FPU support.
<thejpster[m]> I suspect it only put float instructions in the f32 function though, and not in the f64 function?
<thejpster[m]> (yes, I just confirmed this)
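For anyone who wants to reproduce that, a small sketch that makes the difference visible in the disassembly; the function names are made up and the build command in the comments assumes a plain library crate.

```rust
// Library crate fragment. Build e.g. with:
//   cargo build --release --target thumbv7em-none-eabihf
// and inspect the output with cargo-objdump or similar. With the default
// single-precision FPU the f32 loop should use VFP instructions (vadd.f32 etc.),
// while the f64 loop still calls soft-float routines such as __aeabi_dadd.
#![no_std]

#[no_mangle]
pub fn sum_f32(xs: &[f32]) -> f32 {
    xs.iter().copied().sum()
}

#[no_mangle]
pub fn sum_f64(xs: &[f64]) -> f64 {
    xs.iter().copied().sum()
}
```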
<thejpster[m]> recommendations are tricky because technically target-feature and target-cpu are unstable, even though they work on stable
<thejpster[m]> if you have an FPU, you should use eabihf, unless you must interact with pre-existing C libraries that are compiled eabi.
<thejpster[m]> if you care about performance, you should try some of these features and cpu flags and manually inspect the assembly code output, or test the binary, to make sure it's not bogus
<Ralph[m]> ok, thanks for the clarification! so for now i'll just stick to "use the correct target" and look into this if i ever get performance issues
RidhamBhagat[m] has quit [Quit: Idle timeout reached: 172800s]
andar1an[m]1 has quit [Quit: Idle timeout reached: 172800s]
jistr has quit [Remote host closed the connection]
jistr has joined #rust-embedded
sirhcel[m] has joined #rust-embedded
<sirhcel[m]> Beginner question: As far as i understand delays in drivers, the recommendation is to store a delay provider (implementing DelayMs) in the driver object. Is there a generic way of sharing a delay provider between multiple driver instances?
<sirhcel[m]> This looks easy on esp32 with Ets and Delay, which i can instantiate without passing in a hardware resource, so each driver can get its own instance. But how do you do this for example on nrf52840, where Delay takes the SysTick timer, or on rp2040, where Timer takes a hardware timer?
<sirhcel[m]> I'm asking because i'm looking into this for progressing my sht4x driver to e-h 1.0. It currently borrows a delay provider for each call which might need to delay. And with the things i've learned since writing it, it should take a delay provider with its constructor. But i don't see how to do this for a large number of driver instances without running out of delay providers, since they depend on hardware resources.
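For context, the pattern under discussion looks roughly like this with embedded-hal 1.0 (where DelayMs has become DelayNs). This is an illustrative sketch rather than the real sht4x crate API; the 0x44 address, the 0xFD command and the 10 ms wait are just the usual SHT4x "measure with high precision" values.

```rust
use embedded_hal::delay::DelayNs;
use embedded_hal::i2c::I2c;

/// Illustrative driver that owns its delay provider instead of borrowing one per call.
pub struct Sht4x<I2C, D> {
    i2c: I2C,
    delay: D,
}

impl<I2C: I2c, D: DelayNs> Sht4x<I2C, D> {
    pub fn new(i2c: I2C, delay: D) -> Self {
        Self { i2c, delay }
    }

    /// Trigger a high-precision measurement and read back the raw 6-byte result.
    pub fn measure_raw(&mut self) -> Result<[u8; 6], I2C::Error> {
        self.i2c.write(0x44, &[0xFD])?; // measure T & RH, high precision
        self.delay.delay_ms(10);        // wait out the measurement time
        let mut buf = [0u8; 6];
        self.i2c.read(0x44, &mut buf)?;
        Ok(buf)
    }
}
```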
pjaros[m] has joined #rust-embedded
<pjaros[m]> A big "Hello" to the guy we met yesterday at LinuxDay.at in Dornbirn, who gave us a hint about what was going wrong with the SPI in our rust-nano-laby game, and told us that this place on Matrix exists for Rust embedded devs.
<JamesMunns[m]> <sirhcel[m]> "I'm asking because i'm looking..." <- > i don't see how to do this for a large amount of driver instances without running out of delay providers depending on hardware resources
<JamesMunns[m]> One answer is to use a shared timer implementation like embassy-time does, where there's a single "time driver" that is managing "time to the next deadline", where all the deadlines are on the same "timeline"
<sirhcel[m]> <JamesMunns[m]> "> i don't see how to do this for..." <- Thank you! Do you know if such implementations are generally available? Like for nrf52, rp2040, stm32, ...? Passing in a delay provider is clumsy, but it at least works for a larger number of driver instances out of the box with every hal, using at most a single hardware resource for the delay provider.
<thejpster[m]> I think rp-hal has copyable timers, so every driver can have its own copy
<JamesMunns[m]> sirhcel[m]: embassy-time supports all three of those targets, yeah
<thejpster[m]> Singleton timers can be annoying
<dirbaio[m]> Yeah the recommendation is for hals to allow cloning timers
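Roughly, the two sharing strategies mentioned here look like this, reusing the hypothetical Sht4x sketch from above. It assumes rp2040-hal's Timer really is Copy and implements DelayNs, and that embassy-time's Delay is available; the generic I2C parameters just stand in for whatever bus types the HAL provides.

```rust
use embedded_hal::i2c::I2c;

// Alternative 1: embassy-time's Delay is a zero-sized struct, so each driver
// simply gets its own instance.
fn with_embassy_time<A: I2c, B: I2c>(
    i2c_a: A,
    i2c_b: B,
) -> (Sht4x<A, embassy_time::Delay>, Sht4x<B, embassy_time::Delay>) {
    (
        Sht4x::new(i2c_a, embassy_time::Delay),
        Sht4x::new(i2c_b, embassy_time::Delay),
    )
}

// Alternative 2: a HAL timer that is Copy (rp2040-hal style) can be handed
// to each driver by value.
fn with_copyable_timer<A: I2c, B: I2c>(
    i2c_a: A,
    i2c_b: B,
    timer: rp2040_hal::Timer,
) -> (Sht4x<A, rp2040_hal::Timer>, Sht4x<B, rp2040_hal::Timer>) {
    (Sht4x::new(i2c_a, timer), Sht4x::new(i2c_b, timer)) // Timer is Copy
}
```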
<JamesMunns[m]> I suppose the timer queue/wheel is less important if you have blocking code, then likely you can just have them each set their own deadlines (or do their own polling). For async it matters quite a bit more because you want to wake tasks when their delay is up.
<sirhcel[m]> Thank you all for the references! I will look into them, at first with the focus on blocking code. Copyable timers would solve my issue right now, but at a first glance a lot of hals (nrf52840, stm32f3xx among them) implement them as non-Copy.
<sirhcel[m]> <dirbaio[m]> "Yeah the recommendation is for..." <- Are there recommendations/a checklist for e-h implementations?
<dirbaio[m]> some have in the docs, for example spi
<dirbaio[m]> we could add this to the Delay docs
<dirbaio[m]> PR welcome πŸ™ƒ
<sirhcel[m]> Thank you for the pointer! Added this to my list of PRs i wanna make. 😅
<JamesMunns[m]> https://www.segger.com/news/pr-240927-ozone-support-rust/ πŸ‘€πŸ‘€πŸ‘€
vollbrecht[m] has joined #rust-embedded
<vollbrecht[m]> though ozone is only included in j-link plus and not in the smaller ones right?
<dirbaio[m]> it works with the onboard jlinks, at least the ones on nrf's dks
<vollbrecht[m]> that means 1000 πŸ’΅ per probe πŸ’Έ
<dirbaio[m]> I played with it a long time ago when I still didn't know what Rust was
danielb[m] has joined #rust-embedded
<danielb[m]> come to the probe-rs side, we have cookies, and bugs 🫠
<vollbrecht[m]> well i am sure they will not support rust on xtensa anytime soon right? :D
<vollbrecht[m]> so probe-rs is still the superior tool here, while being not only free for non-commercial use and 1000$ cheaper but also fully open source
<danielb[m]> 970$ cheaper, you still need some probe :D
<vollbrecht[m]> esps come with that built-in stuff :p
<danielb[m]> they do? 🀯
<danielb[m]> okay I knew that
Noah[m] has joined #rust-embedded
<Noah[m]> you can get a probe for 2$ :D
<Noah[m]> which reminds me I need to do a resend batch :/
<Noah[m]> the brazil person with a 300 character address came back :P
<danielb[m]> actually since like today, probe-rs has better j-link clone support than ever, can those still work with segger software? :D
<Noah[m]> and a dozen others
konkers[m] has quit [Quit: Idle timeout reached: 172800s]