<re_irc>
<@john_socha:matrix.org> henrik_alser: Thanks. I'll work on making it more idiomatic, which of course means I have to learn what that looks like.
fabic has joined #rust-embedded
tokomak has joined #rust-embedded
<re_irc>
<@john_socha:matrix.org> eldruin: Thanks. That's something I wanted to do, so having the link you provided will help me get there.
GenTooMan has quit [Ping timeout: 240 seconds]
GenTooMan has joined #rust-embedded
dcz has joined #rust-embedded
neceve has joined #rust-embedded
fabic has quit [Ping timeout: 268 seconds]
fabic has joined #rust-embedded
fabic has quit [Ping timeout: 258 seconds]
<re_irc>
<@jduchniewicz:matrix.org> How welcome would having `Backtrace` in core be, compared to the current implementation in std? I am working on an RFC (https://github.com/rust-lang/rfcs/pull/3156) which tackles this and has a proposed implementation where you have to provide your own backtrace hook (akin to the panic one) or you will be provided with a compiler-generated one.
<re_irc>
<@jduchniewicz:matrix.org> The most important question is: where and how would backtraces be used in embedded contexts? So far we have discussed that they would probably be used for automated testing/bug reporting on platforms that can afford to allocate some memory for a backtrace and debug symbols. The discussion happens in this...
<re_irc>
<@eldruin:matrix.org> jduchniewicz: I think that discussion would benefit from having its own issue in the [WG repo](https://github.com/rust-embedded/wg) and probably also from discussion in our weekly meetings (Tuesdays 20:00h CEST)
<re_irc>
<@dirbaio:matrix.org> I imagine backtraces in core would add quite a bit of bloat if you need the whole unwinding code and symbol table with full strings
<re_irc>
<@jduchniewicz:matrix.org> they don't need to be symbolized at the time of capturing
<re_irc>
<@jduchniewicz:matrix.org> unwinding is a valid argument though
<re_irc>
<@jduchniewicz:matrix.org> also symbols could be stored in external memory and pulled in when needed
<re_irc>
<@jduchniewicz:matrix.org> citing nagisa from zulip:
<re_irc>
<@jduchniewicz:matrix.org> >With frame pointers enabled (which I believe is typically true on thumb) the unwinding is just chasing a pointer in a linked list. Along with storing the chain somewhere else that'd be a couple hundred bytes if written with care.
<re_irc>
<@jduchniewicz:matrix.org> so it seems like there is no big bloat entailed
<re_irc>
<@dirbaio:matrix.org> Ahh, just collect an array of PCs and ship it to the server
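The frame-pointer walk nagisa describes can be sketched on the host with a toy model. This is a hypothetical illustration, not a real unwinder: the `Frame` struct stands in for the saved-FP/return-address pair a thumb frame keeps on the stack, and all addresses are made up.

```rust
// Sketch (hypothetical, host-runnable) of frame-pointer unwinding.
// With frame pointers enabled, each stack frame stores the caller's frame
// pointer and a return address; capturing a backtrace is just chasing that
// linked list and collecting the PCs into a fixed-size array.
#[derive(Clone, Copy)]
struct Frame {
    prev: Option<usize>, // index of the caller's frame (stand-in for the saved FP)
    return_addr: u32,    // saved PC
}

/// Walk the frame chain starting at `top`, collecting up to N return addresses.
fn capture_backtrace<const N: usize>(frames: &[Frame], top: usize) -> ([u32; N], usize) {
    let mut pcs = [0u32; N];
    let mut len = 0;
    let mut cur = Some(top);
    while let Some(i) = cur {
        if len == N {
            break;
        }
        pcs[len] = frames[i].return_addr;
        len += 1;
        cur = frames[i].prev;
    }
    (pcs, len)
}

fn main() {
    // main -> f -> g: g's frame points back to f's, f's to main's.
    let frames = [
        Frame { prev: None, return_addr: 0x0800_0100 },    // main
        Frame { prev: Some(0), return_addr: 0x0800_0200 }, // f
        Frame { prev: Some(1), return_addr: 0x0800_0300 }, // g
    ];
    let (pcs, len) = capture_backtrace::<8>(&frames, 2);
    // The raw PCs can be shipped off-device and symbolized later.
    println!("{:08x?}", &pcs[..len]);
}
```

The key point from the discussion: no symbol table or unwind tables are needed on-device, only the array of PCs, which is symbolized later on the server.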
<re_irc>
<@dirbaio:matrix.org> "Rustfmt formats programs rather than files" 🤷‍♂️
<re_irc>
<@ryan-summers:matrix.org> I would comment on that issue with your use case. It seems to me like the person who closed the issue didn't really give it a lot of thought. The other alternative is to run rustfmt with all configurations
<re_irc>
<@ryan-summers:matrix.org> Which would likely be very un-fun
<re_irc>
<@ryan-summers:matrix.org> But I think rustfmt may actually not work if your code doesn't compile - it may very well _require_ that the rs files be part of the build to format them properly
<re_irc>
<@newam:matrix.org> it would be nice to have an `async` RNG trait, IIRC `rand_core` has no plans for `async`.
<re_irc>
<@lachlansneff:matrix.org> Really, they don't?
<re_irc>
<@lachlansneff:matrix.org> Huh
<re_irc>
<@newam:matrix.org> Hmmm, that might be me misremembering though, I can't find any recorded conversation of that, no mention of `async` on github.
<re_irc>
<@eldruin:matrix.org> haha nice turn of events. We are removing the rng traits in `e-h` 1.0 in favor of `rand_core`
<re_irc>
<@newam:matrix.org> I'll open up an issue asking what the plans are for `rand_core` with `async`.
<re_irc>
<@eldruin:matrix.org> awesome, I was going to suggest that
fabic has quit [Ping timeout: 258 seconds]
neceve has quit [Ping timeout: 258 seconds]
<re_irc>
<@dirbaio:matrix.org> What's the use case for async rng?
<re_irc>
<@dirbaio:matrix.org> HW rng is not that slow
<re_irc>
<@newam:matrix.org> fair point.
<re_irc>
<@newam:matrix.org> Let me get some numbers on the STM32WL RNG.
<re_irc>
<@dirbaio:matrix.org> And you can also seed a csprng from the hwrng and then use that, so you wait for cpu instead of for hw
<re_irc>
<@dirbaio:matrix.org> Not sure which would be faster
<re_irc>
<@newam:matrix.org> is that secure enough for cryptographic use? Not an expert in that area.
<re_irc>
<@dirbaio:matrix.org> Yeah it's secure as long as you seed it properly
<re_irc>
<@newam:matrix.org> huh, good to know!
<re_irc>
<@dirbaio:matrix.org> The CS in CSPRNG stands for Cryptographically Secure after all 😜
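The seed-once-then-generate pattern dirbaio describes, sketched in host-runnable Rust. Everything here is a stand-in: `hw_rng_block` fakes the blocking peripheral read, and SplitMix64 fakes the software generator. SplitMix64 is NOT cryptographically secure — a real driver would seed something like `ChaCha8Rng` from the `rand_chacha` crate — but the structure of the pattern is the same.

```rust
// Read seed material from the slow hardware RNG once, then run a software
// generator from it, so later draws only cost CPU time instead of waiting
// on the peripheral.
fn hw_rng_block() -> [u32; 4] {
    // Stand-in for the peripheral read that blocks ~400 cycles per block.
    [0xDEAD_BEEF, 0x0123_4567, 0x89AB_CDEF, 0x0BAD_F00D]
}

struct SplitMix64 {
    state: u64,
}

impl SplitMix64 {
    fn from_hw_seed(block: [u32; 4]) -> Self {
        let state = (u64::from(block[0]) << 32) | u64::from(block[1]);
        SplitMix64 { state }
    }

    fn next_u64(&mut self) -> u64 {
        // Standard SplitMix64 step (good statistical PRNG, not a CSPRNG).
        self.state = self.state.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.state;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }
}

fn main() {
    // One slow hardware read to seed...
    let mut rng = SplitMix64::from_hw_seed(hw_rng_block());
    // ...then every draw is pure CPU work, no peripheral wait.
    let sample: Vec<u64> = (0..4).map(|_| rng.next_u64()).collect();
    println!("{:016x?}", sample);
}
```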
<re_irc>
<@newam:matrix.org> Yeah the STM32WL RNG is pretty slow, 2662-2692 CPU cycles to generate one block of 4x u32, CPU at 48MHz, RNG at 48MHz (both max)
<re_irc>
<@newam:matrix.org> wait... not release mode, let me re-run
<re_irc>
<@newam:matrix.org> mmm same results
<re_irc>
<@lachlansneff:matrix.org> Is that instruction blocking?
<re_irc>
<@lachlansneff:matrix.org> Or rather, does it stall the cpu
<re_irc>
<@newam:matrix.org> Yup.
<re_irc>
<@newam:matrix.org> Though that is also the cold-boot numbers, it is a lot faster on the second block.
<re_irc>
<@newam:matrix.org> that's more reasonable, at steady state it goes at the advertised 412 cycles per block.
<re_irc>
<@newam:matrix.org> Still HW limited in that case though
<re_irc>
<@lachlansneff:matrix.org> How could it be made async in that case?
<re_irc>
<@newam:matrix.org> There's a hardware interrupt that fires when a new block of entropy is available.
<re_irc>
<@dirbaio:matrix.org> that's the one I'm using
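A rough sketch of how that interrupt maps onto async: the future checks a "data ready" flag; if it's not set, it returns `Pending` (a real driver would also stash the waker for the ISR to call), and the RNG interrupt sets the flag and wakes the task. All names are hypothetical, and the "interrupt" here is just a flag flipped from `main` so the sketch runs on a host.

```rust
use core::future::Future;
use core::pin::Pin;
use core::sync::atomic::{AtomicBool, Ordering};
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Set by the RNG "interrupt" when a fresh block of entropy is available.
static DATA_READY: AtomicBool = AtomicBool::new(false);

struct ReadBlock;

impl Future for ReadBlock {
    type Output = [u32; 4];
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<[u32; 4]> {
        if DATA_READY.swap(false, Ordering::AcqRel) {
            // In a driver this would read the RNG data registers.
            Poll::Ready([1, 2, 3, 4])
        } else {
            // A real driver would store _cx.waker() for the ISR to wake.
            Poll::Pending
        }
    }
}

// Minimal no-op waker so we can poll by hand without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(core::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(core::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = ReadBlock;
    // First poll: no entropy yet, the task would go to sleep here.
    assert!(Pin::new(&mut fut).poll(&mut cx).is_pending());
    // "Interrupt" fires: a block of entropy is ready.
    DATA_READY.store(true, Ordering::Release);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready([1, 2, 3, 4]));
    println!("got block");
}
```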
fabic has joined #rust-embedded
<re_irc>
<@dirbaio:matrix.org> There's also ChaCha8/ChaCha12, there's a [paper](https://eprint.iacr.org/2019/1492.pdf) claiming 20 rounds are overkill and 8 should be enough but I think it's somewhat debated (?)
<re_irc>
<@dirbaio:matrix.org> not sure if even chacha8 will beat 412 cycles/block
<re_irc>
<@dirbaio:matrix.org> and it certainly won't win in code size :P
fabic has quit [Ping timeout: 252 seconds]
<re_irc>
<@newam:matrix.org> CPU cycles to generate 1 block (`[u32; 4]`)
<re_irc>
<@dirbaio:matrix.org> when you want RNG for crypto networky shit you usually want only 32-64 bytes (so 2-4 blocks)
<re_irc>
<@dirbaio:matrix.org> and an async context switch probably takes more than 1000-2000 cycles
<re_irc>
<@dirbaio:matrix.org> so blocking is probably faster than async
<re_irc>
<@newam:matrix.org> dirbaio: I actually have not measured this yet; does an async context switch take longer than one in a normal RTOS?
<re_irc>
<@newam:matrix.org> I was guessing that it would be a lot faster since there is usually less to save/restore without preemption.
<re_irc>
<@dirbaio:matrix.org> I haven't measured anything, no
<re_irc>
<@newam:matrix.org> I guess this also depends on the executor and how feature-rich it is though. Adding task priority can eat up quite a few cycles.
<re_irc>
<@dirbaio:matrix.org> when awaiting the entire "call stack" returns, and has to be "called" again when polling the future again
<re_irc>
<@dirbaio:matrix.org> so for very deep call stacks it might be slower than rtos
<re_irc>
<@dirbaio:matrix.org> like
<re_irc>
<@dirbaio:matrix.org> if you have a deep future stack
<re_irc>
<@newam:matrix.org> This time with embassy.
<re_irc>
<@newam:matrix.org> Also for good measure the non-async version (running both functions after each other) is 4 cycles.
<re_irc>
<@dirbaio:matrix.org> 62 only? yay :D
<re_irc>
<@dirbaio:matrix.org> you're using the raw task API, the highlevel one is `#[embassy::task] async fn sample1() {..` then you can spawn them with `spawner.spawn(sample1_task())`
<re_irc>
<@dirbaio:matrix.org> but it should be equally fast
<re_irc>
<@newam:matrix.org> dirbaio: Ran it again with the changes, 62 cycles again.
<re_irc>
<@newam:matrix.org> That's rather impressive.
<re_irc>
<@dirbaio:matrix.org> :D
<re_irc>
<@dirbaio:matrix.org> it's the best case though, cycle count will increase as you nest futures
<re_irc>
<@newam:matrix.org> Yeah, it's a good benchmark for minimum overhead though.
<re_irc>
<@newam:matrix.org> I'm glad I measured, going to stay far, far away from that BTree executor.
<re_irc>
<@newam:matrix.org> Huh, 62 cycles is actually really good on the do-nothing benchmark scale.
<re_irc>
<@richarddodd:matrix.org> Right, I'm going on holiday next week. My holiday project is to write a blog post about why Rust embedded is so awesome, how it will be even more awesome in the future, and how accessible it is. Posting this here in the vain hope that it will make me feel accountable and therefore more likely to actually complete it.
<re_irc>
<@almindor:matrix.org> what's the process for fixing something in a PAC when the SVD is missing the part? e.g. in this case the reset value is not defined in the SVD, should I add it in code, or expand the SVD itself?
<re_irc>
<@newam:matrix.org> For the STM32 SVDs the `stm32-rs` repo contains a TON of patches to fix bugs in the vendor SVDs; you add it in a YAML file and it gets applied when the patched SVD is generated.
<re_irc>
<@newam:matrix.org> Find your device in that directory and go from there. There are some shared patches for things that impact multiple devices.
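For illustration, a patch for a missing reset value might look roughly like this in the svdtools YAML format that stm32-rs uses. The SVD path, peripheral, register, and value here are all hypothetical, not taken from a real device file:

```yaml
# Hypothetical stm32-rs/svdtools patch fragment: supply a reset value the
# vendor SVD left out. Names and values are examples only.
_svd: ../svd/mydevice.svd

RNG:
  _modify:
    CR:
      resetValue: "0x00000000"
```

The patched SVD is then regenerated and fed to svd2rust, so the fix lives in the YAML rather than in hand-edited PAC code.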
<re_irc>
<@firefrommoonlight:matrix.org> You need to edit the YAMLs using a domain-specific language. I'm new to it, but may be able to help if you have questions
<re_irc>
<@firefrommoonlight:matrix.org> As you've found out, the SVDs are full of errors
<re_irc>
<@newam:matrix.org> firefrommoonlight: datasheets too ;)
<re_irc>
<@newam:matrix.org> for the STM32WL there is a completely undocumented block-encryption feature; but they have code for it in their HAL which sets some registers to "reserved" values.
<re_irc>
<@firefrommoonlight:matrix.org> Often the reserved values are to keep parity with different MCUs, eg skipping features a given one doesn't have
neceve has quit [Ping timeout: 272 seconds]
<re_irc>
<@almindor:matrix.org> hmm is it expected that svd2rust will generate uncompilable code?
<re_irc>
<@almindor:matrix.org> I removed all the custom stuff from the update.sh in e310x so it just does the svd2rust, using latest version and it generated a reader with pub(crate) new that's unused
<re_irc>
<@almindor:matrix.org> oh wait, i had some ancient svd2rust version from pacman installed, nvm