<Kaspar[m]>
<thejpster[m]> "Nested SVCalls are working but I..." <- Nested SVCalls are a Cortex-R thing, right? Not possible on Cortex-M?
<thejpster[m]>
Not sure but Armv7-M and similar have the hardware push all state to the stack when entering an exception or interrupt handler (which is why they can be written in C or Rust). And there is no SVC mode, just Thread Mode and Handler Mode. So it would probably Just Work?
<thejpster[m]>
Here I have to push state manually, including the Saved Program Status Register, which is a register and not a FIFO, so it gets lost on re-entry.
<thejpster[m]>
The huge benefit of Cortex-R is you have an almost full second set of registers to handle FIQ, so the entry latency is only a couple of clock cycles.
<thejpster[m]>
If you’re tidy, nothing needs to be pushed to the stack at all.
<RobertJrdens[m]>
If you can warm up to the thought of the DSL being Rust, then I'd recommend [bitbybit](https://crates.io/crates/bitbybit) over the other Rust candidates for this use case.
<thejpster[m]>
ah ha ha ha ha - love that example in the README
<thejpster[m]>
(that's GICD_TYPER, the second register in the GICv3 Distributor, which I have been working with recently)
<thejpster[m]>
ok, bitbybit looks great. I'm going to move over to it and see how it works in practice.
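For anyone following along: crates like bitbybit generate field accessors so you don't write this by hand. A hand-rolled sketch of what that accessor logic looks like for a GICD_TYPER-style register (field positions per the GIC architecture spec; worth double-checking against the manual before relying on them):

```rust
// Manual field extraction for a GICD_TYPER-style register value.
// ITLinesNumber lives in bits [4:0] per the GIC architecture spec.
fn it_lines_number(typer: u32) -> u32 {
    typer & 0x1F
}

// The spec defines the maximum SPI INTID as 32 * (ITLinesNumber + 1) - 1.
fn max_spi_intid(typer: u32) -> u32 {
    32 * (it_lines_number(typer) + 1) - 1
}

fn main() {
    let typer = 0x0000_0007; // ITLinesNumber = 7
    assert_eq!(it_lines_number(typer), 7);
    assert_eq!(max_spi_intid(typer), 255);
    println!("max SPI INTID: {}", max_spi_intid(typer));
}
```

A bitfield crate replaces the shift-and-mask boilerplate above with declared bit ranges, which is the appeal.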
<n_vl[m]>
Is there a way to tell rust-analyzer to only build a specific bin with given features and ignore the rest?
hjeldin__[m] has quit [Quit: Idle timeout reached: 172800s]
towynlin[m] has joined #rust-embedded
<towynlin[m]>
I hadn't heard of bitbybit. It looks great! Thanks for sharing.
NishanthMenon has quit [Ping timeout: 260 seconds]
dnm has joined #rust-embedded
NishanthMenon has joined #rust-embedded
<adamgreig[m]>
ok, let's start! just one quick announcement from me, svdtools 0.4.2 was released this week
<adamgreig[m]>
and i'll repeat the message from before: if you're interested in coming to the embedded unconf at rustweek this may in the netherlands, please message me so I can add you to the list!
<adamgreig[m]>
or comment on the agenda or just say so here
<thejpster[m]>
PSA: I will be at precisely one Rust event this year, and that's only because my company is running it
<adamgreig[m]>
there is now some funding available to cover some transport/accommodation costs if you need it to attend, though we're not yet sure if the foundation will also provide anything
<adamgreig[m]>
ok, let's start then, burrbull are you around to talk about the svd2rust item?
<adamgreig[m]>
if not we can skip to thejpster for now and come back to it
<dirbaio[m]>
separate crates per peripheral? 💀
<thejpster[m]>
I'm not sure how splitting one crate into lots of crates helps build times.
<thejpster[m]>
you're either building all of them or none of them.
<thejpster[m]>
unless I suppose you're using codegen-units=1 and the build can't be parallelised
<cr1901>
I don't plan to use such an option myself
rmsyn[m] has joined #rust-embedded
<rmsyn[m]>
thejpster[m]: because each crate is a compilation unit, splitting it into multiple crates allows multiple cores to work on each crate (without a parallel front-end in rustc)
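The split being described would look something like this in a workspace manifest (crate names purely illustrative); cargo can then compile the peripheral crates on separate cores and recompile only the ones that changed:

```toml
# Hypothetical layout for a PAC split per peripheral.
[workspace]
members = [
    "pac",       # umbrella crate re-exporting the others
    "pac-uart",
    "pac-i2c",
    "pac-gpio",
]
```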
<cr1901>
Will the old behavior be unaffected (so I can still use separate generic.rs)?
<thejpster[m]>
will it have a code size / performance trade-off because the crate isn't being analysed as a single unit for optimisations? and does it matter if we're building with fat LTO anyway?
<rmsyn[m]>
thejpster[m]: I am not, and the PAC consumes 100% of a single core with other cores idle
<rmsyn[m]>
thejpster[m]: I'm not sure, definitely something to keep an eye on while developing, though. thanks for bringing it to my attention
<thejpster[m]>
(if anyone hasn't seen it cargo build --timings is a thing)
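For reference, the command produces an HTML report showing per-crate compile times and how well the build parallelised:

```shell
cargo build --timings
# report lands in target/cargo-timings/cargo-timing.html
```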
i509vcb[m] has joined #rust-embedded
<i509vcb[m]>
I get the vibe that for a crate per peripheral I'd probably want to see some numbers. Although that could also mean pain when publishing to crates.io
<dirbaio[m]>
and running against the crates.io ratelimits :D
burrbull[m] has joined #rust-embedded
<burrbull[m]>
I wanted to talk about moving generic.rs to a separate crate as a dependency for generated code, but now I understand that it is hard to do.
<rmsyn[m]>
i509vcb[m]: the idea is to have the crates be in the project workspace. a number of other crates take a similar approach, so I don't really see it being an issue
<adamgreig[m]>
we've talked about similar ideas before, but the problem is most mcus have a lot of peripherals, and there's a lot of mcus
<adamgreig[m]>
for the stm32 universe it would mean thousands+ of crates
<i509vcb[m]>
And don't forget stm32 with its dozen i2c
<adamgreig[m]>
like >100 svds with an average of maybe 50 peripherals each
<rmsyn[m]>
I think this might be the reasoning behind assigning a hash as the crate name: re-use of the same peripheral across SoCs/MCUs
<thejpster[m]>
an alternative solution - svd2rust could just generate less code?
<i509vcb[m]>
mspm0 is relatively small for now under that type of approach, but it could go off the rails if TI decides to do some funny stuff
<rmsyn[m]>
thejpster[m]: maybe, but I've attempted to minimize code generation with re-use in a PAC I maintain, and there's a relatively small number of peripherals compared to other PACs
therealprof[m] has joined #rust-embedded
<therealprof[m]>
thejpster[m]: Sounds great. Have ideas?
<rmsyn[m]>
I still experience the build pegging a single core at full use with the other cores idle
<burrbull[m]>
thejpster[m]: less docs?
<thejpster[m]>
there's a lot of trait implementations that don't appear to benefit the user. I wonder if it's faster to just generate the code rather than trait impls with lots of generics.
<adamgreig[m]>
if there's not many peripherals, i would have hoped compiling a single crate isn't too slow, the benefit is when there's loads of peripherals, but in that case there would also be loads of crates
<thejpster[m]>
I don't know if "simpler code" is faster to compile than "a bit less heavily generic code" or not.
TomB[m] has joined #rust-embedded
<TomB[m]>
What chiptool does is quite a bit faster, [@adamgreig:matrix.org](https://matrix.to/#/@adamgreig:matrix.org) ral is even faster still
<thejpster[m]>
I also don't know if svd2rust producing something that uses bitbybit works out cheaper or more expensive. I'd like to see some experiments before concluding that a 20x increase in the number of crates is the best solution.
<therealprof[m]>
thejpster[m]: Probably. Not sure it'll make a ton of difference but we could try.
<rmsyn[m]>
adamgreig[m]: the PAC I maintain has roughly 56 peripherals, with a good number of them being repeats (at different offsets) of the same peripheral type (uart0, uart1, etc.)
<burrbull[m]>
rmsyn[m]: derive them
<cr1901>
Isn't SVD supposed to take care of that?
<rmsyn[m]>
so, maybe 20-30 unique peripherals
<adamgreig[m]>
yea, repeats should all become the same thing
<thejpster[m]>
only if the SVD says they are the same thing
<adamgreig[m]>
if you're getting duplicate codegen for multiple instances of the same peripheral you might be able to fix that with some updates to the svd
<TomB[m]>
Vendor SVDs notoriously do not
<TomB[m]>
Gotta patch and do other things to merge peripheral nodes
<rmsyn[m]>
adamgreig[m]: maybe, however I would like the PAC to be able to split into multiple units of compilation, not peg a single core with the others doing nothing
<cr1901>
I don't particularly like this state of affairs, but isn't it par for the course that SVDs have to be patched?
<TomB[m]>
See imxrt-ral for one option I suppose
<TomB[m]>
Or just abandoning svd in some cases and mapping blocks as needed…
<burrbull[m]>
If peripherals are identical they definitely must be derived.
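In SVD terms, deriving means pointing repeated instances at one definition via `derivedFrom`, so the register block is only defined (and generated) once. A minimal sketch, with illustrative names and addresses:

```xml
<peripheral>
  <name>UART0</name>
  <baseAddress>0x40010000</baseAddress>
  <!-- full register definitions go here -->
</peripheral>
<peripheral derivedFrom="UART0">
  <name>UART1</name>
  <baseAddress>0x40011000</baseAddress>
  <!-- inherits UART0's registers; only the base address differs -->
</peripheral>
```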
<rmsyn[m]>
TomB[m]: this seems like the absolute last option to pursue. the PAC crate is foundational to a number of other crates, including ones I do not maintain
<thejpster[m]>
another option is to cfg out everything you don't need
<rmsyn[m]>
thejpster[m]: that's already possible, I'm looking for a more generic solution that works when users need everything in the PAC
<mabez[m]>
thejpster[m]: I'm not sure this really is an option, if the long term goal for any hal is to support all the peripherals :D
<thejpster[m]>
right, but the HAL could have feature flags too.
<adamgreig[m]>
I guess the hal could in theory re-export the features and only enable its drivers for the enabled peripherals, but yea, it's not amazing
<mabez[m]>
Yeah that's true
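The feature-gating approach under discussion would look roughly like this in a PAC manifest (feature names illustrative); a HAL can forward the same features to its drivers:

```toml
# One feature per peripheral, all on by default so existing
# users are unaffected; opting out trims what gets compiled.
[features]
default = ["all"]
all = ["uart", "i2c", "gpio"]
uart = []
i2c = []
gpio = []
```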
<rmsyn[m]>
dirbaio[m]: do you have any plans to upstream your work on chiptool into svd2rust?
<adamgreig[m]>
there was a lot of discussion about bringing svd2rust closer to chiptool at rustnl but not much work on it
<dirbaio[m]>
rmsyn[m]: no plans. chiptool's way of doing things is in some ways quite different and there's tradeoffs that some people don't agree with. it's not magic pixie dust you can sprinkle on svd2rust and make it 2-3x faster
<burrbull[m]>
dirbaio[m]: When have you measured last time?
<dirbaio[m]>
burrbull[m]: these numbers are from today.
<i509vcb[m]>
A lot of SVDs are written with only C in mind, so a lot of duplication is working around C not being ideal
<dirbaio[m]>
(nrf52840-pac is not built with the latest svd2rust though. I think the amount of code generated by svd2rust has grown recently, not shrunk?)
<thejpster[m]>
I just want to note there are four other agenda items
<rmsyn[m]>
right, that's been the main blocker on me choosing to use chiptool to generate the PAC I maintain. huge breaking changes, with little crossover with svd2rust. almost to the point where it would make more sense to release a separate `<pac>-chiptool` crate
<rmsyn[m]>
thejpster[m]: apologies, I am happy to continue discussion after the meeting, or in the tracking issue
<adamgreig[m]>
yea, let's move on to the other items now, thanks for the discussion all!
<adamgreig[m]>
thejpster, go for it with your 3
<thejpster[m]>
first - defmt 1.0 will release in a week. Please look at it and speak now if you see any issues. I really don't want to commit to stability and then have someone say "well, actually, you shouldn't have done it like this"
<cr1901>
I'll make a msp430 smoke test when I get the chance
<adamgreig[m]>
"oh this is unsound actually" two days after 1.0
<adamgreig[m]>
many such cases
<thejpster[m]>
ok, my second - I wrote a cortex-r crate
<thejpster[m]>
"cortex-r" is namesquatting and I've raised a ticket for it. Apologies the owner is here - I did reach out by email.
<thejpster[m]>
I don't really want to own this crate, I just needed to write some code for an actual Cortex-R52 chip and it made sense to kick out all the common stuff into another crate.
<thejpster[m]>
there's a cortex-r-rt crate too.
<thejpster[m]>
and I patched cortex-m-semihosting to support the AArch32/A32 and AArch64 instructions too. Unclear what I should do with this (arm-semihosting already exists, as does 'semihosting')
<thejpster[m]>
and it doesn't make sense for cortex-m-semihosting to support non-cortex-m chips. cortex-semihosting perhaps?
<dirbaio[m]>
<dirbaio[m]> "(nrf52840-pac is not built..." <- regenerated nrf5840-pac with latest svd2rust git. it takes 4.5s.
<adamgreig[m]>
cortex-semihosting seems OK. does arm-semihosting do everything cortex-m-semihosting does?
<adamgreig[m]>
or just semihosting I mean, arm-semihosting is defunct i think
<thejpster[m]>
I don't really want to make the commitments necessary to join a team. I would feel compelled to attend meetings and events.
<thejpster[m]>
I'm trying very hard not to do that which is why I left all my teams.
<adamgreig[m]>
yea, that's fair and understood
<adamgreig[m]>
maybe just keep it on your github then? or transfer it to cortex-r team with no members but add you as an external collaborator so you still have write/publish access?
vollbrecht[m] has joined #rust-embedded
<vollbrecht[m]>
semihosting is in part used in combination with probe-rs tests, and tmoe also seems to have committed to the semihosting crate
<vollbrecht[m]>
so it's still probably the best place to build on
<adamgreig[m]>
probe-rs uses the semihosting crate?
<thejpster[m]>
I'd rather donate my crate to the working group. My capacity to work on it is seasonal.
<vollbrecht[m]>
i think for the testing part e.g embedded-test
<thejpster[m]>
it's very useful when working with qemu, where it's instant (rather than the 500ms some debuggers take)
<adamgreig[m]>
currently the wg's capacity to maintain cortex-r stuff is 0, though
<adamgreig[m]>
unless/until someone comes along who's interested
<thejpster[m]>
right, but the WG can find more capacity more easily than I can find more capacity
<thejpster[m]>
you could go and ask the arm aarch64 target maintainers, for example
<thejpster[m]>
actually, please push that with the project because arm-targets shouldn't need to exist.
<thejpster[m]>
it's entirely reasonable for the compiler to tell me if I'm building for Armv8-R or Armv7-R, or Armv7E-M for that matter.
<thejpster[m]>
But rather than put the logic into every build.rs, I put it into a standalone crate
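The sort of logic a crate like that wraps can be sketched as a pure function over the target triple a build.rs reads from the `TARGET` environment variable (the mapping below is illustrative, not exhaustive):

```rust
// Classify the Arm architecture profile from a target triple.
// Illustrative sketch only; a real crate would cover more triples.
fn arm_profile(target: &str) -> Option<&'static str> {
    let arch = target.split('-').next()?;
    match arch {
        "thumbv6m" => Some("Armv6-M"),
        "thumbv7m" => Some("Armv7-M"),
        "thumbv7em" => Some("Armv7E-M"),
        "thumbv8m.base" | "thumbv8m.main" => Some("Armv8-M"),
        "armv7r" | "armebv7r" => Some("Armv7-R"),
        "armv8r" => Some("Armv8-R"),
        _ => None,
    }
}

fn main() {
    // In a real build.rs you'd read std::env::var("TARGET") and
    // emit cargo:rustc-cfg lines based on the detected profile.
    assert_eq!(arm_profile("thumbv7em-none-eabihf"), Some("Armv7E-M"));
    assert_eq!(arm_profile("armv8r-none-eabihf"), Some("Armv8-R"));
    assert_eq!(arm_profile("x86_64-unknown-linux-gnu"), None);
    println!("ok");
}
```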
<thejpster[m]>
where can I push that crate?
<burrbull[m]>
<dirbaio[m]> "regenerated nrf5840-pac with..." <- I've also tried to update. There is approximately 10% of time decrease. Not big, yeah.
<burrbull[m]>
burrbull[m]: But anyway it should be updated as many bugs have been fixed since 0.25
<adamgreig[m]>
if you want it in the wg, then maybe the libs team or cortex-m team
jokers[m] has joined #rust-embedded
<jokers[m]>
Hello, is anyone free to talk with me for some time?
<jokers[m]>
I want to talk to a developer.
<thejpster[m]>
it's cross-cutting across, A, R and M profiles.
<thejpster[m]>
you need an Arm team
<adamgreig[m]>
ok, but we don't have an arm team, or a cortex-r team, so that's why I suggested libs team
<dirbaio[m]>
teams in the wg could use some simplification
<dirbaio[m]>
there is a cortex-r team but it's empty
<thejpster[m]>
just rename cortex-m team to arm
<thejpster[m]>
or cortex
<thejpster[m]>
problem solved
<thejpster[m]>
fold cortex-a into it
<thejpster[m]>
my final item is - famously, you cannot run the Rust compiler test suite on bare-metal systems because the test suite runner requires libstd. Ferrocene has a solution for aarch64-unknown-elf and runs the full test suite on it for qualification purposes.
<thejpster[m]>
we have a working solution for thumbv7em-none-eabihf too.
<adamgreig[m]>
nice!
<adamgreig[m]>
does this mean you can qualify ferrocene for thumbv7em-none-eabihf?
<thejpster[m]>
in theory that would be possible. I cannot speak to the Ferrocene roadmap.
<thejpster[m]>
but hey, I'm going to Embedded World. Ask me about it in person.
<adamgreig[m]>
ok, well let's suggest you push to the cortex-r repo for now if you want to do that, the cortex-m team can have a vote on taking your arm-targets crate since it's probably the closest fit, and if there's interest we can discuss renaming cortex-m to cortex or arm and merging with cortex-a next week
<thejpster[m]>
ok, please add @jonathanpallant to cortex-r with the appropriate perms
<adamgreig[m]>
that's all for this week then, thanks everyone!
<adamgreig[m]>
we had one other agenda point from jannic about the svd repo having some unreviewed PRs, though it does also have plenty of recently merged PRs too, we can review next week
jason-kairos[m] has quit [Quit: Idle timeout reached: 172800s]
jistr_ has joined #rust-embedded
jistr has quit [Read error: Connection reset by peer]
SirWoodyHackswel has quit [Quit: Idle timeout reached: 172800s]
corecode[m] has quit [Quit: Idle timeout reached: 172800s]
ZachHeylmun[m] has joined #rust-embedded
<ZachHeylmun[m]>
Hello!
<ZachHeylmun[m]>
Just jumped back into an embedded project after a couple of years away and I wanted to thank all of you for the hard work and progress! The experience is pretty great.
<JamesMunns[m]>
Late (for the meeting) announcement: just released postcard-rpc v0.11.6 - this fixes an issue with bulk USB packet handling with NUSB, if you use postcard-rpc I suggest you update!
<JamesMunns[m]>
yeah, it got dropped a bit ago when I switched from usb-serial to raw bulk messages. There was still a host-side client, but I didn't provide an mcu-side server library
<JamesMunns[m]>
This time I'm bringing it back generic over embedded-io, so it should work with usb-serial or hardware uart (or whatever else implements embedded-io-async traits)
sirhcel[m] has joined #rust-embedded
<sirhcel[m]>
Looking forward to the official uart support. Did hack some for a sync RPC server on an MCU with some bespoke framing last summer. Everything else required more attention and there was always something else more urgent than cleaning this up. Or in other words: postcard-rpc is doing an awesome job there since then James Munns. ❤️
<JamesMunns[m]>
heh, "comms you don't have to think about" is exactly the hope :D
<sirhcel[m]>
It is just easy going and a joy to tweak the "protocol" compared to turning a knob in the legacy C part of the application.
<sirhcel[m]>
Well. Sounds like i should have a look at the latest release then.
<JamesMunns[m]>
(serial is still on a branch, but should be coming soon!)
<JamesMunns[m]>
Got sidetracked hunting this usb bug, and working on getting poststation out the door :)
<sirhcel[m]>
<JamesMunns[m]> "heh, "comms you don't have to..." <- It is. And mocking/hardware in the loop is essentially a no brainer on such a foundation. 😀
<JamesMunns[m]>
yeah, postcard-rpc has a "channel transport", which I use for testing
<JamesMunns[m]>
I might actually move it out from behind the test-utils feature tho, I've had some people interested in using it, and surprised it was hiding there
<JamesMunns[m]>
it just sends frames back and forth over tokio channels
<JamesMunns[m]>
It makes it REALLY easy to set up a "simulator" build that does fake/pretend things for testing
<JamesMunns[m]>
And on the host client side, the handle isn't generic over the transport, so your code works 1:1 as if you were talking over USB or UART
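The channel-transport pattern described above can be sketched with plain std channels. This is not the postcard-rpc API, just the underlying idea: frames travel over in-process channels instead of USB, so a simulated device can stand in for real hardware in tests.

```rust
use std::sync::mpsc;
use std::thread;

// "Device" logic: echo the frame back with a status byte prepended.
fn handle_frame(frame: &[u8]) -> Vec<u8> {
    let mut reply = vec![0x00]; // 0x00 = OK
    reply.extend_from_slice(frame);
    reply
}

fn main() {
    // Request and response frames move over in-process channels.
    let (to_device, device_rx) = mpsc::channel::<Vec<u8>>();
    let (to_host, host_rx) = mpsc::channel::<Vec<u8>>();

    // Simulated device task: process frames until the channel closes.
    let dev = thread::spawn(move || {
        for frame in device_rx {
            to_host.send(handle_frame(&frame)).unwrap();
        }
    });

    // Host side: send a request frame and await the reply.
    to_device.send(vec![1, 2, 3]).unwrap();
    drop(to_device); // closing the channel lets the device task exit
    let reply = host_rx.recv().unwrap();
    assert_eq!(reply, vec![0x00, 1, 2, 3]);
    dev.join().unwrap();
    println!("reply: {:?}", reply);
}
```

Because only the transport is swapped, host-side logic written against this loopback works the same once pointed at a real link.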
<JamesMunns[m]>
oh, and I have a branch with RTT transport too, I should get that in soon as well :D
<sirhcel[m]>
I will have a look at this too. This was what the heck whispered to me to give it a shot.
<sirhcel[m]>
I'm looking forward to these nice things. 😀
<JamesMunns[m]>
Poststation adds a lot of good tooling on top of that, as well :D
<sirhcel[m]>
Another thing on my to-look-into list. I have to get the current project over the finish line and I'm already keen to look into poststation as well.
<JamesMunns[m]>
Feel free to ping me if you want to test it out. Really hoping to have it public preview some time this week or early next week.