<takkaryx[m]>
are there videos that will be posted of talks from rustconf? I can't attend virtually because of work, wondering what I'm missing
AtleoS has quit [Ping timeout: 264 seconds]
AtleoS has joined #rust-embedded
sugoi has quit [Ping timeout: 264 seconds]
jsolano has quit [Quit: leaving]
jsolano has joined #rust-embedded
M9names[m] has joined #rust-embedded
<M9names[m]>
They post them on YouTube after a few months. Last year's were uploaded 7 months ago, for example.
dinkelhacker_ has joined #rust-embedded
<M9names[m]>
<SeanWykes[m]> "Hi, rust embedded newbie here ....." <- you can also ask anything here. if there's a better place for the question someone will point you towards an appropriate channel to get the best answer.
<therealprof[m]>
<SeanWykes[m]> "Hi, rust embedded newbie here ....." <- Here is good, but there's also a dedicated room: https://matrix.to/#/#rp-rs:matrix.org
<M9names[m]>
if it's embassy-rp related there's also #embassy-rs:matrix.org but we won't know until they ask the question ;)
ryan-summers[m] has quit [Quit: Idle timeout reached: 172800s]
starblue has quit [Ping timeout: 246 seconds]
starblue has joined #rust-embedded
dinkelhacker_ has quit [Quit: Client closed]
<thejpster[m]1>
<roklobsta[m]> "How's your French?" <- terrible
sugoi has joined #rust-embedded
RobertJrdens[m] has quit [Quit: Idle timeout reached: 172800s]
sugoi has quit [Ping timeout: 252 seconds]
dinkelhacker_ has joined #rust-embedded
sugoi has joined #rust-embedded
sugoi has quit [Ping timeout: 272 seconds]
RobertJrdens[m] has joined #rust-embedded
<RobertJrdens[m]>
<dirbaio[m]> "there's the `fn source() -> &dyn..." <- Do you have details on that? I'd love to understand that.
<RobertJrdens[m]>
I know how trait objects work and how they are implemented but have a hard time imagining why it should be "terrible". From what I've seen when experimenting with casts and trait objects on embedded (in `crosstrait`, `serde-erased` etc) it looked just fine.
<RobertJrdens[m]>
The one bloat case I can see now is trait objects preventing inlining of monomorphizations.
<d3zd3z[m]>
I also have similar feelings about alloc on embedded. Much of the fear around using malloc is because it is so hard to get right, and leaks are pretty devastating. But, Rust keeps most of that from being an issue.
mabez[m] has joined #rust-embedded
<mabez[m]>
d3zd3z[m]: Beyond leaks, heap fragmentation is a larger issue when the total pool is smaller, which it typically is on embedded - static memory means a static guarantee that the memory required is available at compile time
dodothattried[m] has joined #rust-embedded
<dodothattried[m]>
<d3zd3z[m]> "I also have similar feelings..." <- One part of the fear of alloc on embedded is that you would only find out about OOMs at runtime, probably only in the field, instead of at compile time when you ran out of RAM to statically allocate things
<dodothattried[m]>
dodothattried[m]: And sometimes that is a tradeoff you're willing to make, but a lot of the time it's not
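A rough sketch of the trade-off being described, assuming the heapless crate for the fixed-capacity case (any fixed-capacity collection behaves the same way):

```rust
#![no_std]
#[cfg(feature = "use-alloc")]
extern crate alloc;

use heapless::Vec as FixedVec;

// Fixed capacity: the 64-slot buffer is part of the type, so it is
// accounted for in the static/stack budget, and `push` fails loudly
// (returns Err) instead of growing at runtime.
fn collect_samples_static() -> FixedVec<u16, 64> {
    let mut v: FixedVec<u16, 64> = FixedVec::new();
    let _ = v.push(42);
    v
}

// Heap-backed: growth is decided at runtime, so whether there is room
// depends on the heap's state in the field, not on anything the
// compiler or linker checked. (Requires `alloc` plus a global allocator.)
#[cfg(feature = "use-alloc")]
fn collect_samples_heap() -> alloc::vec::Vec<u16> {
    let mut v = alloc::vec::Vec::new();
    v.push(42);
    v
}
```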
<d3zd3z[m]>
mabez[m]: Heap fragmentation is an issue, but it is also a bounded problem. I do understand the concerns, though. It's one of the reasons that I'm really trying to make sure that Rust on Zephyr is at least usable without allocation.
<M9names[m]>
bounded in which sense?
<d3zd3z[m]>
Given a set of allocations of a set of sizes, there is a worst case possible heap usage for that data.
<jonored>
if you _can_ just structure your code so it's provably not going to run out, might as well... and measuring your maximum stack is doable if your call graph is reasonable.
<d3zd3z[m]>
Even stack allocation can be a challenge in embedded. Right now, I've supported Zephyr's threads in Rust, but it requires you create stacks for every thread. I'm looking forward to an executor where something async might be able to do that on a single stack.
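The single-stack async shape being looked forward to here is roughly what embassy-style executors do today; a minimal sketch assuming embassy-executor and embassy-time (chip-specific setup and any Zephyr integration omitted):

```rust
#![no_std]
#![no_main]

use embassy_executor::Spawner;
use embassy_time::{Duration, Timer};
use panic_halt as _; // assumption: any panic handler crate

// Each task is a future; its state lives in statically allocated task
// storage, so many tasks share one stack instead of one stack per thread.
#[embassy_executor::task]
async fn heartbeat() {
    loop {
        // periodic work would go here
        Timer::after(Duration::from_millis(500)).await;
    }
}

#[embassy_executor::main]
async fn main(spawner: Spawner) {
    spawner.spawn(heartbeat()).unwrap();
    loop {
        Timer::after(Duration::from_secs(1)).await;
    }
}
```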
<d3zd3z[m]>
Generally, my allocations aren't "arbitrary", but bounded.
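One way to make "bounded" concrete is a fixed-block pool, where the worst case is simply N * BLOCK bytes and is known up front; a hypothetical sketch (not Zephyr's actual allocator API):

```rust
/// Hypothetical fixed-block pool: at most N live allocations, each of
/// exactly BLOCK bytes, so worst-case memory use is N * BLOCK and there
/// is no fragmentation between differently sized requests.
struct Pool<const N: usize, const BLOCK: usize> {
    storage: [[u8; BLOCK]; N],
    in_use: [bool; N],
}

impl<const N: usize, const BLOCK: usize> Pool<N, BLOCK> {
    const fn new() -> Self {
        Self { storage: [[0; BLOCK]; N], in_use: [false; N] }
    }

    /// Claim a block; returns a handle, or None when the pool is full.
    fn alloc(&mut self) -> Option<usize> {
        let idx = self.in_use.iter().position(|used| !used)?;
        self.in_use[idx] = true;
        Some(idx)
    }

    fn get_mut(&mut self, handle: usize) -> &mut [u8; BLOCK] {
        &mut self.storage[handle]
    }

    fn free(&mut self, handle: usize) {
        self.in_use[handle] = false;
    }
}
```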
<jonored>
I think you have to know you've caught all of the possible function pointer invocations to do that statically, from what I recall. I was in a context where proof was absolutely valuable enough to structure the program around.
<d3zd3z[m]>
jonored: Is there an expectation of being able to get rid of the function pointer invocations? Generally, the idea of dyn is to be able to do things with the general types.
<d3zd3z[m]>
Although, static analysis and elimination of function pointers is a pretty nifty optimization. But has it been implemented anywhere?
<jonored>
I mean with code changes, like not using dyn as much. That was doable in my case, but I was writing the code that walked the call graph and frame size data, so just adding the assumption that particular function pointer calls can be any of the function pointer definitions was doable too.
<d3zd3z[m]>
The other thing I'm discovering, working on the Rust on Zephyr stuff, and getting feedback, is that different people have very different ideas of what they want. There are plenty of people that don't want allocation, dyn, etc, and want more bare metal. Others are wondering why I'm not porting std to Zephyr.
<jonored>
I don't have general code for that static analysis, but connecting it up as a partial solution scoped to the particular code I was writing was very practical.
Ralph[m] has quit [Quit: Idle timeout reached: 172800s]
<jonored[m]>
I really wouldn't say that dyn is terrible for embedded, sometimes it's very helpful. Picking and choosing when and where you use it vs. monomorphization can be pretty important if you are close to resource limits, and inlining with large stack allocations has some very big implications as well.
<mabez[m]>
I think the original comment is being taken a bit out of context, I believe they meant libraries shouldn't force users into using dyn. Libraries using generic params give the option to the end user whether they want to use dyn trait or monomorphization. Forcing dyn in a core ecosystem crate like embedded-hal would be a bad idea, for example
<jonored[m]>
for inlining, llvm leans on its code for reusing memory between variables that aren't live at the same time to keep stack frames from expanding too much, but that code can't (as of a year and a half or so ago) pack two or more smaller variables into one big variable's allocation, and there isn't any notion of "these variables are out of scope entirely because they were from an inlined thing, so we can reuse that as a chunk". This
<jonored[m]>
hurts quite a bit if your initialization for some big structure inlines too much.
<jonored[m]>
Definitely agree that forcing dyn is not great. Forcing not-dyn could also be not great for embedded.
<mabez[m]>
but dyn Trait impls Trait so you can always pass that in as the generic param, I've done this before in numerous applications. The downside is that there is a type parameter involved but it's always dyn Trait instead of some placeholder type. Forcing dyn Trait means there is no way to opt out.
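A small illustration of that opt-in, with a hypothetical trait standing in for an embedded-hal-style one:

```rust
trait Led {
    fn toggle(&mut self);
}

// Library API written against a generic parameter: the caller decides
// between monomorphization and dyn.
fn blink<L: Led + ?Sized>(led: &mut L, times: u32) {
    for _ in 0..times {
        led.toggle();
    }
}

struct PinLed;
impl Led for PinLed {
    fn toggle(&mut self) { /* flip the GPIO here */ }
}

fn demo(led: &mut PinLed) {
    // Monomorphized: a dedicated copy of blink::<PinLed>.
    blink(led, 3);
    // Opting into dyn: L = dyn Led, one shared copy, indirect calls.
    blink(led as &mut dyn Led, 3);
}
```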
crabbedhaloablut has quit []
crabbedhaloablut has joined #rust-embedded
balbi[m] has joined #rust-embedded
<balbi[m]>
when will svd2rust generate a safe writer for a register? I have a "write-1-to-clear" register containing a single field that spans the entire register, and every combination of the 32 bits in the register is valid. Yet the writer only gives me bits() and no set().
<dirbaio[m]>
in my experience every single time I use dyn the code size ends up being higher than expected :P
<dirbaio[m]>
of course there's situations where you do need dyn, or where using dyn prevents monomorphization and therefore reduces code size. In those situations it's perfectly fine to use dyn in embedded, it's just that these are relatively rare
<dirbaio[m]>
perhaps "terrible" is a too strong word, but i'd definitely say dyn is "best avoided" in embedded
danielb[m] has quit [Quit: Idle timeout reached: 172800s]
<jonored[m]>
My experience has been that when you're near space limits on ram and code in embedded you really care about a balance between inlined and not-inlined, and function pointers (whether via vtable or not) are a valuable trick to have in the kit for shrinking code size. I was always writing the traits in the expensive path, though, so I might just have lived in a "relatively rare" area.
<dirbaio[m]>
if you want to force not inlining, you can do #[inline(never)]
<dirbaio[m]>
using function pointers as a way to force no inlining is a bit strange
<TomB[m]>
you can guide llvm on that front though, can't you, with code size to look at for inlining? the inline-threshold option or something along those lines?
<jonored[m]>
the problem is that inlining has some bad cases with large on-stack variables in llvm; getting the data is easy enough. And function pointers are to avoid monomorphization, not inlining, it's just kind of a related blob of stuff.
<jonored[m]>
(or perhaps more specifically, to let the code include only the version monomorphized to use a function pointer)
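That pattern (a thin generic shim that immediately erases to a vtable/function-pointer call, so only one real body ends up in flash) looks roughly like this sketch:

```rust
trait Sink {
    fn put(&mut self, byte: u8);
}

// Generic entry point: callers keep a convenient generic API...
fn write_all<S: Sink>(sink: &mut S, data: &[u8]) {
    // ...but we immediately coerce to dyn, so the loop below is compiled
    // exactly once instead of once per concrete Sink type.
    write_all_dyn(sink, data)
}

fn write_all_dyn(sink: &mut dyn Sink, data: &[u8]) {
    for &b in data {
        sink.put(b);
    }
}
```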
<jonored[m]>
it's "fun" trying to predict whether swapping #[inline(never)] to #[inline(always)] or the other way around will increase or decrease your peak stack and make the build fit in ram :)
<jonored[m]>
also, yes, llvm can produce the data you need to make those calls, it's just hard to get a feel for which changes will help or hurt, given the gap between normal optimization and the pathological inlined stack use you can get. Processing tens of kilobytes of data with 4.5 kilobytes of ram is fun.
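For the stack side of this, the #[inline(never)] trick mentioned above is typically used to fence off a large scratch buffer; a sketch (names and sizes made up):

```rust
// Keeping this out-of-line means the 512-byte scratch buffer only
// occupies stack space while this frame is live; if the body were
// inlined, the slot could stay reserved in the caller's (much deeper)
// frame for far longer than needed.
#[inline(never)]
fn checksum_block(block: &[u8]) -> u32 {
    let mut scratch = [0u8; 512];
    let n = block.len().min(scratch.len());
    scratch[..n].copy_from_slice(&block[..n]);
    scratch[..n].iter().map(|&b| u32::from(b)).sum()
}
```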
<AlexandrosLiarok>
Is there anything similar to the -mfpu flag for rustc/cargo ?
<AlexandrosLiarok>
Same for mcpu
<dirbaio[m]>
-mcpu is -Ctarget-cpu
<dirbaio[m]>
for -mfpu you typically specify whether you have a fpu by choosing the -eabi or -eabihf targets
<AlexandrosLiarok>
Yea but that is just the float-abi
<AlexandrosLiarok>
In my case I use both m4 and m7 cores, both are thumbv7em-none-eabihf but one has fpv4-sp-d16 fpu while the other has fpv5-d16
<dirbaio[m]>
you can enable those with -Ctarget-features
<dirbaio[m]>
be careful though that -Ctarget-cpu=cortex-m4 (or m7) can autoenable FPU stuff higher than what you have
<AlexandrosLiarok>
Is that even available? Hmm.
<dirbaio[m]>
in which case you have to disable them with -Ctarget-features=-fpv5-d16 or similar
<dirbaio[m]>
it's cursed
<AlexandrosLiarok>
Ah I guess I need to specify the target, otherwise the list of options changes
<AlexandrosLiarok>
Is there any way to get the enabled features when using the target-cpu ?
<dirbaio[m]>
good question, i'd like to know too :S
<AlexandrosLiarok>
Seems it is --print cfg
<AlexandrosLiarok>
.. I think
<AlexandrosLiarok>
But who knows, probably not.
<dirbaio[m]>
hmm it seems wrong. rustc --target thumbv7em-none-eabihf --print cfg doesn't print any of the vfp features
<thejpster[m]1>
The platform docs cover some of this. Please let me know if you find gaps in those docs.
sugoi has joined #rust-embedded
<AlexandrosLiarok>
So I /think/ I need target-cpu=cortex-m4, target-features=+vfp4d16sp for the m4
<AlexandrosLiarok>
And target-cpu=cortex-m7, target-features=+fp-armv8d16 for the m7 because the vfp5d16 is not yet supported
<AlexandrosLiarok>
But I think they are equivalent in terms of features from looking into llvm sources, dunno
<AlexandrosLiarok>
(for the stm32h745 dual core mcu)
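Since both cores share the thumbv7em-none-eabihf triple, one way to wire this up is to give each core's firmware crate its own .cargo/config.toml; a sketch using the flags from the discussion above (exact feature names depend on the LLVM version):

```toml
# .cargo/config.toml for the Cortex-M4 firmware crate
[build]
target = "thumbv7em-none-eabihf"

[target.thumbv7em-none-eabihf]
rustflags = ["-C", "target-cpu=cortex-m4", "-C", "target-features=+vfp4d16sp"]
```

The M7 crate would use target-cpu=cortex-m7 with +fp-armv8d16, or, per the earlier advice, a minus-prefixed feature to strip anything target-cpu auto-enabled that the silicon doesn't actually have.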
sugoi has quit [Ping timeout: 260 seconds]
vollbrecht[m] has quit [Quit: Idle timeout reached: 172800s]
i509vcb[m] has quit [Quit: Idle timeout reached: 172800s]
JamesMunns[m] has quit [Quit: Idle timeout reached: 172800s]
<AlexandrosLiarok>
Yea these fpu features are not yet supported apparently
starblue has quit [Ping timeout: 265 seconds]
<thejpster[m]1>
target-cpu automatically enables all possible cpu features.
<thejpster[m]1>
It’s LLVM policy
<thejpster[m]1>
So you generally only need to turn off what you don’t have
<thejpster[m]1>
I don’t know how you’re testing for features, but try nightly. I think stable hides a bunch
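To check what actually ends up enabled (per the nightly suggestion), something like the following; output details vary by toolchain version:

```console
# list the target features rustc/LLVM knows for this target
rustc +nightly --target thumbv7em-none-eabihf --print target-features

# show cfg values (including target_feature="...") for a given CPU
rustc +nightly --target thumbv7em-none-eabihf -C target-cpu=cortex-m7 --print cfg
```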