ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
dequbed has quit [Ping timeout: 240 seconds]
nadja has joined #rust-embedded
IlPalazzo-ojiisa has joined #rust-embedded
dandels has quit [Quit: ZNC 1.8.2 - https://znc.in]
AnandGedam[m] has quit [Quit: Idle timeout reached: 172800s]
K900 has quit [Quit: Idle timeout reached: 172800s]
Curid[m] has quit [Quit: Idle timeout reached: 172800s]
Maebli[m] has quit [Quit: Idle timeout reached: 172800s]
BenoitLIETAER[m] has quit [Quit: Idle timeout reached: 172800s]
ivmarkov[m] has joined #rust-embedded
<ivmarkov[m]> Wondering if the maintainers of `heapless` would be OK with introducing something like `unsafe fn heapless::Vec::push_init<F: FnOnce(&mut MaybeUninit<T>)>(&mut self, f: F) -> Result<(), ()>`? The use case is in-place construction (i.e. avoiding initializing `T` on the stack first when `T` is really large). Am I missing something obvious there, or am I the only one operating on `Vec`s with really large elements and hence having this problem?
<ivmarkov[m]> swap_remove is not ideal either, as it materializes the `T` being removed on the stack so that it can be returned. Not sure if Rust would optimize this away in all cases (as in when this `T` is not really used). Probably not.
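For illustration, a minimal sketch of the kind of in-place construction being proposed (the helper name and shape are invented here; heapless has no such API today):

```rust
use core::mem::MaybeUninit;

/// Construct a large `T` directly inside caller-provided uninitialized
/// storage, avoiding an on-stack temporary. A hypothetical
/// `heapless::Vec::push_init` would do the same against the Vec's own buffer.
///
/// SAFETY: `init` must fully initialize `slot` before returning.
unsafe fn init_in_place<T, F: FnOnce(&mut MaybeUninit<T>)>(
    slot: &mut MaybeUninit<T>,
    init: F,
) -> &mut T {
    init(&mut *slot);
    // SAFETY: guaranteed by the caller's contract on `init`.
    unsafe { slot.assume_init_mut() }
}
```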
ryan-summers[m] has joined #rust-embedded
<ryan-summers[m]> <ivmarkov[m]> "Wondering if the maintainers of..." <- Generally very large objects are statically allocated in embedded systems, so something like a slice or array is usually sufficient, which is why I suspect this hasn't been a primary use case yet. As a workaround, you could just directly make a `Vec<MaybeUninit<T>>` and then push a `MaybeUninit::uninit()` into it, then reference it from the vec directly to initialize it
<ryan-summers[m]> But I imagine this API is very similar to what you're proposing anyways
<ivmarkov[m]> ryan-summers[m]: Yes of course what you suggest is doable. But the whole point was NOT to sprinkle all my Vec element **access** code with `unsafe { vec[index].assume_init_mut() }`. But rather, to only pay the `unsafe` price when pushing (and initializing) an element into the vec. And eventually when dropping it.
<ivmarkov[m]> And there are even more problems with what you suggest. ^^^ As in who is going to run the drop fn on the elements, given that they are MaybeUninit? :D
<ryan-summers[m]> What do you mean by "paying the unsafe price"? There's no cost to unsafe, it is just a point to keep eyes on your code
<ivmarkov[m]> That's what I mean by "price".
<ryan-summers[m]> Out of curiosity, what are you trying to do that requires such large objects?
<ryan-summers[m]> Or is it just that your platform has a small stack space?
<ivmarkov[m]> `rs-matter`? A fabric in `rs-matter` is > 1K. Depending on how you operate on these objects, you might end up allocating even > 2K on-stack (due to how Rust generates the state machines for `async` code, which is not really very well optimized)
<ivmarkov[m]> ... and others too. As in a Matter session can also end up quite large.
<ryan-summers[m]> Could you possibly just declare a single object at the static scope that you use for initialization to keep it off the stack?
<ryan-summers[m]> Then push that into the vec, which would do a copy
<ryan-summers[m]> Although you mention async, which would complicate that
<ryan-summers[m]> I guess I'm also a bit confused why 1K on the stack is killing you if you're dealing with an application protocol that uses such large objects - I'd expect the chip you're using to not really care about a 1KB stack allocation
<ryan-summers[m]> Are you hitting stack overflows, or is this something you're trying to optimize?
<ivmarkov[m]> I don't follow, sorry. I can always do `const FABRIC: Fabric = Fabric::new_empty()` and then push `FABRIC` into the vec, if that's what you mean. BUT - you have no guarantee that `vec.push(FABRIC)` will be optimized by the compiler. In most opt settings it will be, but not in all of them.
<ryan-summers[m]> ivmarkov[m]: There's no guarantee that anything will ever not do temporary stack allocations though
<ivmarkov[m]> There is: for MaybeUninit. These are always optimized.
<ryan-summers[m]> You can just look at the result in the end. The only way to guarantee no allocations would be to use statically allocated data or use references etc
<ryan-summers[m]> At least that's as far as I understand it - I could be wrong in that some things guarantee no copies
<ryan-summers[m]> But again: Are you doing this to essentially optimize your stack usage prophylactically, or are you doing this in response to observed stack overflows? Those are kind of two different beasts in my opinion.
<ivmarkov[m]> Alright - thanks for your suggestions, but in the end I still believe a push variant that allows the user to do in-place init of the `MaybeUninit` which is anyway inside the vec (if you look at the vec impl - it is all about `MaybeUninit`s) has value.
<ryan-summers[m]> Yeah I don't disagree - feel free to open an issue
<ivmarkov[m]> ryan-summers[m]: If you are suggesting that I might be optimizing prematurely - no I'm not doing that.
<ryan-summers[m]> Just trying to help you find workarounds
<ryan-summers[m]> You can always fork and add it yourself for now
<ivmarkov[m]> Yes, sure. The question was whether the maintainers of heapless would find this beneficial. So that I know whether eventually such an extension can be upstreamed (in case any of them is listening in here). Rather than me inventing my own vec variant or whatever.
<ivmarkov[m]> (Or whether I'm somehow missing the elephant in the room and it is somehow there / or a similar thing is there. Not that I did not look like 10 times.)
<JamesMunns[m]> One possible idea here is to use something like heapless::pool instead
<JamesMunns[m]> so your "big things" live in an actual allocation pool, maybe something that you can alloc in-place
<JamesMunns[m]> then you can have something that is morally equivalent to `Vec<Box<Fabric>>`
<JamesMunns[m]> fair warning: I solve a lot of problems by writing custom allocators, and that's what I'm kinda proposing here.
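As a concrete (and deliberately simplified) illustration of the custom-pool direction, here is a sketch of a fixed-slot pool built from plain core types; the type and method names are invented, there is no free/drop path, and it is not heapless::pool:

```rust
use core::cell::UnsafeCell;
use core::mem::MaybeUninit;
use core::sync::atomic::{AtomicBool, Ordering};

/// Minimal fixed-slot pool sketch. All fields start zeroed or uninitialized,
/// so a static of this type lands in .bss rather than needing a flash image.
struct SlotPool<T, const N: usize> {
    used: [AtomicBool; N],
    slots: [UnsafeCell<MaybeUninit<T>>; N],
}

// SAFETY: each slot is handed out exclusively via the `used` flags.
unsafe impl<T: Send, const N: usize> Sync for SlotPool<T, N> {}

impl<T, const N: usize> SlotPool<T, N> {
    /// Claim a free slot and initialize it in place via `init`
    /// (which must fully initialize the `MaybeUninit<T>` it is given).
    fn alloc_with(&self, init: impl FnOnce(&mut MaybeUninit<T>)) -> Option<&mut T> {
        for (flag, slot) in self.used.iter().zip(self.slots.iter()) {
            if flag
                .compare_exchange(false, true, Ordering::AcqRel, Ordering::Relaxed)
                .is_ok()
            {
                // SAFETY: the claimed flag gives us exclusive access to this slot.
                let slot = unsafe { &mut *slot.get() };
                init(&mut *slot);
                // SAFETY: `init`'s contract says the slot is now initialized.
                return Some(unsafe { slot.assume_init_mut() });
            }
        }
        None
    }
}
```

A `Vec` of handles (or of `&'static mut Fabric`) built on top of something like this gets you the "morally equivalent to `Vec<Box<Fabric>>`" shape mentioned above.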
<ryan-summers[m]> Yeah I actually think that's what I did for an ethernet PHY ring, let me find the reference
<ryan-summers[m]> Ah ended up ripping out the pool alloc for stack based alloc, nevermind. It's hidden somewhere in Git history now
<ryan-summers[m]> But using a pool + Vec<Box<>> does sound like a nice idea here
<ivmarkov[m]> JamesMunns[m]: Doing this has its own benefits, I agree. Like not polluting your `rodata` with BS values if the `vec` is const-newable. But... am I blind, or is the `Box` in `heapless::pool` subject to the same issues we are discussing here?: https://github.com/rust-embedded/heapless/blob/main/src/pool/boxed.rs#L161
<ivmarkov[m]> ivmarkov[m]: ... in that putting the value in the `Box` will also potentially go via an intermediate stack allocation?
<JamesMunns[m]> ivmarkov[m]: yeah, it likely is. Again, you'd probably want to have an "in place constructor".
<ivmarkov[m]> ivmarkov[m]: I don't see all the magic of the recent `alloc` module, like `new_uninit`, `assume_init` and so on...
<ivmarkov[m]> ivmarkov[m]: (... seems like `heapless::pool` is also lagging behind in this regard)
<JamesMunns[m]> If I were to do this now, I would probably:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/WgOCVrZdvZWKnToWxAPYpMKI>)
<JamesMunns[m]> I admit, I've written a lot of my own pool allocators, I don't actually use heapless::pool very often.
<ivmarkov[m]> JamesMunns[m]: But that's the point: unless `heapless::pool` has at least _some_ facilities to help me implement this "in-place constructor", I'm hitting a dead-end again.
<ivmarkov[m]> ivmarkov[m]: And it does not have those?
<JamesMunns[m]> ivmarkov[m]: It probably does not! You are right!
<JamesMunns[m]> JamesMunns[m]: if heapless were to add "uninit"/"in place constructors", I think it would be better to do it on pool than vec
<ivmarkov[m]> ivmarkov[m]: Sorry, do not mean to argue for the sake of arguing - really asking as I might be missing something...
<ivmarkov[m]> ivmarkov[m]: Hmmm, why not on both?
<JamesMunns[m]> ivmarkov[m]: I'm honestly not sure if all the methods on vec don't require stack storage. For example, if you sort things
<ivmarkov[m]> ivmarkov[m]: Adding it on `vec` should be just fine I think. And on `LinearMap` too. Of course - where do we stop, but oh well, this stuff is viral anyway...
<JamesMunns[m]> JamesMunns[m]: vec *probably* assumes it's not a problem to have at least one "extra" slot on the stack temporarily
<ivmarkov[m]> ivmarkov[m]: I just looked at `swap_remove` - it does intelligent stuff, like `ptr::copy`. So putting aside that it materializes on-stack the element you are removing, the rest is fine. Let me look at sort...
<JamesMunns[m]> ivmarkov[m]: sort is probably the stdlib unstable sort on slices
<ivmarkov[m]> ivmarkov[m]: No `sort` in `heapless::Vec`! :D
<JamesMunns[m]> ivmarkov[m]: yeah, it's available on any type that derefs to a slice
<ivmarkov[m]> ivmarkov[m]: But that's OK. As I can avoid the generic `core` sort. I can't avoid pushing large stuff into the vec though. And ditto for boxing large stuff.
<vollbrecht[m]> ivmarkov[m]: To have more eyes on that topic, maybe something to put on next week's meeting agenda?
<JamesMunns[m]> vollbrecht[m]: btw, anywhere in the std sort (or elsewhere) you see "ptr::swap", it incurs an extra stack copy: https://doc.rust-lang.org/stable/src/core/ptr/mod.rs.html#939-954
<ivmarkov[m]> <JamesMunns[m]> "If I were to do this now, I..." <- (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/uOdrBWkYGPSKITdtSIWrDjwx>)
<ivmarkov[m]> James Munns: thanks for the custom pool idea. The more I think about it, the more I'm warming up to it. Particularly because it solves the issue where the `Matter` object currently takes too much flash space (~ 50K) because all large structures are in-place, inside the various vecs, thus polluting the `Matter` object with large `MaybeUninit`s which do not need to be in flash in the first place...
<ivmarkov[m]> (`Matter` is `const`-newable if that's not clear and thus usually lives in `ConstStaticCell`)
andresovela[m] has quit [Quit: Idle timeout reached: 172800s]
<JamesMunns[m]> fwiw, I'm open to figuring out if any reusable, somewhat unsafe, building blocks are possible to develop/share. grounded has a lot of the basic building blocks you might need already, and like I said I've built a lot of my own custom pool allocs before.
<ryan-summers[m]> What's the benefit of writing your own custom pool as opposed to heapless::pool?
<JamesMunns[m]> https://docs.rs/grounded/latest/grounded/uninit/struct.GroundedArrayCell.html is essentially the "pool storage", you could do a lot of productive damage with:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/nbLPfcltVUBqKqIWcVXEAUBb>)
<JamesMunns[m]> ryan-summers[m]: IMO: the ability to customize. Maybe it's possible to have heapless::pool cover a lot of this already tho!
<JamesMunns[m]> The goal for grounded, IMO, is to have a sound "foundation" for when you need to do custom, unsafe things.
<diondokter[m]> Does anyone know a crate with which I could store a `dyn Trait` with a max size? Something like `FixedBox<dyn Foo, 128>`?
<JamesMunns[m]> I've seen bump allocators that can do that, not pool allocators tho
<ivmarkov[m]> <JamesMunns[m]> "https://docs.rs/grounded/latest..."; <- James Munns: Sorry to bother. ^^^ unfortunately does not solve the "flash memory pollution" problem. What I mean is, since `GroundedArrayCell` is allocated in-place, and since `Pool` is `const`-able, the above code will materialize the `Pool` is flash ram the moment I declare a `const` of type `Pool`. Which is one of the two reasons I warmed up to your "use pools" idea. It
<ivmarkov[m]> seems to me the only way to avoid it is if `Pool` is not `const`-able, but instead takes a `&mut GroundedArrayCell` on construction (or any other `MaybeUninit<[T, N]>` wrapper. Unfortunately, that means a lifetime on `Pool`, as in `Pool<'a, T, const N: usize>` and because the lifetime is for a `&mut` reference, it is invariant, meaning when I borrow the `Pool` _itself_ with `&'a Pool<'r, T, N>` the `'r` lifetime cannot be (as it
<ivmarkov[m]> is covariant) replaced with `'a` so my `Matter<'a>` will become `Matter<'a, 'r>` which is kinda gross. That is, unless `'r` is `'static`.
<ivmarkov[m]> ivmarkov[m]: I feel I'm turning in circles :D
<diondokter[m]> Well, not really looking for an arena. I just need the one box :P
<diondokter[m]> I guess the box carries its own arena with it?
<JamesMunns[m]> ivmarkov[m]: ivmarkov GroundedArrayCell is maybeuninit, it will end up living in `.bss` not `.data`. If you allocate one `T` at a time, you're right it's likely there will be *one* `T` in flash as an initializer
<JamesMunns[m]> JamesMunns[m]: Maybe I don't understand your use case. I'm assuming your pool will start empty, and be populated one at a time at runtime
<ivmarkov[m]> ivmarkov[m]: Hmmmmmmmm... but _because_ `Pool` itself is not maybeuninit, the whole `Pool` will end up in flash though! Together with `GroundedArrayCell`
<JamesMunns[m]> ivmarkov[m]: I don't think so, if you init the `AtomicBool`s a `false` to start.
<diondokter[m]> Basically, we're using TAIT now, but we want to go to stable. Gotta store a generated future somewhere
<JamesMunns[m]> JamesMunns[m]: if all the fields are "zero or uninit", it will end up in `.bss`, with no static initializer.
<ivmarkov[m]> ivmarkov[m]: Uh-oh. I think I get it. And rustc absolutely does that, we just have to be super-careful not to end up with something which is neither zero nor maybeuninit, for the magic to work?
<JamesMunns[m]> JamesMunns[m]: You can more aggressively guarantee this, wrapping it one more layer of uninit, or making the AtomicBools also `GroundedArrayCell<AtomicBool, N>`
<JamesMunns[m]> JamesMunns[m]: But yes, if you accidentally add a field that doesn't start life zeroed or uninit, the whole thing will move from `.bss` to `.data`
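A tiny example of the placement rule being described (section placement ultimately depends on the target and linker script, so treat this as a rough illustration; the inline-const array syntax needs Rust 1.79+):

```rust
use core::mem::MaybeUninit;
use core::sync::atomic::AtomicBool;

// Everything here starts zeroed or uninitialized, so the linker can keep it
// in .bss: zero-filled at startup, no initializer image stored in flash.
// (Real code would wrap the MaybeUninit in UnsafeCell/GroundedCell to
// actually mutate it.)
static POOL_FLAGS: [AtomicBool; 8] = [const { AtomicBool::new(false) }; 8];
static POOL_SLOTS: MaybeUninit<[u8; 1024]> = MaybeUninit::uninit();

// A non-zero initializer is enough to move a static into .data, which needs
// a flash-resident copy that the runtime memcpy's into RAM at boot; one such
// field inside a struct drags the whole struct along with it.
static MAGIC: u32 = 0xDEAD_BEEF;
```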
<JamesMunns[m]> diondokter[m]: https://docs.rs/static-alloc/latest/static_alloc/leaked/struct.LeakBox.html is what I've used before, if you only need one dyn future ever, but it doesn't give you a way to free it.
<diondokter[m]> Hmmm interesting. But I probably need to drop it at some point
<JamesMunns[m]> diondokter[m]: then I would not recommend a bump allocator :)
<JamesMunns[m]> JamesMunns[m]: I could imagine how to write what you want, and I think it's possible stably, I don't have a crate for it today tho.
<diondokter[m]> Yeah, I started writing it myself, but I'd much rather use a vetted crate for that
<JamesMunns[m]> diondokter[m]: So many problems can be solved by writing an allocator!
<JamesMunns[m]> (the problem: I keep writing custom allocators)
<diondokter[m]> <diondokter[m]> "This looks promising: https://..." <- Ah yes, this works!
<diondokter[m]> Never saw this crate before. It was started 9 years ago!
<diondokter[m]> They give this example:
<diondokter[m]> It's been updated to use const generics too (the example uses generic-array or something)
FrreJacques[m] has joined #rust-embedded
<FrreJacques[m]> After looking at crates.io for open names, I figured out that there are already sophisticated drivers for the two hobby electronics devices, which I had not noticed yet. Should have done that earlier instead of just a web search. Now I am a bit demotivated to start my noob attempts :(.
<FrreJacques[m]> OTOH it's still a good start I guess. So maybe I will go for it anyway.
<ryan-summers[m]> Maybe you can look at async vs. non-async versions too. FWIW this is a pretty common problem, and just because one exists doesn't mean a new one couldn't
<FrreJacques[m]> I think they are not async. But I have no real clue of async yet. I guess with only a single core mcu at hand, it's maybe not a good starting point. But maybe I am wrong on that.
<ryan-summers[m]> Async works fine with a single core :)
<JamesMunns[m]> (async works REALLY well on single core, it's a way to juggle multiple things with a single CPU)
<ryan-summers[m]> Check out embassy or RTIC
<dirbaio[m]> <diondokter[m]> "Basically, we're using TAIT now,..." <- Also there's https://github.com/microsoft/stackfuture
<diondokter[m]> dirbaio[m]: Oh nice! That seems to do the job too
<dirbaio[m]> But these incur the overhead of vtables vs TAIT. Maybe you can try refactoring the code so you never have to store the future, or use generics to avoid having to name it..?
<FrreJacques[m]> Ah, wasn't sure about that. So if I wrote an async driver for a display, I could write stuff and acquire new data in parallel.
<diondokter[m]> dirbaio[m]: I don't think I can make that work in my setup. I need to store the future in a struct somewhere
<dirbaio[m]> IME it's extremely rare to have to name future types (other than within the executor)
<diondokter[m]> Oh and it can't be generic
<dirbaio[m]> What does that future do?
<dirbaio[m]> Why is it so special? :D
<dirbaio[m]> (if you can share ofc)
<diondokter[m]> So there's an async fn that needs to be executed to change power level. That's fine on its own.
<diondokter[m]> But we have an extension trait for futures so you can do `fut.with_power(Power::High, &power_domain_handle).await` It will read the current power level, change it to the requested one, run `fut` and change the power back.
<diondokter[m]> To make this all work we needed a lot of juggling, since every power domain has its own async closures to set the power
<diondokter[m]> The PoweredFuture doing this needs to call an async function on the domain handle
<diondokter[m]> Maybe the PoweredFuture could just be an async block? Not sure. Anyways, I got it working now :P
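A rough sketch of the wrap-a-future-with-power-management pattern being described, written as a free async fn for simplicity (all names here are invented; the real code uses an extension trait and per-domain handles):

```rust
use core::future::Future;

#[derive(Clone, Copy)]
pub enum Power {
    Low,
    High,
}

/// Something that can report and change its power level (illustrative only).
pub trait PowerDomain {
    fn current_level(&self) -> Power;
    async fn set_level(&self, level: Power);
}

/// Raise the domain to `level`, run `fut`, then restore the previous level.
pub async fn with_power<D, F>(fut: F, level: Power, domain: &D) -> F::Output
where
    D: PowerDomain,
    F: Future,
{
    let previous = domain.current_level();
    domain.set_level(level).await;
    let out = fut.await; // the wrapped work runs at the requested power level
    domain.set_level(previous).await;
    out
}
```

An extension-trait method like `fut.with_power(...)` can be a thin wrapper over an async block doing exactly this; note that cancelling the future before completion would skip the restore step, which is part of why the real implementation needs more juggling.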
<FrreJacques[m]> <ryan-summers[m]> "Check out embassy or RTIC" <- So if I understood right, there is a completely separate ecosystem around embassy, right? So instead of using stm32 hals one would use embassy-stm32. And in order to make a compatible driver one would write the methods of the driver struct as embassy tasks etc?
<dirbaio[m]> > So instead of using stm32 hals one would use embassy-stm32
<dirbaio[m]> both the stm32xxx-hal crates (from the stm32-rs project), and the embassy-stm32 crate from the embassy project, are HALs.
<dirbaio[m]> you can use either with embassy-executor, they're not tightly coupled.
<dirbaio[m]> (and you can use either with RTIC too, even)
<dirbaio[m]> but stm32xxxx-hals support only blocking, while embassy-stm32 supports both blocking and async. so if you want to use async you want embassy-stm32
<dirbaio[m]> > And in order to make a compatible driver one would write the methods of the driver struct as embassy tasks etc?
<dirbaio[m]> no. both embassy-stm32 and stm32xxx-hals implement the embedded-hal traits. existing drivers are already compatible with both.
<ryan-summers[m]> You do not get locked in to an ecosystem :)
<ryan-summers[m]> I use embassy-futures in an RTIC project for example
<dirbaio[m]> (additionally embassy-stm32 implements the embedded-hal-async traits too, so a driver written for embedded-hal-async will work on embassy-stm32 (and all other hals that support async), but not stm32xxxx-hal)
<FrreJacques[m]> So in order to write an async driver I don't make use of embassy nor RTIC. Instead I use embedded-hal-async traits. And then I could use this driver asynchronously within a firmware that uses embassy or RTIC?
<JamesMunns[m]> Same with embedded-hal traits for non-async stuff!
<dirbaio[m]> yup! 🚀
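To make the "drivers target the traits, not a HAL" point concrete, here is a minimal sketch of a portable async driver written against embedded-hal-async (the sensor, its register 0x0F, and the struct names are made up):

```rust
use embedded_hal_async::i2c::I2c;

/// Toy driver: generic over any bus implementing the embedded-hal-async
/// `I2c` trait, so it works with embassy-stm32, embassy-rp, or any other
/// HAL that implements the async traits.
pub struct MySensor<I2C> {
    i2c: I2C,
    addr: u8,
}

impl<I2C: I2c> MySensor<I2C> {
    pub fn new(i2c: I2C, addr: u8) -> Self {
        Self { i2c, addr }
    }

    /// Read a (hypothetical) WHO_AM_I register at 0x0F.
    pub async fn who_am_i(&mut self) -> Result<u8, I2C::Error> {
        let mut id = [0u8; 1];
        self.i2c.write_read(self.addr, &[0x0F], &mut id).await?;
        Ok(id[0])
    }
}
```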
Rustnoob[m] has joined #rust-embedded
<Rustnoob[m]> Hi guys...a question on objcopy to create .bin file...i have this memory.x :... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/IYKAvhpsMKvzudHvTMnIAIZJ>)
<dirbaio[m]> what command are you using to convert from elf to bin?
<dirbaio[m]> maybe you can give it some arg to cut out the uicr thing
<dirbaio[m]> note you DO need the uicr value to get flashed, or the bootloader won't work
<dirbaio[m]> so a .bin without the uicr will work for e.g. OTA updates, but it won't work for initial factory flashing
<M9names[m]> or you could make a hex file instead. i guess it depends on why you want a bin to begin with
<JamesMunns[m]> yeah, bin files aren't great for this, they have to be contiguous.
<dirbaio[m]> yeah for factory flashing you want elf or hex
<dirbaio[m]> for ota updates you typically want bin though
<M9names[m]> oic
<M9names[m]> is that just for storage efficiency?
<dirbaio[m]> bootloaders usually can't parse hex/elf
<JamesMunns[m]> dirbaio[m]: agree on elf, I've seen a lot of simple bootloaders that parse hex lol
<M9names[m]> i've written them too
<JamesMunns[m]> not saying it's good, but I've seen more than one that do that over serial or wireless
<dirbaio[m]> yuck :D
<JamesMunns[m]> just request or receive one line of hex at a time
<dirbaio[m]> you wouldn't use hex for an OTA update over BLE or similar tho
<dirbaio[m]> 2-3x bigger
<JamesMunns[m]> dirbaio[m]: I would bet big money that it has been done
<dirbaio[m]> 🤮
<JamesMunns[m]> agreed it's a BAD idea, but I would bet very serious money it exists, in more than one case, today.
<JamesMunns[m]> "everything over the nordic serial port GATT profile!"
<dirbaio[m]> oh yeah that's also 🤮
<JamesMunns[m]> M9names[m]: tbh they aren't the WORST idea. At least you have linewise checksums!
<JamesMunns[m]> and a reasonableish chunking state machine!
<JamesMunns[m]> not the most efficient, but not the conceptually worst.
<JamesMunns[m]> and support for non-contiguous loading, like the question asker wanted :D
<Rustnoob[m]> <dirbaio[m]> "what command are you using to..." <- simple cargo objcopy --release -- -O binary firmware.bin
<Rustnoob[m]> <dirbaio[m]> "so a .bin without the uicr..." <- I was thinkin exactly to this, this could be just for first programming indeed
<Rustnoob[m]> and probably I wont update bootloader...
<Rustnoob[m]> sorry maybe I didn't think of it! Was just curious to have a workaround in case
<dirbaio[m]> this binary is the bootloader, right?
<Rustnoob[m]> <JamesMunns[m]> "agree on elf, I've seen a lot of..." <- Yes you can, but I have not big space free on micro so i didn't want to parse hex or mot files
<Rustnoob[m]> dirbaio[m]: yep...
<dirbaio[m]> I can't find any objcopy flag to tell it to copy only one address range 🥲
<dirbaio[m]> what you want is for it to copy only the flash addr range, not extend it up to the uicr
<dirbaio[m]> but not sure how
<dirbaio[m]> either way
<dirbaio[m]> if you won't update the bootloader in the field, i'd recommend just using elf or hex for flashing
<dirbaio[m]> if you do want a bootloader bin for bootloader updates, you'll have to do hacks :(
<dirbaio[m]> perhaps objcopy to hex, then find some tool that can convert from hex to bin keeping only some addr range
<dirbaio[m]> iirc nordic had some hex2bin tool, maybe it can
<dirbaio[m]> or you can write your own with the ihex crate...
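For the "write your own with the ihex crate" route, a rough host-side sketch of cropping a hex file to one address range and emitting a padded .bin (the ihex API usage here is from memory and should be double-checked against the crate docs):

```rust
use ihex::{Reader, Record};

/// Keep only data records inside [start, start + out_len) and lay them into a
/// 0xFF-filled (erased-flash) buffer.
fn hex_to_bin(hex: &str, start: u32, out_len: usize) -> Vec<u8> {
    let mut out = vec![0xFF; out_len];
    let mut upper: u32 = 0; // upper 16 address bits from type-04 records
    for record in Reader::new(hex) {
        match record.expect("malformed hex record") {
            Record::ExtendedLinearAddress(hi) => upper = (hi as u32) << 16,
            Record::Data { offset, value } => {
                let addr = upper | offset as u32;
                if addr >= start && (addr - start) as usize + value.len() <= out_len {
                    let at = (addr - start) as usize;
                    out[at..at + value.len()].copy_from_slice(&value);
                }
            }
            _ => {} // ignore EOF, segment addresses, start addresses
        }
    }
    out
}
```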
<Rustnoob[m]> Thanks everybody! Bootloaders are always a source of discussion... everyone has their preferred recipe for it! ;)
rmja[m] has joined #rust-embedded
<rmja[m]> Rust noob: Sorry for being late here. What you can do is include the bootloader as a blob by default in the app, using something like:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/CriCTsKXDMsiUZWkdegSmqJY>)
<Rustnoob[m]> <rmja[m]> "Rust noob: Sorry for being..." <- (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/dVxKziedPyhnMPZaSRpfqzqu>)
<M9names[m]> <dirbaio[m]> "perhaps objcopy to hex, then..." <- i've used srec_cat for this in the past. it's a pretty powerful hex/srecord/bin mangler
<Rustnoob[m]> M9names[m]: yep! Totally agree
<M9names[m]> i guess those sorts of tools aren't for everyone
<M9names[m]> trying to get the chain of steps to perform the task you want does make it feel a lot like writing an awk script. or parsing html with regex.
<birdistheword99[> Heya guys, following up from my question yesterday about using fliplink and memory.x files, I have found that smoltcp network task doesn't seem to work if I run from DTCMRAM, but it works fine if I run from AXISRAM. This is on an STM32H7. It gets to the point where it starts the network task and applies... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/SNgrFrCaZVAOVmLCwXxVgQTV>)
<JamesMunns[m]> > Is this to do with specific memory regions allowed by the ethernet peripheral or something
<JamesMunns[m]> STM32H7 definitely has different DMAs that only work with certain memory regions
<JamesMunns[m]> it's not uncommon that TCM RAM doesn't work with DMA
<JamesMunns[m]> (it's "tightly coupled" to the CPU, not DMA)
<JamesMunns[m]> not certain re: ethernet, but it's definitely a potential thing.
<Rustnoob[m]> Sorry guys a stupid question...sorry for dumbness...
<Rustnoob[m]> I got this error
<Rustnoob[m]> rust-lld: error: undefined symbol: _critical_section_1_0_acquire
<JamesMunns[m]> Rustnoob[m]: (you don't have to apologize for asking questions here, btw, lots of people are learning)
<dirbaio[m]> if you're using the softdevice, enable feature critical-section-impl in the nrf-softdevice crate in Cargo.toml
<diondokter[m]> I believe most (or even all) DMA's can reach the TCM on the H7. Might be a bit slow though
<Rustnoob[m]> after cargo build. I remember I had it another time - what is the source of that?
<dirbaio[m]> if you're not using the softdevice, enable feature critical-section-single-core in the cortex-m crate in Cargo.toml
<Rustnoob[m]> dirbaio[m]: seems enabled...will check at home anyway
<JamesMunns[m]> > All data passed to Ethernet peripheral (ETH) must NOT be stored in DTCM RAM, since the ETH DMA can't access DTCM RAM (starting at 0x20000000)
<JamesMunns[m]> yeah, STM32H7 DMA is such a trap when it comes to "what DMA works out of what region"
<dirbaio[m]> DMA, BDMA can't access TCM. MDMA can
<dirbaio[m]> the ETH DMA can't
<dirbaio[m]> so many DMAs
<dirbaio[m]> thankfully they fixed this in H7[RS]
<dirbaio[m]> DMA in the classic H7's is all fucked up
<JamesMunns[m]> Using the D1 (RV64) core was so nice, there was exactly ONE kind of DMA, it was mapped to all peripherals, it had all the nice features you could want
<JamesMunns[m]> I wish more MCU cores had that lol
<dirbaio[m]> however
<FreeKill[m]> JamesMunns[m]: And you have to deal with the mmu even if it does work 🙃
<dirbaio[m]> what matters is where the PacketQueue is stored https://github.com/embassy-rs/embassy/blob/main/examples/stm32h7/src/bin/eth.rs#L66
<dirbaio[m]> that's what DMA accesses
<dirbaio[m]> so for example, putting stack in DTCMRAM while statics in AXISRAM should be OK
<dirbaio[m]> putting everything in DTCMRAM is not
<dirbaio[m]> or
<dirbaio[m]> you can put everything in DTCMRAM if you add a #[link_section] to that static to place just that one in AXISRAM
<dirbaio[m]> but
<diondokter[m]> Cortex-m-rt should have something for this. You can use other memory regions, but they won't get initialized which means you can only do MaybeUninit stuff there
<dirbaio[m]> if you do be careful
<dirbaio[m]> because if you're also using async spi/uart/i2c/whatever that also uses DMA, you also have to ensure those buffers are not in DTCMRAM
<JamesMunns[m]> diondokter[m]: We've talked about having this be an opt-in feature
<diondokter[m]> Yeah, I remember
<JamesMunns[m]> JamesMunns[m]: (at least: initializing multiple sections potentially)
<diondokter[m]> Maybe I could spend some hacking time on it...
<JamesMunns[m]> diondokter[m]: yeah, happy to chat. IMO you could either:
<JamesMunns[m]> * allow cmrt to know about multiple sections, initialize them (multiple bss would be much easier than multiple data)
<JamesMunns[m]> * have some kind (???) of transform that acts like a managed `GroundedCell` that makes you init it before you access it
<JamesMunns[m]> I have Ideas, some of which I wanted to do (or at least make the building blocks for) in grounded
<diondokter[m]> IMO ideally cmrt would do it. But how is an open question
<diondokter[m]> Mostly, how would it know which sections to initialize?
<dirbaio[m]> maybe use tricks like `linkme` to build a "list of flash->ram copies" and "list of ram zeroinits"
<JamesMunns[m]> diondokter[m]: yep, you'd have to tell it, in code, or linker script
<dirbaio[m]> and have the pre-main asm iterate that
<dirbaio[m]> instead of hardcode data/bss
<diondokter[m]> Could work
<thejpster[m]> STM32H7, for me, is an example of a chip that is basically too complex to program in bare metal.
<birdistheword99[> When I change it to this (my memory.x file has these correctly defined):... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/aRhgiKlfTHdIWiBenJwXyZZN>)
<dirbaio[m]> yeah that's because you're not zero-initializing it
<thejpster[m]> If you have a chip that complex, pay a few pennies more to get an MMU and let Linux deal with all the DMA nonsense.
<dirbaio[m]> linux is massively complex, you're trading one complexity for another
<JamesMunns[m]> I need to write the "things you can't do with other link sections" guide uuuuuuugh
<thejpster[m]> It’s less complex when it just works because the kernel does it all for you. I agree it’s more complex if the kernel is broken or doesn’t do what you want.
<dirbaio[m]> IME it never just works 🙈
<birdistheword99[> Having written bare metal embedded C for a number of years, this isn't the first time I have fallen into the 'where should I put this buffer' trap... STM seem to make this a lot more challenging than NXP though :P
<dirbaio[m]> but maybe I just suck at embedded linux
<thejpster[m]> Some SoC vendors are better than others.
<JamesMunns[m]> maybe someday mnemos will reach the goal of being a "liminal space" OS :D
<JamesMunns[m]> (no breath holding, please)
<dirbaio[m]> even rpi linux doesn't Just Work
<thejpster[m]> What’s Zephyr like at understanding tightly coupled memory versus non tightly coupled memory?
<thejpster[m]> dirbaio[m]: In my experience it’s been excellent. But experiences do vary and anecdotes are not proof.
<dirbaio[m]> the boot process is weird, the graphics stack is weird
<JamesMunns[m]> also, in this analogy, WE are the zephyr devs, but not being paid by STM32 to do the work :p
<thejpster[m]> Yes. One person’s nice OS is another OS developer‘s bare metal nightmare.
<thejpster[m]> dirbaio[m]: Without wanting to go off topic, I think it’s getting less weird, and I’m very happy not using U-Boot.
<JamesMunns[m]> thejpster[m]: do new Pis still boot from the GPU first?
<thejpster[m]> I guess my point is, I wish someone would sort all this out and build nice abstractions and I wish I wasn’t is doing it for free.
<thejpster[m]> s/is/us/
<dirbaio[m]> you do want uboot for some things. I ran into a lot of trouble on rpi4 trying to use mender+uboot to do A/B partition OTA updates
<dirbaio[m]> the boot blob just can't do that
<dirbaio[m]> so mender tells you to layer uboot on top
<dirbaio[m]> but then you run into a lot of trouble because the boot blob does some weird devicetree patching, like to get linux to learn the ethernet mac addr, and also some stuff I never figured out that makes video work
<JamesMunns[m]> (would still like more people to be paid more directly to make things better, for the record, even if we do care a lot today)
<dirbaio[m]> so if you make uboot load the kernel image+devicetree things are broken because the devicetree doesn't have the patches done by the blob
<dirbaio[m]> and not making uboot load the devicetree and use the blob-loaded one was not an option because I did want to be able to update the kernel
<dirbaio[m]> so I just gave up
<dirbaio[m]> 🥲
<thejpster[m]> JamesMunns[m]: Something runs before your code. I don’t know which core it runs on, but on a Pi 5 it can boot from SD, NVME, or USB or Ethernet, and so it’s not that different to a PC BIOS.
<dirbaio[m]> x86 minipc + systemd-boot + mkosi building efi images. that did Just Work
embassy-learner[ has joined #rust-embedded
<embassy-learner[> <dirbaio[m]> "if you're not using the softdevi..." <- Thanks!
<dirbaio[m]> JamesMunns[m]: rpi4 still does. dunno 5
<birdistheword99[> or I'm just getting lucky
<thejpster[m]> Does axisram imply the existence of alliedram?
<JamesMunns[m]> <birdistheword99[> "```..." <- (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/TJFgALOFWtpFhYLNMzjWUJJi>)
<JamesMunns[m]> (RAM includes stack and global variables. global variables include .bss and .data sections)
<birdistheword99[> JamesMunns[m]: Can I tell the linker specifically where to put it? Something like:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/MOzawNnWMjnNNVElIJRJrlBv>)
<JamesMunns[m]> birdistheword99[: "Can I tell the linker specifically where to put it?..." <- (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/YzqIjGGOdmaXplClGvjTAiys>)
<JamesMunns[m]> (I gotta run, unfortunately. hopefully someone else can comment)
<ryan-summers[m]> JamesMunns[m]: This works fine because the global object isn't part of BSS, right?
<ryan-summers[m]> You're declaring it in a section, not in BSS, and it's not zero-init'd? Or you have to declare it as MaybeUninit. We do this in Stabilizer (also an STM32H7 with many different memory regions): https://github.com/quartiq/stabilizer/blob/main/src/hardware/setup.rs#L144
<ryan-summers[m]> But yeah, using memory sections other than RAM comes at the expense of you having to do the memory initialization yourself. The runtime doesn't do it for you
<ryan-summers[m]> For example, ITCM, DTCM, AXISRAM, etc.
<ryan-summers[m]> So its easy to hit UB if you're not careful
<ryan-summers[m]> Hell, seems easy even if you are
<dirbaio[m]> I remember a conversation a while ago about people being interested in a "slice-backed Vec"
<dirbaio[m]> where the user passes a `&mut [MaybeUninit<T>]` on new and the vec stores the data there, instead of an inline array like in heapless
<ryan-summers[m]> Yeah I remember wanting that for some reason. I think it was around passing borrowed data around easily to drivers?
<dirbaio[m]> who was interested, are there any issues about it somewhere?
<ryan-summers[m]> Not sure where to open an issue
<ryan-summers[m]> Would that be a heapless or managed type thing?
<dirbaio[m]> closest I can find is https://github.com/rust-embedded/heapless/issues/353
<dirbaio[m]> I think it's asking for this, even though it's explained a bit weird
<ryan-summers[m]> It was me that was asking about this a while ago, but I don't think I ever spawned anything
<dirbaio[m]> okay
<dirbaio[m]> i'm interested in adding it to heapless
<dirbaio[m]> generically somehow, to not duplicate absolutely everything
<ryan-summers[m]> I'd be happy to review! I can't remember exactly why I needed it any more, but I think it was for minimq or miniconf?
<dirbaio[m]> I think it's incompatible with the recently-added VecView though https://github.com/rust-embedded/heapless/pull/425
<dirbaio[m]> because it uses some neat unsizing trick
<dirbaio[m]> unsizing `(usize, [T; N]) -> (usize, [T])`
<ryan-summers[m]> I think the desire is to allow the user to give you a `new(data: &mut [u8])`and then you can use `data`as a Vec-like object, so yeah, there's a lifetime, but no generic N
<dirbaio[m]> SliceVec would be `(usize, &mut [T])`
<dirbaio[m]> you can't turn that into a VecView, it has different layout :|
<dirbaio[m]> so I dunno whether to find another design for VecView, or keep VecView only for array-backed Vec, or not do VecView, or not do SliceVec ...
<dirbaio[m]> I have a prototype based on this idea
<dirbaio[m]> what's there now is essentially:
<dirbaio[m]> Vec = owned len, owned storage
<dirbaio[m]> VecView = borrowed len, borrowed storage
<dirbaio[m]> SliceVec is actually a mix of both: owned len, borrowed storage
<dirbaio[m]> makes Vec generic over a "storage" trait, which decides whether the len and the storage are owned or borrowed
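A rough sketch of what "generic over a storage trait" could look like (all names invented; this is not the actual heapless prototype):

```rust
use core::marker::PhantomData;
use core::mem::MaybeUninit;

/// Decides whether the element storage is owned inline or borrowed.
pub trait VecStorage<T> {
    fn slots(&self) -> &[MaybeUninit<T>];
    fn slots_mut(&mut self) -> &mut [MaybeUninit<T>];
}

/// Owned storage: the array lives inline, like today's `heapless::Vec<T, N>`.
pub struct OwnedStorage<T, const N: usize>([MaybeUninit<T>; N]);

/// Borrowed storage: a caller-provided slice, like a `SliceVec` would use.
pub struct SliceStorage<'a, T>(&'a mut [MaybeUninit<T>]);

impl<T, const N: usize> VecStorage<T> for OwnedStorage<T, N> {
    fn slots(&self) -> &[MaybeUninit<T>] { &self.0 }
    fn slots_mut(&mut self) -> &mut [MaybeUninit<T>] { &mut self.0 }
}

impl<'a, T> VecStorage<T> for SliceStorage<'a, T> {
    fn slots(&self) -> &[MaybeUninit<T>] { &*self.0 }
    fn slots_mut(&mut self) -> &mut [MaybeUninit<T>] { &mut *self.0 }
}

/// One shared implementation of the vec logic; `Vec<T, N>` and
/// `SliceVec<'a, T>` would just be aliases/newtypes over this with
/// different storage parameters.
pub struct VecInner<T, S: VecStorage<T>> {
    len: usize,
    storage: S,
    _marker: PhantomData<T>,
}
```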
<ryan-summers[m]> Ah, length is the length of the internal vec, not of the buf
<dirbaio[m]> yea
<ryan-summers[m]> Looks reasonable to me. Is the intent to make it all compat with ViewStorage?
<ryan-summers[m]> err, VecView
<dirbaio[m]> this is a different way of implementing VecView
<ryan-summers[m]> Ah I see, I like it
<dirbaio[m]> so the resulting VecView works more or less the same as the VecView we have now
<dirbaio[m]> kinda
<ryan-summers[m]> Although sometimes base traits make docs super wonky for some reason
<ryan-summers[m]> I've noticed this with HALs when the impl is hidden behind some trait
<ryan-summers[m]> So newcomers can have a really hard time finding the API to use
<ryan-summers[m]> Not sure if that's an us problem or a rustdoc issue
<dirbaio[m]> you mean when you make stuff trait methods
<dirbaio[m]> here all methods would be on the Vec structs
<dirbaio[m]> * trait methods?
<ryan-summers[m]> Yeah, like the VecBase push and `as_view` funcs
<dirbaio[m]> VecBase is a struct
<ryan-summers[m]> Oh duh
<ryan-summers[m]> Yeah nvm
<ryan-summers[m]> So they're essentially all type-aliased versions of the base. Interesting approach
<dirbaio[m]> the current VecView impl does sorta that already... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/dRxsZSIYfJjMVgKVDAkWCNQS>)
<JamesMunns[m]> for my "16 flavors of bbqueue" I plan to have a bunch of type aliases, or more likely, wrapper types
<dirbaio[m]> cc Sosthène Guédon reitermarkus
<JamesMunns[m]> (mostly because type aliases can be weird, UX wise)
SosthneGudon[m] has joined #rust-embedded
<SosthneGudon[m]> I'm pretty sure that with some smart generics you could get `SliceVec` to share the implementation with VecView:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/AwJxTFnfXZxqXCwmsdkqFJKS>)
<SosthneGudon[m]> With that you could have the implementation be over `VecInner<T> where T: AsRef<[MaybeUninit]> + AsMut<[MaybeUninit]>`, so only one implementation is used for the 3 versions.
<dirbaio[m]> I find it quite magical that this works on stable
<SosthneGudon[m]> The issue would be getting the documentation for each type alias to be specialized and good
<dirbaio[m]> if only there was some way to make it work for both Vec and SliceVec
<SosthneGudon[m]> This also works, which is pretty neat:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/wpVMuZXCZnSLteFWoVlQLwHA>)
<SosthneGudon[m]> dirbaio[m]: You wouldn't build a `SliceVec` from a "real" vec
<SosthneGudon[m]> So the coercion would not be useful anyway
<SosthneGudon[m]> Quick question, what would be the use cases for `SliceVec` that `VecView` does not solve?
<birdistheword99[> <ryan-summers[m]> "But yeah, using other memory..." <-
<birdistheword99[> Sorry to sound like a broken record, but just for clarification, what is the best way to deal with placing a buffer or StaticCell in a particular memory region with `#[link_section = ...]`, bearing in mind that it may need to be zero-initialised?
<birdistheword99[> For example. `static PACKETS: StaticCell<PacketQueue<4, 4>> = StaticCell::new();`
<JamesMunns[m]> I gave a related example here: https://github.com/embassy-rs/embassy/issues/2905
<JamesMunns[m]> the answer is: "you have to use `MaybeUninit`, and if you want it to be `static` and not `static mut`, you have to use `UnsafeCell`."
<JamesMunns[m]> `GroundedCell`/`GroundedArrayCell` are basically just `UnsafeCell<MaybeUninit<T>>`, but with some helper methods, and SAFETY docs how you need to hold them right.
<JamesMunns[m]> so you'd want to use the ::uninit constructors, then unsafely initialize them at runtime. That's the best way I know how today, without writing more docs or code.
<birdistheword99[> Thanks, I'll try that :)
<JamesMunns[m]> (`GroundedCell<T>` is for a single item `T`, `GroundedArrayCell<T, N>` is for an array `[T; N]`)
<JamesMunns[m]> so for `PacketQueue<4, 4>`, you'd probably want `GroundedCell<PacketQueue<4, 4>>`
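Sketched out, the hand-rolled version of what's being suggested looks roughly like this (the `.axisram` section name is an assumption and has to exist in your memory.x / linker script; `GroundedCell` packages the same UnsafeCell + MaybeUninit pattern with proper safety docs):

```rust
use core::cell::UnsafeCell;
use core::mem::MaybeUninit;
use embassy_stm32::eth::PacketQueue;

struct EthBufs(UnsafeCell<MaybeUninit<PacketQueue<4, 4>>>);
// SAFETY: only touched once during init, before the network task starts.
unsafe impl Sync for EthBufs {}

// Placed outside the normal .bss/.data, so the runtime will NOT zero or copy
// it at startup -- which is fine, because it starts as MaybeUninit anyway.
#[link_section = ".axisram"]
static PACKETS: EthBufs = EthBufs(UnsafeCell::new(MaybeUninit::uninit()));

// At init time (exactly once):
// let packets: &'static mut PacketQueue<4, 4> =
//     unsafe { (*PACKETS.0.get()).write(PacketQueue::new()) };
```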
<dirbaio[m]> <SosthneGudon[m]> "This also work, which is..." <- (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/svwVqbfbqPCruDHNvOUHuVXs>)
<dirbaio[m]> <SosthneGudon[m]> "Quick question, what would be..." <- SliceVec would still "logically own" the items. so it'd drop them when dropped
<dirbaio[m]> while VecView doesn't, you have to drop the original Vec
<dirbaio[m]> so for example you could allocate a `static mut [MaybeUninit<T>; N]`
<dirbaio[m]> then create a SliceVec out of that
<dirbaio[m]> then pass the SliceVec around as if it was owned, because it is
<dirbaio[m]> when you drop the SliceVec all the items are dropped
<dirbaio[m]> then the original `[MaybeUninit<T>; N]` storage is logically unused again, so you can reuse it to create another vec
<dirbaio[m]> vs with VecView you'd have to instead allocate a `static mut Vec<T, N>`, make the view out of that, then pass it around. but you'd have to ensure you manually .clear() the original Vec between uses.
andar1an[m] has quit [Quit: Idle timeout reached: 172800s]
<dirbaio[m]> also initializing is harder because it's a Vec, it's not a purely uninit chunk of ram anymore
<dirbaio[m]> it's true there's a lot of overlap though
<birdistheword99[> <JamesMunns[m]> "so for `PacketQueue<4, 4>`, you..." <- Seems like we're on the right track, but with this code:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/DceOKbREfNyjwaTLSpzZpyTC>)
<birdistheword99[> Will just wrapping the whole thing in a mutex solve that?
<JamesMunns[m]> uhhhh
<JamesMunns[m]> it's probably an issue with `GroundedCell`, right now I only `impl<T: Sync> Sync for GroundedCell<T>`, but I probably need to `impl<T: Send> Sync for GroundedCell<T>`
<JamesMunns[m]> I need to make sure that is sound with how I've documented it though, sorry.
<birdistheword99[> Ah ok no worries, i'll look at the code for Grounded cell and see how that works, and try doing it manually
<birdistheword99[> I have to run, but I will probably have a few more questions when I pick this up tomorrow, sorry!
<JamesMunns[m]> Opened https://github.com/jamesmunns/grounded/issues/7, thanks for trying that birdistheword99!
cr1901_ has quit [Remote host closed the connection]
cr1901_ has joined #rust-embedded
<dirbaio[m]> Sosthène Guédon: shouldn't this drop the items only if it's a Vec, not a VecView? https://github.com/rust-embedded/heapless/blob/main/src/vec.rs#L1578
<dirbaio[m]> ah nope, the user never owns a VecView, only &VecView or &mut VecView, so the drop never runs for VecView
<dirbaio[m]> hmmm
<dirbaio[m]> wow
<FrreJacques[m]> I can't find an embedded-hal-async analogue to digital::OutputPin. Is it missing, or can't one set a pin high or low async?
<GrantM11235[m]> There isn't an async version because it (usually) doesn't take any time to set the pin, you just set it and it's done
<dirbaio[m]> use the non-async one yep
firefrommoonligh has joined #rust-embedded
<firefrommoonligh> <ryan-summers[m]> "Although sometimes base traits..." <- Yea huge problem with traits. Can be mitigated with code examples, eg in a repo's examples folder. I think at its core it's a rustdoc issue, but IMO any crate that uses traits should recognize this problem and mitigate using examples.
<firefrommoonligh> Ie, if a concrete type is used, you can follow the docs links until you get to a constructor or see the struct fields to add. If there is a trait, you are blocked and get no info on how to create the item.
chek has left #rust-embedded [#rust-embedded]
adamgreig[m] has quit [Quit: Idle timeout reached: 172800s]
<dirbaio[m]> I think i'm OK with not doing SliceVec
<dirbaio[m]> because it's true there's a lot of overlap with VecView. most "const generic param is annoying" issues are solvable fine with VecView
<dirbaio[m]> and unsizing is super neat, and I can't see any way to add SliceVec while keeping it
<dirbaio[m]> and my other concern is code duplication, which that PR fixes while keeping unsizing
<dirbaio[m]> so ... yay?
therealprof[m] has quit [Quit: Idle timeout reached: 172800s]
andreas[m] has quit [Quit: Idle timeout reached: 172800s]
rmsyn[m] has quit [Quit: Idle timeout reached: 172800s]
Klemens[m] has joined #rust-embedded
<Klemens[m]> Is there something comparable to sequential-storage that isn't async?
pflanze has quit [Remote host closed the connection]
pflanze has joined #rust-embedded
<diondokter[m]> <Klemens[m]> "Is there something comparible to..." <- Not really. But you can use it blocking. Use a `block_on` function and a blocking wrapper for the flash traits
<diondokter[m]> <Klemens[m]> "https://crates.io/crates/..."; <- Also, there's a much more up-to-date version than this
<Klemens[m]> Yeah ik, but Google has indexed the old thing 😅😅😊
<diondokter[m]> Ah right
<Klemens[m]> I'll try my best 😊
<diondokter[m]> Embassy-futures has a simple block-on executor you can use.
<diondokter[m]> Embassy-embedded-hal has a wrapper that can turn the blocking NorFlash traits into (fake) async
<diondokter[m]> Klemens:
<Klemens[m]> Yeah the reason is that I don't have an async runtime at all, so yeah. 😅
<diondokter[m]> That's fine. For something like this it really makes sense to use async since reading from an external flash chip over a slow SPI bus can be very slow. Don't want to block the entire chip for all that time. So keep that in mind! It can take a while
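A small sketch of the block_on + wrapper approach suggested above, assuming embassy-futures and embassy-embedded-hal (and that `BlockingAsync` forwards the NorFlash traits, which is worth double-checking); the erase call just stands in for whatever sequential-storage operation you would actually await:

```rust
use embassy_embedded_hal::adapter::BlockingAsync;
use embassy_futures::block_on;
use embedded_storage::nor_flash::NorFlash;
use embedded_storage_async::nor_flash::NorFlash as AsyncNorFlash;

/// Drive an async flash operation to completion from purely blocking code.
fn erase_first_sector<F: NorFlash>(flash: F) -> Result<(), ()> {
    // Wrap the blocking flash so it satisfies the async NorFlash traits...
    let mut flash = BlockingAsync::new(flash);
    // ...then block in place on the async call (no executor needed).
    block_on(async { flash.erase(0, F::ERASE_SIZE as u32).await }).map_err(|_| ())
}
```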
<Klemens[m]> Yeah well. I might be switching at some point; I'm not too used to Rust's future model, let alone embassy, and constantly ran into timing issues...😐
<Klemens[m]> Since then I've only done game loops
bartmassey[m] has quit [Quit: Idle timeout reached: 172800s]
<Klemens[m]> Yeah it looks like sequential-storage is the wrong library to use. They enforce a MAX_WORD_SIZE of 32 bytes. The minimum size a rp2040 can write is 256 bytes...
<diondokter[m]> The smallest is really 256 bytes?
<diondokter[m]> Isn't that a whole page?
<diondokter[m]> Usually those types of flashes can do byte writes
<Klemens[m]> yeah you can only write a whole page and only delete a whole sector
<dirbaio[m]> No, you can write bytes
<dirbaio[m]> Or maybe 4-byte words
<Klemens[m]> (windbond w25q16jv)
<dirbaio[m]> But you can definitely write 4-byte words
<diondokter[m]> Apparently the flash_range_program from the pico sdk says the size must be a multiple of 256
<dirbaio[m]> That's a lie, it works with smaller ranges
<diondokter[m]> Really? lol
<Klemens[m]> why would they lie about that sort of thing?
<diondokter[m]> Embassy-rp defines the write size as 1 byte: https://docs.embassy.dev/embassy-rp/git/rp2040/flash/constant.WRITE_SIZE.html
<dirbaio[m]> /shrug
<diondokter[m]> Maybe they figured that's the worst case for any flash that could work on the rp2040?
<dirbaio[m]> No idea. Maybe because *some* flash chips can only write in 256 byte pages, so they say that just to be safe it works on all flash chips?
<dirbaio[m]> Yea
<dirbaio[m]> You can definitely write down to 4b on the flash in the Pico, and probably down to 1b too
<diondokter[m]> 1 byte is perfect. You won't get any padding with sequential-storage that way
<diondokter[m]> The max of 32 bytes that it supports is mostly for STM chips. It's not great...
<Klemens[m]> well huh
<diondokter[m]> Yeah, the flash itself can do single byte writes
<diondokter[m]> Also, flash chips like this often aren't explicit about it, but they support multiple writes too. So you can use the MultiWriteNorFlash trait for this too
<Klemens[m]> doesnt seem to be working tho
<diondokter[m]> What's the code you're doing this with?
<diondokter[m]> Looking at the embassy flash impl, it makes sure the data you're flashing is stored in RAM
<diondokter[m]> So if you do `program(0x101F0000, &[1, 2, 3, 4, 5])`, then it's likely the slice you pass is stored in flash
<Klemens[m]> looks like futures block forever
<Klemens[m]> ohh ehm
<diondokter[m]> What's flash.write here? Which HAL are you using?
<diondokter[m]> Also, maybe don't overwrite address 0. There's nothing there
<Klemens[m]> no no dont worry about address 0
<diondokter[m]> Ah, there's a start address yeah
<Klemens[m]> I already limited everything... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/wJyiYeSWqjKPykiAUbHKbYND>)
<diondokter[m]> The implementation is wrong. It should use XIP_START_ADDRESS instead of START_ADDRESS I think
<Klemens[m]> no
<Klemens[m]> i already argued with the maintainer about this, its not the case
<Klemens[m]> this is the address relative to flash
<Klemens[m]> * relative to start of flash
<diondokter[m]> huh
<diondokter[m]> Yeah, looks like that's consistent with the pico sdk
<Klemens[m]> and the code works, when using 256 bytes for writing
<diondokter[m]> Klemens[m]: (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/FFOwtKdbwaaPSRVXlzalojYF>)
<diondokter[m]> Hmmm I see...
<Klemens[m]> oh wait, actually it isn't working with 256 bytes either
<diondokter[m]> This is the embassy code. It only emulates 1-byte writes by padding with 0xFF to whole pages
<dirbaio[m]> What crate are you using for writing to flash?
<Klemens[m]> it stopped working once i wrapped everything in async
<Klemens[m]> dirbaio[m]: https://crates.io/crates/rp2040-flash
<Klemens[m]> jannic is one of the main maintainers of the rp2040-hal
<diondokter[m]> Huh, the rp2040-hal doesn't seem to have a flash impl. Weird
<dirbaio[m]> Iirc that one had bugs where it broke depending on the optimization level
<dirbaio[m]> Because the compiler would call things that are in flash
<M9names[m]> Mixing HALs is generally a bad idea
<dirbaio[m]> Try compiling with opt-level=z lto=fat
<dirbaio[m]> Or maybe it's not that, I see it has inline asm. Maybe it was an older impl
<dirbaio[m]> Still worth a try
<dirbaio[m]> Or try embassy-rp, that one works for sure
<Klemens[m]> M9names[m]: which I am kinda forced to do, besides implementing my own blocking wear leveling
<Klemens[m]> the issue is mainly that I have written an application, and all it needs is to store a stupid number...
<diondokter[m]> rp2040-flash and embassy-rp look identical on the low-level.
<diondokter[m]> But embassy-rp has figured out the high level api
<diondokter[m]> Klemens[m]: Ha yeah, I get that's annoying
<diondokter[m]> How often do you need to update it?
<Klemens[m]> often 😅
<Klemens[m]> like not a thousand times a minute but the flash should last a while
<diondokter[m]> I think every page of your flash can be erased 100k times
<Klemens[m]> I've actually started on embassy-rp back in the day, but timing became a major headache
<Klemens[m]> diondokter[m]: yeah I mean ofc, as a last solution, but prob not very sexy... well ehm... I'm gonna think about it...
<SosthneGudon[m]> The user can own a `Box<VecView>`
<diondokter[m]> I'm biased, I think s-s is a very good option haha
<diondokter[m]> But if it's just for one number...
<Klemens[m]> flash failure is a pretty critical event in my application. 😐😐😐. but i get your point...
m5zs7k has quit [Ping timeout: 268 seconds]
<JamesMunns[m]> <Klemens[m]> "like not a thousand times a..." <- once a second to the same sector will burn out in 1.15 days, btw
<Klemens[m]> yeah nah aint gonna happen. its more of a...you can set an address kind of situation
<diondokter[m]> Well, if you run into anything with sequential-storage itself, feel free to open an issue!
<diondokter[m]> It's well tested at this point and has nice CRCs too. Turns out it's quite difficult to make it work well without bugs
<JamesMunns[m]> yeah, if it's like "100 times lifetime max cal setting", probably just use a hardcoded sector
m5zs7k has joined #rust-embedded
<diondokter[m]> JamesMunns[m]: Well if you put it that way, 100k is nothing!
<JamesMunns[m]> or like "update cal parameters once a quarter", totally fine
<diondokter[m]> I wrote s-s initially for the nrf internal flash. That only supports 10k. Even less
<JamesMunns[m]> yeah, if you need to write like 4x a day, that's still 68 years lol
<JamesMunns[m]> (6.8 on nordic flash)
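The arithmetic behind those numbers, as a quick host-side sanity check (assuming the ~100k erase-cycle figure above, and 10k for nRF internal flash):

```rust
fn main() {
    const CYCLES: f64 = 100_000.0;
    // One write (erase) per second to the same sector:
    println!("{:.2} days", CYCLES / 86_400.0); // ~1.16 days
    // Four writes per day:
    println!("{:.1} years", CYCLES / 4.0 / 365.0); // ~68.5 years
    // Same rate on 10k-cycle flash:
    println!("{:.1} years", 10_000.0 / 4.0 / 365.0); // ~6.8 years
}
```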
<Klemens[m]> yeah i mean its pretty well made...just not really ehm applicable to my scenario...but do keep up the great work
<SosthneGudon[m]> <dirbaio[m]> "ah nope, the user never owns a..." <- But the user can own a `Box<VecView>` or equivalent. I don't know of a case that would be equivalent to that on stable Rust without an allocator, but it's still better to have the correct behavior
<Klemens[m]> i mean if i could get s-s running without embassy i'll sign up 🤣🤣. but 9names is prob right about the mixing hals thing
<dirbaio[m]> SosthneGudon[m]: Yeah I realized later, when I saw there's a test that does that. But yea either way it's working as intended. Very cool!
<dirbaio[m]> Klemens[m]: Using embassy-futures is not "mixing hals" tho, that should definitely work
<dirbaio[m]> "mixing hals" would be using embassy-rp and rp2040-Hal at the same time
<dirbaio[m]> The hal is only embassy-rp
<Klemens[m]> The type adapter broke my lib 😅😅😅 prob because of the optimisation stuff ...
<SosthneGudon[m]> Ok for using the `Storage` trait approach. I didn't go that way from the beginning since I had concerns about the documentation generated for it, but there were rustdoc bugs that have since been fixed, so it may actually be better.
<SosthneGudon[m]> I'll answer on the issue tomorrow
<SosthneGudon[m]> <dirbaio[m]> "Yeah I realized later, when I..." <- Initially it was not. I also had the intuition that VecView could not be dropped, I had to fix that in a second PR
<SosthneGudon[m]> <SosthneGudon[m]> "Ok, for using the `Storage..." <- (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/YewtdiiHsvZIPIwnfdbbOdOo>)
<dirbaio[m]> oh, hmm
<dirbaio[m]> but the flip side is also true, it can also hurt perf
<dirbaio[m]> impls monomorphized for a particular length can be faster
<dirbaio[m]> so if you force Vec methods to forward to VecView you're forcing slower code
<dirbaio[m]> and the code size gain only applies if you use two vecs of different length but same type, which is not necessarily always
<dirbaio[m]> so maybe it's better to monomorphize? users who do care about code size can still get good code size with VecView.
<dirbaio[m]> hmm
<dirbaio[m]> wtf
<SosthneGudon[m]> <dirbaio[m]> "impls monomorphized for a..." <- Is that really true? On systems with a cache, more code bloat means more instruction cache misses.
<SosthneGudon[m]> Even on systems without, I would say the compiler should be able to inline and make the decision regarding being able to better optimize or to not inline.
<dirbaio[m]> first of all, first two rows should be identical and are not. one is taking heapless v0.8 from crates.io, the other is taking v0.8 from git, the exact tag. why?
<SosthneGudon[m]> <dirbaio[m]> "and the code size gain only..." <- I'm pretty sure that `Vec<u8, N>` will appear multiple times on many codebases
<dirbaio[m]> second, git main (with the methods forwarding to vecview) is also a bit bigger. not smaller
<dirbaio[m]> but why is my branch so much bigger? 😭
<SosthneGudon[m]> Given the current state of embedded Rust, I think that code size is a concern for more users than performance (which is already really good)
<SosthneGudon[m]> dirbaio[m]: (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/nFTtUpDTvCtFJxLMibRcDyaV>)
<SosthneGudon[m]> What did you test on?
<dirbaio[m]> I get different results If I patch to git or to path, lol
<dirbaio[m]> off by 20-40 bytes
<SosthneGudon[m]> Panic strings include filenames no? That could explain the difference
cr1901 has joined #rust-embedded
<dirbaio[m]> <SosthneGudon[m]> "Was did you test on?" <- company firmware
cr1901_ has quit [Ping timeout: 260 seconds]
IlPalazzo-ojiisa has quit [Quit: Leaving.]
<dirbaio[m]> <SosthneGudon[m]> "Panic strings include filenames..." <- hmm I got no panic strings, if I `strings` the bin none show up.
<JamesMunns[m]> dirbaio[m]: could diff `nm` output?
<JamesMunns[m]> JamesMunns[m]: (wont tell you much, probably, but might be interesting)
<dirbaio[m]> it's really annoying because it prints the absolute address of everything :(
<JamesMunns[m]> could cut -k1 then sort
<dirbaio[m]> <SosthneGudon[m]> "I'm pretty sure that `Vec<u8, N>..." <- if this theory was true, i'd have seen git main be smaller than git v0.8, but they're equal. I do have `Vec<u8, N>` of many lengths in my firmware
<dirbaio[m]> > cut: invalid option -- 'k'
<JamesMunns[m]> ah, nvm, the way to do that is with an awk command :p
<dirbaio[m]> yikes D:
<JamesMunns[m]> yeah
<JamesMunns[m]> could also creatively cut it with your editor
<JamesMunns[m]> idk