ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
oneDragon[m] has quit [Quit: Idle timeout reached: 172800s]
starblue has quit [Ping timeout: 246 seconds]
<tangotaylor[m]> <dngrs[m]> "You can do qemu tests - for an..." <- Awesome, thanks!
starblue has joined #rust-embedded
cr1901 has quit [Read error: Connection reset by peer]
cr1901 has joined #rust-embedded
johnmcnuggets has quit [Ping timeout: 255 seconds]
lehmrob has joined #rust-embedded
ryan-summers[m] has quit [Quit: Idle timeout reached: 172800s]
Kaspar[m] has joined #rust-embedded
<Kaspar[m]> > Pin is about "thou shalt not replace"
<Kaspar[m]> But `Pin` implements `DerefMut` as well, so it does allow replacing. Isn't `Pin` about `thou shalt not change location in memory`?
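A clarifying sketch (not from the discussion itself): `Pin`'s `DerefMut` is only available when the target is `Unpin`, i.e. exactly the types that don't care about their memory location, so for those types replacing is deliberately allowed. For `!Unpin` targets safe code can't get an `&mut` out of the `Pin`, which is what rules out `mem::replace`/`mem::swap`:

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

fn main() {
    // For Unpin types (like i32), Pin is a no-op: DerefMut is available,
    // so mutating/replacing through the Pin is allowed.
    let mut x = 5i32;
    let mut p = Pin::new(&mut x);
    *p = 7;
    assert_eq!(*p, 7);

    // For !Unpin types, safe code cannot get an &mut out of the Pin, so
    // `*pinned = ...` or mem::replace would not compile here. That is the
    // "thou shalt not change location in memory" guarantee in practice.
    struct NotUnpin {
        v: i32,
        _pin: PhantomPinned,
    }
    let mut n = NotUnpin { v: 1, _pin: PhantomPinned };
    let pinned = unsafe { Pin::new_unchecked(&mut n) };
    assert_eq!(pinned.v, 1); // shared access via Deref still works
}
```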
ryan-summers[m] has joined #rust-embedded
<ryan-summers[m]> Anyone know of an easy way to make e-h 1.0 trait impls work with drivers that still use the 0.2 versions? I'm using a crate that hasn't been updated in 5 years, so it's unlikely that it'll be updated
<ryan-summers[m]> I guess you could make your own wrapper shim that impls the 0.2 versions pretty easily (in this case, it's just GPIO), but curious if there's a crate that already does this
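The wrapper-shim idea, sketched with simplified stand-in traits (the real embedded-hal 0.2 and 1.0 `OutputPin` traits have more methods and an error-trait bound, so treat this as the shape, not a drop-in):

```rust
// Simplified stand-ins for the two trait generations (illustration only;
// the real traits live in embedded-hal 0.2 and 1.0).
mod eh02 {
    pub trait OutputPin {
        type Error;
        fn set_high(&mut self) -> Result<(), Self::Error>;
        fn set_low(&mut self) -> Result<(), Self::Error>;
    }
}
mod eh1 {
    pub trait OutputPin {
        type Error;
        fn set_high(&mut self) -> Result<(), Self::Error>;
        fn set_low(&mut self) -> Result<(), Self::Error>;
    }
}

/// Owns a 1.0-style pin and re-exposes it through the 0.2-style trait,
/// so an old driver can keep using the interface it was written against.
pub struct Compat<P>(pub P);

impl<P: eh1::OutputPin> eh02::OutputPin for Compat<P> {
    type Error = P::Error;
    fn set_high(&mut self) -> Result<(), P::Error> {
        self.0.set_high()
    }
    fn set_low(&mut self) -> Result<(), P::Error> {
        self.0.set_low()
    }
}

/// Toy 1.0-style pin for the demo.
struct FakePin {
    high: bool,
}
impl eh1::OutputPin for FakePin {
    type Error = std::convert::Infallible;
    fn set_high(&mut self) -> Result<(), Self::Error> {
        self.high = true;
        Ok(())
    }
    fn set_low(&mut self) -> Result<(), Self::Error> {
        self.high = false;
        Ok(())
    }
}

fn main() {
    use eh02::OutputPin as _; // the old driver's view of the pin
    let mut pin = Compat(FakePin { high: false });
    pin.set_high().unwrap();
    assert!(pin.0.high);
}
```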
<diondokter[m]> There's this one, but it's not yet updated to the full 1.0: https://github.com/ryankurte/embedded-hal-compat
<ryan-summers[m]> Ah but not sure if I can have both 1.0 and 0.2 in the dep tree and differentiate things
<diondokter[m]> You can
<diondokter[m]> Gotta rename one
<ryan-summers[m]> Ah didn't know you could rename the dep with cargo, very cool
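The rename is done in Cargo.toml via the `package` key, so both major versions can coexist under different names (version numbers here are just examples):

```toml
[dependencies]
embedded-hal = "1.0"
# Same crate, older major version, referred to as `embedded_hal_02` in code:
embedded-hal-02 = { package = "embedded-hal", version = "0.2.7" }
```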
<ryan-summers[m]> Maybe a better question - anyone have a pin-debounce crate that works with the 1.0 e-h?
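No specific crate came up, but the counter-based debounce pattern is small enough to carry in-tree; a hal-agnostic sketch (poll it from a timer tick, feeding it the raw level read via an e-h 1.0 `InputPin`):

```rust
/// Counter-based debouncer: the reported state only flips after the raw
/// input has disagreed with it for `threshold` consecutive samples.
pub struct Debouncer {
    state: bool,
    count: u8,
    threshold: u8,
}

impl Debouncer {
    pub fn new(threshold: u8) -> Self {
        Self { state: false, count: 0, threshold }
    }

    /// Feed one raw sample (e.g. once per timer tick); returns the
    /// debounced level.
    pub fn update(&mut self, raw: bool) -> bool {
        if raw == self.state {
            // Agreement resets the disagreement counter.
            self.count = 0;
        } else {
            self.count += 1;
            if self.count >= self.threshold {
                self.state = raw;
                self.count = 0;
            }
        }
        self.state
    }
}

fn main() {
    let mut d = Debouncer::new(3);
    // A bouncy press: one glitch, then a stable high level.
    let samples = [true, false, true, true, true];
    let out: Vec<bool> = samples.iter().map(|&s| d.update(s)).collect();
    assert_eq!(out, [false, false, false, false, true]);
}
```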
mameluc[m] has joined #rust-embedded
<mameluc[m]> sanity check, I am doing this right? Trying to read the temperature calibration values but nothing makes sense... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/tPJFNNFdfNOPnERUkVLHOles>)
<mameluc[m]> Do I need to offset the address somehow? The values I get kind of make sense but the result doesn't
<ryan-summers[m]> Shouldn't you be creating a raw pointer to a u16 instead?
<mameluc[m]> my understanding is that it is "just a pointer", the const is not aware what it points to
<ryan-summers[m]> I mean in the context of rust pointers, not logical pointers... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/LArfgLkvHtkTVJswCLMlirVW>)
<mameluc[m]> ryan-summers[m]: > <@ryan-summers:matrix.org> I mean in the context of rust pointers, not logical pointers... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/RqYFryqkSZIbyShHWKIiVzkq>)
<ryan-summers[m]> Yeah then it's probably not from reading the calibration constants, likely how you're applying them. Also double check the endianness
<ryan-summers[m]> Also, you could also jump into GDB and manually print the memory at that location (or use probe-rs to dump it) just to confirm what you get in code is what's actually at that address
<mameluc[m]> I get 946 and 1256 for 30c and 130c. my adc gives 498 but it is kind of cold in my room so it could make sense. If I calculate the value on my computer or on the mcu I get -114.516.
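Plugging those numbers into the usual STM32 two-point formula reproduces the -114.516 exactly, so the arithmetic itself is fine; the likely culprit is that the raw sample isn't directly comparable to the calibration values (calibration is done at a fixed reference voltage, often 3.0 V, and at a specific resolution, so the sample usually has to be rescaled first; check the datasheet for the exact conditions on this part). A host-runnable sketch of the formula:

```rust
/// Two-point temperature interpolation as in the STM32 reference manuals:
/// ts_cal1 taken at 30 °C, ts_cal2 at 130 °C. The calibration temperatures
/// vary by family, so treat these constants as assumptions.
fn temp_c(ts_cal1: u16, ts_cal2: u16, adc: u16) -> f32 {
    30.0 + (adc as f32 - ts_cal1 as f32) * (130.0 - 30.0)
        / (ts_cal2 as f32 - ts_cal1 as f32)
}

fn main() {
    // The values quoted above: 946 @ 30 °C, 1256 @ 130 °C, raw sample 498.
    let t = temp_c(946, 1256, 498);
    println!("{t:.3}"); // prints -114.516, same as computed on the MCU
}
```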
<mameluc[m]> ryan-summers[m]: good idea
<ryan-summers[m]> probe-rs-cli can do a memory read for you pretty easily I believe
<ryan-summers[m]> Does anyone know how to use the embedded-hal-bus with a framework like RTIC that handles concurrency for you without having to sprinkle in a ton of critical sections? RTIC handles that for me
<diondokter[m]> ryan-summers[m]: RTIC has its own Arbiter that does the same thing but using RTIC stuff
<ryan-summers[m]> Ah okay thanks, didn't know that. I assume it's not in rtic 1.0 though?
<diondokter[m]> Ehhh
<diondokter[m]> IDK if it works with 1.0
<ryan-summers[m]> Welp, maybe this is the reason to just jump to 2.0. No reason to stay on 1.0 anyways
starblue has quit [Ping timeout: 252 seconds]
starblue has joined #rust-embedded
IlPalazzo-ojiisa has joined #rust-embedded
<ryan-summers[m]> Ah, but the rtic-sync stuff requires e-h-async, not e-h 1.0
<dirbaio[m]> if you put all tasks that use devices on the same SPI bus at the same priority you should be able to use RefCellDevice
<ryan-summers[m]> Yeah I don't. They're all at different priorities and need to be in different tasks
<ryan-summers[m]> For context, this is for migrating https://github.com/quartiq/booster to rtic 2.0 and e-h 1.0
<ryan-summers[m]> Although could possibly refactor it to be like that, just would prefer not to
<dirbaio[m]> yea...
<dirbaio[m]> then you can either use the 2.0 arbiter and move everything to async
<dirbaio[m]> or I think you should be able to do your own SpiDevice impl on top of the classic (non-async) RTIC locks...?
<ryan-summers[m]> Yeah I think that's the approach for now. Should be as simple as updating shared-bus to the newer e-h
<dirbaio[m]> don't use shared-bus
<dirbaio[m]> it's unsound for SPI https://github.com/Rahix/shared-bus/issues/23
<dirbaio[m]> and fundamentally unfixable as long as drivers are managing CS on their own
<dirbaio[m]> this is why embedded-hal 1.0 introduced SpiDevice, so CS management can be done by the bus sharing layer, not by the drivers on their own
<dirbaio[m]> if you upgrade shared-bus to eh1.0 instead of moving to SpiDevice you're perpetuating the problem! :D
IlPalazzo-ojiisa has quit [Remote host closed the connection]
<ryan-summers[m]> Yeah for context, this is an I2C bus, not SPI. I know it's unsound on SPI
<dirbaio[m]> ahhhhh
<dirbaio[m]> okay
<dirbaio[m]> so
<ryan-summers[m]> Yeah I wrote the RTIC additions to shared-bus specifically because of this project
<ryan-summers[m]> I think bumping to 1.0 is still fine for it. Would be nice if rtic-sync had a non-async version, but seems like it's going the route of async-first
<dirbaio[m]> where's the code for that? can't find it in upstream shared-bus?
<ryan-summers[m]> It's the AtomicCheckMutex
<dirbaio[m]> ah yes
<ryan-summers[m]> It's obviously incredibly panic-prone if you use it wrong, but gets the job done
<dirbaio[m]> yeah was about to ask
<dirbaio[m]> how do you prevent a higher-prio task from preempting a lower-prio one while it's using the bus, and running into the panic?
<ryan-summers[m]> That's what RTIC does by design, the RTIC mutex handles setting the priority level of the NVIC to prevent that situation
<ryan-summers[m]> So this panic is just a sanity check
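For readers following along, the AtomicCheckMutex idea is roughly this (a sketch, not the shared-bus source; `std` is used so it runs on the host): no blocking at all, just a flag that panics on re-entry, relying on RTIC's priority-ceiling locks to make contention impossible in a correctly configured app:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

pub struct AtomicCheckMutex<T> {
    busy: AtomicBool,
    data: UnsafeCell<T>,
}

// Safety: exclusive access is enforced at runtime by the `busy` flag.
unsafe impl<T: Send> Sync for AtomicCheckMutex<T> {}

impl<T> AtomicCheckMutex<T> {
    pub const fn new(data: T) -> Self {
        Self {
            busy: AtomicBool::new(false),
            data: UnsafeCell::new(data),
        }
    }

    pub fn lock<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Claim the bus; if it was already claimed, the priority setup is
        // wrong and we panic rather than corrupt an in-flight transaction.
        self.busy
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Relaxed)
            .expect("bus contention: resources not at a common priority ceiling");
        let result = f(unsafe { &mut *self.data.get() });
        self.busy.store(false, Ordering::Release);
        result
    }
}

fn main() {
    // Statics work because of the const constructor, like a shared bus would.
    static BUS: AtomicCheckMutex<u32> = AtomicCheckMutex::new(0);
    assert_eq!(BUS.lock(|v| { *v += 1; *v }), 1);
}
```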
<dirbaio[m]> ah.. you still have to manually ensure all resources with drivers touching i2c have the same priority ceiling though, right?
<ryan-summers[m]> The caveat is that you have to have all your shared devices in a single RTIC "resource"
<ryan-summers[m]> Or what you said
<dirbaio[m]> I see!
<ryan-summers[m]> Primarily just trying to avoid any kind of critical section here for the device because there's a lot of processing being done
<dirbaio[m]> okay so in the end this is the same as RefCellDevice, CriticalSectionDevice in embedded-hal-bus, except with a different kind of mutex
<ryan-summers[m]> Yeah basically
<ryan-summers[m]> Maybe the e-h-bus could have a Mutex trait?
<dirbaio[m]> iirc there were some discussions about it back then
<dirbaio[m]> but ultimately, you can always implement the I2c or SpiDevice traits on your own already
<dirbaio[m]> so, for your custom mutex
<ryan-summers[m]> Fair! Was just hoping for a tool that did it already
<dirbaio[m]> so all we'd gain by introducing a mutex trait would be allowing reusing those 60 lines, at the cost of increased complexity and harder to read docs
<dirbaio[m]> my recommendation would be to copypaste those 60 lines into your project and adapt them to use AtomicCheckMutex
andres[m] has joined #rust-embedded
<andres[m]> <mameluc[m]> "sanity check, I am doing this..." <- > <@mameluc:matrix.org> sanity check, I am doing this right? Trying to read the temperature calibration values but nothing makes sense... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/jMysbyJeOthlijqmSHlrwcfs>)
<mameluc[m]> andres[m]: > <@andresv:matrix.org> This works on stm32g081:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/aPAdqvtAOedWhZWoYZOmwfzu>)
<dirbaio[m]> or maybe PR AtomicCheckDevice to embedded-hal-bus. I personally think it'd be useful to have if it enables using it with non-async RTIC
<ryan-summers[m]> Yeah there's just a lot of caveats on how you have to use it
<dirbaio[m]> yeah. perhaps it'd be less bad if it returned a "busy" error instead of panicking
<dirbaio[m]> and then we just document the caveats
<mameluc[m]> This example gets me pretty close but it is without calibration so a little low in my case, also some consts need to be adjusted for my chip.
<mameluc[m]> Amazing how much I learn from looking at the examples ^^
<dirbaio[m]> anyway, if you do decide to update `shared-bus` then please please please cut out SPI support entirely 🙏
<dirbaio[m]> (or update it to properly handle CS, but at that point you'd be reinventing `embedded-hal-bus`...)
<dirbaio[m]> though i'd prefer if as an ecosystem we consolidated efforts in embedded-hal-bus :)
<ryan-summers[m]> Yeah if you would accept it into e-h-bus, I'd prefer to do it there
<dirbaio[m]> I think we should, can't see any other way of supporting non-async RTIC
<dirbaio[m]> because you can't hold RTIC locks across task invocations, right? or can you?
<dirbaio[m]> like, can you make the I2C proxy struct actually own an RTIC lock? and actually lock it when accessing i2c?
<ryan-summers[m]> No, they're closure based
<dirbaio[m]> yeah you do cx.i2c.lock(|i2c| use(i2c) )
<dirbaio[m]> but can you move the actual lock (the cx.i2c) into another struct?
<dirbaio[m]> so the struct would do self.i2c.lock(|i2c| use(i2c) )
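What that would look like against a lock trait shaped like rtic's `Mutex` (a stand-in trait here; the hard part in real RTIC, as discussed, is naming the generated proxy type so a struct can own it):

```rust
/// Stand-in with the same shape as rtic's Mutex trait
/// (closure-based lock, resource type as an associated type).
trait Mutex {
    type T;
    fn lock<R>(&mut self, f: impl FnOnce(&mut Self::T) -> R) -> R;
}

/// An I2C-proxy-style struct that owns the lock instead of
/// borrowing it inside a task-local closure scope.
struct BusDevice<M> {
    bus: M,
}

impl<M: Mutex<T = Vec<u8>>> BusDevice<M> {
    fn write(&mut self, byte: u8) {
        // self.bus.lock(|bus| ...) — the shape dirbaio describes above.
        self.bus.lock(|bus| bus.push(byte));
    }
}

/// Trivial impl standing in for what RTIC would generate.
struct Plain<T>(T);
impl<T> Mutex for Plain<T> {
    type T = T;
    fn lock<R>(&mut self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut self.0)
    }
}

fn main() {
    let mut dev = BusDevice { bus: Plain(Vec::new()) };
    dev.write(0xA5);
    assert_eq!(dev.bus.0, vec![0xA5]);
}
```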
<ryan-summers[m]> I don't believe so. I think I tried owning a lock at one point but the type isn't defined until post-compilation or something weird like that
<dirbaio[m]> yeah I feared so...
<ryan-summers[m]> Or, owning something that could be locked, i.e. owning the mutex
<dirbaio[m]> oh well
<ryan-summers[m]> But maybe it changed. That was rtic 1.0
<dirbaio[m]> would be cool if you could, then you could have "real" shared spi/i2c without the risk of panics if you set it up wrong
<dirbaio[m]> wonder if there's something RTIC can do to support it
<ryan-summers[m]> Yeah might ask around
johnmcnuggets has joined #rust-embedded
olsen__[m] has quit [Quit: Idle timeout reached: 172800s]
<rault[m]> I found this crate using embedded-io,... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/NMdjJCqFJSdqOrFArZUFgAug>)
johnmcnuggets has quit [Changing host]
johnmcnuggets has joined #rust-embedded
lehmrob has quit [Ping timeout: 255 seconds]
alexmoon[m] has quit [Quit: Idle timeout reached: 172800s]
johnmcnuggets has quit [Ping timeout: 272 seconds]
haobogu[m] has quit [Quit: Idle timeout reached: 172800s]
thomas25 has quit [Quit: fBNC - https://bnc4free.com]
thomas25 has joined #rust-embedded
jbaczuk has joined #rust-embedded
<mabez[m]> Who has admin access to rust-embedded-community? Would it be possible to add me (Github: mabezdev) and temporarily Jorge (Github: japaric) to the org so they can transfer aligned?
Ralph[m]1 has joined #rust-embedded
<Ralph[m]1> mabez[m]: eldruin took care of that for my crates the last time
juliand[m] has joined #rust-embedded
<juliand[m]> <mabez[m]> "Who has admin access to rust-..." <- https://github.com/rust-embedded-community/meta?tab=readme-ov-file#maintainers not sure if this list is up to date, but maybe a good start
<Ralph[m]1> unrelated, but japaric: what are your plans for [`hash32`](https://github.com/japaric/hash32)? it's stuck on 0.x and has had two open PRs for a while. it might be nice to get a 1.0 release as it's a dependency of `heapless` (and acc. to [the API guidelines](https://rust-lang.github.io/api-guidelines/necessities.html#c-stable), `heapless` should only have a stable release after `hash32` has one)
starblue has quit [Ping timeout: 252 seconds]
crabbedhaloablut has quit []
crabbedhaloablut has joined #rust-embedded
jbaczuk has quit [Quit: Client closed]
starblue has joined #rust-embedded
corecode[m] has joined #rust-embedded
<corecode[m]> hi, does anybody generate jlink commander files during build so that a ready to use build asset pops out?
<eldruin[m]> <mabez[m]> "Who has admin access to rust-..." <- Do you want to join the org? If so please open a PR to the [meta](https://github.com/rust-embedded-community/meta) repository adding yourself. We would be happy to have you
<eldruin[m]> as for transferring a repository without being a member of the org, there are a couple of permission problems. The easiest for everybody is that you transfer it to me personally and I transfer it then to the org
<eldruin[m]> hash32 sounds like a good fit as well
<eldruin[m]> regardless of whether you joined the org, you can keep whatever privileges on the repo
starblue has quit [Ping timeout: 264 seconds]
starblue has joined #rust-embedded
wyager[m] has joined #rust-embedded
<wyager[m]> Is it possible to link against alloc without needing a global allocator? I'm only using the _in functions to allocate in a non-global arena allocator. I would actually prefer if my program failed to link specifically if I accidentally use a global allocator anywhere, but I still want to pull in Vec and Arc and so on without getting `no global memory allocator found but one is required; link to std or add #[global_allocator] to a static item that implements the GlobalAlloc trait`
<JamesMunns[m]> If you can, it definitely requires nightly features, I'm not sure if you can fully avoid the global alloc tho. You might be able to have a dummy impl that can't actually alloc tho
<JamesMunns[m]> Like have a "null" global alloc to satisfy the dependency, then ensure you only ever use local allocators
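One way to sketch that "null" global allocator (shown with `std` so it runs here; in the real `no_std` binary it would be registered with `#[global_allocator]`): returning null from `alloc` makes any accidental global allocation hit the allocation-error handler and abort, instead of quietly succeeding:

```rust
use std::alloc::{GlobalAlloc, Layout};

/// Satisfies the `alloc` crate's global-allocator requirement while
/// refusing every request: any stray `Box::new`/`vec!`-style global
/// allocation gets a null pointer back, which routes to the
/// alloc-error handler (abort) rather than silently working.
pub struct NoGlobalAlloc;

unsafe impl GlobalAlloc for NoGlobalAlloc {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        core::ptr::null_mut()
    }
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // Nothing was ever handed out, so nothing to free.
    }
}

// In the firmware crate (not here, or the host test harness couldn't run):
// #[global_allocator]
// static GLOBAL: NoGlobalAlloc = NoGlobalAlloc;

fn main() {
    let p = unsafe { NoGlobalAlloc.alloc(Layout::new::<u64>()) };
    assert!(p.is_null());
}
```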
<wyager[m]> <JamesMunns[m]> "Like have a "null" global..." <- That's what I'm doing - a global alloc with only like 16 bytes of space. Problem is, re "ensure I only ever use local allocators" - the compiler should really do this for me. Easy to make a mistake (use wrong `new` API) that results in a runtime crash. No good
<wyager[m]> Or, alternatively, I might unintentionally pull in a dependency that uses a global allocator under the hood even though it marks itself as no_std. At least one of my deps did this, and I had to fork it to fix. If I had been using the "fake global allocator" strategy at that point, my program would have panicked deep in the bowels of that library in a rarely-traversed code path
<JamesMunns[m]> Yep, I don't have a better answer though, and it's definitely an early, in progress feature.
<wyager[m]> Ok thanks, just wanted to confirm. I'm pretty excited about the effort to move things to explicit allocators - should be great for embedded work
aktaboot[m] has quit [Quit: Idle timeout reached: 172800s]
<wyager[m]> I just got talc working in my embedded board's external RAM. very handy
<JamesMunns[m]> I've built a lot of projects with my own allocators and collections, but that loses you access to the built in collections.
<JamesMunns[m]> It works well, but it's not a portable or good general solution
<wyager[m]> Honestly the main thing I want is Arc. I have a hard-realtime app but I'd like a little bit of dynamic alloc. Plan is to allocate and deallocate only in low-prio threads, but high-prio threads can pass around already-constructed Arcs
<JamesMunns[m]> Yep
<JamesMunns[m]> I built a "pool of vecs" I've used which works well for packet comms: https://docs.rs/erdnuss-comms/latest/erdnuss_comms/frame_pool/index.html
<JamesMunns[m]> But yeah, that only works for "first party" solutions
<wyager[m]> Yeah, I had my own custom allocator with GC in a low-prio thread for growable `u8` streams, but it was A) super complicated and B) not usable except to simulate `Vec<u8>`. Having a real allocator is nice for sure
<wyager[m]> Plus it's the kind of thing you want a lot of eyes on, bug-wise
<JamesMunns[m]> mnemos-alloc also used to have its own set of custom collections, tho we've moved a lot of that back to alloc/async wrappers over a global allocator
<JamesMunns[m]> Anyway - yeah, you have my commiseration, even if I don't have good solutions :)
<wyager[m]> Haha thanks, appreciate it (and the confirmation)
<JamesMunns[m]> Happy to chat if anyone has questions - I really have built a lot of my own collections, and run them through miri and such, happy to share notes
<wyager[m]> Do you know who's responsible for the allocator_api stuff (or where I can find that info)? May be worth pinging them with this use case in case this is not something they're optimizing for currently
<JamesMunns[m]> Not off the top of my head!
GrantM11235[m] has joined #rust-embedded
<GrantM11235[m]> <wyager[m]> "Is it possible to link against..." <- Maybe you can define a global allocator that will cause a linker error if you try to use it. Like the alloc method tries to call a missing extern fn
<wyager[m]> <GrantM11235[m]> "Maybe you can define a global..." <- Hmmm that's an interesting idea
<JamesMunns[m]> panic-never is good prior art for that, and describes the downsides of that approach
<GrantM11235[m]> I suspect that in practice it would work better than panic-never. Panic-never requires the compiler to do a lot of optimizations to get rid of every single bounds check, etc, but this (alloc-never?) only requires simple dead code elimination
<corecode[m]> <corecode[m]> "hi, does anybody generate..." <- i guess i'll write a `cargo xtask dist` that handles this
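For the J-Link question: one shape such an xtask step could take is generating a J-Link Commander script next to the ELF after `cargo build`. Device name, speed, and paths below are placeholder assumptions; the commands (`device`, `si`, `speed`, `loadfile`, `r`, `g`, `exit`) are standard J-Link Commander ones:

```rust
/// Emit a J-Link Commander script that flashes `firmware` and resets the
/// target; run the result with `JLinkExe -CommandFile flash.jlink`.
fn jlink_script(device: &str, firmware: &str) -> String {
    format!(
        "device {device}\n\
         si SWD\n\
         speed 4000\n\
         loadfile {firmware}\n\
         r\n\
         g\n\
         exit\n"
    )
}

fn main() {
    // Hypothetical device and path, for illustration:
    let script = jlink_script("STM32F407VG", "target/thumbv7em-none-eabihf/release/app");
    println!("{script}");
}
```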