<tangotaylor[m]>
<dngrs[m]> "You can do qemu tests - for an..." <- Awesome, thanks!
<Kaspar[m]>
> Pin is about "thou shalt not replace"
<Kaspar[m]>
But `Pin` implements `DerefMut` as well, so it does allow replacing. Isn't `Pin` about `thou shalt not change location in memory`?
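A quick sketch of the distinction: `Pin<P>` only implements `DerefMut` when the target is `Unpin`, and even for `!Unpin` targets `Pin::set` allows replacing the value *in place*; the guarantee is about the memory location, not about the value never changing.

```rust
use core::marker::PhantomPinned;
use core::pin::Pin;

// `i32: Unpin`, so `Pin`'s DerefMut impl applies and plain assignment works:
fn replace_unpin(mut p: Pin<&mut i32>) {
    *p = 42;
}

// For a `!Unpin` type, `*p = ...` does not compile, but `Pin::set` still
// replaces the value at the same address (dropping the old value in place):
fn replace_pinned(mut p: Pin<&mut PhantomPinned>) {
    p.set(PhantomPinned);
}
```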
<ryan-summers[m]>
Anyone know of an easy way to make e-h 1.0 trait impls work with drivers that still use the 0.2 versions? I'm using a crate that hasn't been updated in 5 years, so it's unlikely that it'll be updated
<ryan-summers[m]>
I guess you could make your own wrapper shim that impls the 0.2 versions pretty easily (in this case, it's just GPIO), but curious if there's a crate that already does this
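For the GPIO-only case the shim really is small; a minimal sketch (names are mine), assuming both versions are in Cargo.toml with 0.2 renamed to `embedded_hal_02`:

```rust
use embedded_hal::digital::OutputPin as OutputPin1; // e-h 1.0
use embedded_hal_02::digital::v2::OutputPin as OutputPin02; // e-h 0.2

/// Wraps an e-h 1.0 pin so drivers written against e-h 0.2 can use it.
pub struct Shim<P>(pub P);

impl<P: OutputPin1> OutputPin02 for Shim<P> {
    type Error = P::Error;

    fn set_low(&mut self) -> Result<(), Self::Error> {
        self.0.set_low()
    }

    fn set_high(&mut self) -> Result<(), Self::Error> {
        self.0.set_high()
    }
}
```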
<ryan-summers[m]>
Yeah then it's probably not from reading the calibration constants, likely how you're applying them. Also double check the endianness
<ryan-summers[m]>
Also, you could jump into GDB and manually print the memory at that location (or use probe-rs to dump it), just to confirm that what you get in code is what's actually at that address
<mameluc[m]>
I get 946 and 1256 for 30 °C and 130 °C. My ADC gives 498, but it is kind of cold in my room, so that could make sense. Whether I calculate the value on my computer or on the MCU, I get -114.516.
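For what it's worth, plugging those numbers into the usual two-point interpolation (a sketch assuming STM32-style TS_CAL1 at 30 °C and TS_CAL2 at 130 °C) reproduces that exact value, so the formula looks right and the raw ADC sample (resolution or reference voltage vs. the factory calibration conditions) is the likely mismatch:

```rust
/// Linear interpolation between the two factory calibration points.
fn temp_c(adc: f32, ts_cal1: f32, ts_cal2: f32) -> f32 {
    (130.0 - 30.0) / (ts_cal2 - ts_cal1) * (adc - ts_cal1) + 30.0
}

// temp_c(498.0, 946.0, 1256.0) == -114.516..., matching the value above.
```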
<mameluc[m]>
ryan-summers[m]: good idea
<ryan-summers[m]>
probe-rs-cli can do a memory read for you pretty easily I believe
<ryan-summers[m]>
Does anyone know how to use embedded-hal-bus with a framework like RTIC without having to sprinkle in a ton of critical sections? RTIC already handles the concurrency for me
<diondokter[m]>
ryan-summers[m]: RTIC has its own Arbiter that does the same thing but using RTIC stuff
<ryan-summers[m]>
Ah okay thanks, didn't know that. I assume it's not in rtic 1.0 though?
<diondokter[m]>
Ehhh
<diondokter[m]>
IDK if it works with 1.0
<ryan-summers[m]>
Welp, maybe this is the reason to just jump to 2.0. No reason to stay on 1.0 anyways
<ryan-summers[m]>
Ah, but the rtic-sync stuff requires e-h-async, not e-h 1.0
<dirbaio[m]>
if you put all tasks that use devices on the same SPI bus at the same priority you should be able to use RefCellDevice
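A sketch of that setup (constructor signatures differ slightly across embedded-hal-bus versions; `spi`, the `cs_*` pins, and the delays come from your HAL):

```rust
use core::cell::RefCell;
use embedded_hal::delay::DelayNs;
use embedded_hal::digital::OutputPin;
use embedded_hal::spi::SpiBus;
use embedded_hal_bus::spi::RefCellDevice;

// One raw SpiBus, several SpiDevice handles; each transaction asserts its own CS.
fn split_bus<B: SpiBus, CS: OutputPin, D: DelayNs>(
    bus: &RefCell<B>,
    cs_a: CS,
    cs_b: CS,
    delay_a: D,
    delay_b: D,
) -> (RefCellDevice<'_, B, CS, D>, RefCellDevice<'_, B, CS, D>) {
    (
        RefCellDevice::new(bus, cs_a, delay_a),
        RefCellDevice::new(bus, cs_b, delay_b),
    )
}
// RefCell is !Sync, so both devices must be used from the same
// priority/context, which is exactly the restriction described above.
```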
<ryan-summers[m]>
Yeah I don't. They're all at different priorities and need to be in different tasks
<ryan-summers[m]>
Yeah I wrote the RTIC additions to shared-bus specifically because of this project
<ryan-summers[m]>
I think bumping to 1.0 is still fine for it. Would be nice if rtic-sync had a non-async version, but seems like it's going the route of async-first
<dirbaio[m]>
where's the code for that? can't find it in upstream shared-bus?
<dirbaio[m]>
so all we'd gain by introducing a mutex trait would be allowing those 60 lines to be reused, at the cost of increased complexity and harder-to-read docs
<dirbaio[m]>
my recommendation would be to copypaste those 60 lines into your project and adapt them to use AtomicCheckMutex
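For context, the core of the `AtomicCheckMutex` idea is roughly this (a sketch, not the actual shared-bus code): not a real lock, but a guard that panics if the bus is ever used concurrently, so contention becomes a loud bug rather than a deadlock.

```rust
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicBool, Ordering};

pub struct AtomicCheckMutex<T> {
    bus: UnsafeCell<T>,
    busy: AtomicBool,
}

// Safe to share because `lock` enforces exclusive access at runtime.
unsafe impl<T: Send> Sync for AtomicCheckMutex<T> {}

impl<T> AtomicCheckMutex<T> {
    pub const fn new(bus: T) -> Self {
        Self { bus: UnsafeCell::new(bus), busy: AtomicBool::new(false) }
    }

    pub fn lock<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Detects re-entry or preemption in the middle of a transaction.
        self.busy
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .expect("bus is busy: concurrent access from another priority?");
        let result = f(unsafe { &mut *self.bus.get() });
        self.busy.store(false, Ordering::SeqCst);
        result
    }
}
```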
<mameluc[m]>
This example gets me pretty close, but it is without calibration, so a little low in my case; also, some consts need to be adjusted for my chip.
<mameluc[m]>
Amazing how much I learn from looking at the examples ^^
<dirbaio[m]>
anyway, if you do decide to update `shared-bus` then please please please cut out SPI support entirely 🙏
<dirbaio[m]>
(or update it to properly handle CS, but at that point you'd be reinventing `embedded-hal-bus`...)
<dirbaio[m]>
though i'd prefer if as an ecosystem we consolidated efforts in embedded-hal-bus :)
<ryan-summers[m]>
Yeah if you would accept it into e-h-bus, I'd prefer to do it there
<dirbaio[m]>
I think we should, can't see any other way of supporting non-async RTIC
<dirbaio[m]>
because you can't hold RTIC locks across task invocations, right? or can you?
<dirbaio[m]>
like, can you make the I2C proxy struct actually own an RTIC lock? and actually lock it when accessing i2c?
<ryan-summers[m]>
No, they're closure based
<dirbaio[m]>
yeah you do `cx.i2c.lock(|i2c| use(i2c))`
<dirbaio[m]>
but can you move the actual lock (the cx.i2c) into another struct?
<dirbaio[m]>
so the struct would do `self.i2c.lock(|i2c| use(i2c))`
<ryan-summers[m]>
I don't believe so. I think I tried owning a lock at one point but the type isn't defined until post-compilation or something weird like that
<dirbaio[m]>
yeah I feared so...
<ryan-summers[m]>
Or, owning something that could be locked, i.e. owning the mutex
<dirbaio[m]>
oh well
<ryan-summers[m]>
But maybe it changed. That was rtic 1.0
<dirbaio[m]>
would be cool if you could, then you could have "real" shared spi/i2c without the risk of panics if you set it up wrong
<dirbaio[m]>
wonder if there's something RTIC can do to support it
<ryan-summers[m]>
Yeah might ask around
<mabez[m]>
Who has admin access to rust-embedded-community? Would it be possible to add me (GitHub: mabezdev) and temporarily Jorge (GitHub: japaric) to the org so they can transfer `aligned`?
<Ralph[m]1>
mabez[m]: eldruin took care of that for my crates the last time
<corecode[m]>
hi, does anybody generate J-Link Commander files during the build, so that a ready-to-use build asset pops out?
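One way to do it is a `build.rs` that writes the script alongside the build output; a hypothetical sketch (device name, speed, and paths are placeholders to adapt):

```rust
// build.rs
use std::{env, fs, path::PathBuf};

fn main() {
    let out = PathBuf::from(env::var("OUT_DIR").unwrap());
    // Standard J-Link Commander commands: select device/interface, connect,
    // flash, reset, run, quit.
    let script = "\
device STM32F407VG
si SWD
speed 4000
connect
loadfile target/thumbv7em-none-eabihf/release/app.hex
r
g
q
";
    fs::write(out.join("flash.jlink"), script).unwrap();
}
```

`JLinkExe -CommanderScript <path>/flash.jlink` then gives you a one-shot flash step.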
<eldruin[m]>
<mabez[m]> "Who has admin access to rust-..." <- Do you want to join the org? If so please open a PR to the [meta](https://github.com/rust-embedded-community/meta) repository adding yourself. We would be happy to have you
<eldruin[m]>
as for transferring a repository without being a member of the org, there are a couple of permission problems. The easiest for everybody is that you transfer it to me personally and I then transfer it to the org
<eldruin[m]>
hash32 sounds like a good fit as well
<eldruin[m]>
regardless of whether you joined the org, you can keep whatever privileges on the repo
<wyager[m]>
Is it possible to link against alloc without needing a global allocator? I'm only using the `_in` functions to allocate in a non-global arena allocator. I would actually prefer if my program failed to link specifically if I accidentally use a global allocator anywhere, but I still want to pull in Vec and Arc and so on without getting "no global memory allocator found but one is required; link to std or add `#[global_allocator]` to a static item that implements the `GlobalAlloc` trait"
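For reference, the `_in` constructors mentioned are the nightly `allocator_api` ones; a minimal sketch where `arena` stands in for any `core::alloc::Allocator` impl:

```rust
#![feature(allocator_api)]
extern crate alloc;

use alloc::vec::Vec;
use core::alloc::Allocator;

// Allocates in `arena` instead of going through #[global_allocator].
fn demo<A: Allocator>(arena: A) -> Vec<u32, A> {
    let mut v = Vec::new_in(arena);
    v.push(1);
    v
}
```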
<JamesMunns[m]>
If you can, it definitely requires nightly features; I'm not sure you can fully avoid the global alloc, though. You might be able to have a dummy impl that can't actually alloc
<JamesMunns[m]>
Like have a "null" global alloc to satisfy the dependency, then ensure you only ever use local allocators
<wyager[m]>
<JamesMunns[m]> "Like have a "null" global..." <- That's what I'm doing - a global alloc with only like 16 bytes of space. Problem is, re "ensure I only ever use local allocators" - the compiler should really do this for me. Easy to make a mistake (use wrong `new` API) that results in a runtime crash. No good
<wyager[m]>
Or, alternatively, I might unintentionally pull in a dependency that uses a global allocator under the hood even though it marks itself as no_std. At least one of my deps did this, and I had to fork it to fix it. If I had been using the "fake global allocator" strategy at that point, my program would have panicked deep in the bowels of that library in a rarely-traversed code path
<JamesMunns[m]>
Yep, I don't have a better answer though, and it's definitely an early, in-progress feature.
<wyager[m]>
Ok thanks, just wanted to confirm. I'm pretty excited about the effort to move things to explicit allocators - should be great for embedded work
<wyager[m]>
I just got talc working in my embedded board's external RAM. Very handy
<JamesMunns[m]>
I've built a lot of projects with my own allocators and collections, but that loses you access to the built-in collections.
<JamesMunns[m]>
It works well, but it's not a portable or good general solution
<wyager[m]>
Honestly the main thing I want is Arc. I have a hard-realtime app but I'd like a little bit of dynamic alloc. Plan is to allocate and deallocate only in low-prio threads, but high-prio threads can pass around already-constructed Arcs
<JamesMunns[m]>
But yeah, that only works for "first party" solutions
<wyager[m]>
Yeah, I had my own custom allocator with GC in a low-prio thread for growable `u8` streams, but it was A) super complicated and B) not usable except to simulate `Vec<u8>`. Having a real allocator is nice for sure
<wyager[m]>
Plus it's the kind of thing you want a lot of eyes on, bug-wise
<JamesMunns[m]>
mnemos-alloc also used to have its own set of custom collections, tho we've moved a lot of that back to alloc/async wrappers over a global allocator
<JamesMunns[m]>
Anyway - yeah, you have my commiseration, even if I don't have good solutions :)
<wyager[m]>
Haha thanks, appreciate it (and the confirmation)
<JamesMunns[m]>
Happy to chat if anyone has questions - I really have built a lot of my own collections, and run them through miri and such, happy to share notes
<wyager[m]>
Do you know who's responsible for the allocator_api stuff (or where I can find that info)? May be worth pinging them with this use case in case this is not something they're optimizing for currently
<JamesMunns[m]>
Not off the top of my head!
<GrantM11235[m]>
<wyager[m]> "Is it possible to link against..." <- Maybe you can define a global allocator that will cause a linker error if you try to use it. Like the alloc method tries to call a missing extern fn
<wyager[m]>
<GrantM11235[m]> "Maybe you can define a global..." <- Hmmm that's an interesting idea
<JamesMunns[m]>
panic-never is good prior art for that, and describes the downsides of that approach
<GrantM11235[m]>
I suspect that in practice it would work better than panic-never. Panic-never requires the compiler to do a lot of optimizations to get rid of every single bounds check, etc., but this (alloc-never?) only requires simple dead-code elimination
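A sketch of that "alloc-never" idea (the symbol name is made up; as with panic-never, it only links if the optimizer can prove the global allocator is unreachable, so expect it to work in release builds only):

```rust
use core::alloc::{GlobalAlloc, Layout};

extern "Rust" {
    // Deliberately never defined: if any code path can reach the global
    // allocator, linking fails with an undefined-reference error.
    fn global_alloc_is_forbidden() -> !;
}

struct AllocNever;

unsafe impl GlobalAlloc for AllocNever {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        global_alloc_is_forbidden()
    }
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        global_alloc_is_forbidden()
    }
}

#[global_allocator]
static GLOBAL: AllocNever = AllocNever;
```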