jcroisant has quit [Quit: Connection closed for inactivity]
pbsds has quit [Quit: Ping timeout (120 seconds)]
pbsds has joined #rust-embedded
IlPalazzo-ojiisa has quit [Quit: Leaving.]
emerent has quit [Ping timeout: 260 seconds]
emerent has joined #rust-embedded
notgull has joined #rust-embedded
notgull has quit [Ping timeout: 264 seconds]
crabbedhaloablut has quit []
crabbedhaloablut has joined #rust-embedded
sjm42[m] has quit [Quit: Idle timeout reached: 172800s]
sirhcel[m] has joined #rust-embedded
<sirhcel[m]>
Became aware of e-h PR [#567](https://github.com/rust-embedded/embedded-hal/pull/567) proposing to switch to a generic badge for the MSRV, showing the MSRV of the latest released version of a crate. Now i'm wondering why, for example, linux-embedded-hal has a badge with a fixed rustc version but not a `rust_version` in `Cargo.toml`. Is there a reason for giving a badge but not specifying an MSRV in `Cargo.toml`? Is this like a 'serving suggestion'?
crabbedhaloablut has quit [Read error: Connection reset by peer]
crabbedhaloablut has joined #rust-embedded
disasm[m] has quit [Quit: Idle timeout reached: 172800s]
<Ralph[m]>
i'd like to move the [tb6612fng-rs](https://github.com/rursprung/tb6612fng-rs/) crate there (with myself & @ripytide staying as maintainers for the time being) as i don't even have the matching hardware anymore (and while i can still access it for the time being, this will change eventually). i do not expect the crate to undergo a lot more changes in the future (we're currently iterating on it a bit to clean up the API as part of the e-h 1.0 migration, but after that it should "just work").
<thejpster[m]>
that email will work - have marked my membership as public
<Ralph[m]>
ah, great, thanks! i was wondering whether you had retired from that org and the email hadn't been updated
<thejpster[m]>
which is likely what I will ask you to do if you email me :)
<Ralph[m]>
ah, great! i didn't see the meta repository! might i suggest that you add a `.github` repository (or rename `meta` to it?) so that the README is shown on the org overview (as is e.g. the case for [rust-embedded](https://github.com/rust-embedded/))?
emerent has quit [Remote host closed the connection]
emerent has joined #rust-embedded
Dr_Who has joined #rust-embedded
<sirhcel[m]>
What are best practices for supporting both, e-h and e-h-async for a driver from a single crate? Sharing code within a single crate seems like something to go for from my perspective. Is there some “magic fairy dust” like embedded-hal-compat or other support for not duplicating much code around?
<sirhcel[m]>
For non-embedded code i’ve seen a blocking API wrapping the non-blocking one many times. But this does not look feasible or desirable in the embedded case to me.
<JamesMunns[m]>
For example, if you have a lot of packet parsing/wrangling, that doesn't need to do I/O directly. You can pass slices in and out. Then async and non-async code can reuse the same core functions.
<JamesMunns[m]>
but for things "over time", like "read the sensor until it reaches value x, then send message y, then wait for response z" will never, IMO, be abstractable over async and non async. At least not well.
<JamesMunns[m]>
For a lot of sensor and basic drivers: separating the I/O and logic is totally reasonable. Stuff like I2C/SPI sensors, etc.
<thejpster[m]>
And by I/O and logic, I think of these as "does this function have all the data it needs to calculate its return value, either from the input arguments or from a global variable, or does it have to go and touch hardware to get it and will that hardware maybe not be ready with the answer yet"
<thejpster[m]>
Parsing eight bytes into a sensor reading - that's logic. You only need the eight bytes (and maybe some cached state, I don't know). Reading a sensor reading however will involve touching the hardware to get the eight bytes.
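The logic/I-O split described above can be sketched like this. This is a hedged illustration only: the eight-byte little-endian temperature/pressure layout and all the names are invented, not taken from any real sensor or from embedded-hal.

```rust
/// Pure "logic": turn 8 raw bytes into a reading. No I/O and no bus
/// traits, so the same function serves blocking and async drivers alike.
/// (The little-endian temperature/pressure layout is invented purely
/// for illustration.)
fn parse_reading(buf: &[u8; 8]) -> (i32, u32) {
    let temp = i32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]);
    let pressure = u32::from_le_bytes([buf[4], buf[5], buf[6], buf[7]]);
    (temp, pressure)
}

/// Blocking "I/O shell": fetch the bytes however the bus requires,
/// then delegate to the pure function.
fn read_blocking(mut fill: impl FnMut(&mut [u8; 8])) -> (i32, u32) {
    let mut buf = [0u8; 8];
    fill(&mut buf);
    parse_reading(&buf)
}

/// Async shell: same logic function, different I/O style.
async fn read_async<F, Fut>(fill: F) -> (i32, u32)
where
    F: FnOnce() -> Fut,
    Fut: core::future::Future<Output = [u8; 8]>,
{
    parse_reading(&fill().await)
}
```

In a real driver the `fill` closures would be replaced by `embedded-hal` / `embedded-hal-async` bus transactions, but `parse_reading` stays shared and unit-testable on the host.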
<thejpster[m]>
This question comes up a lot for me because I wrote a blocking FAT32 and SD/MMC driver and people keep asking for an async one.
<thejpster[m]>
In FAT32, and in SD/MMC protocol, there isn't actually all that much of "I have data X and data Y, now calculate result Z" because the data is spread around a block device - the disk - and accessing the disk is very slow and hardware specific.
<JamesMunns[m]>
yeah, ideally In A Perfect World nb or nb-alike hand-rolled code would have "notifications" for completion/readiness too, but e-h-nb doesn't because there's really no agreed way to do that
<JamesMunns[m]>
(I'm speaking more general in that there are three "genres" of I/O approach, so I'm including some hand-rolled stuff that might match option 1 or option 2, even if not exactly those crates. If you write your own state machine loop or "poll" function that doesn't use async, I'd still call that "in the genre of option 2")
<JamesMunns[m]>
(not explaining directly to JP, who is likely familiar with these options, more for the "general discussion of why it's hard to abstract over these three approaches")
<thejpster[m]>
I'd perhaps offer option 4 - you have a layered system where every layer is a task, which is a thread and a queue, and the tasks communicate by passing messages over queues to other tasks. The tasks at the bottom get messages posted from IRQ handlers to indicate that the hardware is ready. The application posts messages to / receives messages from the tasks at the top.
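On hosted Rust, that layered scheme can be sketched with std threads and channels. Everything here (the `Request` type, the simulated sensor) is invented for illustration; on a bare-metal target you'd use statically allocated queues posted from IRQ handlers rather than `std::sync::mpsc`.

```rust
use std::sync::mpsc;
use std::thread;

/// Messages the application layer sends down to the "driver" task.
enum Request {
    ReadSensor { reply: mpsc::Sender<u32> },
    Shutdown,
}

/// Bottom-layer task: owns the (simulated) hardware and services
/// requests one at a time from its queue.
fn driver_task(rx: mpsc::Receiver<Request>) {
    let mut fake_sensor_value = 41u32;
    while let Ok(msg) = rx.recv() {
        match msg {
            Request::ReadSensor { reply } => {
                fake_sensor_value += 1; // stand-in for touching hardware
                let _ = reply.send(fake_sensor_value);
            }
            Request::Shutdown => break,
        }
    }
}

/// Application-side helper: post a request, block on the reply queue.
fn read_once(tx: &mpsc::Sender<Request>) -> u32 {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request::ReadSensor { reply: reply_tx }).unwrap();
    reply_rx.recv().unwrap()
}
```

Usage: spawn `driver_task` on a thread (`thread::spawn(move || driver_task(rx))`) and call `read_once(&tx)` from the application layer.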
<JamesMunns[m]>
I mean, there are probably lots of ways to handle this! Message passing certainly is one, and could be implemented in many ways.
<JamesMunns[m]>
especially if you are willing to have pre-emptive multitasking
<thejpster[m]>
function calls vs message passing has been a matter of robust discussion since the 1960s I believe.
<thejpster[m]>
but to highlight that the three approaches above all assume communication via function call
<dirbaio[m]>
message passing tends to consume a lot of RAM tho
<JamesMunns[m]>
thejpster[m]: we could have a smalltalk about it
<dirbaio[m]>
you need to reserve RAM for the "queue" for each possible function call
<thejpster[m]>
for small values of "lots"
<dirbaio[m]>
and that RAM is in use "all the time"
<dirbaio[m]>
vs
<dirbaio[m]>
with function calls, only when the function call is being actually done
<thejpster[m]>
heap allocate the messages and your queue is just a list of pointers ;)
<JamesMunns[m]>
I think we're getting away from the original question of "can you abstract over async or not", and not "how to model I/O and events when designing a system"
<JamesMunns[m]>
Which is also a fun discussion, but I have to drop out if we are switching topics :)
<dirbaio[m]>
then your memory usage becomes unpredictable, no way to tell if you're going to OOM at compile time
<dirbaio[m]>
and you got fragmentation
<JamesMunns[m]>
<thejpster[m]> "I'd perhaps offer option 4 - you..." <- for what it's worth thejpster, if you squint your eyes, this is how async/await works in Rust, in most implementations. The only nit is the only message you can send (out of the box) is "ready". Like an empty notification.
<JamesMunns[m]>
There is usually an intrusive linked list of "ready" tasks. Hardware, data structures, etc., will have some kind of "waker" that contains one or more tasks waiting for some kind of outcome. When that condition occurs, a task is "sent a message" by placing it into the ready queue for processing.
<JamesMunns[m]>
Now, you COULD use channels (either statically-pre-allocated or heap-allocated, pick your poison), but in async/await at least, these would still use the "readiness notification" infra. So your layered tasks would await a message being received, and would be "pushed a notification" whenever someone sent a message to that queue.
<JamesMunns[m]>
(note that async/await in Rust isn't hardcoded to ONLY use this model, but MOST executors use an intrusive readiness queue for the "send notification" mechanism. the only one I'm aware of that DOESN'T use that model is cbiffle's executor)
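The "the only message you can send is ready" idea can be seen in a minimal single-task executor, sketched here on hosted Rust using only std (`std::task::Wake` plus thread parking). Real embedded executors keep a ready queue of many tasks instead of parking one thread, but the wake/poll shape is the same.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread;

/// The "waker": the only message it can deliver is an empty
/// "you are ready, poll me again" notification.
struct ThreadWaker(thread::Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark(); // deliver the readiness notification
    }
}

/// Minimal single-task executor: poll, then sleep until woken.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => thread::park(), // wait for a wake()
        }
    }
}
```

Swap `thread::park`/`unpark` for "push onto an intrusive ready list and WFI" and you have the skeleton of most embedded executors.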
<thejpster[m]>
<dirbaio[m]> "then your memory usage becomes..." <- I've found that in practice that's not an issue
<thejpster[m]>
and the messages are small, and pool allocators don't fragment
<thejpster[m]>
James Munns: florian has also tried to convince me that async/await is just message passing in a trench-coat.
<thejpster[m]>
maybe I just need to play with it more
<JamesMunns[m]>
thejpster[m]: I wouldn't go all the way to "is". But "has nontrivial commonalities"
<JamesMunns[m]>
but "message passing", "object oriented", and "actor model" tend to mean very different things to people, sort of like how "embedded" can mean 128B of RAM or an NVIDIA Jetson :)
<JamesMunns[m]>
* can mean an 8051 with 128B of
<JamesMunns[m]>
thejpster[m]: Honestly it took some time to wrap my head around, but once I did, I wouldn't build anything nontrivial without it. Particularly where preemptive threads aren't an option.
FreeKill[m] has quit [Quit: Idle timeout reached: 172800s]
<mabez[m]>
That basically sums up how embedded-fatfs exists now: I asked the maintainer of rust-fatfs if they wanted async support, they weren't interested, so I hard-forked and proceeded for my use case. Fortunately (as mentioned above) there are many cases where you can keep your business logic separate from I/O, but I think a filesystem is so closely knit with I/O that it becomes impossible to do cleanly without resorting to hacks like maybe_async.
<JamesMunns[m]>
Whatcha mean by "new rust room"?
<JamesMunns[m]>
Is that #rust:matrix.org or something?
<JamesMunns[m]>
(I don't think I'm in that channel)
<JamesMunns[m]>
xiretza since you are in the screenshot above ^
<JamesMunns[m]>
(I don't actually know how room upgrades work on matrix)
<JamesMunns[m]>
I was able to join clicking the link in my last message, or you could try clicking #rust:matrix.org
<JamesMunns[m]>
other than that, no idea :D
<JamesMunns[m]>
That being said, you seem to have your own homeserver, so I wonder if maybe your homeserver needs to be updated to support that version or something?
<JamesMunns[m]>
My client (Beeper, with beeper's homeserver) had no problem joining the new room
dngrs[m] has quit [Quit: Idle timeout reached: 172800s]
moerk[m] has quit [Quit: Idle timeout reached: 172800s]
ChristianHeussy[ has joined #rust-embedded
<ChristianHeussy[>
Does anyone have a good reference (walkthrough, blog post, etc.?) for writing a HAL driver on top of a PAC? I started by taking a stab at GPIO for my part and my head is swimming a bit with the type erasure and generic pin aspects.
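For context, the generic-pin and type-erasure pattern that question is about looks roughly like this. This is a standalone sketch with an invented (simulated) register interface, not any particular HAL's API.

```rust
/// Generic-pin sketch: port and pin number are const generics, so each
/// pin is its own zero-sized type and mixing pins up is a compile
/// error. A real HAL would write PAC registers; nothing is touched here.
pub struct Pin<const PORT: char, const N: u8> {
    _private: (),
}

impl<const PORT: char, const N: u8> Pin<PORT, N> {
    /// In a real HAL, pins are handed out once by `split()` on the
    /// GPIO peripheral rather than being freely constructible.
    pub fn new() -> Self {
        Pin { _private: () }
    }

    /// "Type erasure": trade the compile-time identity for a uniform
    /// runtime representation, so pins can be stored in arrays or
    /// passed to drivers without generic parameters.
    pub fn erase(self) -> ErasedPin {
        ErasedPin { port: PORT, n: N }
    }
}

/// Runtime-identified pin: slightly bigger (it stores its identity),
/// but a single concrete type.
pub struct ErasedPin {
    port: char,
    n: u8,
}

impl ErasedPin {
    pub fn id(&self) -> (char, u8) {
        (self.port, self.n)
    }
}
```

The HALs mentioned in this channel generally layer typestate (input/output/alternate modes) on top of this same shape, which is where the generics start to pile up.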