<re_irc_>
<@willeml:matrix.org> Oh, sorry, I should have specified it’s all open and up to date on GitHub, I think I posted the repo a few times, but that was a long time ago
<re_irc_>
<@willeml:matrix.org> Also, main.rs is not meant to be a link, someone is just parking the domain
<re_irc_>
<@josfemova:matrix.org> Is there any sort of daq-hal for data acquisition systems like National Instruments equipment? or at least wrappers around the C api NI provides?
<re_irc_>
<@firefrommoonlight:matrix.org> willeml: sweet - I'll take a look later this week
SomeWeirdAnon has quit [Ping timeout: 268 seconds]
<re_irc_>
<@firefrommoonlight:matrix.org> Related: Would you use internal Flash on applicable MCUs over external unless you're out of space? Should be faster and simplify hardware
<re_irc_>
<@firefrommoonlight:matrix.org> Although external is cheap
<re_irc_>
<@9names:matrix.org> probably not. have used flash for static webpages before, because it's simple and cheap.
<re_irc_>
<@9names:matrix.org> have also used external sdram for this when it was available.
<re_irc_>
<@9names:matrix.org> but if you go external flash you have the option of flashless MCUs, which can also be a good fit for some applications
<re_irc_>
<@jamesmunns:matrix.org> If you're just using rtt and not defmt, you can skip the steps on piping it to that, since you don't need to decode the data.
<re_irc_>
<@jamesmunns:matrix.org> Just the `nc localhost 8765` step should print the contents of rtt
xnor has joined #rust-embedded
<re_irc_>
<@korken89:matrix.org> Does anyone know of a chip that is SPI to UART? So an MCU can talk to multiple of these over SPI to get many UARTs?
<re_irc_>
<@grantm11235:matrix.org> If you just want to multiplex your uart, you could use something like a 74hc4052
<re_irc_>
<@korken89:matrix.org> Thanks, though I need multiple physical UARTs
<re_irc_>
<@korken89:matrix.org> Preferably with a few kB of FIFO and maybe up to 10 Mbaud
<re_irc_>
<@korken89:matrix.org> I'll give it a look :)
<re_irc_>
<@dirbaio:matrix.org> it has 2 hardware uarts, they can do 10Mbaud
<re_irc_>
<@dirbaio:matrix.org> but with PIO you can do crazy stuff
<re_irc_>
<@dirbaio:matrix.org> all the 32 pins could be uarts :D
<re_irc_>
<@ryan-summers:matrix.org> You could also go with a USB -> UART solution? I know that FTDI makes some USB -> 4/8/16 UART chips
<re_irc_>
<@ryan-summers:matrix.org> Although that might require a usb host controller on chip, which would be no-fun
<re_irc_>
<@dirbaio:matrix.org> nrf can't do usb host :(
<re_irc_>
<@jamesmunns:matrix.org> I've used the black pill to go up to 10-12Mbps
<re_irc_>
<@jamesmunns:matrix.org> (I was using that for a high speed RS485)
<re_irc_>
<@jamesmunns:matrix.org> I think at least two of the UARTs can go that fast? Maybe 3?
<re_irc_>
<@dirbaio:matrix.org> the issue is stm32s are unobtainium :P
<re_irc_>
<@jamesmunns:matrix.org> I mean, if he needs five, I can send him them
<re_irc_>
<@jamesmunns:matrix.org> but if he needs a prod design, yeahhhhh
<re_irc_>
<@dirbaio:matrix.org> 🤣
<re_irc_>
<@jamesmunns:matrix.org> Honestly it does sound like a job for PIOs or an iCE40 or something
<re_irc_>
<@thalesfragoso:matrix.org> How about the gigadevice chips ?
<re_irc_>
<@lachlansneff:matrix.org> Could `std::io::{Read, Write}` make their way into libcore? Looks like they don't have anything that relies on libstd.
<re_irc_>
<@thalesfragoso:matrix.org> I think io::Error was the problem
<re_irc_>
<@lachlansneff:matrix.org> I see that now. Hmm
<re_irc_>
<@dirbaio:matrix.org> io::Error has `Box<dyn Stuff>` 💩
<re_irc_>
<@lachlansneff:matrix.org> Could the core traits have an error associated type?
<re_irc_>
<@lachlansneff:matrix.org> and the std traits would be trait aliases
<re_irc_>
<@dirbaio:matrix.org> it'd have to be a parallel trait no matter what :(
<re_irc_>
<@dirbaio:matrix.org> trait aliases are not a thing :D
<re_irc_>
<@lachlansneff:matrix.org> Indeed they are!
<re_irc_>
<@lachlansneff:matrix.org> A few things would still need work
<re_irc_>
<@lachlansneff:matrix.org> The `read_to_string` method
<re_irc_>
<@lachlansneff:matrix.org> and `read_to_end`
<re_irc_>
<@dirbaio:matrix.org> either way you could get the same by having 2 traits, then doing blanket impls like `impl<T: core::io::Write> std::io::Write for T`
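(A minimal sketch of that two-trait shape, modelled locally since only std itself could write such a blanket impl; the `core_io`/`std_io` module names are illustrative, not real APIs.)

```rust
mod core_io {
    // Hypothetical alloc-free trait, standing in for a would-be core::io::Write.
    pub trait Write {
        type Error;
        fn write(&mut self, buf: &[u8]) -> Result<usize, Self::Error>;
    }
}

mod std_io {
    // Stand-in for std::io::Write, with the concrete error type baked in.
    pub trait Write {
        fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error>;
    }

    // The blanket impl from the message above: anything implementing the
    // core-style trait gets the std-style one for free.
    impl<T> Write for T
    where
        T: super::core_io::Write,
        T::Error: Into<std::io::Error>,
    {
        fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
            super::core_io::Write::write(self, buf).map_err(Into::into)
        }
    }
}
```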
<re_irc_>
<@dirbaio:matrix.org> but still
<re_irc_>
<@dirbaio:matrix.org> ANNOYING
<re_irc_>
<@lachlansneff:matrix.org> Right, but having two traits would be annoying
<re_irc_>
<@dirbaio:matrix.org> and would stuff from std impl core::io::Write? 🤔
<re_irc_>
<@lachlansneff:matrix.org> I think a trait alias would work *except* for `read_to_string` and `read_to_end`.
<re_irc_>
<@lachlansneff:matrix.org> No, they'd be the same trait
<re_irc_>
<@lachlansneff:matrix.org> They could impl std::io::Read
<re_irc_>
<@lachlansneff:matrix.org> It'd be a trait alias with the error type set to io::Error
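(On nightly, that alias idea looks roughly like this; `trait_alias` is unstable, and the `Read` trait here is a stand-in for a would-be core-level trait, not the real `std::io::Read`.)

```rust
#![feature(trait_alias)]

// Stand-in for the proposed core-level trait with an associated error type.
pub trait Read {
    type Error;
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error>;
}

// The std-level name would then just pin the error type (nightly only).
pub trait IoRead = Read<Error = std::io::Error>;
```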
<re_irc_>
<@dirbaio:matrix.org> ah yeah with the trait alias sure
<re_irc_>
<@lachlansneff:matrix.org> I dunno, maybe it'd be possible if trait extensions were possible
<re_irc_>
<@lachlansneff:matrix.org> Like if a trait alias could include additional methods
<re_irc_>
<@lachlansneff:matrix.org> Or, if `Vec` and `String` were moved to `core`, the trait could have `Error` and `Allocator` be associated types
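(A hypothetical shape for that suggestion, using the nightly `allocator_api` names; whether `read_to_end` should sit on the trait at all is exactly what gets debated below.)

```rust
#![feature(allocator_api)]

use core::alloc::Allocator;

// Hypothetical: assumes Vec (with its allocator parameter) were usable from core.
pub trait Read {
    type Error;
    type Alloc: Allocator;

    fn read(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error>;

    // Only meaningful where some allocator exists; the impl picks which one.
    fn read_to_end(&mut self, buf: &mut Vec<u8, Self::Alloc>) -> Result<usize, Self::Error>;
}
```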
fabic has quit [Ping timeout: 240 seconds]
<re_irc_>
<@sympatron:matrix.org> Vec and String need alloc though
<re_irc_>
<@lachlansneff:matrix.org> They'll be able to take a custom allocator type parameter
<re_irc_>
<@lachlansneff:matrix.org> Vec already can on nightly
<re_irc_>
<@dirbaio:matrix.org> having `Allocator` associated type would force users to have *some* allocator to use the traits
<re_irc_>
<@lachlansneff:matrix.org> Yep
<re_irc_>
<@lachlansneff:matrix.org> I don't really see too much of an issue with that
<re_irc_>
<@lachlansneff:matrix.org> We could default it to a no-op one
<re_irc_>
<@dirbaio:matrix.org> but what if an impl actually tries to return a `String`? it explodes?
<re_irc_>
<@lachlansneff:matrix.org> I mean, I wouldn't suggest defaulting to a no-op.
<re_irc_>
<@lachlansneff:matrix.org> But, the user could if they wanted
<re_irc_>
<@dirbaio:matrix.org> doesn't seem a very solid solution :S
<re_irc_>
<@dirbaio:matrix.org> imo core::io shouldn't require alloc at all
<re_irc_>
<@lachlansneff:matrix.org> core already has an alloc module
<re_irc_>
<@lachlansneff:matrix.org> Which contains the `Allocator` trait.
<crabbedhaloablut>
Why don't we use the transparent bridge from libera<->matrix?
<re_irc_>
<@lachlansneff:matrix.org> (on nightly)
<re_irc_>
<@dirbaio:matrix.org> Yeah but I don't want to have to supply an allocator to do IO
<re_irc_>
<@lachlansneff:matrix.org> You could just set it to a no-op.
<re_irc_>
<@dirbaio:matrix.org> if `core::io::Write` requires an allocator, I'm going to continue using my custom `Write` trait that doesn't
<re_irc_>
<@lachlansneff:matrix.org> I agree, it's not ideal
<re_irc_>
<@dirbaio:matrix.org> setting it to no-op would crash if impls try to use it
<agg>
crabbedhaloablut: a few reasons: a) the libera channel still isn't registered because it's in the #rust namespace and the core team haven't replied to my email asking if we can get this room delegated (they need to claim the namespace first)
<re_irc_>
<@dirbaio:matrix.org> impls have no idea if the allocator is noop or not
<re_irc_>
<@dirbaio:matrix.org> if the API allows returning a string, they should be able to return one without fear of crashing
<agg>
b) the matrix room has >100 users so we need matrix.org admins to enable the bridge for us manually, and they're also not responding to my messages about it >_>
<agg>
c) when we had the transparent bridge, it regularly kicked lots of idle matrix users, which annoyed everyone
<re_irc_>
<@lachlansneff:matrix.org> Yeah, I guess
<re_irc_>
<@lachlansneff:matrix.org> It just seems like there must be an elegant solution to this
<re_irc_>
<@lachlansneff:matrix.org> Might require language changes
<re_irc_>
<@lachlansneff:matrix.org> The read_to_string and read_to_end methods really should've been in an extension trait
<re_irc_>
<@adamgreig:matrix.org> @room it's meeting time again! we'll start in 5 minutes, agenda is here: https://hackmd.io/skUOXiM0RK2fM828HiYeAg please add anything you'd like to announce or discuss!
<re_irc_>
<@cuno555:matrix.org> I hope that we are not drifting towards a design where no_std implies alloc, whether dummy allocator or not. (And I also hope that we won't be forced to use async.) I see a complexity risk in relying on ever more arcane and subtle features, and language extensions, that make the Rust / embedded Rust learning curve even steeper and longer.
<re_irc_>
<@lachlansneff:matrix.org> I certainly agree.
<re_irc_>
<@dirbaio:matrix.org> > And I also hope that we won't be forced to use async
<re_irc_>
<@adamgreig:matrix.org> having core+alloc+std all together, where you just use allocating things only when you have an allocator, would be pretty neat
<re_irc_>
<@adamgreig:matrix.org> though might make it really hard to know if you can use a given lib without an OS or alloc....
<re_irc_>
<@thejpster:matrix.org> I don't know why people are so down on alloc
<re_irc_>
<@adamgreig:matrix.org> nice that if something is no_std then you can probably use it embedded
<re_irc_>
<@thejpster:matrix.org> A pool allocator is a perfectly reasonable way of assigning blocks of memory
<re_irc_>
<@jamesmunns:matrix.org> (this was one of my ideas for std-aware cargo)
<re_irc_>
<@adamgreig:matrix.org> I think just a lot of embedded people traditionally like only static allocation
<re_irc_>
<@dirbaio:matrix.org> being able to verify everything fits in RAM at compile time is priceless
<re_irc_>
<@thejpster:matrix.org> I mean, otherwise you just keep a bunch of blocks in each module
<re_irc_>
<@jamesmunns:matrix.org> Basically: make `std` just a crate with optional features: `alloc` and `libstd`
<re_irc_>
<@thejpster:matrix.org> Mostly unused
<re_irc_>
<@thejpster:matrix.org> Just have one set of blocks and share them
<re_irc_>
<@adamgreig:matrix.org> yea, but then you know you have enough all the time
<re_irc_>
<@jamesmunns:matrix.org> so `std` + `no-default-features` == `core`
<re_irc_>
<@adamgreig:matrix.org> if you share them you might need more than you have, which can be a real problem on some embedded systems
<re_irc_>
<@thejpster:matrix.org> If you keep leaking them, note which file and line allocated it
<re_irc_>
<@lachlansneff:matrix.org> And `#![no_std]` would alias to that somehow?
<re_irc_>
<@adamgreig:matrix.org> if you know you always have enough ram because it's all allocated at the start, you can never have this problem
<re_irc_>
<@thejpster:matrix.org> If you keep double freeing, note who freed it last
<re_irc_>
<@jamesmunns:matrix.org> Lachlan Sneff: it was a rejected RFC, so we never even got that far :)
<re_irc_>
<@adamgreig:matrix.org> it's not about leaking or double freeing so much as different runtime execution paths that might in some circumstances require more allocation than memory
<re_irc_>
<@thejpster:matrix.org> adamgreig: You can always need more than you have, shared or not
<re_irc_>
<@lachlansneff:matrix.org> Anyhow, core implies no global allocation at the moment.
<re_irc_>
<@lachlansneff:matrix.org> a no_std crate can require that you pass in an allocator
<re_irc_>
<@adamgreig:matrix.org> that's often not true in embedded, you can know exactly how much you need statically and never need to exceed it
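(For concreteness, the static-allocation style being defended here usually means fixed-capacity containers; this sketch assumes heapless-style const-generic capacity.)

```rust
use heapless::Vec;

// Capacity is part of the type: the worst case is fixed at compile time,
// and a full buffer is an error you handle, not a reallocation.
fn build_frame(payload: &[u8]) -> Result<Vec<u8, 64>, ()> {
    let mut frame: Vec<u8, 64> = Vec::new();
    frame.extend_from_slice(&[0xAA, 0x55])?; // header
    frame.extend_from_slice(payload)?;
    Ok(frame)
}
```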
<re_irc_>
<@adamgreig:matrix.org> quick announcement from this week is that probe-rs 0.11 is out now, there's a blog post about it here: https://probe.rs/blog/release-0-11-0/
<re_irc_>
<@adamgreig:matrix.org> fair to say there's just a whole bunch of good fixes and improvements I think
<re_irc_>
<@adamgreig:matrix.org> and some very cool looking things in the pipeline!
<re_irc_>
<@adamgreig:matrix.org> if anyone has any other announcements, now's the time
<re_irc_>
<@dirbaio:matrix.org> multicore 😍
<re_irc_>
<@adamgreig:matrix.org> multicore + sequences => rp2040 support in mainline
<re_irc_>
<@adamgreig:matrix.org> I feel like there should probably be something besides embedded-hal, but what is it...
<re_irc_>
<@lachlansneff:matrix.org> I wonder if embedded-nal should be discussed in these meetings too
<re_irc_>
<@therealprof:matrix.org> How's the blog coming along? 😛
<re_irc_>
<@adamgreig:matrix.org> I guess technically it's an r-e-c project and i wouldn't wanna butt in on their stuff but happy to discuss specific things if you want to :)
<re_irc_>
<@adamgreig:matrix.org> therealprof: 🙈 well, we added the probe-rs release announcement...
<re_irc_>
<@lachlansneff:matrix.org> Is there a process for lifting projects from r.e.c to e.c. ?
<re_irc_>
<@lachlansneff:matrix.org> Not sure if they'd want to do that, just curious
<re_irc_>
<@adamgreig:matrix.org> I think last time it was discussed we worked out it would be the same as any other project, i.e. an rfc adding the repo to the relevant team's list and that team voting to approve
<re_irc_>
<@adamgreig:matrix.org> ok, well, let's get back to some of the open embedded-hal things I guess
<re_irc_>
<@adamgreig:matrix.org> there's the async traits, and also the new prs on separate-buffers spi and changing the return type of blocking::spi
<re_irc_>
<@adamgreig:matrix.org> honestly I guess last I looked most of them were progressing ok in the pr itself, are there any details that we should try and clear up a bit further now?
<re_irc_>
<@lachlansneff:matrix.org> I think the key bit is whether the spi traits can do partial read/writes.
<re_irc_>
<@dirbaio:matrix.org> they can't
<re_irc_>
<@dirbaio:matrix.org> re-returning the slice is completely redundant
<re_irc_>
<@lachlansneff:matrix.org> or the length
<re_irc_>
<@lachlansneff:matrix.org> I agree about returning a slice now
cr1901 has joined #rust-embedded
<re_irc_>
<@lachlansneff:matrix.org> The issue I can think of with enforcing complete reads is that you can't really read an unknown amount of data without doing it byte-wise.
<re_irc_>
<@dirbaio:matrix.org> you read N bytes, the peripheral clocks the bus for N*8 cycles and gives you N bytes, period
<re_irc_>
<@lachlansneff:matrix.org> So, we'd need a separate trait for non-dma, partial reads?
<re_irc_>
<@dirbaio:matrix.org> you're supposed to know how much data you want to read upfront
<re_irc_>
<@dirbaio:matrix.org> if you don't know, then read to a 1-byte slice in a loop
<re_irc_>
<@dirbaio:matrix.org> no need for separate trait
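(A sketch of that byte-at-a-time loop; `SpiRead` is a stand-in for whichever read trait lands, and the 0xFF end-of-data marker is purely illustrative.)

```rust
// `SpiRead` stands in for the proposed read trait; only the loop matters here.
pub trait SpiRead {
    type Error;
    fn read(&mut self, buf: &mut [u8]) -> Result<(), Self::Error>;
}

fn read_until_end<S: SpiRead>(spi: &mut S, out: &mut [u8]) -> Result<usize, S::Error> {
    let mut n = 0;
    while n < out.len() {
        let mut byte = [0u8; 1];
        spi.read(&mut byte)?;
        if byte[0] == 0xFF {
            // 0xFF as "nothing more to send" is purely illustrative.
            break;
        }
        out[n] = byte[0];
        n += 1;
    }
    Ok(n)
}
```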
<re_irc_>
<@lachlansneff:matrix.org> 👍️
<re_irc_>
<@dirbaio:matrix.org> not knowing the size upfront for SPI is very rare
<re_irc_>
<@dirbaio:matrix.org> thankfully UART crap like "read until newline" is not common on SPI 🤣
<re_irc_>
<@lachlansneff:matrix.org> Good point
<re_irc_>
<@dirbaio:matrix.org> (AT commands 🤮)
<re_irc_>
<@lachlansneff:matrix.org> yuck
<re_irc_>
<@lachlansneff:matrix.org> Been dealing with those a lot recently
<re_irc_>
<@dirbaio:matrix.org> related: what happened to these? I thought they were ready to go in, but they seem to be stuck
<re_irc_>
<@jamesmunns:matrix.org> oh, might be off topic
<re_irc_>
<@dirbaio:matrix.org> yeah I changed it from ReadWrite to Transfer, it seems to have more consensus and is in line with the old naming which is nice
<re_irc_>
<@jamesmunns:matrix.org> but I think we probably want a trait for "double buffered dma" actions
<re_irc_>
<@jamesmunns:matrix.org> I have this implemented for nrf's SPIM and SAADC peripherals, we call it `TransferPending`, but yeah.
<re_irc_>
<@jamesmunns:matrix.org> I can share the code for that if anyone wants to see the pattern. Feel free to ignore if too off-topic now.
<re_irc_>
<@adamgreig:matrix.org> as a generic trait in e-h? what would the trait interface look like?
<re_irc_>
<@dirbaio:matrix.org> jamesmunns: I think that's a bit out of scope of embedded-hal, as it's very specialized. HALs can offer a hardware-specific API doing that, it doesn't have to be a trait
<re_irc_>
<@jamesmunns:matrix.org> I just figured if we were talking about locking in DMA-friendly naming conventions
<re_irc_>
<@jamesmunns:matrix.org> back-to-back transfers are probably not TOO uncommon
<re_irc_>
<@adamgreig:matrix.org> wonder if double-buffered or circular buffered would be more helpful.. i guess sort of much of a muchness
<re_irc_>
<@dirbaio:matrix.org> circular buffer is super hard with rust borrow guarantees 🤣
<re_irc_>
<@adamgreig:matrix.org> for things that need to stream continuously like an ADC it could be quite useful
<re_irc_>
<@adamgreig:matrix.org> (double buffering I mean)
<re_irc_>
<@adamgreig:matrix.org> perhaps for device-initiated things like spi and uart traits it's less important
<re_irc_>
<@therealprof:matrix.org> adamgreig: We really should have the CHANGELOG.md check in there. Both of them are missing CHANGELOG.md and I almost missed it. Luckily(?) bors didn't react...
<re_irc_>
<@adamgreig:matrix.org> but adc, spi peripherals, maybe uart reception, it can be essential to always have one buffer on-the-go
<re_irc_>
<@jamesmunns:matrix.org> I can't remember how I did that for sprocket-boot's reception, but probably just always blocking
<re_irc_>
<@thalesfragoso:matrix.org> adamgreig: heh, I've done double buffering drivers, but only with three buffers >_>
<re_irc_>
<@adamgreig:matrix.org> triple buffering, the extra buffer is for extra safety :P
<re_irc_>
<@adamgreig:matrix.org> must sacrifice one buffer to the borrowck gods
<re_irc_>
<@thalesfragoso:matrix.org> haha, but your idea was also nice, to write an invalid value to cause a DMA error
<re_irc_>
<@thalesfragoso:matrix.org> people are using it on the H7, so you can make `next_transfer_with` safe, but still you can't do much; if you take too long you will miss the bus...
<re_irc_>
<@adamgreig:matrix.org> i like that it still gives you memory safety while letting the hal return a buffer to the user, i guess, though i was interested to see a report recently that someone found they got much better performance doing things a bit differently
<re_irc_>
<@adamgreig:matrix.org> apparently 5x speedup which seems wild
<re_irc_>
<@adamgreig:matrix.org> but i guess depends a lot on how big your buffers are and how often you have to swap them around
<re_irc_>
<@thalesfragoso:matrix.org> that doesn't seem very safe...
<re_irc_>
<@jamesmunns:matrix.org> For the code that is using that SPIM driver, I'm just using a heapless-pool to allocate 4k pages
<re_irc_>
<@thalesfragoso:matrix.org> I mean, safe yeah
<re_irc_>
<@thalesfragoso:matrix.org> correct without fences, shaky
<re_irc_>
<@jamesmunns:matrix.org> so it just runs around trying to get back up to two-pending-buffers at all times
<re_irc_>
<@adamgreig:matrix.org> "oh we're not hard real-time" "...but if we take more than 100µs to process this buffer, our dma engine will overwrite memory we were using"
<re_irc_>
<@adamgreig:matrix.org> ooops, all real-time
<re_irc_>
<@thalesfragoso:matrix.org> yeah, it's nicer to do with a pool, I've seen that on the L0 driver
<re_irc_>
<@adamgreig:matrix.org> classic thing with double buffering in c, so i guess it's nice that the safe rust interfaces do prevent that
<re_irc_>
<@thalesfragoso:matrix.org> also on the embassy ethernet driver, we also use a pool
<re_irc_>
<@thalesfragoso:matrix.org> but then it's a bit hard to control because the "allocation" can fail
<re_irc_>
<@adamgreig:matrix.org> thalesfragoso: yea, it's not a safe method, the comments say "/// NOTE(unsafe): Memory safety is not guaranteed." 💀
<re_irc_>
<@thalesfragoso:matrix.org> adamgreig: I mean, but I think it's still need some DMBs
<re_irc_>
<@adamgreig:matrix.org> it has a note about that too
<re_irc_>
<@thalesfragoso:matrix.org> not sure if DMB itself, but at least compiler_fences
<re_irc_>
<@thalesfragoso:matrix.org> which should degrade the speed
<re_irc_>
<@adamgreig:matrix.org> not sure why it says to put compiler_fence into the user code instead of just putting it into the closure
<re_irc_>
<@thalesfragoso:matrix.org> if it does, something is wrong...
<re_irc_>
<@thalesfragoso:matrix.org> I think the dmb is needed though, the CPU's write buffer can delay a write to the data buffer until after the write that enables the DMA
<re_irc_>
<@thalesfragoso:matrix.org> anyways, probably got a bit side tracked
<re_irc_>
<@grantm11235:matrix.org> error[E0119]: conflicting implementations of trait `embedded_hal::prelude::_embedded_hal_blocking_spi_Write<_>` for type `GenericSpiDevice<_>`:
<re_irc_>
<@dirbaio:matrix.org> I don't get why though
<re_irc_>
<@thalesfragoso:matrix.org> I heard something about it but now I can't remember exactly, but maybe if the API guarantees that after finishing, all items will be initialized, then it might be okay
<re_irc_>
<@dirbaio:matrix.org> shouldn't the downstream crate be the one getting the "conflicting impl" error if it tries to impl that?
<re_irc_>
<@lachlansneff:matrix.org> Yea, that's a weird error.
<re_irc_>
<@grantm11235:matrix.org> It's because of the weird opt-in blocking::spi::Default trait
<re_irc_>
<@lachlansneff:matrix.org> Those were pretty annoying when I was trying to write the async versions, so I just didn't add those
<re_irc_>
<@thalesfragoso:matrix.org> maybe because if W is a downstream type then they can implement it
<re_irc_>
<@dirbaio:matrix.org> maybe the Default thing should be removed?
<re_irc_>
<@dirbaio:matrix.org> it encourages writing the `nb` impl as the One True Impl and deriving the blocking one from it
<re_irc_>
<@lachlansneff:matrix.org> Yes
<re_irc_>
<@grantm11235:matrix.org> dirbaio: Yeah, it doesn't even save that much boilerplate anyway
<re_irc_>
<@lachlansneff:matrix.org> I vote for removing it
<re_irc_>
<@dirbaio:matrix.org> while I'd rather have the ecosystem move away from `nb` :D
<re_irc_>
<@dirbaio:matrix.org> also once the `futures` traits are in, there might be another Default deriving `blocking` from the `futures` ones
<re_irc_>
<@lachlansneff:matrix.org> I'd suggest not doing that, but would definitely be possible.
<re_irc_>
<@dirbaio:matrix.org> and there's hardware where you can't use the `Default` anyway
<re_irc_>
<@dirbaio:matrix.org> nrf's can only do DMA
starblue has quit [Ping timeout: 252 seconds]
<re_irc_>
<@adamgreig:matrix.org> I have to run, thanks everyone for attending!
starblue has joined #rust-embedded
<re_irc_>
<@thalesfragoso:matrix.org> thanks adam
<re_irc_>
<@lachlansneff:matrix.org> Thank you for helping organize this chaos
<re_irc_>
<@thalesfragoso:matrix.org> dirbaio: addition no, but maybe division... (looks at the rp2040)
<re_irc_>
<@adamgreig:matrix.org> 😅 it's nice to see a lot of movement on embedded-hal!
<re_irc_>
<@dirbaio:matrix.org> yea the rp2040 is just WTF
<re_irc_>
<@thalesfragoso:matrix.org> why go for thumbv7 if you can just plug in a peripheral for doing divisions, right?
<re_irc_>
<@grantm11235:matrix.org> Does anyone else want to make a PR to remove the default traits? If not, I can probably do it Eventually ™️
<re_irc_>
<@lachlansneff:matrix.org> I don't really know anything about the rp2040, what's wrong with it?
<re_irc_>
<@dirbaio:matrix.org> GrantM11235: go for it, you have my emotional support
<re_irc_>
<@dirbaio:matrix.org> I predict it'll be controversial though
<re_irc_>
<@lachlansneff:matrix.org> I can go ahead and make the PR if you don't get to it first
<re_irc_>
<@dirbaio:matrix.org> the weird rp2040 stuff is probably because they weren't able to get a good deal on cortex m4 or higher licensing..?
<re_irc_>
<@thalesfragoso:matrix.org> even an M3 would make sense
<re_irc_>
<@thalesfragoso:matrix.org> Lachlan Sneff: thumbv6 doesn't have an instruction for division, so they made a hardware peripheral for that
<re_irc_>
<@grantm11235:matrix.org> Maybe they were just having fun designing peripherals
<re_irc_>
<@lachlansneff:matrix.org> thalesfragoso: wat
<re_irc_>
<@thalesfragoso:matrix.org> GrantM11235: I can relate to that, hehe
<re_irc_>
<@lachlansneff:matrix.org> why not switch to a newer isa?
<re_irc_>
<@dirbaio:matrix.org> ah true the M3 is included in that ARM "super easy to get started licensing stuff" right?
<re_irc_>
<@dirbaio:matrix.org> no idea why they went with the m0 then
<re_irc_>
<@dirbaio:matrix.org> silicon area..?
<re_irc_>
<@thalesfragoso:matrix.org> M0 is marketed as low power too
<re_irc_>
<@dirbaio:matrix.org> but the rp2040 isn't even lowpower
<re_irc_>
<@thalesfragoso:matrix.org> but doesn't make much sense to have a fast dual core with it...
<re_irc_>
<@thalesfragoso:matrix.org> dirbaio: but you can still say it uses a low power core, haha
<re_irc_>
<@dirbaio:matrix.org> they market it as "low power", but in comparison to the full fat linux raspis 🤣
<re_irc_>
<@thalesfragoso:matrix.org> although I think "run fast, sleep more" is way better
<re_irc_>
<@grantm11235:matrix.org> Is the rp2040 the fastest m0 on the market?
<re_irc_>
<@dirbaio:matrix.org> you can definitely tell they had lots of fun designing it
<re_irc_>
<@dirbaio:matrix.org> the PIO and stuff
<re_irc_>
<@thalesfragoso:matrix.org> assembly is back in the game baby, haha
<re_irc_>
<@dirbaio:matrix.org> the "basic" peripherals (uart, spi, i2c) are a bit meh, you can tell they just slapped the crappiest IP they could grab on there
<re_irc_>
<@dirbaio:matrix.org> SPI can't transfer bytes back-to-back 🤣
<re_irc_>
<@dirbaio:matrix.org> which is like WTF
<re_irc_>
<@grantm11235:matrix.org> I was surprised that they used any basic peripherals, I was hoping that they would do everything with PIOs
<re_irc_>
<@dirbaio:matrix.org> it makes some sense, PIOs must be much bigger in silicon area
<re_irc_>
<@dirbaio:matrix.org> so you can use the hardware uart for basic stuff and still have the PIOs free for fun stuff
<re_irc_>
<@lachlansneff:matrix.org> The default stuff really should be done with specialization
<re_irc_>
<@dirbaio:matrix.org> not likely to be stable anytime soon...
<re_irc_>
<@lachlansneff:matrix.org> Indeed
<re_irc_>
<@grantm11235:matrix.org> dirbaio: did you see the links I posted the other day about run-to-completion futures?
<re_irc_>
<@dirbaio:matrix.org> yeah
<re_irc_>
<@dirbaio:matrix.org> I have the same q as thalesfragoso : how does that protect from mem::forget
<re_irc_>
<@dirbaio:matrix.org> ah you did answer it
<re_irc_>
<@lachlansneff:matrix.org> Can you link again?
<re_irc_>
<@dirbaio:matrix.org> you're simply violating the `poll` safety contract
<re_irc_>
<@thalesfragoso:matrix.org> my follow-up question was, how do you prevent a user from calling mem::forget on a future inside a "complete async fn"
<re_irc_>
<@dirbaio:matrix.org> the concern I see is that the "RunToCompletionFuture" infects everything: if you want to poll a RunToCompletionFuture deep within an `async fn` stack, you have to make them all RunToCompletionFutures
<re_irc_>
<@thalesfragoso:matrix.org> the complete async fn wouldn't leak but any futures inside it could be leaked
<re_irc_>
<@dirbaio:matrix.org> similarly for libs, they'd have to take `where F: RunToCompletionFuture` instead of `where F: Future`
<re_irc_>
<@dirbaio:matrix.org> and for executors, they'd have to take `RunToCompletionFutures` if they want to hope to run everything
<re_irc_>
<@dirbaio:matrix.org> and in cases where the end user wants to handle futures (join, select, timeouts) it's very likely that end user code will need explicit `unsafe` blocks
<re_irc_>
<@thalesfragoso:matrix.org> nothing prevents the user from calling mem::forget in a future inside an async fn, and the user doesn't call the unsafe poll, the executor does
<re_irc_>
<@dirbaio:matrix.org> thalesfragoso: the idea is if user calls `mem::forget`, they're violating the safety contract so it's "fine"
<re_irc_>
<@grantm11235:matrix.org> I think the dma would need to be started at the first poll, not when the future is created
<re_irc_>
<@thalesfragoso:matrix.org> dirbaio: which safety contract ? they didn't call any unsafe functions to get there
<re_irc_>
<@dirbaio:matrix.org> they have to either `.await` the RunToCompletionFuture, in which case Rust would enforce the current `async fn` is `#[completion]`
<re_irc_>
<@dirbaio:matrix.org> OR
<re_irc_>
<@dirbaio:matrix.org> call `.poll` manually, in which case they have an `unsafe{}` block
<re_irc_>
<@thalesfragoso:matrix.org> hmmm
<re_irc_>
<@dirbaio:matrix.org> OR call something that indirectly calls `.poll`, in which case that something should be an `unsafe fn` or return an RunToCompletionFuture
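(For reference, the trait shape under discussion is roughly the following; the name and signature are illustrative, following the pre-RFC rather than any shipped API.)

```rust
use core::pin::Pin;
use core::task::{Context, Poll};

pub trait RunToCompletionFuture {
    type Output;

    /// # Safety
    /// Once polled, this future must be polled until it returns `Poll::Ready`
    /// before it may be dropped or forgotten; in the meantime it may have
    /// handed its buffers over to DMA.
    unsafe fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```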
<re_irc_>
<@dirbaio:matrix.org> so yeah, it's all technically correct
<re_irc_>
<@thalesfragoso:matrix.org> now makes more sense
<re_irc_>
<@dirbaio:matrix.org> the problem is RunToCompletionFuture infects EVERYTHING with unsafe
<re_irc_>
<@dirbaio:matrix.org> also
<re_irc_>
<@grantm11235:matrix.org> Is there any way that it could be enforced at compile time instead of being unsafe?
<re_irc_>
<@dirbaio:matrix.org> canceling DMA operations is fine, as long as you call Drop
<re_irc_>
<@grantm11235:matrix.org> Or is that just re-inventing an auto leak trait
<re_irc_>
<@dirbaio:matrix.org> what we want is "you MUST call Drop"
<re_irc_>
<@thalesfragoso:matrix.org> GrantM11235: Leak auto trait
<re_irc_>
<@dirbaio:matrix.org> what RunToCompletionFuture gives is "you MUST call poll until completion"
<re_irc_>
<@thalesfragoso:matrix.org> hmm, yeah, it's a bit different
<re_irc_>
<@dirbaio:matrix.org> DMA can be stopped on drop just fine, blocking for a little bit if needed
<re_irc_>
<@thalesfragoso:matrix.org> it also gets rid of async drop it seems
<re_irc_>
<@grantm11235:matrix.org> That pre-rfc and blog post both mention cancelable run-to-completion futures
<re_irc_>
<@dirbaio:matrix.org> yeah, I'm very not convinced about that part :S
<re_irc_>
<@dirbaio:matrix.org> "cancelable RunToCompletionFuture" is "like async Drop, but only for Futures"
<re_irc_>
<@grantm11235:matrix.org> Basically, you signal the future that it should cancel, then await the future until it cooperatively cancels
<re_irc_>
<@dirbaio:matrix.org> want to async close a `std::fs::File` on drop? you can't
tokomak has quit [Ping timeout: 256 seconds]
<re_irc_>
<@grantm11235:matrix.org> Futures yield cooperatively, so it makes sense to me that they would be cancelled cooperatively too
<re_irc_>
<@dirbaio:matrix.org> my point is if we had "async drop", it'd be useful for futures and other things
<re_irc_>
<@dirbaio:matrix.org> so "cancelable RunToCompletionFutures" would no longer be needed, they'd use "async drop" like everything else
<re_irc_>
<@grantm11235:matrix.org> Hmm, the difference between a cancellable future and a droppable future is that a cancellable future returns something after it is cancelled. I'm not sure if this is an important difference.
<re_irc_>
<@dirbaio:matrix.org> in embassy a while ago we had that
<re_irc_>
<@dirbaio:matrix.org> an "uart read slice" future that you could `.stop()` to make it return how many bytes have been read (it'd wait until the full slice had been read otherwise)
<re_irc_>
<@grantm11235:matrix.org> But if it just returned a "cancelled" error, then there isn't much practical difference
<re_irc_>
<@dirbaio:matrix.org> but that's "orthogonal" to RunToCompletionFutures right?
<re_irc_>
<@dirbaio:matrix.org> like, that could be used for normal Futures
<re_irc_>
<@grantm11235:matrix.org> Yeah
<re_irc_>
<@dirbaio:matrix.org> dunno
<re_irc_>
<@dirbaio:matrix.org> my main criticism is that infecting everything with unsafe is not very rusty
<re_irc_>
<@dirbaio:matrix.org> not for something as foundational as futures
<re_irc_>
<@dirbaio:matrix.org> everything else is kinda bikeshedding
<re_irc_>
<@grantm11235:matrix.org> Does it infect "everything" or just executors and some combinators?
<re_irc_>
<@dirbaio:matrix.org> all executors, all combinators, all async fns in user code
<re_irc_>
<@dirbaio:matrix.org> I think that qualifies as "everything" :D
<re_irc_>
<@dirbaio:matrix.org> like
<re_irc_>
<@dirbaio:matrix.org> libs will want to work with both Future and RunToCompletionFuture
<re_irc_>
<@dirbaio:matrix.org> so they'll take and return RunToCompletionFuture
<re_irc_>
<@dirbaio:matrix.org> (or alternatively have a Future and a RunToCompletionFuture version of everything, which is not nice either)
<re_irc_>
<@dirbaio:matrix.org> so as a user, as soon as you call one of these libs, you now have to make all your `async fn`s `#[completion]`
<re_irc_>
<@dirbaio:matrix.org> and at the very top level switch to an executor that takes RunToCompletionFuture instead of Future
<re_irc_>
<@dirbaio:matrix.org> or for example, the embedded-hal traits
<re_irc_>
<@grantm11235:matrix.org> dirbaio: But that doesn't require any `unsafe`, does it?
<re_irc_>
<@dirbaio:matrix.org> they'll return RunToCompletionFuture instead of Future
<re_irc_>
<@dirbaio:matrix.org> to accommodate DMA impls
<re_irc_>
<@dirbaio:matrix.org> so you're stuck with RunToCompletionFutures even if your particular impl doesn't use DMA
<re_irc_>
<@dirbaio:matrix.org> how would you do `select` with RunToCompletionFutures?
<re_irc_>
<@dirbaio:matrix.org> it HAS to have unsafe in user code
<re_irc_>
<@dirbaio:matrix.org> because you can give it a RunToCompletionFuture, have it poll a bit, then return it back to you before finishing, then drop it
<re_irc_>
<@dirbaio:matrix.org> if you store a RunToCompletionFuture in a struct of yours, now you have to make methods that poll the future `unsafe`
<re_irc_>
<@dirbaio:matrix.org> those two things are rather common
<re_irc_>
<@dirbaio:matrix.org> sometimes you manipulate futures in ways that are not just `.await`ing them, many of these become unsafe with RunToCompletionFuture
<re_irc_>
<@grantm11235:matrix.org> That blog post suggests changing `select` to only allow abort-safe futures, and turning run-to-completion futures into abort-safe futures by spawning and joining a new task
<re_irc_>
<@grantm11235:matrix.org> Does spawning new tasks require alloc?
<re_irc_>
<@dirbaio:matrix.org> I guess yeah
<re_irc_>
<@dirbaio:matrix.org> it's made abort-safe because if the current future gets aborted that child future keeps running right?
<re_irc_>
<@dirbaio:matrix.org> the memory for that child future needs to come from the heap then...
<re_irc_>
<@dirbaio:matrix.org> you could do what the embassy executor does for top-level tasks: define a static to store the future there
<re_irc_>
<@dirbaio:matrix.org> but then it can fail if that static is already being used for another future
<re_irc_>
<@grantm11235:matrix.org> I think I am coming back around to the auto leak trait idea lol
<re_irc_>
<@dirbaio:matrix.org> 🤣
<re_irc_>
<@dirbaio:matrix.org> it's for different usecases though...
<re_irc_>
<@dirbaio:matrix.org> `Leak` would allow safe+ergonomic dma impls
<re_irc_>
<@dirbaio:matrix.org> but the "block on Drop" problem remains
<re_irc_>
<@dirbaio:matrix.org> I think that problem is bigger for big-server std workloads than for embedded though
<re_irc_>
<@dirbaio:matrix.org> and could be solved by async drop or by cancelation tokens or whatever
<re_irc_>
<@dirbaio:matrix.org> I don't really care about async drop personally though 🤣
<re_irc_>
<@grantm11235:matrix.org> I still think that cooperative cancellation is nicer than the current norm where you just stop polling, but I guess it is too late for that to change
<re_irc_>
<@dirbaio:matrix.org> yeah...
<re_irc_>
<@dirbaio:matrix.org> it makes sense though, if you want Futures to be regular Rust objects
<re_irc_>
<@dirbaio:matrix.org> all Rust objects can be dropped...
<re_irc_>
<@dirbaio:matrix.org> futures in most other languages have some "black magic" that runs them out of your control
<re_irc_>
<@dirbaio:matrix.org> the js event loop, the Go runtime...
<re_irc_>
<@dirbaio:matrix.org> in rust they're plain old objects, no magic "runtime" needed
<re_irc_>
<@dirbaio:matrix.org> that's what makes rust async awesome for embedded :D
<re_irc_>
<@dirbaio:matrix.org> people want Rust async to be like Go or JS
<re_irc_>
<@dirbaio:matrix.org> it's just not
<re_irc_>
<@grantm11235:matrix.org> IDK, `RawWakerVTable` still looks a bit like magic to me 🤣
<re_irc_>
<@bobmcwhirter:matrix.org> it's really just function pointers, at the end of the day
<re_irc_>
<@grantm11235:matrix.org> When I see too many asterisks, my eyes start to glaze over
<re_irc_>
<@dirbaio:matrix.org> the waker black magic stuff is just
<re_irc_>
<@dirbaio:matrix.org> a manually-written vtable, like the one you get with `dyn`
<re_irc_>
<@dirbaio:matrix.org> most executors write the vtable so wakers behave like `Arc<dyn Wake>`
<re_irc_>
<@dirbaio:matrix.org> embassy makes it behave like `&'static dyn Task`
<re_irc_>
<@dirbaio:matrix.org> they could've made Waker be just `Arc<dyn Wake>` but this way it's possible to write executors that don't need alloc which is awesome
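(For anyone who hasn't stared at it before, the hand-written vtable is small; here is a minimal no-op, no-alloc waker. In a real executor the data pointer would identify the task to wake.)

```rust
use core::ptr;
use core::task::{RawWaker, RawWakerVTable, Waker};

// The data pointer would normally point at a task (an &'static task in an
// embassy-style executor, or the Arc payload in an allocating one); here it
// is simply ignored.
unsafe fn clone(data: *const ()) -> RawWaker {
    RawWaker::new(data, &VTABLE)
}
unsafe fn wake(_data: *const ()) {}
unsafe fn wake_by_ref(_data: *const ()) {}
unsafe fn drop_waker(_data: *const ()) {}

static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, wake, wake_by_ref, drop_waker);

fn noop_waker() -> Waker {
    // Safety: every vtable function trivially upholds the RawWaker contract,
    // since none of them do anything.
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}
```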
<re_irc_>
<@lachlansneff:matrix.org> Sometimes I wish that it was a trait instead and `poll` had a type parameter for it, so there could be no vtable overhead in certain cases.
<re_irc_>
<@lachlansneff:matrix.org> But that boat has sailed
<re_irc_>
<@dirbaio:matrix.org> that wouldn't really work
<re_irc_>
<@lachlansneff:matrix.org> Perhaps not
<re_irc_>
<@dirbaio:matrix.org> executors often create different waker vtables for each task
<re_irc_>
<@lachlansneff:matrix.org> In the case of a static executor, it might work but
<re_irc_>
<@dirbaio:matrix.org> so if it wasn't vtable-based, polling the same fut from 2 different tasks would duplicate the entire code for the fut
<re_irc_>
<@dirbaio:matrix.org> also how would you store wakers?
<re_irc_>
<@lachlansneff:matrix.org> Good point
<re_irc_>
<@dirbaio:matrix.org> that's why it's type-erased to the max :S
<re_irc_>
<@dirbaio:matrix.org> also you wouldn't be able to do `dyn Future`
<re_irc_>
<@lachlansneff:matrix.org> Also a good point
<re_irc_>
<@lachlansneff:matrix.org> Did any of you see that Pre-RFC or post or something that suggested changing the way Futures are structured internally so instead of an enum, it was an indirect call?
<re_irc_>
<@dirbaio:matrix.org> no?
<re_irc_>
<@lachlansneff:matrix.org> I think it was a year or two ago
<re_irc_>
<@dirbaio:matrix.org> does CYCCNT work inside QEMU? asking just in case so I don't waste time if it doesn't 🤣
<re_irc_>
<@dirbaio:matrix.org> and is it accurate? or should I test on real hardware?
<re_irc_>
<@jamesmunns:matrix.org> I don't think it does
<re_irc_>
<@jamesmunns:matrix.org> rtic's docs talk about that?
<re_irc_>
<@sirhcel:matrix.org> I'm thinking about adding support for RS-485/half-duplex with transmitter control to `stm32f3xx-hal`'s `Serial`. I wanted to take a look at how other HALs implement this, but my search on GitHub did not turn up anything related to RS-485 in `stm32-rs` and `nrf-rs`, and I'm a bit puzzled now. Did I miss the obvious? Or could it be that there is no support for RS-485 in these HALs?
<re_irc_>
<@dirbaio:matrix.org> isn't rs485 the same framing as normal uart but with different voltages?
<re_irc_>
<@sirhcel:matrix.org> Practically, yes, and typically an external RS-485 driver IC gets used to convert between single-ended (UART side) and differential (bus side). And all the typical MCU UARTs I've seen so far support the additional hardware control signal for switching between send and receive on that external transmitter.
<re_irc_>
<@dirbaio:matrix.org> oh hm
<re_irc_>
<@dirbaio:matrix.org> I haven't seen such signal in the nrfs
<re_irc_>
<@dirbaio:matrix.org> this sounds like something you could build on the embedded-hal uart and digital traits, so it works on any mcu
<re_irc_>
<@sirhcel:matrix.org> For me, using the hardware transmitter control is the interesting part. But in the case of `stm32f3xx-hal` this would require adding a third pin as a type parameter, and I wanted to see how others have done this.
<re_irc_>
<@sirhcel:matrix.org> dirbaio: That's absolutely possible, and I've seen such an implementation on the ESP32, whose UART does not support the control signal in hardware.
<re_irc_>
<@sirhcel:matrix.org> I like your idea because it is sufficiently heretical to my tunnel vision, which was focused on just using the hardware transmitter control. :)
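(A minimal sketch of the trait-based approach suggested above, assuming embedded-hal 0.2's blocking serial and digital traits; the `Rs485` wrapper and the collapsed error handling are illustrative only.)

```rust
use embedded_hal::blocking::serial::Write;
use embedded_hal::digital::v2::OutputPin;

/// Drive the transceiver's DE/!RE pin high for the duration of a write,
/// then release the bus so this node can receive again.
pub struct Rs485<S, DE> {
    serial: S,
    de: DE,
}

impl<S, DE: OutputPin> Rs485<S, DE> {
    pub fn new(serial: S, de: DE) -> Self {
        Self { serial, de }
    }

    pub fn send<W>(&mut self, words: &[W]) -> Result<(), ()>
    where
        S: Write<W>,
    {
        self.de.set_high().map_err(drop)?;
        self.serial.bwrite_all(words).map_err(drop)?;
        // bflush blocks until the last word has left the shift register,
        // so the driver isn't disabled mid-frame.
        self.serial.bflush().map_err(drop)?;
        self.de.set_low().map_err(drop)
    }
}
```

On parts whose USART has native driver-enable support (as the STM32F3 one does), the HAL can still expose that pin directly; a wrapper like the one above is just the portable fallback.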