ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
<re_irc> <James Munns> I should not have read that compiler_fence thread
<re_irc> <James Munns> it breaks my brain
<re_irc> <James Munns> I haven't skimmed it entirely - but has anyone brought up using things like atomics (in spinlock mutexes) to synchronize shared access to non-atomic data?
<re_irc> <dirbaio> brain clobber fence
<re_irc> <adamgreig> in theory that's probably OK
<re_irc> <adamgreig> it synchronises between atomics
<re_irc> <James Munns> yeah, fair
<re_irc> <James Munns> just not in our usage for dma and stuff
<re_irc> <dirbaio> you can do spinlocks with just atomics, with the right orderings
<re_irc> <dirbaio> no need for fences
<re_irc> <adamgreig> if you have two fences it ensures the right things happen relative to each other, it's when you just have one fence and you're hoping it does something that the problems maybe creep in
<re_irc> <James Munns> where we use compiler fences to synchronize non-volatile code relative to volatiles
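A minimal sketch of what "spinlocks with just atomics, with the right orderings" looks like — hypothetical code, not taken from any crate mentioned above. The Acquire on lock and Release on unlock are what synchronize the non-atomic data guarded by the lock; no separate fence is needed:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

pub struct SpinLock<T> {
    locked: AtomicBool,
    data: UnsafeCell<T>,
}

// Safety: access to `data` is serialized by `locked`.
unsafe impl<T: Send> Sync for SpinLock<T> {}

impl<T> SpinLock<T> {
    pub const fn new(data: T) -> Self {
        Self {
            locked: AtomicBool::new(false),
            data: UnsafeCell::new(data),
        }
    }

    pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Acquire: everything the previous holder wrote before its
        // Release unlock is visible to us once this succeeds.
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
        let r = f(unsafe { &mut *self.data.get() });
        // Release: publish our writes to the next Acquire.
        self.locked.store(false, Ordering::Release);
        r
    }
}
```

This is the "in theory that's probably OK" case: the fence-like ordering lives in the atomic operations themselves, unlike the DMA case where the other side isn't an atomic at all.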
<re_irc> <dirbaio> James Munns: nrf-hal, embassy-nrf DMA
<re_irc> <dirbaio> ah it was not a question :P
<re_irc> <adamgreig> the volatiles aren't atomics alas
<re_irc> <adamgreig> so yes
<re_irc> <James Munns> yeah, agree with you dirbaio
<re_irc> <James Munns> we use those... everywhere
<re_irc> <James Munns> and it DOES have an effect
<re_irc> <James Munns> but maybe not an intended one
<re_irc> <James Munns> "volatile_fence" when?
<re_irc> <dirbaio> perhaps llvm documents "fence" as "atomics only" but the impl is stronger?
<re_irc> <James Munns> dirbaio: that sounds about right
<re_irc> <dirbaio> cortex-m ci broke due to rust 1.53 :D
<re_irc> <adamgreig> yea lol
<re_irc> <dirbaio> * 1.63
<re_irc> <adamgreig> is it NLL?
<re_irc> <adamgreig> I think that's what's done it lol
<re_irc> <adamgreig> fix incoming
<re_irc> <adamgreig> let's see what else broke lol
starblue has quit [Ping timeout: 252 seconds]
<re_irc> <adamgreig> surprisingly all the other compile-fail tests pass
<re_irc> <adamgreig> unfortunately when it builds for 1.59.0 MSRV it will now fail lol
<re_irc> <James Munns> Honestly, we probably shouldn't be using references with DMA at all
<re_irc> <dirbaio> adamgreig: oh no.. cfg's?
<re_irc> <adamgreig> lol I see, you beat me to pushing the pr but i predict yours will fail :P
<re_irc> <adamgreig> good luck cfg'ing that
starblue has joined #rust-embedded
<re_irc> <adamgreig> hopefully the compile-fail stuff has a way around this
<re_irc> <James Munns> my recent work with allocators and stuff has really led me to believe that for Cursed Shit you really do need to make the "core abstraction" based on UnsafeCell + ptrs
<re_irc> <adamgreig> honestly it seems more like for Cursed Shit you need to write it all in inline asm lol
<re_irc> <James Munns> then use stuff like Deref/DerefMut to grant temporary, non-simultaneous-with-cursed-shit references
<re_irc> <James Munns> ehhhhh
<re_irc> <James Munns> maybe?
<re_irc> <James Munns> but like, we are lying with &mut being live while "the DMA thread" has access to it
<cr1901> I don't like the notion that we are collectively too stupid to modify a piece of memory from two places at once without a complex set of rules that severely curtails what you can do, and that applies to DMA and refs too :P
<re_irc> <James Munns> since we can't really inform rust that we are "scoped thread"-style passing the reference to the "dma thread"
<re_irc> <James Munns> I mean, that's what we were _trying_ to do with the compiler fences
<re_irc> <James Munns> which as Ralf said, HAPPENED to work
<re_irc> <James Munns> but isn't really _guaranteed_ to work
<re_irc> <James Munns> but like, unsafecell sort of has that aliased guarantee built in
<re_irc> <dirbaio> it's fine to let DMA write to the buf, you got a raw pointer out of it which gives you permission to write to it according to stacked borrows
<re_irc> <James Munns> WHILE you have a reference live derived from the provenance of the unsafecell, it is guaranteed to not alias, but that ends when the reference does
<re_irc> <James Munns> yeah
<re_irc> <James Munns> I guess it's the difference between "reference usually, cursed shit goes here", vs "cursed everywhere, EXCEPT when we have a reference"
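A sketch of "cursed everywhere, EXCEPT when we have a reference": the buffer is only ever touched through a raw pointer into an UnsafeCell, and a short-lived guard hands out a real reference via Deref/DerefMut, so the aliasing rules only apply while that guard is alive. The names (CursedBuf, BufGuard) are made up for illustration:

```rust
use std::cell::UnsafeCell;
use std::ops::{Deref, DerefMut};

pub struct CursedBuf<const N: usize> {
    storage: UnsafeCell<[u8; N]>,
}

impl<const N: usize> CursedBuf<N> {
    pub const fn new() -> Self {
        Self { storage: UnsafeCell::new([0; N]) }
    }

    /// The "cursed" default view: a raw pointer, fine to alias,
    /// e.g. what you would hand to a DMA engine.
    pub fn as_ptr(&self) -> *mut u8 {
        self.storage.get().cast()
    }

    /// Lower the drawbridge: grant a temporary reference.
    ///
    /// Safety: caller must guarantee nothing else (DMA, another
    /// pointer) touches the buffer while the guard lives.
    pub unsafe fn borrow_mut(&mut self) -> BufGuard<'_, N> {
        BufGuard { buf: self }
    }
}

pub struct BufGuard<'a, const N: usize> {
    buf: &'a mut CursedBuf<N>,
}

impl<const N: usize> Deref for BufGuard<'_, N> {
    type Target = [u8; N];
    fn deref(&self) -> &[u8; N] {
        unsafe { &*self.buf.storage.get() }
    }
}

impl<const N: usize> DerefMut for BufGuard<'_, N> {
    fn deref_mut(&mut self) -> &mut [u8; N] {
        unsafe { &mut *self.buf.storage.get() }
    }
}
```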
starblue has quit [Ping timeout: 268 seconds]
<re_irc> <James Munns> that being said, my gut is entirely trained on "things miri doesn't get mad about today"
<re_irc> <James Munns> (though that includes pretty much all the optional stuff too, like stacked borrows, provenance checks, and tagged pointers)
starblue has joined #rust-embedded
<re_irc> <adamgreig> that pr sure is taking its time failing lol
<re_irc> <adamgreig> don't really want to disable all the compile-fail tests on msrv
<re_irc> <adamgreig> hard to call it an msrv if we can't even test it fully
<re_irc> <dirbaio> It seems you can "// normalize-stderr: "lifetime of reference outlives lifetime of borrowed content" -> "lifetime may not live long enough""
<re_irc> <adamgreig> ah nice
<re_irc> <adamgreig> i couldn't find any docs for compiletest_rs that really explained the syntax of those comments
<re_irc> <dirbaio> yeah
<re_irc> <dirbaio> there's the rustc dev guide https://rustc-dev-guide.rust-lang.org/tests/ui.html
<re_irc> <adamgreig> but it seems like it might be about to pass anyway?
jamesmunns-irc has joined #rust-embedded
<jamesmunns-irc> cr1901: yo
<re_irc> <adamgreig> what does that mean...
<re_irc> <dirbaio> lol
<jamesmunns-irc> (fwiw, the bridge isn't echoing my messages either)
<cr1901> Ahhh goooooooooood
<re_irc> <dirbaio> 1.59 tests weren't running with 1.59 :P
<re_irc> <dirbaio> maybe?
<re_irc> <adamgreig> it only runs the tests on stable already
<jamesmunns-irc> cr1901: so, two answers: the first is using something like rtic/cmim/irq. Some crate that provides some way to "move" items to interrupt ownership
<re_irc> <dirbaio> > $TRAVIS_RUST_VERSION
<re_irc> <adamgreig> don't ask
<jamesmunns-irc> the second answer, bbqueue will have the ability to use only &self references in the future, I just haven't pub'd a version with that yet
<jamesmunns-irc> I published bbqueue-sync as a demo of this while I was testing it out, but I should merge that back into the main release now
<re_irc> <dirbaio> but
<cr1901> In other words "static but not actually accessible from your application", and "not without unsafe"
<cr1901> right now*
<re_irc> <dirbaio> what's "rt-ci-linux (1.59.0)" doing? it's doing _something_
<re_irc> <adamgreig> yea it runs a bunch of other tests
<cr1901> err, let me rephrase
<re_irc> <adamgreig> just not the compiletest ones
<re_irc> <adamgreig> it's the rest of that script, it checks it can build all the examples in a bunch of different linkers etc
<re_irc> <dirbaio> ahhh okay
<cr1901> "it's implemented using static mut or some sort of reference to stack variables w/ unsafe, but you typically use a crate that does that for you"
<re_irc> <dirbaio> wow there's a LOT of stuff in cortex-m's CI... hadn't looked at it before 🤯
<re_irc> <dirbaio> so awesome
<jamesmunns-irc> well
<jamesmunns-irc> there's two things here:
<re_irc> <adamgreig> c-m-rt's anyway
<jamesmunns-irc> how bbqueue handles Send, and how do you "move" data to an interrupt (or another thread)
<re_irc> <adamgreig> c-m does have the very cool HIL/qemu testsuite newam added
<jamesmunns-irc> for the latter: you need to use unsafe, either in the stdlib, or byo on embedded (or use another abstraction like rtic)
<re_irc> <adamgreig> but it now needs a bunch of actual tests writing for it I think
<jamesmunns-irc> for the former: cons/prods have a lifetime parameter that is bound to that of the BBBuffer they come from.
<cr1901> So you can't Send the cons/prod
<jamesmunns-irc> If it is static, then your prod/cons can be static. If your BBBuffer lives on the stack, then they cannot be static
<jamesmunns-irc> you can
<jamesmunns-irc> IFF the BBBuffer is 'static.
<cr1901> unsafe impl Send for Prod<'static>?
<cr1901> etc*
<jamesmunns-irc> no, that's just down to how Send works
<cr1901> oh, TIL
<jamesmunns-irc> for example
<jamesmunns-irc> if you put a BBBuffer in a Box, and leaked it
<jamesmunns-irc> then you could use that 'static reference to create a prod/cons pair
<jamesmunns-irc> and send them to different threads on the desktop
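A quick demo of the Box::leak trick being described: leaking the Box gives a `&'static mut`, which is Send (for `T: Send`) and can be moved into another thread. `Queue` here is a plain stand-in for BBBuffer, not the real bbqueue API:

```rust
// Stand-in for BBBuffer; just a plain struct for illustration.
struct Queue {
    data: [u8; 8],
}

fn leak_and_send() -> u8 {
    // Leak the Box: the allocation is never freed, so the
    // resulting reference really is 'static.
    let q: &'static mut Queue = Box::leak(Box::new(Queue { data: [0; 8] }));

    // Because the borrow is 'static, moving it into a spawned
    // thread satisfies the 'static bound on thread::spawn.
    std::thread::spawn(move || {
        q.data[0] = 1;
        q.data[0]
    })
    .join()
    .unwrap()
}
```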
<jamesmunns-irc> (in the future, I hope to make the BBBuffer "storage agnostic", so you can do things like use an Arc, where the producer and consumer each act as a refcount
<jamesmunns-irc> so when you drop both, the bbqueue gets dropped)
<cr1901> Hmmm, interesting
<jamesmunns-irc> but like 99% of the time I use bbqueue, it's on embedded where I have actually static buffers
<jamesmunns-irc> Actually, in mnemOS, I *have* implemented bbqueue slightly differently, where it IS (basically) an Arc, and works the way I described
<jamesmunns-irc> it's just "hardcoded" like that, so still not storage agnostic, just a different approach lol
<jamesmunns-irc> for now mnemos' bbqueue is basically a fork, but I hope to unify them... sometime soon
<jamesmunns-irc> but really
<jamesmunns-irc> the reason this is safe:
<jamesmunns-irc> there are internal checks to make sure:
<re_irc> <dirbaio> oops it's 2:30am
<jamesmunns-irc> 1. you can only split the producer and consumer off of a BBBuffer once
<jamesmunns-irc> so there only EVER can be one producer and one consumer.
<re_irc> <adamgreig> time for your lunch?
* cr1901 nods
<jamesmunns-irc> 2. there are atomics that make sure only zero or one read grants are active, and only zero or one write grants are active
<re_irc> <adamgreig> i worked from home today and just spent 9 straight hours staring at a pcb and occasionally drinking some water, so no idea what's up now
<jamesmunns-irc> in the context of bbqueue's algorithm, that is enough of a guarantee to ensure you never have aliasing access to the same subregion of the static buffer
<jamesmunns-irc> which means it is sound for me to present a safe API over internally cursed unsafe code
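Check (1) can be sketched with a single atomic "already split" flag: the swap atomically flips it, so even racing callers can't both get a producer/consumer pair. Hypothetical names; the real bbqueue internals differ:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub struct Buffer {
    taken: AtomicBool,
    // ... real bbqueue also has the storage and grant-tracking atomics
}

pub struct Producer<'a> {
    _buf: &'a Buffer,
}
pub struct Consumer<'a> {
    _buf: &'a Buffer,
}

impl Buffer {
    pub const fn new() -> Self {
        Self { taken: AtomicBool::new(false) }
    }

    /// Succeeds exactly once: the atomic swap means only the first
    /// caller ever observes `false`, so there can only EVER be one
    /// producer and one consumer.
    pub fn try_split(&self) -> Option<(Producer<'_>, Consumer<'_>)> {
        if self.taken.swap(true, Ordering::AcqRel) {
            None // someone already split
        } else {
            Some((Producer { _buf: self }, Consumer { _buf: self }))
        }
    }
}
```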
<cr1901> I'm not sure I'd call that cursed, personally. Each to their own
<jamesmunns-irc> yeah, it's not really cursed (I have some other, worse stuff if you want the REAL good stuff)
<cr1901> I'll pass tonight
<cr1901> I have half an msp430 thread impl that uses a static buffer for saving registers. I wonder if I could get bbqueue to work there (sending producer and consumer on the stack to different closures)
<jamesmunns-irc> but rather: internal unsafe code
<jamesmunns-irc> So, bbqueue is good when you need a variable chunk of bytes
<jamesmunns-irc> IF you always need the same number of bytes "per transaction", heapless spsc is a MUCH simpler model to work from
<jamesmunns-irc> e.g. if you are ALWAYS stacking 8 regs, use heapless spsc
<jamesmunns-irc> well, actually bbqueue is a queue, not a stack
<jamesmunns-irc> so I dunno if I follow
<jamesmunns-irc> but maybe you do :)
<cr1901> stack as in "locals"
<cr1901> bbqueue buf is a 'static
<cr1901> I split to get prod/cons
<cr1901> those prod/cons are on the stack, and get moved into the closure
<jamesmunns-irc> all good so far
<cr1901> into each* closure I spawn a thread
<re_irc> <dirbaio> cs merged :D
<cr1901> they're still on the stack after the threads are spawned, and I haven't worked out how to do the inline asm for context switch yet lmfao
<re_irc> <dirbaio> adamgreig: nooo, I'm trying (and succeeding a bit) to fix my schedule
<jamesmunns-irc> so, really, the producer and consumer (more or less) are JUST a "flavored" pointer to the BBBuffer
<jamesmunns-irc> flavored, in that they have a set of API actions they each are allowed to do
<cr1901> Basically, this question is at least partially "how do I make my own abstraction that can Send to another thread, using bbqueue as an example where Sending would be useful"
<jamesmunns-irc> yeah, that's the fun one :D
<re_irc> <dirbaio> gonna go sleep a bit 😴
<re_irc> <adamgreig> good luck 💪
<cr1901> SP must remain unmodified after _the same_ block of inline asm exits, so I haven't quite worked out the details of "how to allocate a stack frame for another thread"
<jamesmunns-irc> Wait, are you doing full context switches OS style for the msp430?
<jamesmunns-irc> Because that is how you would do that.
<cr1901> yes
<jamesmunns-irc> Store the current SP, invoke... something, that would change the control flow, and when you switch back to the first thread, restore the SP where you left off
<jamesmunns-irc> "something" in this case is usually a SWI or longjmp or something to divert primary control flow
<cr1901> I haven't worked out the details of how to move SP for the spawning thread, and prepare PC for when the context switch into the newly spawned thread happens 1/2
<cr1901> I thought maybe thinking about how to do it for bbqueue would give me ideas
<cr1901> (it didn't and I've just wasted everyone's time :P)
<jamesmunns-irc> I mean, I'm actually sorting this out for mnemos right now
<jamesmunns-irc> at least to start, I plan to use a (software) interrupt, since at least then you have a somewhat "controlled" system state
<jamesmunns-irc> I could switch to asm longjmp later to avoid as harsh a context switch
<jamesmunns-irc> but basically you have to stack up all your registers either way
<cr1901> when I do thread::spawn(f: FnOnce -> ()), I don't want f to run immediately, I just want it prepared to execute the first insn of f to be ready to run when the curr fn's timeslice is over
<cr1901> and I'm not sure if that's possible in Rust b/c Idk how much stack to allocate before codegen
<jamesmunns-irc> so
<jamesmunns-irc> this is more or less exactly what async is for, btw
<jamesmunns-irc> (mnemos uses both async and context switching)
<cr1901> sure, but I want to do this as a mental exercise :P
<jamesmunns-irc> totally fair
<jamesmunns-irc> the answer is: "there is no good deterministic way to know how much stack you need to allocate"
<jamesmunns-irc> "calculate max stack usage for a branch" is sometimes, with some limitations (no fn pointers, no recursion) possible to get an upper bound for
<jamesmunns-irc> but in the general case, is a very PITA thing to calculate
<jamesmunns-irc> the answer is usually "allocate enough"
<jamesmunns-irc> Also: you need to usually "store" the things you are sending to another thread until they can "take" them
<jamesmunns-irc> like, you might put the consumer into a 'static queue
<jamesmunns-irc> where the first thread pushes to the queue, and the new thread pops from it
<jamesmunns-irc> because it has to live *somewhere* between "giving it away" and "receiving it" in the new thread
<cr1901> Ahhh hmmm, I don't think I got that far into the design lmao
<cr1901> jamesmunns-irc: This asm is UB!!
<cr1901> It's just "as far as I got before putting it asie"
<cr1901> aside* even. Sorry for the confusion if you noticed how bad it was :P
<jamesmunns-irc> Yeah, looks generally like the right shape, though stacking registers is DEFINITELY going to be an asm thing
<jamesmunns-irc> since Rust has no concept of registers
<cr1901> the problem is asm! blocks need to restore the registers, and find_next_task() is called without the asm! block doing that. Which is UB.
<cr1901> Additionally, I completely punted on spawn_thread()
<jamesmunns-irc> so, it might have to be a global_asm block
<jamesmunns-irc> which I'm not sure has the same restrictions
<jamesmunns-irc> but I'm definitely gunna have to find out for mnemos in the next week or so lol
<jamesmunns-irc> I'm literally about to write the context switching code
<cr1901> Amanieu gave me good advice: make the fn naked, save regs, call into Rust code w/ the Regs on the stack (apparently passing a pointer from inline asm as a ref to Rust code is explicitly allowed)
<cr1901> then restore regs and exit the inline asm block
<cr1901> I would've realized eventually that spawn_thread() needed to save its shit somewhere until the task is called
<jamesmunns-irc> yeah
<cr1901> But you sped up the process :P
<jamesmunns-irc> the other option would be to invent your own version of thread_local!
<jamesmunns-irc> which you could maybe initialize on "spawn"
<jamesmunns-irc> but that's just a different way of getting to the same point
<cr1901> I will pass :)
<cr1901> Anyways that was question 1, lmao
<jamesmunns-irc> whew!
<cr1901> I'm going back to shelving the thread project for now
<jamesmunns-irc> btw, I'll ping you when I have the mnemOS RFC for userspace/context switching
<cr1901> Just the thing about bbqueue being used for Send made me go on the diversion
<cr1901> I'll take a look. Maybe some of it can be reused for msp430
<cr1901> We'll see
<cr1901> >[20:49:48] <jamesmunns-irc> this is more or less exactly what async is for, btw <-- right now this doesn't make much sense, but maybe it will once I see the RFC
<jamesmunns-irc> so
<jamesmunns-irc> I guess it depends on whether you want pre-emptive or cooperative concurrency
<jamesmunns-irc> but "set up something now so it can run later" is a very async'y thing
<jamesmunns-irc> but, definitely a different operational model to threads
<jamesmunns-irc> e.g. you still use the same single stack
<jamesmunns-irc> just the "context", e.g. data that is "owned" by a task, lives in a struct
<jamesmunns-irc> (in essence, sort of like a closure, a Future/Task is just a function with an anonymous context struct)
<jamesmunns-irc> for example, in mnemos, the KERNEL only uses async/await and cooperative multitasking
<jamesmunns-irc> but USER applications are each "threads"
<jamesmunns-irc> (with their own stacks)
<jamesmunns-irc> but the kernel is essentially "one thread with one stack".
<jamesmunns-irc> (which includes all of the drivers and stuff)
<cr1901> Well preemptive, I just wanted to reduce the amount of stack space I used, possibly even reusing some (thread B uses thread A's stack for things that were moved, but b/c of the move thread A can't see those vars anymore)
<jamesmunns-irc> btw, that is very much not a thing in Rust, typically
<jamesmunns-irc> because thread B has no control whether thread A returns past the point of popping that data off the stack
<jamesmunns-irc> (which is why Send items need to be 'static: either totally owned and moved, or live for the actual 'static lifetime)
<cr1901> oh, all these threads never return
<cr1901> or let me rephrase
<cr1901> what if all these threads never return*?
<jamesmunns-irc> it doesn't have to FULLY return
<jamesmunns-irc> just return past the stack frame where the data it gave to thread B was created
<jamesmunns-irc> unless you are implementing "scoped threads"
<jamesmunns-irc> where thread b is guaranteed to terminate before that point
<cr1901> Idk if I am
<cr1901> >just return past the stack frame where the data it gave to thread B was created <-- Oh. The move in and of itself means thread A's stack is free to be cleaned up
<jamesmunns-irc> tl;dr: for scoped threads you basically need to Join thread B before returning from the stack frame in which you spawned it
<cr1901> well scoped threads would work
<jamesmunns-irc> > possibly even reusing some (thread B uses thread A's stack for things that were moved)
<jamesmunns-irc> that's the dangerous part
<jamesmunns-irc> if you don't ACTUALLY move that to new mem
<jamesmunns-irc> how do you stop the old mem from getting clobbered/reused?
<cr1901> (I guess I don't :D)
<jamesmunns-irc> (here there is the "metaphorical rust move", and the "data actually went to live somewhere else")
<jamesmunns-irc> which may or may not happen at the same time in Rust
<cr1901> Basically "a move may very well reuse the same memory, but it's not guaranteed, and don't rely on it"
<jamesmunns-irc> yesish
<jamesmunns-irc> "the memory may be reused if the compiler knows it will be fine"
<jamesmunns-irc> but when you have divergent control flow, e.g. two threads that are unsynchronized with eachother, there is no way to know thread A won't go and stomp on that memory
<jamesmunns-irc> so it's definitely not fine, EXCEPT in the case of scoped threads.
<jamesmunns-irc> where you DO guarantee that memory won't get stomped on, by synchronizing the end of thread B with the return of the stack frame that created it.
<cr1901> Ahhh, that's what it means when ! diverges
<jamesmunns-irc> that's one meaning! "we've gone somewhere else, and promise to never return"
<jamesmunns-irc> (e.g. you longjmp away or whatever)
<re_irc> <James Munns> It's a shame the IRC bridge is busted, cr1901 and I are having a blast talking about implementing threads :p
<re_irc> <James Munns> (poor Heisenbridge, I know it is doing its best)
<cr1901> (https://doc.rust-lang.org/std/thread/fn.spawn.html Note that thread::spawn doesn't use "!", so I'm not "convinced" that the compiler knows it's "definitely not fine" to reuse memory for the move :P)
<jamesmunns-irc> ! would mean "I call this function and never return"
<jamesmunns-irc> spawning a thread ISN'T that
starblue has quit [Ping timeout: 252 seconds]
<jamesmunns-irc> from the view of thread A, you call spawn and IMMEDIATELY return!
<jamesmunns-irc> it has NO IDEA what thread B is doing, at all!
<jamesmunns-irc> it could return immediately, it could be an endless loop
<cr1901> So why is it you MUST use scoped threads to reuse the stack spac- ahhh, hmmm
<jamesmunns-irc> basically, you have no idea how long data "shared" with thread B (this includes re-used space, AND borrows)
<jamesmunns-irc> needs to last
<jamesmunns-irc> could be 1ms, could be 1s, could be 127h
<jamesmunns-irc> and! That's not even up to thread B!
<jamesmunns-irc> the OS could just say "fuck you in particular", and not schedule thread B until next tuesday
starblue has joined #rust-embedded
<jamesmunns-irc> which would be odd, but within the OS' rights.
<cr1901> And indeed, if you exit the block where you spawned the thread, Rust is _free_ to reclaim the memory (tho Idk if it will until the actual return)
<jamesmunns-irc> I mean
<cr1901> Of course if the thread A returns, you're fucked six ways from Sunday
<jamesmunns-irc> there's no "reclaim", necessarily
<cr1901> fully* returns
<jamesmunns-irc> drop won't run, because you've moved the data
<jamesmunns-irc> "reclaim" just means popping the stack pointer back one frame
<cr1901> right, that's what I meant
<jamesmunns-irc> but the INSTANT you call the next function, you can bet that stack space is getting reused!
<jamesmunns-irc> (this is actually why returning a pointer to a function local is so bad in C)
<jamesmunns-irc> it works until it doesn't :D
<cr1901> https://doc.rust-lang.org/std/thread/fn.scope.html idk how it works, but presumably scoped threads use borrows to make sure you don't go out of the block before the spawned thread returns
<jamesmunns-irc> so
<jamesmunns-irc> back in the day, you would just call ".join()" in the Drop impl of the scoped thread handle
<jamesmunns-irc> but this had a problem: if you mem::forget the handle, oops, UB
<jamesmunns-irc> I'm not sure how it's implemented today to avoid that
<jamesmunns-irc> but it took 7 years to add it back, so I assume it wasn't trivial.
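For reference, the modern `std::thread::scope` (stable since Rust 1.63) lets threads borrow from the enclosing stack frame because `scope()` itself joins every spawned thread before returning — the join isn't hanging off a Drop impl on a handle you could mem::forget:

```rust
// Both scoped threads borrow `data` from the caller's stack frame;
// scope() guarantees they finish before that frame can be popped.
fn sum_in_scoped_threads(data: &[i32]) -> i32 {
    let (left, right) = data.split_at(data.len() / 2);
    std::thread::scope(|s| {
        let a = s.spawn(|| left.iter().sum::<i32>());
        let b = s.spawn(|| right.iter().sum::<i32>());
        a.join().unwrap() + b.join().unwrap()
    })
}
```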
<cr1901> hmmm
<cr1901> Also, I just realized an implicit assumption I was making about my code and didn't say >>
<cr1901> re: [21:07:39] <jamesmunns-irc> just return past the stack frame where the data it gave to thread B was created
<jamesmunns-irc> (scoped threads existed pre-1.0, but were removed because mem::forget was marked safe, also shortly before 1.0)
<jamesmunns-irc> (this whole event was called "the leak-pocalypse")
<cr1901> This was pre-me-using-Rust
<cr1901> In my hypothetical code, my threads were all single functions that never returned and all threads were spawned from the top stack frame of the thread
<jamesmunns-irc> me too :)
<jamesmunns-irc> I mean, I'm describing a lot of things of "why rust looks the way it does with the given safe interfaces"
<jamesmunns-irc> if you know you have a different environment
<jamesmunns-irc> you can write whatever unsafe code is acceptable for your environment
<cr1901> Well, it's not that I'm making excuses
<cr1901> it's more explaining "why am I thinking this might be okay, when in the general case, this is very much, NOT ok?"
<jamesmunns-irc> makes sense!
<cr1901> (Answer: because I keep moving the goalposts)
<jamesmunns-irc> RTIC has a concept of "local resources"
<jamesmunns-irc> which are really just statics with a fancy API
<jamesmunns-irc> you could do the same
<jamesmunns-irc> allocate static space for stuff you are going to "move" to a thread
<jamesmunns-irc> initialize it from your "PID0", and give an "exclusive handle" to the thread you are spawning
<jamesmunns-irc> for example, you could have a wrapper type that is static, and has an interface something like:
<jamesmunns-irc> ThreadCarePackage<T>::init(data: T) -> Result<&'static T>
<jamesmunns-irc> that only succeeded once
<jamesmunns-irc> and then "give" that `&'static mut T` to your new thread
<jamesmunns-irc> then boom! Safe behavior!
<jamesmunns-irc> (init should return &'static mut T, not &'static T)
<cr1901> hmmm, that sounds "simple enough"
<jamesmunns-irc> It's basically Box::leak()
<cr1901> and &'static mut T is not Copy, so only the single thread gets it
<jamesmunns-irc> but with "stack allocated space", instead of "heap allocated space"
<jamesmunns-irc> yep!
<jamesmunns-irc> more or less: this is EXACTLY how RTIC resources work under the hood
<jamesmunns-irc> there's just a proc macro to do the "plumbing" part of it.
<re_irc> <agg (@agg:psion.agg.io)> it's like the bad old days of IRC netsplits, if I use this matrix account I can see your chat but not my matrix.org account "lol"
<jamesmunns-irc> to do this "right", your ThreadCarePackage would basically be two things:
<jamesmunns-irc> one, the storage, which would be something like UnsafeCell<MaybeUninit<T>>
<jamesmunns-irc> and two: the "state" variable, which is probably something like an AtomicU8
<jamesmunns-irc> with the states "UNINITIALIZED" and "INITIALIZED_AND_GIVEN_AWAY"
<jamesmunns-irc> or something
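A sketch of the ThreadCarePackage being described, assuming the two-part layout above (UnsafeCell<MaybeUninit<T>> storage plus an AtomicU8 state flag). The type and state names are hypothetical; the point is that init() can only ever succeed once, so the `&'static mut` it returns is genuinely exclusive and can be "given" to the new thread:

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{AtomicU8, Ordering};

const UNINITIALIZED: u8 = 0;
const INITIALIZED_AND_GIVEN_AWAY: u8 = 1;

pub struct ThreadCarePackage<T> {
    state: AtomicU8,
    storage: UnsafeCell<MaybeUninit<T>>,
}

// Safety: the atomic state guarantees at most one &mut is ever handed out.
unsafe impl<T: Send> Sync for ThreadCarePackage<T> {}

impl<T> ThreadCarePackage<T> {
    pub const fn new() -> Self {
        Self {
            state: AtomicU8::new(UNINITIALIZED),
            storage: UnsafeCell::new(MaybeUninit::uninit()),
        }
    }

    /// Succeeds only once; on failure the data is handed back.
    pub fn init(&'static self, data: T) -> Result<&'static mut T, T> {
        match self.state.compare_exchange(
            UNINITIALIZED,
            INITIALIZED_AND_GIVEN_AWAY,
            Ordering::AcqRel,
            Ordering::Acquire,
        ) {
            Ok(_) => {
                // We won the race: nobody else will ever touch the storage.
                let slot = unsafe { &mut *self.storage.get() };
                Ok(slot.write(data))
            }
            Err(_) => Err(data),
        }
    }
}
```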
<cr1901> >but with "stack allocated space" Well, it's _not_ on the stack anymore, it's part of the 'static memory, but I appreciate the analogy :P
<jamesmunns-irc> oh
<cr1901> unless we assume that PID0 is never returning
<jamesmunns-irc> that should have read "statically allocated space"
<jamesmunns-irc> sorry
<jamesmunns-irc> (it's 3:30am here)
<cr1901> no worries, if I'm making you tired, why don't I leave question 2 for another time (but still type it out)?
<jamesmunns-irc> oh god I thought we were on question two
<cr1901> lmfao
<jamesmunns-irc> shoot :)
<cr1901> 2a. Could you elaborate on the quoted some more, possibly with a playground example: https://libera.irclog.whitequark.org/rust-embedded/2022-08-12#1660263286-1660263201;
<cr1901> Question 2 was inspired by this comment from me :D https://libera.irclog.whitequark.org/rust-embedded/2022-08-12#32722678;
<cr1901> But this need not be done tonight. I just wanted to type it out before I forgot
<cr1901> get some sleep, and good chat :D
<jamesmunns-irc> > I don't like the notion that we are collectively too stupid to modify a piece of memory from two places at once without a complex set of rules that severely curtails what you can do
<jamesmunns-irc> I mean
<cr1901> I just explained Rust I know
<jamesmunns-irc> it's all about the promises you make
<jamesmunns-irc> Rust PROMISES you won't do that
<jamesmunns-irc> (with references, specifically)
<cr1901> But the idea that DMA makes &mut refs unsound makes me sad :(
<jamesmunns-irc> so like, you can't *really* be upset at the compiler when you *lie* to it like that
<jamesmunns-irc> you: "I will never do X (because rust promises that)!"
<jamesmunns-irc> compiler: okay!
<jamesmunns-irc> you: "does X"
<jamesmunns-irc> compiler: wtf I'm going to pretend that never happened, you promised
<cr1901> UnsafeCell is the escape hatch, but I would prefer not to use it if I don't have to because it throws a wrench in ergonomics
<jamesmunns-irc> So, this might be too big of a PR to be useful
<jamesmunns-irc> but https://github.com/rust-osdev/linked-list-allocator/pull/62 is exactly what I was talking about
<cr1901> For 2a. or 2b.? Or both?
<jamesmunns-irc> TL;DR: The allocator used to mix `usize`, `&[mut] u8`, and `*mut u8`
<jamesmunns-irc> and this made miri... upset
<cr1901> hmmm
<jamesmunns-irc> (unsure when 2a ends and 2b begins, they are the same root issue in my mind)
<jamesmunns-irc> but basically:
<jamesmunns-irc> Yeah, I mean
<cr1901> Same link as above, I forgot 2b. sorry
<jamesmunns-irc> that's the same topic
<cr1901> Well, I'll take a look when I have more bandwidth and get back to you if I have qs
<jamesmunns-irc> the root issue here is that while pointers and references are the same *at runtime*
<jamesmunns-irc> they have VERY DIFFERENT compile time guarantees
<jamesmunns-irc> so the question is, "do you have references that sometimes break the rules"
<jamesmunns-irc> or "do you have pointers that sometimes follow the rules for references"
<cr1901> right, that's fair. And LLVM will sometimes even get rid of references (and turn them into direct updates)
<jamesmunns-irc> but like, you CAN have mutably aliasing pointers!
<jamesmunns-irc> totally fine!
<jamesmunns-irc> But TOTALLY NOT FINE to have mutably aliasing references
<jamesmunns-irc> very not fine, immediately UB
<jamesmunns-irc> in my current opinion, if you are mixing and matching "cursed" and "safe"
<jamesmunns-irc> it is better to make the default "cursed"
<jamesmunns-irc> and provide "islands of safe", instead of "islands of cursed", if that makes sense.
<jamesmunns-irc> Where cursed == pointers, maybe with UnsafeCells.
<cr1901> a lot of ppl in Rust land will disagree with you
* cr1901 has no strong opinion
<jamesmunns-irc> this is speaking SPECIFICALLY when you are mixing the use of references and pointers
<cr1901> ahhh
<cr1901> maybe I should write more code w/ pointers
<cr1901> But that's another night
<jamesmunns-irc> from a holistic standpoint, yes, make tiny isolated walled gardens of unsafe, surrounded by safe code
<jamesmunns-irc> BUT
<jamesmunns-irc> when you're mixing and matching references and pointers
<jamesmunns-irc> it is WAY too easy to accidentally invalidate a borrow and step face first into the rake of UB
<cr1901> haaaaah
<cr1901> invalidate a borrow as in "you committed UB, your borrows don't mean shit to the compiler anymore"
<cr1901> ?*
<jamesmunns-irc> whereas, when you say "this is always lava, but RIGHT HERE I shut the lava gate and lower the bridge, then when the reference drops, I raise the bridge and we are back to lava"
<jamesmunns-irc> uhhh, borrow invalidation has a different, more stacked borrows/pointer provenance kind of meaning
<cr1901> oh, I know jack shit about stacked borrows
<cr1901> Ya know what, why don't we continue this at a later date?
<jamesmunns-irc> totally fine!
<cr1901> It's 3:45am where you are. I had a blast talking
<jamesmunns-irc> :)
<cr1901> and I think you did too
<jamesmunns-irc> always happy to chat!
<cr1901> but both our bandwidths are probably going :P
<jamesmunns-irc> If you get on matrix some day, come hang out in the anachro channel :)
<cr1901> Will keep that in mind. Sleep well!
<jamesmunns-irc> oh, one last example of perils of accidentally invalidating a pointer/borrow
<jamesmunns-irc> okay, nevermind, this is a long one. Let's talk another time :D
<jamesmunns-irc> (but basically going from pointer -> reference it is very easy to go back to "strict reference rules mode", even if you immediately go back to using pointers)
<jamesmunns-irc> like if you wanted to get the address of a subfield
<jamesmunns-irc> let a: *mut SomeStruct = ...;
<jamesmunns-irc> let b = (&mut (*a).field) as *mut SomeField;
<jamesmunns-irc> (*a).other_field = 4;
<jamesmunns-irc> This is UB!
<cr1901> wheeee...
<jamesmunns-irc> you created a reference from your pointer, invalidating it
<jamesmunns-irc> (e.g. ONLY b is valid after the second line)
<jamesmunns-irc> because Rust says that if you have an &mut, it must be EXCLUSIVE access!
<jamesmunns-irc> so your other pointers must now be invalid (for the scope of b)
<jamesmunns-irc> this is why addr_of[_mut] exist. They allow you to get subfield pointers without creating an intermediary reference, which can cause problems
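A compilable sketch of the pattern discussed above, using a hypothetical `SomeStruct`. The UB variant from the chat is left as a comment; the working variant uses `core::ptr::addr_of_mut!` so no intermediate `&mut` is ever created:

```rust
use core::ptr::addr_of_mut;

#[derive(Default)]
struct SomeStruct {
    field: u32,
    other_field: u32,
}

fn demo() -> SomeStruct {
    let mut s = SomeStruct::default();
    let a: *mut SomeStruct = &mut s;
    unsafe {
        // UB variant (don't do this): `&mut (*a).field` materializes an
        // intermediate exclusive reference; under stacked borrows, writing
        // through `a` while `b` derived from that reference is still live
        // is undefined behavior:
        //   let b = (&mut (*a).field) as *mut u32;
        //   (*a).other_field = 4; // UB!
        //   b.write(7);

        // OK variant: addr_of_mut! computes the field address without
        // ever creating a reference, so `a` is not invalidated.
        let b: *mut u32 = addr_of_mut!((*a).field);
        (*a).other_field = 4;
        b.write(7);
    }
    s
}

fn main() {
    let s = demo();
    assert_eq!(s.field, 7);
    assert_eq!(s.other_field, 4);
}
```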
<cr1901> even split_mut is hell to implement properly
<cr1901> b/c multiple &muts to the same data are UB
<cr1901> Now I actually do have to go unfortunately lol
<jamesmunns-irc> no worries, catch you around :)
jamesmunns-irc has quit [Quit: leaving]
funsafe_ has quit [Quit: funsafe_]
explore has joined #rust-embedded
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #rust-embedded
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #rust-embedded
<re_irc> <GrantM11235> Speaking of cursed atomic stuff, I think reading from an active circular dma buffer is quite similar to using a "seqlock"
<re_irc> <GrantM11235> Which would apparently require new atomic memcpy intrinsics https://github.com/rust-lang/unsafe-code-guidelines/issues/323
<re_irc> <GrantM11235> This is the c++ proposal for atomic memcpy https://wg21.link/p1478
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
explore has quit [Quit: Connection closed for inactivity]
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
gsalazar has joined #rust-embedded
emerent_ has joined #rust-embedded
emerent is now known as Guest1751
emerent_ is now known as emerent
Guest1751 has quit [Killed (osmium.libera.chat (Nickname regained by services))]
explore has joined #rust-embedded
crabbedhaloablut has quit [Quit: No Ping reply in 180 seconds.]
crabbedhaloablut has joined #rust-embedded
creich has joined #rust-embedded
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
<re_irc> <gauteh> I have an SD-card with datafiles, and whenever I start up I search through the numbered data files to find the next free one. This starts to take some time with a lot of data, and it would be a lot faster to just jump 10, 100, 1000 and scan. Is anyone aware of an existing algorithm for doing this? Maybe something like bisecting, except the theoretical max is much higher than what the next free id usually is.
<re_irc> <9names (@9names:matrix.org)> with FAT? can't do binary search unless you _know_ the directory entries are ordered.
<re_irc> if you have confidence your firmware is the only one writing it should be fine
<re_irc> <ryan-summers> This ultimately comes down to just an ordered vs. unordered search. If your data is ordered, you can easily do an O(log n) solution, but if it's unordered, you're stuck with O(n)
<re_irc> <gauteh> It's ordered
<re_irc> <ryan-summers> Heck, even "core::slice" supports binary search: https://doc.rust-lang.org/std/primitive.slice.html#method.binary_search
<re_irc> <gauteh> Files have numbers as names
<re_irc> <gauteh> It's expensive to check if the file exists
<re_irc> <ryan-summers> Yeah, but you only have to do log(n) checks
<re_irc> <ryan-summers> Always way faster than N
<re_irc> <pwychowaniec> maybe you could just store the latest id in another file? although that might pose an issue with flash memory wearing too fast
<re_irc> <gauteh> Yep, that's what I'm looking for: is there a crate for that?
<re_irc> <gauteh> pwychowaniec: tends to get corrupted or out of sync, can't trust it
<re_irc> <ryan-summers> Binary search is simple enough that it's honestly probably just better to implement your own. Check out https://shane-o.dev/blog/binary-search-rust
<re_irc> <ryan-summers> The built-in versions are all for searching existing datastructures, but your datastructure doesn't actually exist in rust code, it's just the filesystem
<re_irc> <ryan-summers> So unless you can effectively map your filesystem into an e.g. "slice", your best bet is probably just writing up the algo
explore has quit [Quit: Connection closed for inactivity]
<re_irc> <gauteh> 👍️
<re_irc> <ryan-summers> Quick search also brings up https://docs.rs/binary-search/latest/binary_search/index.html, that may be useful
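The "jump 10, 100, 1000 then bisect" idea being discussed could look roughly like this sketch: exponential (galloping) search to bracket the first free id, then a binary search within the bracket. The `exists` closure stands in for the expensive SD-card file-exists check, and it assumes ids are allocated contiguously from 0 (i.e. "ordered", as stated above):

```rust
// Find the lowest id for which `exists` returns false, assuming that
// used ids form a contiguous prefix 0..n. Costs O(log n) calls to
// `exists` instead of O(n).
fn next_free_id(exists: impl Fn(u32) -> bool) -> u32 {
    if !exists(0) {
        return 0;
    }
    // Grow the upper bound exponentially (x10 per step) until we land
    // on a free id. Assumes some id below u32::MAX is actually free.
    let mut lo = 0u32; // known used
    let mut hi = 1u32;
    while exists(hi) {
        lo = hi;
        hi = hi.saturating_mul(10).max(hi.saturating_add(1));
    }
    // Invariant: `lo` is used, `hi` is free. Bisect to the boundary.
    while hi - lo > 1 {
        let mid = lo + (hi - lo) / 2;
        if exists(mid) {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    hi
}

fn main() {
    // Pretend files 0..=1234 exist on the card.
    assert_eq!(next_free_id(|id| id < 1235), 1235);
    // Empty card: first free id is 0.
    assert_eq!(next_free_id(|_| false), 0);
}
```

With 1235 files this probes roughly 4 ids for the galloping phase (1, 10, 100, 1000, 10000) plus about 13 for the bisection, versus 1235 sequential existence checks.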
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #rust-embedded
dc740 has joined #rust-embedded
gsalazar_ has joined #rust-embedded
gsalazar has quit [Read error: Connection reset by peer]
dc740 has quit [Read error: Connection reset by peer]
dc740 has joined #rust-embedded
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
vancz has quit [Ping timeout: 240 seconds]
<re_irc> <usize> Does "cargo flash"/probe-rs support custom targets? I have generated the yaml file for an unsupported TI-MSP432P401R (https://probe.rs/docs/knowledge-base/cmsis-packs/) but I'm unsure where to place it to get "cargo flash"/"cargo embed" to recognize this file. Any ideas?
vancz has joined #rust-embedded
brazuca has quit [Quit: Client closed]
<re_irc> <tiwalun> usize: You can specify it on the command line: "--chip-description-path nRF52840_xxAA.yaml"
dc740 has quit [Ping timeout: 268 seconds]
<re_irc> <usize> tiwalun: Thank you, I am getting some parsing errors for the file generated which I'll try to resolve, but "cargo flash" accepts this!
<re_irc> Is there similar syntax for "cargo embed" (or an "Embed.toml" option?)
<re_irc> <tiwalun> usize: Yes, I think it's the "chip-descriptions" option: https://github.com/probe-rs/cargo-embed/blob/812e255529f84d092db55f9e185eaf9ffe6007be/src/config/default.toml#L38
dc740 has joined #rust-embedded
<re_irc> <usize> Thank you, this was it!
vancz has quit []
vancz has joined #rust-embedded
<re_irc> <9names (@9names:matrix.org)> usize: hopefully it doesn't apply for you, but I found the XDS110 debugger on msp432E401Y launchpad didn't work with probe-rs (even though it supports CMSIS-DAP v2).
<re_irc> if you have connection issues, might be worth testing with another cmsis-dap probe.
<re_irc> <usize> 9names: Thanks for the heads up! I don't have another cmsis-dap probe, but I'll try this one for sure
vancz has quit []
vancz has joined #rust-embedded
<re_irc> <dirbaio> I was wondering if
crabbedhaloablut has quit [Write error: Broken pipe]
<re_irc> <dirbaio> in "critical-section", it'd be better if the lib publishes the "impl struct" and then the user code does the "set_impl!"
crabbedhaloablut has joined #rust-embedded
<re_irc> <dirbaio> instead of doing both in the lib, gated by a Cargo feature
<re_irc> <dirbaio> oh... and something should provide an impl for "std"
<re_irc> <dirbaio> what should it be? cargo feature in "critical-section" itself?
dc740 has quit [Ping timeout: 244 seconds]
brazuca has joined #rust-embedded
<jr-oss> I guess the bridge was down yesterday, when I asked. Is it up again?
<jr-oss> While looking at embassy signals I noticed these two lines (https://github.com/embassy-rs/embassy/blob/master/embassy-util/src/channel/signal.rs#L41-L42) and wonder why the 2nd one works "impl<T: Send> Sync for Signal<T> {}"
<cr1901> jr-oss: The bridge is still being screwy I'm afraid
<re_irc> <adamgreig> I think I prefer cargo feature to more user boilerplate code
<re_irc> <adamgreig> Could do both if arch crates make the impl public and the feature just runs the macro
<jr-oss> cr1901: Thanks for the info. Looks like I really have to look into that Matrix stuff
<re_irc> <dirbaio> i'm not a fan of having 2 ways though, it's always confusing
<re_irc> <dirbaio> i kinda think features is better too yep
<re_irc> <dirbaio> making the impl struct public could be confusing, users could try calling the methods on it directly for example
<re_irc> <adamgreig> dirbaio: Yea, though I wonder if the macro can be more flexible for weird dynamic things or something
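The design being weighed above (the library publishes an impl struct and the user wires it in with a `set_impl!`-style macro, instead of the library doing both behind a Cargo feature) can be mocked up in a self-contained way. All names here (`CsImpl`, `set_impl!`, `cs_acquire`/`cs_release`) are illustrative, not the real `critical-section` API, and the real crate uses `extern` linkage rather than plain free functions:

```rust
// --- "library" side: publishes a trait and a wiring macro ---
trait CsImpl {
    fn acquire() -> u8; // returns a restore-state token
    fn release(state: u8);
}

// User code invokes this once at crate level to select the impl.
macro_rules! set_impl {
    ($t:ty) => {
        fn cs_acquire() -> u8 {
            <$t as CsImpl>::acquire()
        }
        fn cs_release(state: u8) {
            <$t as CsImpl>::release(state)
        }
    };
}

// --- "user" side: provides the impl struct and wires it in ---
struct StdImpl;
impl CsImpl for StdImpl {
    fn acquire() -> u8 {
        1 // e.g. disable interrupts / take a std lock, return prior state
    }
    fn release(_state: u8) {
        // e.g. restore interrupt state / drop the lock
    }
}
set_impl!(StdImpl);

fn main() {
    let state = cs_acquire();
    // ...critical section...
    cs_release(state);
    assert_eq!(state, 1);
}
```

The Cargo-feature alternative would instead call `set_impl!` inside the arch crate, gated by a feature, so the user writes no boilerplate but also can't swap in a dynamic or test impl.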
gsalazar_ has quit [Ping timeout: 268 seconds]
rardiol has joined #rust-embedded
gsalazar has joined #rust-embedded
brazuca20 has joined #rust-embedded
brazuca has quit [Ping timeout: 252 seconds]
brazuca20 is now known as brazuca