ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
IlPalazzo-ojiisa has quit [Quit: Leaving.]
GrantM11235[m] has quit [Quit: Idle timeout reached: 172800s]
vadimcn[m] has quit [Quit: Idle timeout reached: 172800s]
korken89[m] has joined #rust-embedded
<korken89[m]> I've just started a new FPGA project based on ECP5 85k. :) nothing fancy, just a sampling rig to connect 64 DUTs with 3 important signals and one defmt logging UART, so it will do RLE encoding of the signals and decode the UART streams, finally streaming it all over USB 3.
<korken89[m]> So one can test all the DUTs (RF devices) in parallel and measure time synched to make sure they do TDMA correctly
vrakaslabs[m] has quit [Quit: Idle timeout reached: 172800s]
<korken89[m]> Doodling the gateware to test the sub components and I run Amaranth for this
<korken89[m]> <adamgreig[m]> "ah, I was looking for https://..." <- This one looked interesting though
<korken89[m]> This reminds me that I need to get adamgreig's `ecpdap` working with the rusty probe again 😅 I did not port the minimal JTAG support there was in the HS probe
crabbedhaloablut has joined #rust-embedded
juliand[m] has joined #rust-embedded
<juliand[m]> Does amaranth have better documentation now?
<juliand[m]> I remember trying it out a few months ago on the ice40 and it was a pain to set up, some instructions in the guide didn't work and not much of the "language's" functionality had been documented. Like how you interact with these python modules/components and how to get the toolchain set up etc. Also lacking a couple of more sophisticated examples iirc.
<juliand[m]> Really liked the approach though, and it seems to have some commercial backing. So that's very promising.
<korken89[m]> Not sure tbh, when I upgraded from nMigen it was quite straightforward and I generally spelunk the codebase when I want to know something :/
<juliand[m]> Hmm, that's what I did as well to figure some things out, but that's not a good experience to get you started. However, I am far from an FPGA expert and my experience with VHDL and Verilog is mostly from university... maybe that was my main issue there 😅
<korken89[m]> When I started there was a looot of good tutorials for nMigen, so I have just extrapolated from there :)
<korken89[m]> E.g. there was a guy writing an entire RISC-V from scratch with nMigen, that was really insightful
<korken89[m]> Also they are really helpful in my experience, just ask in #amaranth-lang:matrix.org and you'll get good feedback :)
<korken89[m]> Last time I was really using nMigen there was an IRC chat, maybe this still exists as well
emerent has quit [Ping timeout: 260 seconds]
emerent_ has joined #rust-embedded
emerent_ is now known as emerent
<juliand[m]> Cool, thank you! Will give it a try soon :)
<korken89[m]> No problem!
hmw has quit [Quit: Bye.]
<korken89[m]> Also if you didn't know already, this is quite a nice collection so one does not need to install/build everything https://github.com/YosysHQ/oss-cad-suite-build
hmw has joined #rust-embedded
<juliand[m]> Yes, that I found already, works like a charm! (Unless you install the arm64 instead of amd64 binaries on an Intel Laptop and wonder why the instruction set is supposedly wrong, lol)
<korken89[m]> xD
Guest7221 has joined #rust-embedded
IlPalazzo-ojiisa has joined #rust-embedded
<thejpster[m]> One final ping on https://github.com/rust-embedded-community/embedded-sdmmc-rs/pull/100 before I just merge it myself.
IlPalazzo-ojiisa has quit [Remote host closed the connection]
IlPalazzo-ojiisa has joined #rust-embedded
<adamgreig[m]> juliand: the docs are being worked on, there's been a lot of renewed development work in amaranth in the last couple of months
<korken89[m]> <adamgreig[m]> "juliand: the docs are being..." <- Because of Glasgow coming? :D
<adamgreig[m]> I think it's more to do with Catherine getting a job that partially involves working on amaranth
<korken89[m]> That's even better!
<adamgreig[m]> Yea! There's weekly amaranth meetings on the matrix/IRC/discord chat at 1800 UTC Mondays
<korken89[m]> I must listen in on that :D
mciantyre[m] has quit [Quit: Idle timeout reached: 172800s]
brazuca has joined #rust-embedded
brazuca has quit [Quit: Client closed]
Jonas[m]1 has quit [Quit: Idle timeout reached: 172800s]
Guest7221 has left #rust-embedded [Error from remote client]
Foxyloxy has quit [Quit: Textual IRC Client: www.textualapp.com]
Guest7221 has joined #rust-embedded
wassasin[m] has quit [Quit: Idle timeout reached: 172800s]
Foxyloxy has joined #rust-embedded
starblue has quit [Quit: WeeChat 3.8]
Guest7221 has left #rust-embedded [Error from remote client]
<AlexNorman[m]> is there something like shared_bus for sharing Delay ? I'm running into the same ownership issue as with SPI
<dirbaio[m]> no
<dirbaio[m]> HALs should make Delay cloneable/copyable
<firefrommoonligh> `let cp = unsafe { cortex_m::Peripherals::steal() };`
<firefrommoonligh> `let mut delay = Delay::new(cp.SYST, PERIPH_CLOCKSPEED);`
<firefrommoonligh> dirbaio[m]: I don't know if the HAL is the spot given it's Cortex-M functionality
<firefrommoonligh> Assuming blocking systick delay
<dirbaio[m]> cortex-m is the "HAL" in this case, just applies to all cortex-m's
<firefrommoonligh> I think of HAL as more things that are not general to all Cortex-M or w/e
<dirbaio[m]> chip-specific HALs also supply Delay impls using the chip-specific timers
<dirbaio[m]> huh I thought cortex-m Delay was already cloneable, but no
<dirbaio[m]> the reason we should put the "burden" of sharing on the HAL and not the user
<dirbaio[m]> is in many implementations sharing is "free"
<dirbaio[m]> if they use a free-running timer
<dirbaio[m]> you can make all delays look at the same timer just fine. it works even if delays preempt each other
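dirbaio's point that sharing becomes "free" over a free-running timer can be sketched like this. It is a host-runnable model, not a real HAL API: `now_us` stands in for reading a hardware counter register, which needs no exclusive ownership, so `Delay` can simply be `Copy`.

```rust
use std::sync::OnceLock;
use std::time::Instant;

// Host-side stand-in for a shared free-running timebase. On an MCU this
// would be a hardware counter register read; all names are illustrative.
static EPOCH: OnceLock<Instant> = OnceLock::new();

fn now_us() -> u64 {
    EPOCH.get_or_init(Instant::now).elapsed().as_micros() as u64
}

// No owned resource inside, so handing out copies is "free".
#[derive(Clone, Copy)]
struct Delay;

impl Delay {
    fn delay_us(self, us: u64) {
        let deadline = now_us() + us;
        while now_us() < deadline {
            std::hint::spin_loop(); // busy-wait against the shared timer
        }
    }
}

fn main() {
    let d1 = Delay;
    let d2 = d1; // plain copy; no shared_bus-style wrapper needed
    let t0 = now_us();
    d1.delay_us(1_000);
    d2.delay_us(1_000);
    assert!(now_us() - t0 >= 2_000);
}
```

Because every copy reads the same counter, it stays correct even if delays preempt each other.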
<thejpster[m]> how do people deal with timer wrapping?
<dirbaio[m]> just use u64 :P
<dirbaio[m]> saving 4 bytes of ram is not worth the pain of timer overflow bugs
<thejpster[m]> you assume you have a 64-bit hardware timer
<AlexNorman[m]> i'm using cortex-m delay but I see if I upgrade my hal (rp-2040) there are cloneable timers there
<dirbaio[m]> you can extend any 32bit / 16bit hardware timer to u64 by counting overflows
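A minimal sketch of that extension, with hypothetical names (`read_counter` stands in for a hardware register read, fixed here so the snippet runs anywhere). Note this naive form has the race that gets picked apart later in this conversation:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Software overflow count, incremented from the timer's overflow IRQ.
static OVERFLOWS: AtomicU32 = AtomicU32::new(0);

// Stand-in for reading a free-running 32-bit hardware counter register.
fn read_counter() -> u32 {
    0x1234
}

// What the overflow interrupt handler would do.
fn overflow_irq() {
    OVERFLOWS.fetch_add(1, Ordering::Relaxed);
}

// Naive 64-bit extension: (overflow count << 32) | counter. Racy if the
// counter wraps between the two reads.
fn now_u64() -> u64 {
    let hi = OVERFLOWS.load(Ordering::Relaxed) as u64;
    let lo = read_counter() as u64;
    (hi << 32) | lo
}

fn main() {
    assert_eq!(now_u64(), 0x1234);
    overflow_irq(); // pretend the hardware wrapped once
    assert_eq!(now_u64(), (1u64 << 32) | 0x1234);
}
```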
<thejpster[m]> I see that the cortex-m delay object actually sets the SYSTICK registers and spins until it overflows
<firefrommoonligh> thejpster[m]: Like hardware timers? `Timer` struct with methods to construct, set prescalers, enable interrupts, start/stop etc
<thejpster[m]> wrapping, as in overflow
<firefrommoonligh> ohhh
<dirbaio[m]> to be fair systick is a terrible timer
<thejpster[m]> 2**32 microseconds isn't that long
<thejpster[m]> and SYSTICK is ... 24bit?
<thejpster[m]> it's not good
<thejpster[m]> so then you need an interrupt to catch overflows and increment another counter. And that needs atomics and the interrupt to be set up.
<dirbaio[m]> you can make a shitty free-running timer out of systick by setting it to fire an irq at, say, 1khz, then increment a counter in the irq, and use that as your free-running timer
<thejpster[m]> to be fair, it was designed to provide a 100 Hz tick for your RTOS. Hence the name.
<dirbaio[m]> but then you get only 1ms precision. more precision requires more irq spam which starts to be non viable
<dirbaio[m]> there's a rtic monotonic impl that does that
<dirbaio[m]> and another that uses systick+dwt. dwt as a free-running timer, plus systick to schedule an irq at the next expiration. somewhat kludgy but works
<thejpster[m]> having something own SYSTICK and use it to fire an interrupt when the next event is due is a reasonable approach
<dirbaio[m]> with systick only you can't do it, because you'll "lose" time between it firing and you restarting
<thejpster[m]> oooh, I saw something that used systick and DWT and I wasn't sure what the DWT bit was for
kenny has quit [Quit: WeeChat 4.0.4]
<firefrommoonligh> You could modify what I posted for systick, and I think that's more common, but for some reason it's v tough to do with the current Rust tools. I don't remember why
<firefrommoonligh> I think because the normal interrupt flows make it tough to use Cortex-M peripherals as the interrupt source, but you can use HAL things like timers
<dirbaio[m]> my conclusion was systick is not worth the effort. too crappy
<dirbaio[m]> better to use the vendor-specific timers
<dirbaio[m]> which let you set it to freerunning mode, and enable irq on overflow and on arbitrary compares
<thejpster[m]> firefrommoonlight: if the IRQ fires on line 12, I think you get a bad result?
<thejpster[m]> yeah, on the tiva-c I usually set up the 32-bit timer in 64-bit double-wide mode and called it a day
<dirbaio[m]> you can easily create a 64bit timer out of that. Without requiring firing an irq at 1mhz if you need 1mhz precision :P
<thejpster[m]> 64-bits at sysclk is a very long time
<dirbaio[m]> firefrommoonlight: that code has race conditions if you call `get_timestamp` right as it overflows
<thejpster[m]> time is hard
<dirbaio[m]> you can get the new tick_count_fm_overflows_s() plus the old elapsed()
<dirbaio[m]> or vice versa
<dirbaio[m]> which will cause you to return a time one "overflow period" in the future (or in the past)
<firefrommoonligh> thejpster[m]: Line 12?
<dirbaio[m]> yep, line 12. between timer.time_elapsed().as_secs(); and tick_count_fm_overflows_s()
<firefrommoonligh> dirbaio[m]: Yea probably. I haven't seen that in practice
<thejpster[m]> 🤦
<dirbaio[m]> lol
<firefrommoonligh> How would you modify it?
<dirbaio[m]> call `get_timestamp` in an infinite loop, asserting each read is `>=` the previous, and you'll see
<firefrommoonligh> I guess I don't really understand actually
<firefrommoonligh> Since it's been rock solid
<firefrommoonligh> How would you do it?
<dirbaio[m]> your timer will jump backward when the race happens
brazuca has joined #rust-embedded
<firefrommoonligh> dirbaio[m]: I thought IRQ line 12!
<dirbaio[m]> line 12 in the snippet you pasted
<firefrommoonligh> Yea tracking now. How would you fix it?
<firefrommoonligh> And/or what do you do to track time?
<firefrommoonligh> * track time/overflows?
<dirbaio[m]> solution I found is to set irq on overflow, and on half overflow. then count "half overflow" periods
<dirbaio[m]> with that, you can "compensate" for the race
<dirbaio[m]> it's a bit convoluted, if someone knows of a better solution I'm all ears
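The half-overflow-period trick can be sketched like this, reconstructed from the description above (this mirrors, as far as I recall, the scheme embassy-time's timer drivers use; names and widths are illustrative). Assume a 16-bit timer whose IRQ bumps `period` twice per wrap: once when the counter crosses 0x8000, once on wrap to 0, so each period is 2^15 ticks:

```rust
fn calc_now(period: u32, counter: u16) -> u64 {
    // XOR-ing the counter's top bit with the period's parity makes the
    // result correct even when one of the two values is stale, as long
    // as IRQs are delayed by less than half an overflow period.
    ((period as u64) << 15) + ((counter as u32 ^ ((period & 1) << 15)) as u64)
}

fn main() {
    // In-sync reads:
    assert_eq!(calc_now(0, 0x1234), 0x1234);
    assert_eq!(calc_now(1, 0x8001), 0x8001);
    assert_eq!(calc_now(2, 0x0001), 0x10001);
    // Stale `period` (IRQ pended but not yet run) still yields the truth:
    assert_eq!(calc_now(0, 0x8001), 0x8001); // half-wrap IRQ delayed
    assert_eq!(calc_now(1, 0x0001), 0x10001); // wrap IRQ delayed
}
```

This is the "compensation": whichever side of the race you land on, the reconstruction agrees.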
<firefrommoonligh> Interesting. I'll think on this and see if I can apply that
<firefrommoonligh> I use the code snippet I posted in many firmwares
<dirbaio[m]> a common solution I've seen is:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/SQqipOrlWaItILOwOZYFlumg>)
<firefrommoonligh> Usually for reporting uptime for node health tracking etc, and to track health of various systems, ie if you don't get a good reading within X time mark as a fault or w/e
<dirbaio[m]> but that's actually incorrect
<firefrommoonligh> dirbaio[m]: > <@dirbaio:matrix.org> a common solution I've seen is:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/mXdkBTFrgBbXQtOYODLXPyrG>)
<dirbaio[m]> because the "overflow count" is incremented by the interrupt, which might be delayed (e.g if we're calling now() within a critical section)
<dirbaio[m]> so it is possible that BOTH reads are stale
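The common pattern being critiqued (the full message is elided above) is presumably a double-read-and-retry loop like this hypothetical sketch. It is sound when `read_hi` is a hardware register, but not when it is a variable bumped by a maskable IRQ, because with the IRQ delayed both high-word reads agree on the same stale value:

```rust
fn read_u64(read_hi: impl Fn() -> u32, read_lo: impl Fn() -> u32) -> u64 {
    loop {
        let hi1 = read_hi();
        let lo = read_lo();
        let hi2 = read_hi();
        if hi1 == hi2 {
            return ((hi1 as u64) << 32) | lo as u64;
        }
        // hi changed mid-read: `lo` could belong to either period; retry
    }
}

fn main() {
    use std::cell::Cell;
    // Simulate one torn read: the high word changes between the first
    // two reads, then stays stable, forcing exactly one retry.
    let n = Cell::new(0u32);
    let hi = || {
        let i = n.get();
        n.set(i + 1);
        if i == 0 { 41 } else { 42 }
    };
    assert_eq!(read_u64(hi, || 7), (42u64 << 32) | 7);
}
```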
<firefrommoonligh> So, how often do you expect to hit these edge cases?
<dirbaio[m]> the "count half overflows" trick is guaranteed correct as long as irqs are delayed for less than a half overflow period
<firefrommoonligh> I'm just surprised because it sounds reasonable, but I haven't seen any odd behavior despite using the naive approach extensively
<dirbaio[m]> for example if you use a 16bit timer at 32khz, an overflow period is 2s. so it's guaranteed correct as long as you don't have any critical section longer than 1s
<dirbaio[m]> firefrommoonligh: uh it's not "how often does it fail", it's "how can we guarantee it can never fail"
<dirbaio[m]> otherwise you're playing with fire :D
<firefrommoonligh> That makes sense
<firefrommoonligh> I guess this comes down to project requirements
<firefrommoonligh> Good to know the failure pt; thx for identifying
<firefrommoonligh> thejpster: To turn this around, how have you been handling timer wraps?
ryan-summers[m] has quit [Quit: Idle timeout reached: 172800s]
<thejpster[m]> either using a 64-bit hardware timer, or using the rtic Monotonic crate and letting RTIC handle everything
<dirbaio[m]> i've also seen bugs like this in rtic monotonics :D
<dirbaio[m]> (reported them ofc)
<dirbaio[m]> extending timers is incredibly tricky 🥲
<dirbaio[m]> yeah best is when the hw has a 64bit timer. rp2040 also has one.
<dirbaio[m]> but these are the exceptions, it's almost always 16/24/32bit
<firefrommoonligh> <thejpster[m]> "either using a 64-bit hardware..." <- What is the significance of a 64-bit timer here?
<dirbaio[m]> if the hardware does it for you, you don't have to count overflows in software
<firefrommoonligh> I've been mainly using STM32s. They have a few 96-bit timers, but I haven't used them
<firefrommoonligh> dirbaio[m]: Ohh gotcha, so you can just let it keep counting up
<firefrommoonligh> That's an elegant way to handle it
<dirbaio[m]> 96bit timers where?
<firefrommoonligh> HRTIM I think? > 6 16-bit timing units (each one with an independent counter and 4 compare units)
<firefrommoonligh> I'm not sure if that means you can do 96-bit continuous counts?
<dirbaio[m]> ah, no idea
<dirbaio[m]> there's one in eth for ptp which is 64bit iirc
<firefrommoonligh> Maybe not though, it's hard to tell from a skim
<dirbaio[m]> using these "special purpose" ones seems cursed though. only the regular TIMx is present in all chips
<firefrommoonligh> STM32 timers have a lot of functionality I don't really understand and haven't used
<firefrommoonligh> I just know how to do the stuff like start/stop/read/interrupt, and the ADC/DAC triggering
<firefrommoonligh> I guess I dove a bit into Burst DMA land which is a bit weird
brazuca has quit [Quit: Client closed]
<firefrommoonligh> Thinking through the race condition more... I've been setting up the prescalers with a balance between precision and not overflowing a ton. I think the overflow periods I've been working with have been 1-2s. So maybe the time interval vulnerable to a race is just small compared to that, so it's a statistically unlikely event. Not sure
<firefrommoonligh> Haven't run the numbers
<JamesMunns[m]> is ALL of your code okay if there is a spurious rollover?
<JamesMunns[m]> like, STATISTICALLY it wont happen often, but if it does, will your software handle it?
<firefrommoonligh> I think so, but haven't considered this until Dirbaio pointed it out
<JamesMunns[m]> This is a big source of bugs I've seen that only present in the field, in larger quantities of use
<JamesMunns[m]> like "works on the bench and in all tests"
<firefrommoonligh> I've mainly been using it to validate system status, ie making sure each system has reported a successful result x time ago
<firefrommoonligh> And reporting uptime to other nodes
<JamesMunns[m]> and "fails multiple times per weeks/months once in the field"
<firefrommoonligh> Yea if you don't realize this is a trap it could cause mysterious bugs
<dirbaio[m]> uhhhhhh
<JamesMunns[m]> "Better Embedded System Software" has like a whole chapter about this.
<dirbaio[m]> "time only goes forward" is such a basic assumption that it's easy to unconsciously depend on it
<dirbaio[m]> like, if you measure the time duration of something, you assume it won't be negative
<dirbaio[m]> then cast the number of us to a u32
<dirbaio[m]> and boom, bug
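One defence worth noting: on a raw wrapping tick counter, deltas should be computed with wrapping subtraction, which stays correct across a counter wrap (as long as the real elapsed time is under one full 2^32-tick period) and can never go "negative" and blow up a later cast. A small sketch with illustrative names:

```rust
fn ticks_elapsed(start: u32, now: u32) -> u32 {
    // Modular arithmetic: correct even when `now` has wrapped past zero.
    now.wrapping_sub(start)
}

fn main() {
    assert_eq!(ticks_elapsed(100, 150), 50);
    // Counter wrapped between the two samples:
    assert_eq!(ticks_elapsed(0xFFFF_FFF0, 0x10), 0x20);
    // Plain `now - start` would underflow here (a panic in debug builds).
}
```

This protects the delta, not the extended 64-bit clock itself; a racy extension can still hand back a timestamp a whole period off.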
<JamesMunns[m]> this is the source of the "must reboot plane every 50 days" bug you see in avionics too :p
<firefrommoonligh> Concur
<firefrommoonligh> Would be good to have a rock solid solution here
<firefrommoonligh> (I haven't yet dove into what Dirbaio posted)
<dirbaio[m]> it's easier to put in the effort once to get a correct time impl than to audit every single piece of code to know it's okay with time going back lol
<JamesMunns[m]> (32-bit millis rolls over every 49.7 days)
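The figures quoted in this exchange check out with a couple of lines of arithmetic:

```rust
fn main() {
    // 32-bit millisecond counter: wraps after 2^32 ms ≈ 49.7 days.
    let days = (1u64 << 32) as f64 / 1000.0 / 86_400.0;
    assert!((days - 49.7).abs() < 0.05);

    // A 32-bit *microsecond* counter ("2**32 microseconds isn't that
    // long"): wraps after roughly 71.6 minutes.
    let minutes = (1u64 << 32) as f64 / 1_000_000.0 / 60.0;
    assert!((minutes - 71.6).abs() < 0.05);
}
```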
<firefrommoonligh> What I posted is, I think, a minimally viable one, in that it's conceptually easy to follow, so a rock-solid version would likely be based on something like it
Foxyloxy has quit [Quit: Textual IRC Client: www.textualapp.com]
<dirbaio[m]> or even easier, use a lib that has already solved it for you, don't reinvent the wheel
brazuca has joined #rust-embedded
<dirbaio[m]> like embassy-time or rtic monotonics 🤣
<firefrommoonligh> (I didn't post the code that converts ticks to seconds, but you can imagine what it would look like)
<adamgreig[m]> the ptp 64 bit timer is where you can do the "read top, read bottom, read top again, compare, repeat if different" and get away with it, which i guess is why you see people trying the same pattern where "top" is a variable that an interrupt increments instead
<firefrommoonligh> JamesMunns[m]: Was that the actual airplane bug?
<firefrommoonligh> Sounds like a very apt analogy
<JamesMunns[m]> firefrommoonligh: I mean whenever you see something that says "must be rebooted every 50 days", that's 99.999% a "32-bit milliseconds timer bug".
<adamgreig[m]> 32-bit milliseconds, or, floating point milliseconds
<adamgreig[m]> pick your poison
<JamesMunns[m]> oh god
<JamesMunns[m]> I know someone uses floats for time
<firefrommoonligh> Mine is vulnerable to both
<JamesMunns[m]> that's one of the worst things I can think of though.
<firefrommoonligh> (Well, f32 seconds and u32 ns)
<JamesMunns[m]> firefrommoonligh: Quick what's 2^24 seconds?
<JamesMunns[m]> So it'll work for the first 194 days :p
<JamesMunns[m]> then it goes WEIRD
<adamgreig[m]> the floating point time thing is most famously from the patriot missiles, where actually the "must reboot" period was more like 100 hours
<dirbaio[m]> yea floats for time is also a bit 💀
<dirbaio[m]> your time delta precision goes down as uptime goes up
<firefrommoonligh> JamesMunns[m]: Oh thank god. Future problem
<JamesMunns[m]> you have a 24-bit mantissa so it's lossless until 194 days
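The 194-day figure follows from the f32 format: a 24-bit significand represents every integer only up to 2^24, and 2^24 seconds is about 194 days. Past that point, a one-second tick added to an f32 uptime is silently rounded away:

```rust
fn main() {
    let secs = 1u32 << 24; // 16_777_216 seconds since boot
    assert!(secs as f64 / 86_400.0 > 194.0); // ≈ 194 days

    let t = secs as f32;
    // Below the limit a one-second step still registers...
    assert!(t - 1.0 < t);
    // ...but at the limit adding one second changes nothing at all:
    assert_eq!(t + 1.0, t);
}
```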
<adamgreig[m]> oh yea, 28 people died from this one
<firefrommoonligh> dirbaio[m]: > <@dirbaio:matrix.org> yea floats for time is also a bit 💀
<firefrommoonligh> > your time delta precision goes down as uptime goes up
<firefrommoonligh> Yea. I was thinking if you really care about precise time to that level over a long interval, use the RTC instead
<adamgreig[m]> use integers instead, I think
<adamgreig[m]> don't use floats for comparing big and small numbers, or big and close-together numbers
<dirbaio[m]> for the RTC you have to do date/time math which is even worse
<adamgreig[m]> don't use floats for long-running incrementing counters or things like that either, though I don't think that was exactly the issue with the patriot missiles
<firefrommoonligh> Makes sense re precision error accumulation
<JamesMunns[m]> adamgreig[m]: I think it was accumulated errors in summing floating point numbers, but I'd have to go look
<JamesMunns[m]> > The conversion of 100 hours in tenths of a second (3600000) to floating point introduced an undetectable error resulting in the missile guidance software incorrectly locating the SCUD missile.
<adamgreig[m]> I think that's a nice high level summary but the details were a bit fiddlier
<JamesMunns[m]> Going to go get dinner now, but here's the gov report: https://www.gao.gov/assets/imtec-92-26.pdf
<adamgreig[m]> the code had two functions for converting the integer tick count into floating point for doing subsequent maths on
<adamgreig[m]> one was more accurate than another
<adamgreig[m]> the difference between the two resulting floating point conversions caused the issue
<firefrommoonligh> V cool link!
<adamgreig[m]> but yea, the problem only arose because the magnitude of the (integer) system time was eventually big enough that the two routines differed enough to cause problems
<adamgreig[m]> had they rebooted it every 20 hours or so you'd never have observed an issue, I guess. perhaps it still wouldn't have been able to take out the scud missile in question though
<firefrommoonligh> Patriot has a not-so-great track record unfortunately
<adamgreig[m]> i've been using a ptp-timer-based 64 bit system time recently and it's very pleasing that the systime is the same between all bits of hardware to within a few nanoseconds and the resolution is 1ns too
<adamgreig[m]> but unfortunately it can in principle go backwards 💀
<adamgreig[m]> more typically it just jumps forward an awful lot though
Foxyloxy has joined #rust-embedded
M9names[m] has joined #rust-embedded
<M9names[m]> Jumping forwards a lot is also *fun*.
<M9names[m]> I had a lot of trouble with app timing in Linux before I found out about CLOCK_MONOTONIC
Foxyloxy has quit [Quit: Textual IRC Client: www.textualapp.com]
Foxyloxy has joined #rust-embedded
brazuca has quit [Quit: Client closed]
crabbedhaloablut has quit []
Foxyloxy has quit [Quit: Textual IRC Client: www.textualapp.com]
Foxyloxy has joined #rust-embedded
Guest7221 has joined #rust-embedded
vollbrecht[m] has joined #rust-embedded
<vollbrecht[m]> <adamgreig[m]> "i've been using a ptp-timer-..." <- is this wired synchronization? i am trying to implement wireless time sync on mac layer for my esp32c3 currently to get a ptp timer. They have a hardware tsf counter and there are some papers that show how one can use it
<adamgreig[m]> yea, wired ptp
<vollbrecht[m]> unfortunately i can only use the tsf counter and not the super duper internal FTM timer. FTM timers in wifi can sync <1ns because they are used for positioning, but no driver exposes it
Jonas[m]1 has joined #rust-embedded
<Jonas[m]1> I had the idea that I should try out some examples from nrf-softdevice with Embassy. I want to do it in a stand-alone project. But I don't know how to add nrf-softdevice to Cargo.toml. I tried with nrf-softdevice = { version = "0.1.0", features = ["nightly", "defmt", "nrf52840", "s140", "critical-section-impl"] } but only version 0.0.0 seems to exist at crates.io. Is there any way to import it as a dependency or do I need
<Jonas[m]1> to copy code from the repo to include in my project? I tried to follow the readme on https://github.com/embassy-rs/nrf-softdevice . Any suggestions?
<Jonas[m]1> I am getting errors about nrf-softdevice-s140 - I seem to be missing the asm macro. Is that something I should have in my Cargo.toml? The error message suggests use core::arch::asm; but this is in a dependency, I think...
<dirbaio[m]> are you passing --target?
<dirbaio[m]> either manually, or in .cargo/config.toml
<Jonas[m]1> dirbaio[m]: `target = "thumbv7em-none-eabihf"`
<Jonas[m]1> Jonas[m]1: should I use some other target?
<dirbaio[m]> Jonas[m]1: that's ok
<dirbaio[m]> dirbaio[m]: make sure you're using this nightly version https://github.com/embassy-rs/embassy/blob/main/rust-toolchain.toml#L4
<dirbaio[m]> dirbaio[m]: if it still doesn't work, please post the full compiler error
<Jonas[m]1> dirbaio[m]: I do that, I got many errors before switching to nightly. But now I should be on nightly, and I have done rustup and installed the target again.
<Jonas[m]1> Jonas[m]1: It is many of these: ``` Compiling nrf-softdevice-s140 v0.1.1... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/qadMgDXDnURzgFsHGIAgJRFs>)
<dirbaio[m]> Jonas[m]1: post the very first error
<Jonas[m]1> dirbaio[m]: that is the one
<Jonas[m]1> Jonas[m]1: ah... wait
<Jonas[m]1> Jonas[m]1: I have set it like this in Cargo: `nrf-softdevice-s140 = "0.1.1"`
<Jonas[m]1> Jonas[m]1: I should probably use a patch instead?
<dirbaio[m]> Jonas[m]1: ah, yes
<Jonas[m]1> dirbaio[m]: How should I write in Cargo?
<Jonas[m]1> dirbaio[m]: Now I got: `= note: rust-lld: error: duplicate symbol: _critical_section_1_0_acquire`
<Jonas[m]1> ...probably a cargo thing as well? I have set feature `"critical-section-impl"` on `nrf-softdevice`. Should I add it to `nrf-softdevice-s140` as well?
<dirbaio[m]> Jonas[m]1: disable critical-section-single-core in cortex-m
<Jonas[m]1> dirbaio[m]: got it building! Thanks!
brazuca has joined #rust-embedded
<firefrommoonligh> Thinking about that Patriot report James and Adam posted again... It puts "safety critical" into perspective. I can easily see how the naive implementation of timer + overflow starts as OK for project requirements. You know, what's the worst that could happen? Node uptime is reported erroneously once every few months or w/e due to a timing coincidence or f32 error accum. Then you forget about it. You have this timing code that you've
<firefrommoonligh> used before, and it works well. Then later you end up programming something safety critical (an anti-TBM SAM being a perfect example), probably with multiple stages in between, and bam, there is a bug like that and it's a problem
brazuca has quit [Quit: Client closed]
<adamgreig[m]> or ariane 5: extremely safety-conscious programming, carefully re-used some known-good older code from ariane 4, sadly a different trajectory for the new vehicle meant a particular floating-point value could no longer be converted to an i16, so instead an error value was put into the i16, which the control software took to be a real value, went full deflection on the nozzle gimbal, destroyed the rocket
<adamgreig[m]> the problem was converting a float to an i16 but maybe the real problem was not having Result<T> ;)
<adamgreig[m]> as usual it's always a tiny bit more complicated than that, several of the conversions did have checks and protections, and it was written in Ada so it's not like they didn't have this available
<dirbaio[m]> in-band error values like that are cancer
<adamgreig[m]> but the vital field that was too big did not have such checks, probably because they were provably not required for ariane 4
<adamgreig[m]> "
<adamgreig[m]> The reason for the three remaining variables, including the one denoting horizontal bias, being unprotected was that further reasoning indicated that they were either physically limited or that there was a large margin of safety, a reasoning which in the case of the variable BH turned out to be faulty. It is important to note that the decision to protect certain variables but not others was taken jointly by project partners at
<adamgreig[m]> several contractual levels."
<adamgreig[m]> and then obviously several other things in the error-handling and redundancy chain went wrong too
<adamgreig[m]> anyone have any guesses why riscv32 i, imac, and imc are tier 2, but im is tier 3?
esden[cis] has joined #rust-embedded
<esden[cis]> I bet I am not the only one asking this but my google fu is not strong enough it seems. Does rustup not install a gdb that supports cross compilation targets? I tried multiple different ways but I keep getting:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/EDRxgHKBRNXemSTngRUVBnGI>)
<adamgreig[m]> I didn't think rust installed a gdb at all, where's your rust-gdb coming from?
<adamgreig[m]> these days you should be able to use gdb-multiarch and thus just "gdb"
<esden[cis]> it is from rustup...
<adamgreig[m]> huh 🤔
<adamgreig[m]> I think rust-gdb is just a wrapper around normal gdb that provides some rust pretty-printing, but I also thought new gdb had rust printing built in anyway, so might not be necessary
<esden[cis]> but yes, gdb-multiarch would work, but then I rely on the system providing it
<esden[cis]> having it be part of rustup would prevent issues with windows users and what not...
<adamgreig[m]> I think the answer should be "however rust is building/configuring gdb for the rust-gdb binary, it should enable more targets", rather than have some extra gdb you download with rustup for other targets
<adamgreig[m]> but I can't even find where rust-gdb is coming from, the rust-lang/rust repo only has the bash wrapper script https://github.com/rust-lang/rust/blob/master/src/etc/rust-gdb which seems to be current but clearly isn't what's in .cargo/bin
<esden[cis]> ~/.rustup/toolchains/nightly-2023-06-28-x86_64-unknown-linux-gnu/bin/rust-gdb is another instance of it
<esden[cis]> but you are probably right that it is just a wrapper
<esden[cis]> And yes, having the gdb that it is offering be configured to enable more targets is definitely a better way than installing a dozen binaries. No argument there.
<esden[cis]> I am just concerned that when I share the project with non linux users that it will end up in a saga of extra installation steps. :)
<adamgreig[m]> ah, those binaries are all just rustup actually
<adamgreig[m]> that shell script is indeed what's being run
<adamgreig[m]> so it's literally just wrapping your already-installed system gdb
<esden[cis]> ohh great... so it "assumes" the system has gdb ...
<adamgreig[m]> indeed, if I sudo mv (which gdb){,2} then rust-gdb errors out
<esden[cis]> that is probably not a great assumption on non linux systems :D
<adamgreig[m]> yea, it's literally just a little wrapper script
<adamgreig[m]> if you had gdb-multiarch installed, I guess rust-gdb would just work :p
<adamgreig[m]> what are you using it for? these days I think very few people are using gdb for embedded rust, compared to the tooling available with probe-rs for flashing and rtt... but I appreciate that's not so helpful for bmp
<esden[cis]> I want to use it with BMP... as gdb is the main use trajectory
<adamgreig[m]> yea, I realised halfway through writing that sentence that it was probably for bmp :P
<adamgreig[m]> I think your best bet is probably to suggest either installing gdb-multiarch (eg from their linux repos) or downloading arm's arm-none-eabi-gdb for windows?
<esden[cis]> for c or c++ using GDB to upload and run firmware is a no brainer as it is included with every single stupid toolchain
<esden[cis]> rust seems to be different ;)
<adamgreig[m]> hmm
<adamgreig[m]> it's included in arm's gcc distribution
<adamgreig[m]> but on most linuxes you install gcc and gdb as quite separate concerns, right?
<adamgreig[m]> that doesn't help you, of course, but I can sort of see why rust doesn't bundle gdb
<adamgreig[m]> it's a big weird thing to include in your compiler, and if anything it would probably bundle lldb instead since it's based on llvm, and presumably lldb isn't much use to you anyway...
<esden[cis]> Ok, probably correct. The thing is, rustup/cargo make it easier in theory for a person to just use those and not have to figure out how to install extra tools on windows for example.
<adamgreig[m]> gdb's not even licence-compatible with rust, so...
<adamgreig[m]> yea, for sure
<adamgreig[m]> probably a part of why using probe-rs tooling is popular too, in theory it just works in the same way
<esden[cis]> if it was an extra command for rustup I am fine with that
<esden[cis]> but it should not assume that gdb or gdb-multiarch is there I think...
<adamgreig[m]> you could probably make a cargo command to download/install/run gdb
<esden[cis]> rust tends to be pretty good about this and in this case it is not
<adamgreig[m]> I think it's more like "it includes this convenient wrapper bash script in case you want to use gdb (also rust-lldb) that adds some better pretty printing"
<adamgreig[m]> rather than "requires/assumes you have gdb installed"
<esden[cis]> mhh... yeah ... gdb is not a first class citizen in the ecosystem unlike gcc... is lldb even included? I should check
<adamgreig[m]> I was just looking, it seems like no
<adamgreig[m]> shame as otherwise cargo-binutils could probably do it for you
<esden[cis]> all right, at least I know that I am not missing something obvious. :)
kenny has joined #rust-embedded