ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
dgoodlad has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dgoodlad has joined #rust-embedded
starblue3 has quit [Ping timeout: 245 seconds]
starblue3 has joined #rust-embedded
dgoodlad has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dgoodlad has joined #rust-embedded
dgoodlad has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dgoodlad has joined #rust-embedded
dgoodlad has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dgoodlad has joined #rust-embedded
dgoodlad has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dgoodlad has joined #rust-embedded
dgoodlad has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dgoodlad has joined #rust-embedded
dgoodlad has quit [Quit: Textual IRC Client: www.textualapp.com]
duderonomy has quit [Ping timeout: 245 seconds]
duderonomy has joined #rust-embedded
<re_irc> <@louis.renuart.qteal:matrix.org> Hello, I have been stuck with this compilation problem
<re_irc> async fn process_next_message(&mut self) {
<re_irc> // Send outgoing messages waiting to be sent
<re_irc> self.send_message(self.msg_channel_out.recv().await).await;
<re_irc> debug!("process_next_message(): ");
<re_irc> // Receive new message from broker and add them to the queue
<re_irc> match self.receive_message().await {
<re_irc> Ok(message) => {
<re_irc> self.msg_channel_in.send(message).await;
<re_irc> }
<re_irc> Err(e) => {
<re_irc> error!("process_next_message(): {:?}", e);
<re_irc> }
<re_irc> }
<re_irc> }
<re_irc> error[E0502]: cannot borrow `self.msg_channel_in` as immutable because it is also borrowed as mutable
<re_irc> --> rust/src/lib.rs:200:17
<re_irc> |
<re_irc> 198 | match self.receive_message().await {
<re_irc> | ----------------------
<re_irc> | |
<re_irc> | mutable borrow occurs here
<re_irc> | argument requires that `*self` is borrowed for `'static`
<re_irc> 199 | Ok(message) => {
<re_irc> 200 | self.msg_channel_in.send(message).await;
<re_irc> | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ immutable borrow occurs here
<re_irc> can anyone help me out?
<re_irc> The goal here is to be able to send queued messages
<re_irc> and then receive new ones, to be processed later
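The `'static` requirement in the error suggests `receive_message` (or something it calls, such as a task spawn) demands a `'static` borrow of `self`, which can't be judged without its signature. For the plain overlapping-borrow case, though, the usual workaround is to let the mutably-borrowing call finish and bind its result to a local before touching another field. A minimal sketch, with `Broker` and its method bodies as illustrative stand-ins rather than the poster's actual API:

```rust
// Hypothetical stand-in for the poster's type; names are illustrative.
struct Broker {
    incoming: Vec<String>,
}

impl Broker {
    // Stand-in for the mutably-borrowing receive call.
    fn receive_message(&mut self) -> Result<String, ()> {
        Ok("hello".to_string())
    }

    fn process_next_message(&mut self) {
        // Bind first: the &mut self borrow ends at this semicolon...
        let received = self.receive_message();
        // ...so another field of self can be borrowed again below.
        match received {
            Ok(message) => self.incoming.push(message),
            Err(_) => eprintln!("process_next_message() failed"),
        }
    }
}
```

This makes the end of the first borrow explicit; if the real error persists after this change, the `'static` bound is coming from `receive_message`'s own signature.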
<re_irc> <@argjend_:matrix.org> Hey guys. How do I convert a u32 variable into a String? I'm using a no_std environment.
<re_irc> <@louis.renuart.qteal:matrix.org> argjend_: use the '.parse()' method
<re_irc> <@ryan-summers:matrix.org> argjend_: "let my_str: heapless::String<64> = format!("{}", my_u32)"
<re_irc> <@louis.renuart.qteal:matrix.org> ah no sorry that's the opposite
<re_irc> <@ryan-summers:matrix.org> Don't think "parse" will work without "alloc"
<re_irc> <@diondokter:matrix.org> : "format!" uses alloc too, right?
<re_irc> <@ryan-summers:matrix.org> Not that I'm aware of? I believe it uses the return type as the storage mechanism
<re_irc> <@diondokter:matrix.org> What!?
<re_irc> <@ryan-summers:matrix.org> Maybe I'm wrong, but I never use alloc and I'm pretty sure I use "format!()" all the time
<re_irc> <@ryan-summers:matrix.org> Yeah, otherwise you couldn't use the "log::{info, warn, error}" macros in non "alloc" and I use those all the time
<re_irc> <@louis.renuart.qteal:matrix.org> Maybe you mean defmt ?
<re_irc> <@diondokter:matrix.org> And if you remove the no_std you'll get an error where it returns String and not the stack string type
<re_irc> <@ryan-summers:matrix.org> What does "log" use under the hood then? Just the format traits?
<re_irc> <@joelimgu_:matrix.org> Hello, I'm new here. I'm Joel, I've been doing Rust for a few years, mostly on embedded.
<re_irc> For the format!() macro it doesnt use alloc if the parameters are const form what I know. But you cant use it with variables
<re_irc> <@diondokter:matrix.org> There is the "write!" macro
<re_irc> <@ryan-summers:matrix.org> And no, I do not mean defmt. I have not personally used it yet
<re_irc> <@ryan-summers:matrix.org> Ah that's what I'm thinking
<re_irc> <@ryan-summers:matrix.org> What's the difference between the two? Didn't realize there was one
<re_irc> <@diondokter:matrix.org> let mut my_str = heapless::String::<64>::new();
<re_irc> write!(my_str, "{}", my_num).unwrap();
<re_irc> <@ryan-summers:matrix.org> : Huh, that sounds... weird? Why does "write!()" work fine but "format!()" doesn't?
<re_irc> <@juliand:fehler-in-der-matrix.de> https://doc.rust-lang.org/beta/alloc/fmt/fn.format.html core::fmt does not have the format macro, but core::alloc does
<re_irc> <@joelimgu_:matrix.org> The problem is that without alloc rust can't create Strings, just &str, and those need to have a known size at compile time, so with const it can but not with variables
<re_irc> <@ryan-summers:matrix.org> : A "heapless::String" has a compile-time known size, and is not dynamic
<re_irc> <@joelimgu_:matrix.org> I had a similar problem last week trying to convert a rust str into a C_string.
<re_irc> <@joelimgu_:matrix.org> : yes, with a heapless string it should be possible, but it isn't implemented, at least that I know of
<re_irc> <@ryan-summers:matrix.org> I suspect this might be because of where the macros in rust are exported at. The "core::write!()" macro exists, but "format!()" isn't in core. Strange, never knew that
<re_irc> <@ryan-summers:matrix.org> So yeah, replace my case of "format!()" with "write!()", but all the other semantics are the same. Thanks
<re_irc> <@juliand:fehler-in-der-matrix.de> : It is in core, but core::alloc
<re_irc> <@diondokter:matrix.org> : See the code I wrote above
<re_irc> <@ryan-summers:matrix.org> Yeah you can definitely use "write!()" with heapless::String
<re_irc> <@ryan-summers:matrix.org> Due to this added bound: https://docs.rs/heapless/latest/heapless/struct.String.html#impl-Write-for-String%3CN%3E
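A runnable host-side sketch of the pattern being discussed: format a `u32` into a pre-allocated buffer with `write!`, which only needs the `core::fmt::Write` trait. std's `String` stands in for `heapless::String<64>` here so the example runs anywhere; on `no_std` you would swap in the heapless buffer type, which also implements `fmt::Write`:

```rust
use std::fmt::Write; // the trait that gives write! a target buffer

// Format a u32 into a growable buffer. With heapless::String::<64>
// the same write! call works, since it also implements fmt::Write.
fn u32_to_text(n: u32) -> String {
    let mut buf = String::new();
    write!(buf, "{}", n).expect("formatting a u32 into a String cannot fail");
    buf
}
```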
<re_irc> <@ryan-summers:matrix.org> TIL :) Is there any meaningful difference between "write!()" and "format!()"? Is it just what traits they require implementation of?
<re_irc> <@ryan-summers:matrix.org> I guess with format!() you don't have to declare and pass in the object-to-be-written-into first
<re_irc> <@diondokter:matrix.org> Not sure about the exact implementation, but format! is just write! that uses String which it creates and returns
<re_irc> <@ryan-summers:matrix.org> Weird, why can't it infer the return type instead?
<re_irc> <@diondokter:matrix.org> Ha, that would be fun
<re_irc> <@juliand:fehler-in-der-matrix.de> : write: "The arguments will be formatted according to the specified format string into the output stream provided."
<re_irc> format: "The format function takes an Arguments struct and returns the resulting formatted string."
<re_irc> So I think you're right. That is the main difference.
<re_irc> <@diondokter:matrix.org> But it'd have to know how to construct the return type
<re_irc> <@ryan-summers:matrix.org> I mean technically wouldn't it just be something like "pub fn format<T: Write>(blahblahblah) -> T"? Not sure if macros can be generic though
<re_irc> <@ryan-summers:matrix.org> That might be the kicker...
<re_irc> So at some point it'll have to do "let mut buffer = _::new()", but that's not valid syntax and "new" may not be the constructor
<re_irc> <@diondokter:matrix.org> Yeah, the macro likely cannot infer the return type.
<re_irc> <@ryan-summers:matrix.org> Maybe "pub fn format<T: Write + Default>() -> T"? But yeah, starts getting messier for sure
<re_irc> <@ryan-summers:matrix.org> And not sure if that's better than what currently exists
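The hypothetical generic `format` floated above can in fact be written as a plain function, with the caller's type annotation choosing the buffer. A sketch under the `Write + Default` assumption from the discussion (illustrative only, not a real std API):

```rust
use std::fmt::Write;

// The return type chosen by the caller supplies the buffer:
// Default constructs it empty, Write lets write! fill it.
fn format_value<T: Write + Default>(n: u32) -> T {
    let mut out = T::default();
    write!(out, "{}", n).expect("write into a fresh buffer failed");
    out
}
```

Usage would be `let s: String = format_value(7);`; a `heapless::String<64>`, which I believe implements both traits, would work the same way on no_std.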
<re_irc> <@joelimgu_:matrix.org> I have a problem too: I need to know the test coverage of my code. I am compiling for the target armv7a-none-eabi. I've tried "RUSTFLAGS="-C instrument-coverage" cargo build" but I get: can't find crate for `profiler_builtins`. Any ideas?
<re_irc> <@k900:conduit.0upti.me> You can't really measure coverage on embedded targets easily
<re_irc> <@k900:conduit.0upti.me> You need somewhere for the instrumentation data to go
<re_irc> <@joelimgu_:matrix.org> I know it can't be easy, but what's the best way to do it?
<re_irc> <@joelimgu_:matrix.org> I haven't found any examples and I hoped that there would be a semi-automatic way to do it...
<re_irc> <@ryan-summers:matrix.org> I've only done coverage metrics by running test cases on the host unfortunately. I have no clue how you could do it directly on target
<re_irc> <@ryan-summers:matrix.org> Might be able to write something that uses the JTAG adapter to transmit coverage info, but that would be a big undertaking
<re_irc> <@k900:conduit.0upti.me> Sounds like something you could potentially do via defmt
<re_irc> <@9names:matrix.org> PC sampling, usually? if you're using a target with ITM it's probably not too expensive...
<re_irc> <@k900:conduit.0upti.me> But I don't think anyone actually did it
<re_irc> <@ryan-summers:matrix.org> ITM could do it as well, but you start hitting bandwidth limitations _fast_ with ITM
<re_irc> <@ryan-summers:matrix.org> And not all devices have ITM
<re_irc> <@k900:conduit.0upti.me> : I think rustc's coverage instrumentation is always push
<re_irc> <@ryan-summers:matrix.org> ITM generally will have trouble even keeping up with just transmitting ISR information. Running real-time full line coverage would be an insane ask
<re_irc> <@9names:matrix.org> K900: really? probably explains why tooling for it isn't so great though
<re_irc> <@k900:conduit.0upti.me> Well the builtin rustc stuff is
<re_irc> <@joelimgu_:matrix.org> Yes, in C its usually transmitted at the end by JTAG or by doing memory dumbs at breakpoints and analysing it latter, I just hopped something existed already! But TY! I'll keep searching and maybie write something of my own
<re_irc> <@k900:conduit.0upti.me> You can always do PC sampling externally
<re_irc> <@ryan-summers:matrix.org> : ITM is worth looking into for post-processing for sure
<re_irc> <@ryan-summers:matrix.org> On-target testing is sadly not super prevalent in embedded to begin with
<re_irc> <@ryan-summers:matrix.org> Let alone rust
<re_irc> <@k900:conduit.0upti.me> I'm not sure about ITM, might be too slow
<re_irc> <@k900:conduit.0upti.me> Unless you also save some storage on the device
<re_irc> <@diondokter:matrix.org> I believe some chips also have ETM. Never used it, but it's more extensive tracing AFAIK. It has more GPIOs too, so maybe more bandwidth?
<re_irc> <@ryan-summers:matrix.org> It's 1000% too slow lol, you would have to bottleneck the processor to use it
<re_irc> <@ryan-summers:matrix.org> Even with embedded trace memory, you could only run for e.g. 1ms and then offload the data (did this with Stabilizer)
<re_irc> <@k900:conduit.0upti.me> Maybe if you do some serious bit packing fuckery
<re_irc> <@ryan-summers:matrix.org> ETM support is in probe-rs CLI btw if you want to try it out
<re_irc> <@k900:conduit.0upti.me> And have enough spare RAM to keep a buffer
<re_irc> <@ryan-summers:matrix.org> You don't specify the format of ITM, it's spec'd by arm
<re_irc> <@k900:conduit.0upti.me> It could work
<re_irc> <@k900:conduit.0upti.me> Maybe
<re_irc> <@ryan-summers:matrix.org> But you _can_ drain the ETM to ram
<re_irc> <@ryan-summers:matrix.org> And then offload to probe later
<re_irc> <@ryan-summers:matrix.org> But still, it's an insane amount of memory just to trace ISR entrance/exit, let alone every line
<re_irc> <@joelimgu_:matrix.org> : Oh, so yeah, I'll take a look. It can't be slower than dumping hundreds of MBs of memory through JTAG every single test
<re_irc> <@ryan-summers:matrix.org> It's pretty quick. What processor are you using? The STM32H7 line has ETM support, but ETM is generally only available on the higher-end cores with more CoreSight peripherals
<re_irc> <@ryan-summers:matrix.org> FYI the command is "probe-rs-cli itm <ms> <data-sink>", so "probe-rs-cli itm 10 memory" to capture 10ms of ISR info into the ETM
<re_irc> <@joelimgu_:matrix.org> : I don't remember what processor it is exactly, I'll investigate, thanks!!!
<re_irc> <@ryan-summers:matrix.org> Can also write it directly to SWO if you have no ETM, but you will _quickly_ hit bandwidth limitations
<re_irc> <@ryan-summers:matrix.org> Like, within 1-2 calls
<re_irc> <@like2wise:matrix.org> Reading along with interest; what are the software tools to explore traces?
<re_irc> <@ryan-summers:matrix.org> There's RTIC-scope currently, but these are all meant to be profiling tools, not really line coverage metrics
<re_irc> <@ryan-summers:matrix.org> I've been wanting to do nice visibility on tracing and profiling for Stabilizer, but it has been a low priority for a while
<re_irc> <@ryan-summers:matrix.org> I know there was an ITM viewer in probe-rs for a while using chromium webtrace, but I think it's deprecated
<re_irc> <@like2wise:matrix.org> In theory would the ETM allow us to replay the as-happened execution trace?
<re_irc> <@ryan-summers:matrix.org> Yeah, that's the purpose of the ITM + ETM :)
<re_irc> <@ryan-summers:matrix.org> You get clock-cycle correct profiling
<re_irc> <@ryan-summers:matrix.org> But it would be for a limited duration obviously
<re_irc> <@like2wise:matrix.org> replay not on the STM32, but elsewhere. Yeah OK, but profiling does not always mean step-by-step reproduction of the execution trace.
<re_irc> <@ryan-summers:matrix.org> Yeah, ITM + ETM then offloads the data from the processor through the debug probe, so you have the ITM data as a binary stream on the host for analysis later
<re_irc> <@like2wise:matrix.org> Nice to explore when you have time left in your project or are in between projects, indeed :-)
<re_irc> <@ryan-summers:matrix.org> Right now, the probe-rs-cli will just print out all the events + timestamps for you, but you can write your own program on top of it for analysis too
<re_irc> <@like2wise:matrix.org> (Currently tracing an FPGA post-execution, talking about offloading MBytes of data over JTAG...)
<re_irc> <@diondokter:matrix.org> : Yeah I came up with that solution too
<re_irc> <@anand21:matrix.org> Hi all,
<re_irc> Is there any support for Renesas RH850 mcu?
<re_irc> <@whitequark:matrix.org> given it's Renesas I feel like answering "I would hope not"
<re_irc> <@whitequark:matrix.org> but snark aside, is there even an open source C compiler for that architecture?
<re_irc> <@whitequark:matrix.org> G3KH is a totally custom Renesas CPU core, isn't it?
<re_irc> <@whitequark:matrix.org> (I highly suggest everyone who likes cursed CPU architectures to look at this document, it's delightful as Renesas' usually are: https://www.renesas.com/us/en/document/mas/rh850g3kh-users-manual-software?language=en)
<re_irc> <@k900:conduit.0upti.me> I think there might be a cursed GCC branch
<re_irc> <@k900:conduit.0upti.me> But definitely no LLVM
<re_irc> <@whitequark:matrix.org> cute mnemonic
<re_irc> <@anand21:matrix.org> I see we do have some PAC available for renesas mcu
<re_irc> <@k900:conduit.0upti.me> Oh it's actually in mainline GCC
<re_irc> <@k900:conduit.0upti.me> : Yeah those are ARM
<re_irc> <@k900:conduit.0upti.me> Not V850 or the other like ten cursed in-house cores Renesas has
<re_irc> <@whitequark:matrix.org> ohhh it's v850
<re_irc> <@k900:conduit.0upti.me> I think RH850 is just what they call the latest V850 version
<re_irc> <@k900:conduit.0upti.me> With more stuff
<re_irc> <@k900:conduit.0upti.me> But still, no LLVM on v850 either
<re_irc> <@k900:conduit.0upti.me> So no Rust for now at least
<re_irc> <@k900:conduit.0upti.me> Also, just in general, if you want to use non-vendor tools, you probably don't want to buy hardware with weird vendor tools
<re_irc> <@ryan-summers:matrix.org> As an aside, what's the appeal of people using renesas parts? I've never used one personally
<re_irc> <@diondokter:matrix.org> The new Arduino Uno uses the Renesas RA4M1
<re_irc> The only things that's special about it that I know is that it has 5V IO. Something you don't really see for ARM mcu's afaik
<re_irc> <@diondokter:matrix.org> This is an ARM Cortex-M4
<re_irc> <@k900:conduit.0upti.me> : They're popular in automotive for some reason
<re_irc> <@whitequark:matrix.org> kickbacks?
<re_irc> <@whitequark:matrix.org> the only person I've ever talked to who used Renesas parts did so because someone got kickbacks, and they drove her insane
<re_irc> <@whitequark:matrix.org> apparently they have even worse ones that they don't sell to the public
<re_irc> <@whitequark:matrix.org> featuring things like "the chip support package is AES encrypted and you have to use their GCC with an AES decryptor in the preprocessor"
<re_irc> <@whitequark:matrix.org> by the time I talked to her, if I said "Renesas" two rooms across, she would experience basically a PTSD flashback, and I think you can see why
<re_irc> <@whitequark:matrix.org> it was very sad honestly
<re_irc> <@whitequark:matrix.org> my personal experience with Renesas is thankfully better because I wasn't shipping stuff based on their devices, but reverse engineering designs with Renesas parts didn't make me very happy either :p
<re_irc> <@whitequark:matrix.org> (not because it's hard to do, but because in the process you learn things about Renesas parts, and now you have to live with that knowledge)
<re_irc> <@k900:conduit.0upti.me> I know some people that got a kickback to ship some shiny new Renesas thing in a prototype and ended up running everything on a computer and just using the Renesas board as an IO extender
<re_irc> <@k900:conduit.0upti.me> That is, a normal computer
<re_irc> <@k900:conduit.0upti.me> I think it might have been the very first V850?
<re_irc> <@whitequark:matrix.org> _told you_
<re_irc> <@whitequark:matrix.org> I've also heard some... interesting things about their FAEs
<re_irc> <@therealprof:matrix.org> Renesas being a Japanese company I wouldn't expect anything but the best -- technology wise.
<re_irc> <@whitequark:matrix.org> you would be disappointed
<re_irc> <@k900:conduit.0upti.me> I'd say it's pretty typical for Japanese companies to make _very weird shit_
<re_irc> <@whitequark:matrix.org> Renesas is the dumping ground for all the silicon Hitachi, Mitsubishi, and NEC didn't love enough to sell under their own names
<re_irc> <@k900:conduit.0upti.me> Like I'm not going to say the hardware is _bad_ because I genuinely don't know
<re_irc> <@whitequark:matrix.org> and while their designs are so variable it would be hard to make any meaningful aggregate statement, a lot of it is genuinely terrible
<re_irc> <@k900:conduit.0upti.me> But it's weird enough that it doesn't matter if it's actually good
<re_irc> <@therealprof:matrix.org> K900: Oh I definitely wouldn't argue with that.
<re_irc> <@whitequark:matrix.org> and they do absolutely rely on kickbacks to get design wins.
<re_irc> <@whitequark:matrix.org> which isn't what you do if your silicon is good, generally speaking.
<re_irc> <@therealprof:matrix.org> A lot of companies do rather crappy things when it comes to money, good products or not.
<re_irc> <@whitequark:matrix.org> oh wait Renesas bought Dialog? :(
<re_irc> <@whitequark:matrix.org> I hate that GreenPAK by Dialog (previously Silego) is now under their name...
<re_irc> <@therealprof:matrix.org> ... and then blame it on the shareholder value model.
<re_irc> <@whitequark:matrix.org> > GreenPAK™ was a Renesas Electronics' family of mixed-signal integrated circuits and development tools.
<re_irc> <@whitequark:matrix.org> > was
<re_irc> <@whitequark:matrix.org> did they discontinue it :(
<re_irc> <@whitequark:matrix.org> hm, no, seems active
<re_irc> <@whitequark:matrix.org> (GP4 was the first FPGA on market with a fully OSS toolchain we made with Andrew Zonenberg back in the day)
<re_irc> <@whitequark:matrix.org> (yes, FPGA; their programmable devices are very small but LUT-based. GP4 is something like 12 to 16 LUTs IIRC. that's twelve to sixteen individual LUTs, yes)
<re_irc> <@k900:conduit.0upti.me> Now I wonder what the smallest FPGA you can buy now is
IlPalazzo-ojiisa has joined #rust-embedded
<re_irc> <@thejpster:matrix.org> > The only things that's special about it that I know is that it has 5V IO. Something you don't really see for ARM mcu's afaik
<re_irc> Yeah :( There's a couple of Nuvoton Cortex-M parts that have 5V I/O, but not much else.
<re_irc> <@thejpster:matrix.org> The STM32 is sorta kinda 5V tolerant, but that won't help drive all those existing Arduino shields that are expecting to get 5V signals out of the Arduino. And level shifters are a pain (and really need a direction pin to work properly, and that's all inside the MCU).
<Darius> most 5V logic will accept 3.3V (with reduced noise margin)
<Darius> STM32 GPIOs are generally 5V tolerant with the exception of ADC inputs
<Darius> you can also get auto sensing level shifters (although they are dark magic when things go wrong)
<re_irc> <@like2wise:matrix.org> Yes, been there. They are nice, work both directions, but need a clear driver and listening (high-impedance) side to work, otherwise the black magic kicks in. Debugged such hardware once, learned a lot.😀
<re_irc> <@avery71:matrix.org> : I've been staring at the software side of things, but I know the Arduino Uno R4 is Cortex-M. I feel like they would want it to be compatible with all the shields
<re_irc> <@halfbit:matrix.org> Do Renesas have lots of weird errata like Atmel SAMs? I swear the SAM E70 I was using earlier this year had a lot of strange behaviors for peripherals
<re_irc> <@avery71:matrix.org> I've yet to actually get my hands on one, I've mainly just been trying to write patches for the SVD based on what I see in the datasheet
<re_irc> <@avery71:matrix.org> : If it's anything like the ra4m1, then it's just bad. I'm currently working on patches
<re_irc> <@eldmgr:matrix.org> Hi, is there any crate which implements the server (announcing) side of mdns, compatible with embassy?
Dr_Who has joined #rust-embedded
<re_irc> Would someone from the Working Group like to write up some platform-support info for the "thumbv*m-unknown-none" targets? They pre-date the policy of including those for each new target and so there's no info about them at all, despite surely being one of the most popular "no_std" freestanding targets.
emerent_ has joined #rust-embedded
emerent has quit [Killed (platinum.libera.chat (Nickname regained by services))]
emerent_ is now known as emerent
<re_irc> <@chaosprint:matrix.org> Is there any project that uses Rust to write a menu on STM32?
<re_irc> <@like2wise:matrix.org> Define "write a menu". You mean GUI like menu of options on a pixel display?
<re_irc> <@avery71:matrix.org> or a command line using a UART?
<re_irc> <@chaosprint:matrix.org> likewise: yes
<re_irc> <@burrbull:matrix.org> : https://github.com/rust-lang/rust/pull/112988
<re_irc> <@dirbaio:matrix.org> yup! the author asked me to test it with embassy, worked fine 🙃
<re_irc> <@firefrommoonlight:matrix.org> : : I've done it for an EPD
<re_irc> <@firefrommoonlight:matrix.org> But this is def an X/Y question if such a thing exists
<re_irc> <@chaosprint:matrix.org> : would you mind sharing the repo?
<re_irc> <@firefrommoonlight:matrix.org> I don't think it will be very instructional; the large majority of the code there has to do with EPD refresh-rate optimization
<re_irc> <@firefrommoonlight:matrix.org> And almost surely doesn't apply to the general case (not much does) or whatever specific case you have but haven't mentioned
<re_irc> <@firefrommoonlight:matrix.org> These things also tend to be specific to the GUI framework or display API in use
<re_irc> <@firefrommoonlight:matrix.org> It is also likely to have little to do with STM32, and a lot to do with the display
<re_irc> <@firefrommoonlight:matrix.org> The STM32 side is probably DMAing display buffers or write commands over SPI etc
<re_irc> <@whitequark:matrix.org> : mistag?
<re_irc> <@firefrommoonlight:matrix.org> huh?
<re_irc> <@like2wise:matrix.org> "-EBADE"
<re_irc> <@dngrs:matrix.org> : https://github.com/Yandrik/kolibri has buttons, might be enough?
Dr_Who has quit [Ping timeout: 240 seconds]
dc740 has joined #rust-embedded
dc740 has quit [Remote host closed the connection]
dc740 has joined #rust-embedded
IlPalazzo-ojiisa has quit [Remote host closed the connection]
IlPalazzo-ojiisa has joined #rust-embedded
<re_irc> <@stephen:crabsin.space> I have a few hundred bytes of data that I'd like to compress as efficiently as possible. I don't particularly care about CPU efficiency since it's such a small amount of data. Anyone have any insight on what I should use? In the past I've just used gzip on its highest setting
<re_irc> <@stephen:crabsin.space> I imagine gzip has some kind of header that says "hi, I'm gzip's best compression" and I'm wasting bytes there
<re_irc> <@dirbaio:matrix.org> gzip is deflate with a header and checksum
<re_irc> <@dirbaio:matrix.org> so maybe use raw deflate
<re_irc> <@dirbaio:matrix.org> but that saves like 8 bytes iirc
<re_irc> <@adamgreig:matrix.org> yea, i've happily used deflate for compressing quite small data to avoid the gzip header
<re_irc> <@like2wise:matrix.org> : If you have good knowledge of the (possible) redundancy in that data, or a data model, you might be able to pick the most optimal compression.
<re_irc> <@adamgreig:matrix.org> there's a bunch of interesting alternatives like brotli or lzma or bzip2 or something
<re_irc> <@adamgreig:matrix.org> but deflate is not bad for text and very widely implemented
<re_irc> <@adamgreig:matrix.org> (lzma is used by xzip/.xz files and 7zip, brotli is a modern take on it I guess it's fair to say)
<re_irc> <@adamgreig:matrix.org> I also looked at zstd which is another modern compression algorithm, but it's mostly designed to be faster than deflate with similar performance
<re_irc> <@adamgreig:matrix.org> (potentially much much much faster)
<re_irc> <@dirbaio:matrix.org> check code size, "modern" compression formats compress more but the decompressor code is bigger so it might not be worth it
<re_irc> <@dirbaio:matrix.org> I once tried zstd and it was way too big
<re_irc> <@adamgreig:matrix.org> yea, same
<re_irc> <@adamgreig:matrix.org> for my semi-embedded use case i ended up concluding deflate was best
<re_irc> <@dirbaio:matrix.org> perhaps it's because there's no good code-size-optimized impls yet though
<re_irc> <@adamgreig:matrix.org> it was very close to the same performance as zstd and brotli but much simpler, smaller code size
<re_irc> <@adamgreig:matrix.org> I didn't care at all about compression time because my data was at most like 1kB
<re_irc> <@dirbaio:matrix.org> for deflate decompression "uzlib" is deliciously tiny
<re_irc> <@dirbaio:matrix.org> shame it's C though
<re_irc> <@dirbaio:matrix.org> I once tried rusting it but couldn't match the code size :(
<re_irc> <@adamgreig:matrix.org> I ended up with miniz_oxide for rust deflate
<re_irc> <@adamgreig:matrix.org> and python stdlib's zlib for compressing, lol
<re_irc> <@dirbaio:matrix.org> :D
<re_irc> <@stephen:crabsin.space> ...and I meant to post this in a different channel. Sorry
<re_irc> <@stephen:crabsin.space> * Sorry!
<re_irc> <@like2wise:matrix.org> We can join there too :-)
<re_irc> <@adamgreig:matrix.org> "import zlib; data = bytes(range(256)); compressor = zlib.compressobj(level=9, wbits=-10); compressed = compressor.compress(data); compressed += compressor.flush()" for reference, no header at all, a small window size for small compressed data which helps embedded decompression without much memory
<re_irc> <@dirbaio:matrix.org> what's the best crate to read ELF files nowadays? there's so many...
<re_irc> <@dirbaio:matrix.org> object, goblin, xmas-elf, elf ...?
<re_irc> <@mabez:matrix.org> object is used by rustc, if you see that as an endorsement. It can be a bit tricky to use, particularly as a writer, but for reading it's okay. We use xmas-elf in espflash and that seems to work well there. Can't comment on goblin.
<re_irc> <@dirbaio:matrix.org> xmas-elf seems less maintained
<re_irc> <@dirbaio:matrix.org> but i'm going to "borrow" lots of code from cargo-call-stack and that uses xmas-elf
<re_irc> <@dirbaio:matrix.org> so maybe I should use that
<re_irc> <@mabez:matrix.org> I think xmas_elf is in a spot where it does what it needs to, I don't think it aims to completely cover the ELF spec therefore there isn't much active development
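For context on what these crates wrap, the ELF identification bytes at the start of the file are simple enough to read by hand. A minimal, crate-free sketch (field meanings per the ELF specification's `e_ident` layout):

```rust
// Read the ELF identification header: magic, class, and endianness.
// Returns (class, endianness): class 1 = 32-bit, 2 = 64-bit;
// endianness 1 = little, 2 = big. Crates like object/xmas-elf wrap this
// (plus the rest of the headers) behind typed APIs.
fn parse_elf_ident(bytes: &[u8]) -> Option<(u8, u8)> {
    // magic: 0x7f 'E' 'L' 'F'
    if bytes.len() < 6 || bytes[..4] != [0x7f, b'E', b'L', b'F'] {
        return None;
    }
    Some((bytes[4], bytes[5]))
}
```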
<re_irc> <@mabez:matrix.org> What are you working on by the way?
<re_irc> <@dirbaio:matrix.org> yeah, makes sense
<re_irc> <@dirbaio:matrix.org> i'm trying to make a tool similar to "cargo-call-stack" but that doesn't require "cargo" integration
<re_irc> <@dirbaio:matrix.org> "cargo-call-stack" builds your bin with special flags and a special rustc wrapper
<re_irc> <@dirbaio:matrix.org> so to actually use it in CI you need to build twice
IlPalazzo-ojiisa has quit [Remote host closed the connection]
<re_irc> <@dirbaio:matrix.org> and there's no guarantee that the binary in both builds is the same. So maybe the "cargo-call-stack"-analyzed one passes the checks but the one you actually deploy doesn't
<re_irc> <@dirbaio:matrix.org> so what I want is: do regular "cargo build", then run the tool on the already-built elf, the same one you deploy
<re_irc> <@dirbaio:matrix.org> i've found some flags that cause LTO to embed the llvm bitcode into the final linked .elf (alongside the machine code) so I think it should be doable
<re_irc> <@dirbaio:matrix.org> 🤷
<re_irc> <@mabez:matrix.org> Ah I see! I would be very much interested in something like that
<re_irc> <@dirbaio:matrix.org> plus "cargo-call-stack" has a homebrew parser for the llvm IR text format 😱
<re_irc> <@dirbaio:matrix.org> I'm going to try using the binary format with "llvm-sys" :
<re_irc> <@dirbaio:matrix.org> -:
<re_irc> <@mabez:matrix.org> : Do you still need to pass "-Zstack-sizes" when building the binary?
<re_irc> <@dirbaio:matrix.org> yes
<re_irc> <@dirbaio:matrix.org> " -Cembed-bitcode=yes -Clinker-plugin-lto -Clink-arg=-mllvm=-lto-embed-bitcode=optimized -Clink-arg=-mllvm=-stack-size-section -Zemit-stack-sizes"
<re_irc> <@dirbaio:matrix.org> 🤪
<re_irc> <@mabez:matrix.org> It would be really nice to see a standalone tool, one that uses the LLVM internal library for parsing; I believe many of the issues/PRs open on cargo-call-stack are related to changes in IR formatting or missing parser code completely
<re_irc> <@dirbaio:matrix.org> yeah
<re_irc> <@dirbaio:matrix.org> I sent a PR with some fixes once
<re_irc> <@dirbaio:matrix.org> and now it's broken again 🤪
<re_irc> <@mabez:matrix.org> : How does embedding lto bitcode help here by the way? It seems like the original only uses "embed-bitcode", I'm guessing these are newer options?
<re_irc> <@dirbaio:matrix.org> and that parser is not nice to debug, it doesn't give you file:line of failure
<re_irc> <@dirbaio:matrix.org> (there's code that's supposed to do that but it doesn't work, I didn't try troubleshooting it..)
<re_irc> <@dirbaio:matrix.org> : the original?
<re_irc> <@dirbaio:matrix.org> * original cargo-call-stack?
<re_irc> <@mabez:matrix.org> Sorry, cargo call stack
<re_irc> <@dirbaio:matrix.org> it uses -Cemit=llvm-ir to get rustc to spit out the text-format .ll
<re_irc> <@dirbaio:matrix.org> -Cembed-bitcode=yes embeds it in binary-format in the elf instead
<re_irc> <@dirbaio:matrix.org> "-Clink-arg=-mllvm=-lto-embed-bitcode=optimized" is the lld equivalent of "-Cembed-bitcode=yes"
<re_irc> "-Clink-arg=-mllvm=-stack-size-section" is the lld equivalent of "-Zemit-stack-sizes"
<re_irc> <@dirbaio:matrix.org> they're needed because i'm using linker-plugin-lto to get lld to do the lto instead of rustc
<re_irc> <@dirbaio:matrix.org> because otherwise rustc would put the ir in the intermediate .o/.rlib but the linker would strip it away
<re_irc> <@dirbaio:matrix.org> not sure if there's a way to get the linker to keep them
<re_irc> <@mabez:matrix.org> Ah I see, that's the bit I was misunderstanding, thanks
<re_irc> <@dirbaio:matrix.org> and you still need the rustc ones because, for _reasons_, "compiler-builtins" is codegen'd separately, not in the main LTO step
<re_irc> <@dirbaio:matrix.org> and yeah the linker is still stripping the compiler-builtins ir from the final .elf
<re_irc> <@dirbaio:matrix.org> grrr
<re_irc> <@dirbaio:matrix.org> this might need a linker wrapper, I'm not sure yet
<re_irc> <@dirbaio:matrix.org> i'd love to avoid it
<re_irc> <@dirbaio:matrix.org> OTOH a linker wrapper would allow it to work without LTO I think
<re_irc> <@dirbaio:matrix.org> no idea
<re_irc> <@dirbaio:matrix.org> * I have no idea what I'm doing lols
<re_irc> <@avery71:matrix.org> : https://xkcd.com/2501/
<re_irc> <@dirbaio:matrix.org> Running `target/debug/stackcheck -i ../akiles/firmware/ak-application/target/thumbv7em-none-eabi/release/application`
<re_irc> Hello, world!
<re_irc> Illegal instruction (core dumped)
<re_irc> <@dirbaio:matrix.org> C ❤️
<re_irc> <@dirbaio:matrix.org> welp. I guess this is the downside of using LLVM proper instead of a handmade parser 🙃
<re_irc> <@whitequark:matrix.org> wait, what's the context?
<re_irc> <@dirbaio:matrix.org> trying to parse the llvm bitcode of a compiled rust program
<re_irc> <@dirbaio:matrix.org> and it just ... explodes
<re_irc> <@avery71:matrix.org> So the IL?
<re_irc> <@dirbaio:matrix.org> i'm having exactly this issue https://gitlab.com/taricorp/llvm-sys.rs/-/issues/23
<re_irc> <@dirbaio:matrix.org> LLVMGetOrdering is returning 3 which is an invalid enum value :(
<re_irc> <@dirbaio:matrix.org> closed 11mo ago 🤪
<re_irc> <@dirbaio:matrix.org> trying to use the "llvm-ir" crate