ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
rardiol has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
lehmrob has quit [Ping timeout: 265 seconds]
lehmrob has joined #rust-embedded
rardiol has joined #rust-embedded
dc740 has quit [Remote host closed the connection]
rardiol has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
lehmrob has quit [Ping timeout: 268 seconds]
<re_irc> <@eldruin:matrix.org> "embedded-hal" "1.0.0-alpha.10" (https://github.com/rust-embedded/embedded-hal/releases/tag/v1.0.0-alpha.10) along with new releases of "embedded-hal-async", "embedded-hal-bus" and "embedded-hal-nb" have just been published! Thanks go to all the contributors as well as everyone else involved in the discussions 🎉
lehmrob has joined #rust-embedded
<re_irc> <@ryan-summers:matrix.org> Does probe-run support a TOML configuration file for specifying specific probes (like Embed.local.toml)?
<re_irc> <@eldruin:matrix.org> hmm, the chat client is giving me some errors. I hope you saw that there are new "embedded-hal" (https://github.com/rust-embedded/embedded-hal/releases/tag/v1.0.0-alpha.10) and related crates alpha releases
<re_irc> <@ryan-summers:matrix.org> Very weird. I remember seeing it literally like 15 minutes ago, but now it's gone
IlPalazzo-ojiisa has joined #rust-embedded
lehmrob has quit [Ping timeout: 265 seconds]
rardiol has joined #rust-embedded
<re_irc> <@maarten2000ha:matrix.org> Are there people here who are able to build the esp-idf template on an M1 chip? I have been having some issues with it for quite some time and I want to get into embedded programming, but it's quite difficult if I can't even build the project.
<re_irc> ✅ rust installed, ✅ cargo-generate, ldproxy, espup, espflash, cargo-espflash installed, ✅ nightly toolchain installed and set as default, ✅ espup installation and export, ✅ clean project, ❌ able to build project. gist link of the latest error https://gist.github.com/maarten2000ha/4a0beadb43e204cb98ca9f6b96e38dd8
rardiol has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
tafa has quit [Quit: ZNC - https://znc.in]
tafa has joined #rust-embedded
emerent has quit [Ping timeout: 260 seconds]
emerent has joined #rust-embedded
<re_irc> <@thejpster:matrix.org> You could try the esp-rs room?
rardiol has joined #rust-embedded
cr1901 has quit [Read error: Connection reset by peer]
cr1901 has joined #rust-embedded
<re_irc> <@dngrs:matrix.org> : from your log it looks like you're building for xtensa. As far as I know xtensa support is not yet merged in "normal" nightly, and you need espressif's fork to build. Might be worth double checking the instructions (https://esp-rs.github.io/book/installation/installation.html)
<re_irc> <@dngrs:matrix.org> (skip the RISC-V section and go straight to Xtensa)
IlPalazzo-ojiisa has quit [Quit: Leaving.]
IlPalazzo-ojiisa has joined #rust-embedded
<re_irc> <@dngrs:matrix.org> heads up for anyone experiencing defmt issues, seems to me it's currently best to pin it to "defmt="=0.3.2"" (otherwise you get wire format v4, and "probe-run" hasn't yet been updated to support that)
fooker has quit [Quit: WeeChat 3.7.1]
<re_irc> <@heniluci:matrix.org> Hey, hope you are all doing well!
<re_irc> I've been working on my own embedded Rust project (a "no_std" KYBER implementation), and have been trying to run timing-consistency tests on my microcontroller. As such, I'm currently trying to validate that the number of clock cycles is constant for each of my critical functions, and I can measure the clock cycles directly through cortex_m::Peripherals
<re_irc> However, I can't find any benchmarking framework which supports measuring this, or even any projects that take measurements (be they clock cycles or real-world time) of a microcontroller's runtime.
<re_irc> As such, are there any good libraries/examples on profiling execution time on a microcontroller, and if not, are there any examples on how to write your own benchmarking framework for an embedded system?
fooker has joined #rust-embedded
rardiol has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
<re_irc> <@thejpster:matrix.org> I think you want CYCCNT. Which chip are you using?
markov_twain has joined #rust-embedded
<re_irc> <@heniluci:matrix.org> : I'm using an M4 chip; I've been measuring the time like this:
<re_irc> let mut peripherals = Peripherals::take().unwrap();
<re_irc> peripherals.DWT.enable_cycle_counter();
<re_irc> peripherals.DWT.set_cycle_count(0);
<re_irc> accum += 1;
<re_irc> time = peripherals.DWT.cyccnt.read();
<re_irc> This does get me an individual measurement; however, I'm consistently running into problems where it gets optimized away 😒. Moreover, I've been trying to write a proper function-testing framework, but that's been more difficult than expected
<re_irc> <@heniluci:matrix.org> * moreover, I believe I'm using that counter via the "cortex_m" crate
<re_irc> <@heniluci:matrix.org> This has been my shot at writing a proper benchmarking function; however, it's a bit dirty, and I've been finding it consistently optimizes away my benchmarks, so it's not working rn 😒
<re_irc> use core::hint::black_box;
<re_irc> use cortex_m::Peripherals;
<re_irc> #[inline(never)]
<re_irc> #[no_mangle]
<re_irc> fn benchmark<const TRIALS: usize, const ARG_LIST_SIZE: usize, F, T, U>(
<re_irc> function: F,
<re_irc> args_list: &[T; ARG_LIST_SIZE],
<re_irc> ) -> [[u32; TRIALS]; ARG_LIST_SIZE]
<re_irc> where
<re_irc> F: Fn(&T) -> U,
<re_irc> U: Copy,
<re_irc> {
<re_irc> let mut peripherals = Peripherals::take().unwrap();
<re_irc> let mut results = [[0u32; TRIALS]; ARG_LIST_SIZE];
<re_irc> let mut elapsed_cycles;
<re_irc> let mut result;
<re_irc> for trial_index in 0..TRIALS {
<re_irc> for (arg_index, args) in args_list.iter().enumerate() {
<re_irc> peripherals.DWT.enable_cycle_counter();
<re_irc> // Run the function and measure the cycles
<re_irc> peripherals.DWT.set_cycle_count(0);
<re_irc> result = function(args);
<re_irc> elapsed_cycles = peripherals.DWT.cyccnt.read();
<re_irc> // Prevent the compiler from optimizing away the benchmark code
<re_irc> black_box(elapsed_cycles);
<re_irc> black_box(result);
<re_irc> // hprintln!("Total elapsed clock cycles: {:}", elapsed_cycles);
<re_irc> results[arg_index][trial_index] = elapsed_cycles;
<re_irc> }
<re_irc> }
<re_irc> results
<re_irc> }
<re_irc> <@jamesmunns:beeper.com> Which specific M4 you are using is useful, some vendors (like ST I think?) sometimes need a specific sequence to use DWT if the debugger is not attached
<re_irc> <@heniluci:matrix.org> : I'm using an XMC4500 microcontroller
<re_irc> <@heniluci:matrix.org> XMC4500 Relax Lite Kit
<re_irc> <@heniluci:matrix.org> oh, and as a disclaimer, I can get this working on individual functions
<re_irc> <@heniluci:matrix.org> my big issue rn is:
<re_irc> a) Is there an out-of-the-box solution that already solves this?
<re_irc> b) If not, then how can I stop my own benchmark function from optimizing itself to death?
<re_irc> <@jamesmunns:beeper.com> > I can get this working on individual functions
<re_irc> As in you see non-zero DWT counts?
<re_irc> <@heniluci:matrix.org> yep; see
<re_irc> #![cfg_attr(not(test), no_std)]
<re_irc> #![cfg_attr(not(test), no_main)]
<re_irc> #![cfg_attr(test, allow(unused_imports))]
<re_irc> // pick a panicking behaviour
<re_irc> #[cfg(not(test))]
<re_irc> #[cfg(debug_assertions)]
<re_irc> use panic_semihosting as _;
<re_irc> // release profile: minimize the binary size of the application
<re_irc> #[cfg(not(test))]
<re_irc> #[cfg(not(debug_assertions))]
<re_irc> use panic_abort as _; // requires nightly
<re_irc> use cortex_m::Peripherals;
<re_irc> use cortex_m_rt::entry;
<re_irc> use cortex_m_semihosting::hprintln;
<re_irc> // #[cfg(not(test))]
<re_irc> #[entry]
<re_irc> fn main() -> ! {
<re_irc> let mut peripherals = Peripherals::take().unwrap();
<re_irc> peripherals.DWT.enable_cycle_counter();
<re_irc> let mut accum = 0;
<re_irc> let mut time;
<re_irc> loop {
<re_irc> // NOTE: inspecting the assembly of this shows that rust *does* shuffle the ordering of instructions for optimal code execution
<re_irc> peripherals.DWT.set_cycle_count(0);
<re_irc> accum += 1;
<re_irc> time = peripherals.DWT.cyccnt.read();
<re_irc> hprintln!(
<re_irc> "Clock count {}: {} -> {}. Time taken is {}",
<re_irc> accum,
<re_irc> 0,
<re_irc> time,
<re_irc> time
<re_irc> );
<re_irc> // Sometimes, the first instruction will take a few more/less cycles
<re_irc> // if (time_2).abs_diff(time_1 + 6) > 1 {
<re_irc> // panic!("Warning! Time was not 6!")
<re_irc> // }
<re_irc> // panic!("Done!")
<re_irc> }
<re_irc> }
<re_irc> This yields
<re_irc> <@heniluci:matrix.org> (p.s. sorry, got my messages the wrong way round 😅)
<re_irc> <@jamesmunns:beeper.com> I dunno what you return, but you could either return "[[(u32, T); TRIALS]; ARG_LIST_SIZE]" and print it/check they are all equal to make sure it is used, or pass in a single "&mut T" or "&mut MaybeUninit<T>" and do a "write_volatile()" to it
<re_irc> <@jamesmunns:beeper.com> but I don't know anything out of the box for this, sadly
<re_irc> <@heniluci:matrix.org> : why would the "write_volatile()" solve it in this situation?
<re_irc> <@heniluci:matrix.org> (sorry if this ends up being a bit of a noob question 😅)
<re_irc> <@jamesmunns:beeper.com> volatile writes can't be elided by the compiler
<re_irc> <@jamesmunns:beeper.com> I guess black_box should have the same effect, but adding volatile writes is usually the other thing I do to make sure the compiler doesn't elide something
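A minimal sketch of the volatile-sink idea being described here (illustrative only; `sink_result` and the usage lines are made-up names, not code from the chat):

```rust
use core::mem::MaybeUninit;
use core::ptr::write_volatile;

/// Hypothetical helper: "use" a value through a volatile write so the
/// compiler cannot elide the computation that produced it.
/// Note: the written value is never read back or dropped.
fn sink_result<T>(out: &mut MaybeUninit<T>, value: T) {
    // SAFETY: `out` points to valid, writable storage for a `T`.
    unsafe { write_volatile(out.as_mut_ptr(), value) };
}

// Possible usage inside the benchmark loop (sketch):
//     let mut out = MaybeUninit::uninit();
//     let result = function(args);
//     let elapsed_cycles = peripherals.DWT.cyccnt.read();
//     sink_result(&mut out, result);
```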
<re_irc> <@grantm11235:matrix.org> black_box should work, just be sure to use it for your inputs and your outputs
<re_irc> <@jamesmunns:beeper.com> Yes, they are already using it on the output of their function, and I'm aware of black box.
<re_irc> <@jamesmunns:beeper.com> (I didn't know it was stable though now!)
<re_irc> <@grantm11235:matrix.org> : oops, I guess I should have read more carefully
<re_irc> <@jamesmunns:beeper.com> You are right though! They are not using it on the inputs, which the docs suggest
<re_irc> <@jamesmunns:beeper.com> heniluci you could try "result = function(black_box(args));" too :p
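Put together, the measurement kernel being suggested would look roughly like this (a sketch reusing the same cortex-m DWT calls already shown in this thread; `measure_once` is an illustrative name):

```rust
use core::hint::black_box;
use cortex_m::Peripherals;

// Sketch: black_box the input so the call can't be constant-folded,
// and black_box the result so the computation can't be discarded.
#[inline(never)]
fn measure_once<F, T, U>(function: F, args: &T, p: &mut Peripherals) -> u32
where
    F: Fn(&T) -> U,
{
    p.DWT.enable_cycle_counter();
    p.DWT.set_cycle_count(0);
    let result = function(black_box(args));
    black_box(result);
    p.DWT.cyccnt.read()
}
```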
<re_irc> <@heniluci:matrix.org> : ooo, good call
<re_irc> <@heniluci:matrix.org> I also tried out something else in the meantime
<re_irc> <@heniluci:matrix.org> and that _may_ have worked?
<re_irc> <@heniluci:matrix.org> #[inline(never)]
<re_irc> fn benchmark_single<F, T, U>(function: F, args: &T, peripheral: &mut Peripherals) -> u32
<re_irc> where
<re_irc> F: Fn(&T) -> U,
<re_irc> {
<re_irc> peripheral.DWT.enable_cycle_counter();
<re_irc> // Run the function and measure the cycles
<re_irc> peripheral.DWT.set_cycle_count(0);
<re_irc> black_box(function(args));
<re_irc> peripheral.DWT.cyccnt.read()
<re_irc> }
<re_irc> // #[no_mangle]
<re_irc> #[inline(never)]
<re_irc> fn benchmark<const TRIALS: usize, const ARG_LIST_SIZE: usize, F, T, U>(
<re_irc> function: F,
<re_irc> args_list: &[T; ARG_LIST_SIZE],
<re_irc> ) -> [[u32; TRIALS]; ARG_LIST_SIZE]
<re_irc> where
<re_irc> F: Fn(&T) -> U,
<re_irc> U: Copy,
<re_irc> {
<re_irc> let mut peripherals = Peripherals::take().unwrap();
<re_irc> let mut results = [[0u32; TRIALS]; ARG_LIST_SIZE];
<re_irc> let mut elapsed_cycles: u32;
<re_irc> for trial_index in 0..TRIALS {
<re_irc> for (arg_index, args) in args_list.iter().enumerate() {
<re_irc> elapsed_cycles = benchmark_single(&function, args, &mut peripherals);
<re_irc> // Prevent the compiler from optimizing away the benchmark code
<re_irc> black_box(elapsed_cycles);
<re_irc> // hprintln!("Total elapsed clock cycles: {:}", elapsed_cycles);
<re_irc> results[arg_index][trial_index] = elapsed_cycles;
<re_irc> }
<re_irc> }
<re_irc> results
<re_irc> }
<re_irc> <@heniluci:matrix.org> tried splitting it into 2 functions, and used "#[inline(never)]" to avoid any mixing
<re_irc> <@heniluci:matrix.org> but have only tested it against a _very_ basic function
<re_irc> <@heniluci:matrix.org> : can confirm it _does_ also work with this, but it also increases overhead?
<re_irc> trial code is:
<re_irc> // An example function to be benchmarked
<re_irc> fn my_function_with_arg(x: &u32) -> u32 {
<re_irc> // hprintln!("Run function!");
<re_irc> x + 2
<re_irc> }
<re_irc> // Run the benchmark
<re_irc> #[entry]
<re_irc> fn main() -> ! {
<re_irc> hprintln!("Starting main!");
<re_irc> let results_1 = benchmark::<3, 4, _, _, _>(my_function_with_arg, &[1, 2, 3, 4]);
<re_irc> // let results_2 = benchmark::<10, 3, _, _, _>(my_function_with_args, &[(1, 2), (2, 3), (3, 4)]);
<re_irc> // ::<10, 3, , (u32, u32), u32>
<re_irc> hprintln!("Ending benchmark! results were {:?}", results_1);
<re_irc> loop {
<re_irc> panic!("Done!");
<re_irc> }
<re_irc> }
<re_irc> <@heniluci:matrix.org> : with the change the measurement goes from 4 clock cycles to 6 clock cycles
<re_irc> <@heniluci:matrix.org> > [[4, 4, 4], [4, 4, 4], [4, 4, 4], [4, 4, 4]]
<re_irc> > But all measurements are consistent!
<re_irc> <@jamesmunns:beeper.com> hard to tell if that's a good thing (more accurate) or a bad thing (more overhead) without looking at the asm
<re_irc> <@heniluci:matrix.org> true
<re_irc> <@heniluci:matrix.org> seems like I need to try a few crazier functions to see if I can push it to the limit
<re_irc> <@heniluci:matrix.org> also
<re_irc> <@heniluci:matrix.org> I don't know if it works with multiple arguments
<re_irc> <@heniluci:matrix.org> do you know if there's a way to unpack tuples into function arguments?
<re_irc> <@heniluci:matrix.org> or something equivilent to that
<re_irc> <@jamesmunns:beeper.com> nah, rust doesn't have splatting
<re_irc> <@jamesmunns:beeper.com> you take a fn arg tho, so you could do it with a closure
<re_irc> <@jamesmunns:beeper.com> like
<re_irc> let results_1 = benchmark::<3, 4, _, _, _>(|(a, b)| { my_function_with_arg(a, b) }, &[(1, 1), (2, 2), (3, 3), (4, 4)]);
<re_irc> <@heniluci:matrix.org> : nice, will give that a try
<re_irc> <@jamesmunns:beeper.com> (specifically, you are taking Fn not fn, the former is a trait, which closures and functions both implement, the latter of which is specifically a function pointer)
<re_irc> <@jamesmunns:beeper.com> no idea how that'll affect your benchmarking, but syntax wise you can :D
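A tiny stand-alone illustration of the `Fn`-vs-`fn` point above (all names made up): both a plain function item and a closure satisfy the same `Fn(&T) -> U` bound, which is why the tuple-destructuring closure can be passed to `benchmark`.

```rust
// Hypothetical example: `call_with` mirrors the `F: Fn(&T) -> U` bound
// used in `benchmark`, so both a fn item and a closure can be passed.
fn add_two(x: &u32) -> u32 {
    x + 2
}

fn call_with<F, T, U>(f: F, arg: &T) -> U
where
    F: Fn(&T) -> U,
{
    f(arg)
}

fn demo() -> (u32, u32) {
    let a = call_with(add_two, &1);                          // fn item
    let b = call_with(|(x, y): &(u32, u32)| x + y, &(1, 2)); // closure over a tuple
    (a, b)
}
```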
<re_irc> <@heniluci:matrix.org> hmmm, this does seem to panic on my microcontroller
<re_irc> <@heniluci:matrix.org> not sure why
<re_irc> <@jamesmunns:beeper.com> That.... is not expected :p
<re_irc> <@jamesmunns:beeper.com> nothing in a closure should be a panic by itself
<re_irc> <@heniluci:matrix.org> #![feature(test)]
<re_irc> #![cfg_attr(test, allow(unused_imports))]
<re_irc> #![cfg_attr(not(test), no_std)]
<re_irc> #![cfg_attr(not(test), no_main)]
<re_irc> // pick a panicking behaviour
<re_irc> #[cfg(not(test))]
<re_irc> #[cfg(debug_assertions)]
<re_irc> use panic_semihosting as _;
<re_irc> // release profile: minimize the binary size of the application
<re_irc> #[cfg(not(test))]
<re_irc> #[cfg(not(debug_assertions))]
<re_irc> use panic_abort as _; // requires nightly
<re_irc> use cortex_m_rt::entry;
<re_irc> use cortex_m_semihosting::hprintln;
<re_irc> use core::hint::black_box;
<re_irc> use cortex_m::Peripherals;
<re_irc> #[inline(never)]
<re_irc> fn benchmark_single<F, T, U>(function: F, args: &T, peripheral: &mut Peripherals) -> u32
<re_irc> where
<re_irc> F: Fn(&T) -> U,
<re_irc> {
<re_irc> peripheral.DWT.enable_cycle_counter();
<re_irc> // Run the function and measure the cycles
<re_irc> peripheral.DWT.set_cycle_count(0);
<re_irc> black_box(function(args));
<re_irc> peripheral.DWT.cyccnt.read()
<re_irc> }
<re_irc> // #[no_mangle]
<re_irc> #[inline(never)]
<re_irc> fn benchmark<const TRIALS: usize, const ARG_LIST_SIZE: usize, F, T, U>(
<re_irc> function: F,
<re_irc> args_list: &[T; ARG_LIST_SIZE],
<re_irc> ) -> [[u32; TRIALS]; ARG_LIST_SIZE]
<re_irc> where
<re_irc> F: Fn(&T) -> U,
<re_irc> U: Copy,
<re_irc> {
<re_irc> let mut peripherals = Peripherals::take().unwrap();
<re_irc> let mut results = [[0u32; TRIALS]; ARG_LIST_SIZE];
<re_irc> let mut elapsed_cycles: u32;
<re_irc> for trial_index in 0..TRIALS {
<re_irc> for (arg_index, args) in args_list.iter().enumerate() {
<re_irc> elapsed_cycles = benchmark_single(&function, args, &mut peripherals);
<re_irc> // Prevent the compiler from optimizing away the benchmark code
<re_irc> black_box(elapsed_cycles);
<re_irc> // hprintln!("Total elapsed clock cycles: {:}", elapsed_cycles);
<re_irc> results[arg_index][trial_index] = elapsed_cycles;
<re_irc> }
<re_irc> }
<re_irc> results
<re_irc> }
<re_irc> // An example function to be benchmarked
<re_irc> fn my_function_with_args(x: &u32, y: &u32) -> u32 {
<re_irc> // hprintln!("Run function!");
<re_irc> x + y
<re_irc> }
<re_irc> // An example function to be benchmarked
<re_irc> fn my_function_with_arg(x: &u32) -> u32 {
<re_irc> // hprintln!("Run function!");
<re_irc> x + 2
<re_irc> }
<re_irc> // Run the benchmark
<re_irc> #[entry]
<re_irc> fn main() -> ! {
<re_irc> hprintln!("Starting main!");
<re_irc> let results_1 = benchmark::<3, 4, _, _, _>(my_function_with_arg, &[1, 2, 3, 4]);
<re_irc> hprintln!("Ending benchmark! results were {:?}", results_1);
<re_irc> let results_2 = benchmark::<3, 4, _, _, _>(
<re_irc> |(a, b)| my_function_with_args(a, b),
<re_irc> &[(1, 1), (2, 2), (3, 3), (4, 4)],
<re_irc> );
<re_irc> hprintln!("Ending benchmark! results were {:?}", results_2);
<re_irc> loop {
<re_irc> hprintln!("End!");
<re_irc> panic!("Done!")
<re_irc> }
<re_irc> }
<re_irc> This is my total code rn; it's panicking when it moves into the first loop of the "results_2" iteration
<re_irc> <@jamesmunns:beeper.com> do you have some kind of array size index?
<re_irc> <@heniluci:matrix.org> and commenting out all measurement code doesn't solve it, so it's something with running this function
<re_irc> <@jamesmunns:beeper.com> you could try using iter zip instead of indexing
<re_irc> <@heniluci:matrix.org> : yh, the 4 in "benchmark::<3, 4, _, _, _>" denotes that it accepts a list of 4 arguments
<re_irc> <@heniluci:matrix.org> needed it cuz it uses an array for the results, so it needs to know the length at compile time
<re_irc> <@heniluci:matrix.org> I can make it inferred by adding "#![feature(generic_arg_infer)]", but that doesn't solve the closure problem
<re_irc> <@jamesmunns:beeper.com> Yeah, I can't really browse the inline code here very well. it might be easier to see in a gist. What panic do you get? my only guess is that you have some kind of indexing error somewhere
<re_irc> <@heniluci:matrix.org> maybe it's to do with borrowing the closure?
<re_irc> <@heniluci:matrix.org> "elapsed_cycles = benchmark_single(&function, args, &mut peripherals);" I'm borrowing the function here
<re_irc> <@jamesmunns:beeper.com> those should be compile errors not runtime errors.
<re_irc> <@heniluci:matrix.org> true
<re_irc> <@jamesmunns:beeper.com> fn benchmark<const TRIALS: usize, const ARG_LIST_SIZE: usize, F, T, U>(
<re_irc> function: F,
<re_irc> args_list: &[T; ARG_LIST_SIZE],
<re_irc> ) -> [[u32; TRIALS]; ARG_LIST_SIZE]
<re_irc> where
<re_irc> F: Fn(&T) -> U,
<re_irc> U: Copy,
<re_irc> {
<re_irc> let mut peripherals = Peripherals::take().unwrap();
<re_irc> let mut results = [[0u32; TRIALS]; ARG_LIST_SIZE];
<re_irc> for (arg, trials) in args_list.iter().zip(results.iter_mut()) {
<re_irc> for trial in trials.iter_mut() {
<re_irc> *trial = black_box(
<re_irc> benchmark_single(&function, arg, &mut peripherals)
<re_irc> );
<re_irc> }
<re_irc> }
<re_irc> results
<re_irc> }
<re_irc> <@jamesmunns:beeper.com> or something like that
<re_irc> <@heniluci:matrix.org> "panicked at 'called "Option::unwrap()"on a"None" value', examples/time_bench.rs:45:47"
<re_irc> <@heniluci:matrix.org> error is
<re_irc> <@heniluci:matrix.org> oh
<re_irc> <@heniluci:matrix.org> OH
<re_irc> <@heniluci:matrix.org> I don't think you can unwrap the peripheral twice!
<re_irc> <@jamesmunns:beeper.com> "let mut peripherals = Peripherals::take().unwrap();"
<re_irc> <@heniluci:matrix.org> that's it
<re_irc> <@jamesmunns:beeper.com> lol no
<re_irc> <@jamesmunns:beeper.com> good catch :p
<re_irc> <@jamesmunns:beeper.com> either pass it in, or use the unsafe "steal" instead
<re_irc> <@jamesmunns:beeper.com> (or just pass in the DWT as an arg to benchmark)
<re_irc> <@heniluci:matrix.org> : oooh, will try that out
<re_irc> <@heniluci:matrix.org> otherwise, could turn it into a struct which implements a few methods
<re_irc> <@heniluci:matrix.org> and make it global?
<re_irc> <@jamesmunns:beeper.com> probably don't make it global
<re_irc> <@heniluci:matrix.org> : what would stop a user from invoking and creating 2 testing frameworks then?
<re_irc> <@heniluci:matrix.org> that both try to unwrap peripherals
<re_irc> <@heniluci:matrix.org> maybe also make it unwrap?
<re_irc> <@jamesmunns:beeper.com> unwrapping peripherals is usually a "just once at top of main" thing.
<re_irc> <@jamesmunns:beeper.com> if you write a function that takes the DWT periph, pass it in, or use something like "let dwt = unsafe { &*DWT::ptr() };"
<re_irc> <@jamesmunns:beeper.com> which is like "steal()" for a single peripheral
<re_irc> <@jamesmunns:beeper.com> if you write a function that takes the DWT periph, pass it in, or use something like "let dwt = unsafe { &*DWT::PTR };"
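A sketch of the "take once in main, pass the DWT down" option (function and variable names are illustrative; the DWT/DCB calls are the same ones used elsewhere in the thread):

```rust
use core::hint::black_box;
use cortex_m::peripheral::DWT;
use cortex_m::Peripherals;

// Sketch: the benchmark only borrows the DWT it needs instead of calling
// Peripherals::take() a second time (which returns None and panics on unwrap).
#[inline(never)]
fn benchmark_single<F, T, U>(function: F, args: &T, dwt: &mut DWT) -> u32
where
    F: Fn(&T) -> U,
{
    dwt.set_cycle_count(0);
    black_box(function(black_box(args)));
    dwt.cyccnt.read()
}

fn setup_and_run() -> u32 {
    let mut p = Peripherals::take().unwrap(); // once, at the top of main
    p.DCB.enable_trace();                     // see the DCB discussion below
    p.DWT.enable_cycle_counter();
    benchmark_single(|x: &u32| x + 2, &1, &mut p.DWT)
}
```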
<re_irc> <@grantm11235:matrix.org> By the way, it is possible to make a function that is generic over the number of arguments in another function, but it requires several nightly-only features https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=a1113c9e34369ac10009bc208b22d64f
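For completeness, a stable sketch of a related idea: instead of the nightly features from the playground link, a small hand-rolled trait can cover a couple of fixed arities (all names here are illustrative, not from the chat):

```rust
// Illustrative trait: lets a benchmark accept either a 1-argument or a
// 2-argument function, with the arguments packed in a tuple.
trait CallWith<Args> {
    type Output;
    fn call(&self, args: &Args) -> Self::Output;
}

impl<F, A, U> CallWith<(A,)> for F
where
    F: Fn(&A) -> U,
{
    type Output = U;
    fn call(&self, args: &(A,)) -> U {
        self(&args.0)
    }
}

impl<F, A, B, U> CallWith<(A, B)> for F
where
    F: Fn(&A, &B) -> U,
{
    type Output = U;
    fn call(&self, args: &(A, B)) -> U {
        self(&args.0, &args.1)
    }
}

// Example bounds a benchmark could use instead of `F: Fn(&T) -> U`:
fn demo(
    f1: impl CallWith<(u32,), Output = u32>,
    f2: impl CallWith<(u32, u32), Output = u32>,
) -> (u32, u32) {
    (f1.call(&(1,)), f2.call(&(1, 2)))
}
```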
<re_irc> <@peter9477:matrix.org> heniluci: This may not be relevant, and I didn't read back thoroughly here, but I did notice you were not enabling trace on DCB first, and at least on the chip I'm using that appears to be required. I have this code when I start the cycle counter... note the first line and the comment.
<re_irc> cp.DCB.enable_trace(); // required before enable_cycle_counter() per docs
<re_irc> cp.DWT.enable_cycle_counter();
<re_irc> <@peter9477:matrix.org> (If it's actually working without that for you, then I guess this is not relevant for your situation.)
<re_irc> <@heniluci:matrix.org> : I'll double check in case it does help
<re_irc> <@peter9477:matrix.org> I think someone earlier mentioned something to the effect that you may need that "if not using debugging (SWD)"... this is probably why I have that, since I do this for stats like you even when not on the probe.
<re_irc> <@heniluci:matrix.org> : but I know it was mentioned earlier that some vendors need a specific sequence before using DWT; perhaps that's what you're talking about?
<re_irc> <@peter9477:matrix.org> yup :)
<re_irc> <@jamesmunns:beeper.com> : Yeah, this is what I meant
<re_irc> <@peter9477:matrix.org> I'm guessing... unfortunately I didn't link to the docs that told me that.
<re_irc> <@jamesmunns:beeper.com> I think most (all?) debuggers will enable it when they attache
<re_irc> <@jamesmunns:beeper.com> * attach
<re_irc> <@peter9477:matrix.org> Would that be fairly universal Cortex-M4 behaviour, or vendor-specific?
<re_irc> <@jamesmunns:beeper.com> you run into the problem when you want to run the same code when the debugger isn't attached
<re_irc> <@jamesmunns:beeper.com> : I know the sequence varies a bit chip to chip. I remember some working without that call to enable, while some needed it
<re_irc> <@peter9477:matrix.org> I can't see it mentioned at all in nRF52840 datasheet, so I assume not particular vendor-specific. I must have read it in ARM docs somewhere.
<re_irc> <@peter9477:matrix.org> * would assume not particularly
<re_irc> <@jamesmunns:beeper.com> (when the debugger is not attached)
<re_irc> <@jamesmunns:beeper.com> https://github.com/rtic-rs/rtic/issues/123
<re_irc> <@jamesmunns:beeper.com> Seems I ran into that on the nrf52 in 2018 :p
<re_irc> <@peter9477:matrix.org> Ah, found it mentioned here at least... maybe never did find it in ARM docs: https://docs.rs/cortex-m/0.7.7/cortex_m/peripheral/struct.DWT.html#method.enable_cycle_counter
<re_irc> <@peter9477:matrix.org> > Enables the cycle counter
<re_irc> > The global trace enable (DCB::enable_trace) should be set before enabling the cycle counter, the processor may ignore writes to the cycle counter enable if the global trace is disabled (implementation defined behaviour).
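Pulling the pieces from this thread together, a minimal init-order sketch (same API calls as quoted above; the function name is illustrative):

```rust
use cortex_m::Peripherals;

// Sketch: enable global trace in the DCB before the DWT cycle counter,
// which matters in particular when no debugger is attached to do it for you.
fn init_cycle_counter() -> Peripherals {
    let mut p = Peripherals::take().unwrap();
    p.DCB.enable_trace();         // global trace enable (see docs quote above)
    p.DWT.enable_cycle_counter(); // CYCCNT starts counting
    p.DWT.set_cycle_count(0);
    p
}
```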
IlPalazzo-ojiisa has quit [Remote host closed the connection]