<AdamHott[m]>
I ran a successful build for a micro:bit V2 project based on this example: https://github.com/nrf-rs/microbit/tree/main/examples/gpio-hal-printbuttons But when I run `cargo embed --features v2 --target thumbv7em-none-eabihf` I get "Error failed to flash" - "No loadable ELF sections were found." I think my project can't find my memory.x file, but I don't know what to change. My repo where I'm having problems is here:
<spinfast[m]>
pretty close to a PAC-like interface, I think, for unmappable register blocks (e.g. SPI/I2C devices) with chiptool
<spinfast[m]>
maybe it'll be a thing... `let mut wm8960 = Wm8960::new(&mut spi_bus); wm8960.lvol().modify(100); wm8960.rvol().modify(100);` perhaps coming to a driver soon
<spinfast[m]>
feels good to me tbh, but needs more fallibility given the spi/i2c aspect
<tiwalun[m]>
But you need to link the `link.x` script from the cortex-m crate, so you need to change the entry in `.cargo/config` from memory.x to link.x.
<tiwalun[m]>
Adam Hott If you use `arm-none-eabi-readelf -e`, you can see the actual sections and program headers, which is a bit more helpful.
<tiwalun[m]>
I would also recommend using .cargo/config.toml instead of just .cargo/config; it makes the format clearer.
<tiwalun[m]>
And you probably need the `--nmagic` flag for the linker. You should be able to use the build script from the `cortex-m-quickstart` repo, just remove the part with the `memory.x`.
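Putting the suggestions above together, a sketch of what the linker-related part of `.cargo/config.toml` might look like for this micro:bit V2 case (flag set borrowed from cortex-m-quickstart; treat it as a starting point to verify, not a known-good config):

```toml
[target.thumbv7em-none-eabihf]
rustflags = [
  # link.x is provided by cortex-m-rt and INCLUDEs memory.x itself,
  # so the project should not point at memory.x directly
  "-C", "link-arg=-Tlink.x",
  # keep section load addresses exact instead of page-aligning them
  "-C", "link-arg=--nmagic",
]

[build]
target = "thumbv7em-none-eabihf"
```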
crabbedhaloablut has quit [Read error: Connection reset by peer]
crabbedhaloablut has joined #rust-embedded
crabbedhaloablut has quit [Ping timeout: 255 seconds]
crabbedhaloablut has joined #rust-embedded
<JamesMunns[m]>
<adamgreig[m]> "if we already know what Duration..." <- π delay munns? What did I ever do to deserve that
<JamesMunns[m]>
I've always thought of "pnumo" to remember si prefix order (pico, nano, micro, milli, ø/no prefix), never combined it like that to get so close to my name :p
diondokter[m] has quit [Quit: Idle timeout reached: 172800s]
<marmrt[m]>
<adamgreig[m]> "meh, what do you do on the..." <- As you are allowed to delay for longer than specified, the default should just never return.
romancardenas[m] has quit [Quit: Idle timeout reached: 172800s]
IlPalazzo-ojiisa has joined #rust-embedded
nihalpasham[m] has joined #rust-embedded
<nihalpasham[m]>
A question that's been on my mind - any thoughts?
<nihalpasham[m]>
> It compiles rust code to spirv that can run on the gpu
<nihalpasham[m]>
yeah, I mentioned this one in my post. It's a pretty interesting project, but it's more of a custom solution for a specific need, i.e. adapting what the rust-gpu team is doing for GPUs to other kinds of HW (lowering Rust IRs to SPIR-V and optimizing); we would probably end up rewriting or duplicating effort from scratch for, say, a TPU.
<K900>
It's all LLVM on the backend
<K900>
The SPIRV backend is out of tree, but the MLIR stuff is likely getting into mainline LLVM eventually
<K900>
And from there it should be pretty easy to make a Rust target for
<nihalpasham[m]>
K900: LLVM is mostly well-suited for CPU code-gen. Although it does have some support for GPUs, from what I know it's not that great. Hence SPIR-V and others.
<K900>
Tell that to AMD, who use LLVM as the primary codegen backend in their GPU drivers
<K900>
On all platforms including Windows
<nihalpasham[m]>
K900: MLIR is actually a sub-project within LLVM, but it plays very well with LLVM as it's been donated to the LLVM foundation or group
<nihalpasham[m]>
> <@k900:0upti.me> The SPIRV backend is out of tree, but the MLIR stuff is likely getting into mainline LLVM eventually
<nihalpasham[m]>
K900: I don't know much about AMD, but I do see that a lot of HPC stuff and AI/ML compilers are all custom and may or may not rely on LLVM.
<K900>
It doesn't matter if they ingest MLIR
<nihalpasham[m]>
K900: this is true and I believe this is why a lot of new hardware is directly targeting MLIR (it's gained many vendor-specific dialects in a very short span of time). The backend used for code-gen can be their own or something more standardized or widely adopted like SPIR-V
<nihalpasham[m]>
> <@k900:0upti.me> It doesn't matter if they ingest MLIR
starblue has quit [Ping timeout: 276 seconds]
starblue has joined #rust-embedded
Guest7221 has joined #rust-embedded
Guest7221 has quit [Remote host closed the connection]
Guest7221 has joined #rust-embedded
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #rust-embedded
chrysn[m] has joined #rust-embedded
<chrysn[m]>
Other than users going "hm, what's that and why is it not logging", is there any harm expected from sprinkling defmt logging over a library? (By default the non-error messages are elided anyway; not sure whether I'd even make my panics go through defmt, or whether they're OK because I only use fixed strings there anyway).
<JamesMunns[m]>
"any harm" => the linker symbol tricks defmt uses don't work on osx and windows, iirc, so you likely want to ensure they are completely compiled out on those platforms
<JamesMunns[m]>
it creates linker names like defmt.{ lol this is json }.1, which aren't valid for darwin targets, for example
<chrysn[m]>
Thanks; that'll be tricky to test. (Maybe I'll just make defmt optional and switch crate::info! etc to no-ops then).
<chrysn[m]>
That's neat; limiting myself to things that work both on defmt and fmt is something I can probably do.
Guest7221 has left #rust-embedded [Error from remote client]
fu5ha[m] has joined #rust-embedded
<fu5ha[m]>
<K900> "It's all LLVM on the backend" <- No it's not. Rust-gpu doesn't use llvm at all, it replaces it with entirely custom spirv generation
<K900>
I meant MLIR, sorry
<fu5ha[m]>
Ah
gauteh[m] has quit [Quit: Idle timeout reached: 172800s]
t-moe[m] has joined #rust-embedded
<t-moe[m]>
log macros that either forward to defmt or log ;)
<t-moe[m]>
dirbaio[m]: At least for me it works to not depend on defmt in the application if it is not used. (given that you also remove defmt.x from linking)
<t-moe[m]>
But yes... I should probably clarify the readme and add an example of how to use it
<dirbaio[m]>
kay so you do need to depend on both defmt-or-log and defmt
<dirbaio[m]>
this is the reason I didn't make this into a separate crate 🥲
<JamesMunns[m]>
This isn't *your* problem, btw t-moe, it's just an unfortunate side effect of how proc macros work, that you can't expand into code that uses a dep that the expanded-into crate doesn't depend on.
<dirbaio[m]>
just copypasted fmt.rs into every single crate instead :D
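A hedged sketch of that "fmt.rs" pattern: each crate vendors a small macro module whose macros forward to `defmt` when a `defmt` feature is enabled and compile to (almost) nothing otherwise. All names here are illustrative, not defmt's actual internals.

```rust
// Consumes a format_args value so the fallback branch still type-checks
// (and evaluates) the arguments without printing anything.
#[doc(hidden)]
pub fn _discard(_args: core::fmt::Arguments<'_>) {}

// Crate-internal logging macro: forwards to defmt when the feature is on,
// otherwise expands to a no-op that still evaluates its arguments.
macro_rules! info {
    ($($arg:tt)*) => {{
        #[cfg(feature = "defmt")]
        ::defmt::info!($($arg)*);
        #[cfg(not(feature = "defmt"))]
        crate::_discard(core::format_args!($($arg)*));
    }};
}

fn main() {
    let mut evals = 0;
    info!("sensor = {}", { evals += 1; 23 });
    // Arguments are evaluated exactly once even when logging is compiled out.
    assert_eq!(evals, 1);
}
```

Because the macro lives inside each crate, the end-user crate never needs a defmt dependency unless it enables the feature, which is exactly why copy-pasting it around works where a shared wrapper crate doesn't.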
<dirbaio[m]>
I think this might be fixable in defmt though
<JamesMunns[m]>
Yeah, I think you could have more verbose versions of all the macros that take a path to expand into
<dirbaio[m]>
you can make a non-proc macro that expands to a proc macro call and pass $crate to it
<dirbaio[m]>
log does that I think
<dirbaio[m]>
doesn't help with the derives though
<JamesMunns[m]>
like defmt::log!(path::to::rexported::defmt, ...)
<dirbaio[m]>
it just sucks
<JamesMunns[m]>
then in defmt-or-log you could use the more verbose methods in your expansion
<dirbaio[m]>
no, defmt::log!(...) which expands to $crate::log_proc!($crate, ...) and then the proc macro can use the $crate token in its output
<dirbaio[m]>
this'd go into defmt itself
<t-moe[m]>
hang on. i can't follow that fast... (i'm rather new with proc macros and stuff...)
<dirbaio[m]>
then all the defmt macros will magically keep working if reexported
<dirbaio[m]>
yeah you do need that for attr macros or derives
<dirbaio[m]>
the $crate trick only works for function-like macros
<JamesMunns[m]>
maybe the proc-to-decl macro sandwich is enough, but if the info! macro expands in user code, it'd still need to be able to reference defmt, which means it must be visible in the end user
<dirbaio[m]>
not with $crate
<JamesMunns[m]>
TIL
<JamesMunns[m]>
I knew about $crate, I didn't know that it was allowed to refer to indirectly included crates, that's neat.
<dirbaio[m]>
yeah it always refers to "the crate that defined this macro" even if it's not visible from the code that calls the macro
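A single-crate sketch of the `$crate` trick being described. In the real case, `emit` would live in defmt and the macro would be re-exported through an intermediate crate; `$crate` still resolves to the defining crate. Names are invented for illustration.

```rust
// Stand-in for an item that lives in the macro-defining crate.
fn emit(msg: &str) -> String {
    format!("log: {msg}")
}

macro_rules! log {
    ($($arg:tt)*) => {
        // `$crate` always names the crate that *defined* this macro, even if
        // that crate isn't in scope where the macro is invoked. Proc macros
        // have no equivalent, hence the decl-macro wrapper idea that forwards
        // `$crate` as a token: `$crate::log_proc!($crate, ...)`.
        $crate::emit(&format!($($arg)*))
    };
}

fn main() {
    assert_eq!(log!("x = {}", 1), "log: x = 1");
}
```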
<dirbaio[m]>
I just wish there was an equivalent for proc macros π
<JamesMunns[m]>
dirbaio[m]: Yeah, I knew it allowed to reference stuff *not in scope*, but I didn't know that reached even into *indirect deps*.
<JamesMunns[m]>
(I avoid macros as much as possible, so I am less familiar with all the intricacies there :D )
<t-moe[m]>
You're right that the end-user crate has to depend on defmt.
<t-moe[m]>
But I designed this primarily for use in an application crate, where you would probably depend on defmt anyway, since you need to link it....
<t-moe[m]>
The crate also goes a bit further by adding the FormatOrDebug trait, which also helped me simplify my application quite a bit...
<jessebraham[m]>
FYI the GitHub Release for `embedded-hal@1.0.0-rc.2` was missed :)
<jessebraham[m]>
@dario:matrix.org I guess haha
<vollbrecht[m]>
that's the wrong dario
<jessebraham[m]>
Oops
<jessebraham[m]>
s/dario/dirbaio/
<jessebraham[m]>
Thanks haha
<dirbaio[m]>
ah yes
<dirbaio[m]>
annoying that they're not automatic
<dirbaio[m]>
I always do the tag then forget
<vollbrecht[m]>
i think the github release has been missed for the last 5 releases or so; every time it was days later than the crates.io release
<vollbrecht[m]>
it's not only dirbaio who missed it
<vollbrecht[m]>
better not make the mistake on the 1.0 release :D
StephenD[m] has joined #rust-embedded
<StephenD[m]>
In rtic on an stm32, what's the recommended way to get the elapsed time between two lines of code? Basically I need to run a while loop until I hit a timeout condition. I assume rtic monotonics help me here
Guest7221 has left #rust-embedded [Error from remote client]
Guest7221 has joined #rust-embedded
Guest7221 is now known as nex8192
<ryan-summers[m]>
<StephenD[m]> "In rtic on an stm32, what's..." <- I believe the normal approach would be to use an API _on_ the monotonic. I don't think RTIC necessarily provides you one directly
<ryan-summers[m]>
I.e. you give RTIC the monotonic for scheduling, but you can use it yourself as well
oleidinger[m] has quit [Quit: Idle timeout reached: 172800s]
nex8192 has left #rust-embedded [Error from remote client]
nex8192 has joined #rust-embedded
crabbedhaloablut has quit []
<StephenD[m]>
<ryan-summers[m]> "I.e. you give RTIC the monotonic..." <- A dumb question but how do I get the monotonic back from rtic?
IlPalazzo-ojiisa has quit [Quit: Leaving.]
<StephenD[m]>
Alright, it looks like I can do monotonics::now() and there's some rtic logic that makes that return an Instant. The trouble is I need to do that in a submodule. I could just pass in a closure that calls that now method; just curious if that's the recommended approach
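The closure idea can be sketched with host-testable types; the function and its name are invented, and the RTIC specifics (passing `|| monotonics::now()` and a fugit-style duration) are assumptions noted in comments, not verified against a particular rtic version.

```rust
use std::time::{Duration, Instant};

// Generic over "what time is it now", so an RTIC app can pass a closure over
// its monotonic while host tests pass `Instant::now`. `T` is an instant type,
// `D` the duration produced by subtracting two instants.
fn wait_for<N, C, T, D>(mut now: N, mut condition: C, timeout: D) -> bool
where
    N: FnMut() -> T,
    C: FnMut() -> bool,
    T: core::ops::Sub<T, Output = D> + Copy,
    D: PartialOrd,
{
    let start = now();
    while !condition() {
        if now() - start >= timeout {
            return false; // timed out before the condition became true
        }
    }
    true
}

fn main() {
    // Host-side usage; in RTIC the `now` argument would wrap the monotonic.
    let mut polls = 0;
    let done = wait_for(
        Instant::now,
        || { polls += 1; polls >= 3 },
        Duration::from_secs(1),
    );
    assert!(done);
    assert!(!wait_for(Instant::now, || false, Duration::from_millis(5)));
}
```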
<thejpster[m]>
<dirbaio[m]> "annoying that they're not..." <- They can be automatic - you just need to add the right action to the GHA workflow
<thejpster[m]>
I use softprops/action-gh-release@v1 but cargo-dist uses something else I think?
<thejpster[m]>
I think whatever cargo-dist generates even parses your CHANGELOG.md and fills in the release notes for you, saving you from doing it manually.
<thejpster[m]>
I didn't write that release note. I just pushed a tag.
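For reference, a minimal tag-triggered job using the action mentioned above might look roughly like this (tag pattern and checkout version are assumptions; cargo-dist generates something more elaborate):

```yaml
name: release
on:
  push:
    tags: ["v*"]

jobs:
  github-release:
    runs-on: ubuntu-latest
    permissions:
      contents: write  # required to create the GitHub Release
    steps:
      - uses: actions/checkout@v4
      - uses: softprops/action-gh-release@v1
        with:
          generate_release_notes: true
```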
<diondokter[m]>
Ah, through the powers of 'adding more generics'™️ I was able to remove generic_const_exprs. Though the API is a bit worse now...
<thejpster[m]>
I didn't write the action either, cargo-dist did.
<danielb[m]>
<thejpster[m]> "*cough* espressif *cough*" <- can we modify this to only include their Rust devs, please? π
<thejpster[m]>
ha ha
JamesSizeland[m] has joined #rust-embedded
<JamesSizeland[m]>
<dirbaio[m]> "we're evaluating the nrf21540 at..." <- Do we think there will be any impetus to revisit embedded BLE in the near future? I'm trying to work out how to add it into rust projects, particularly on nrf52 and not sure what the best option is.
<JamesMunns[m]>
<JamesSizeland[m]> "Do we think there will be any..." <- You've seen nrf-softdevice, right?