ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
IlPalazzo-ojiisa has quit [Remote host closed the connection]
<re_irc> <aja23> Hi all, I have a linker/project structure question (thank you in advance)!
<re_irc> I have a project that's currently arranged something like this:
<re_irc> project
<re_irc> |- .cargo
<re_irc> <dngrs (spookyvision@{github,cohost})> for better rendering put pastes either in triple backticks, as in
<re_irc> ```
<re_irc> content here
<re_irc> ```
<re_irc> <dngrs (spookyvision@{github,cohost})> with the triple backticks you can also add syntax highlighting if appropriate, e.g. with
<re_irc> ```rust
<re_irc> some();
<re_irc> rust();
<re_irc> ```
<re_irc> <dngrs (spookyvision@{github,cohost})> as to your actual problem, let me dig something out…
<re_irc> <dngrs (spookyvision@{github,cohost})> so, this article (https://apollolabsblog.hashnode.dev/rust-ffi-and-bindgen-integrating-embedded-c-code-in-rust) mostly talks about integrating a C library, but if you check out its "build.rs" file, it also shows how to add a search path so "memory.x" can be found
<re_irc> <dngrs (spookyvision@{github,cohost})> some embassy examples also toss "memory.x" around, e.g. this one (https://github.com/embassy-rs/embassy/blob/master/examples/nrf52840/build.rs)
<re_irc> < (@grantm11235:matrix.org)> You only need a "memory.x" in your binary crates, not in your core library crate
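For reference, a minimal sketch of the build.rs pattern the linked article and embassy example use, assuming "memory.x" sits next to the binary crate's Cargo.toml: copy it into OUT_DIR and add that directory to the linker search path.

```rust
// build.rs in the binary crate - a sketch of the usual cortex-m-quickstart pattern.
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    // Copy `memory.x` next to the build artifacts so the linker can find it.
    let out = PathBuf::from(env::var_os("OUT_DIR").unwrap());
    File::create(out.join("memory.x"))
        .unwrap()
        .write_all(include_bytes!("memory.x"))
        .unwrap();
    println!("cargo:rustc-link-search={}", out.display());

    // Re-run the build script only when the memory layout changes.
    println!("cargo:rerun-if-changed=memory.x");
}
```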
<re_irc> <aja23> Thank you both! I'll experiment with separated application, lib and bootloader code with a memory.x in bootloader and application. Should the library have its own cargo.toml and then another cargo.toml co-ordinating the whole build? Or should the library still sit as the "highest level" (i.e. cargo.toml in the root of the directory)?
<re_irc> <aja23> Might be wrong thinking here, as I imagine the highest level .toml wouldn't really have any dependencies; it'd just be a workspace definition. Not sure if that's accepted practice?
genpaku has quit [Remote host closed the connection]
genpaku has joined #rust-embedded
<re_irc> <dngrs (spookyvision@{github,cohost})> it ... kinda depends. There's dependency inheritance, but it's not universally liked. I recommend reading the chapter on cargo workspaces (https://doc.rust-lang.org/cargo/reference/workspaces.html)
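As a sketch of what that chapter describes, here is a hypothetical virtual root manifest (the member crate names are placeholders) that only defines the workspace, with shared versions declared once under [workspace.dependencies] (Rust 1.64+):

```toml
# Root Cargo.toml - virtual manifest: no [package] section, just the workspace.
[workspace]
members = ["application", "bootloader", "common"]
resolver = "2"

[workspace.dependencies]
cortex-m = "0.7"
cortex-m-rt = "0.7"
```

A member crate then opts in with e.g. `cortex-m = { workspace = true }` in its own [dependencies] table; members that don't want inheritance can keep spelling out their versions as usual.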
<re_irc> <Félix the Newbie> When I write a keyboard matrix, is there a difference between the rows being input or output (and the opposite for columns)?
<re_irc> <Félix the Newbie> I think I've only seen columns as input
<re_irc> < (@omar_u8:matrix.org)> Félix the Newbie: I'm not too familiar with keyboard circuit matrices in particular, though I would think that if the circuit follows a standard structure, so to speak, and you are trying to detect a button press, both would need to be inputs, no? In the sense that you are trying to match the column level with the corresponding row to identify the button.
<re_irc> < (@omar_u8:matrix.org)> If the hardware/config is different, I figure you need to fall back on some sort of schematic/datasheet.
<re_irc> <Félix the Newbie> Yeah, I'll stay with what works, then. Thanks.
<re_irc> < (@grantm11235:matrix.org)> If there aren't any diodes, you can definitely swap your inputs and outputs. I'm not sure how it would work with diodes, it might work if you also invert the logic of your inputs and outputs
<re_irc> < (@grantm11235:matrix.org)> Because the outputs are scanned through one at a time and the inputs can be read in all at once, if you have a 5x10 grid, it is faster to configure it with 5 outputs and 10 inputs
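A hypothetical scan routine illustrating that layout, written against the embedded-hal 1.0 digital traits. Pin setup, debouncing and pull configuration are assumed to happen elsewhere; with pull-up inputs you would drive the selected column low and invert the reads.

```rust
use embedded_hal::digital::{InputPin, OutputPin};

/// Scan a COLS x ROWS key matrix: drive one column output at a time and read
/// back every row input for that column. Assumes pull-down row inputs.
fn scan<O, I, const COLS: usize, const ROWS: usize>(
    cols: &mut [O; COLS],
    rows: &mut [I; ROWS],
) -> [[bool; ROWS]; COLS]
where
    O: OutputPin,
    I: InputPin,
{
    let mut pressed = [[false; ROWS]; COLS];
    for (c, col) in cols.iter_mut().enumerate() {
        let _ = col.set_high(); // select this column (a short settle delay may be needed)
        for (r, row) in rows.iter_mut().enumerate() {
            // key at (column c, row r) is closed if the row reads high
            pressed[c][r] = row.is_high().unwrap_or(false);
        }
        let _ = col.set_low(); // deselect before moving to the next column
    }
    pressed
}
```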
Rahix has quit [Quit: ZNC - https://znc.in]
Rahix has joined #rust-embedded
<re_irc> <aja23> dngrs (spookyvision@{github,cohost}): Thanks dngrs, would the alternative be to just list out the full dependency list in each workspace member's cargo.toml?
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
<re_irc> <paologentili> Hi guys, I'm new to embedded Rust, so I'm getting used to the setup and tools. I read in a few comments above that the general suggestion is to use probe-run/cargo-embed instead of GDB, but I have a problem: with the simplest hprintln(Hello) example I cannot get those tools to work. probe-run ends in "debug information for address 0x80007ac is missing", cargo-embed says "No RTT header info was present in the ELF file". With...
<re_irc> ... gdb I can see that print working. In Cargo.toml I have debug=true in both dev and release profiles, and the cortex-m version is 0.6.7. Any idea on what to check to fix this?
<re_irc> < (@9names:matrix.org)> hprintln is using semihosting to print, but "probe-run", "probe-rs-cli run" and "cargo embed" expect you to use "rtt" or "defmt-rtt"
<re_irc> < (@9names:matrix.org)> the easiest way to get started with defmt, if your hal doesn't have an example using it, is to start from the knurling-rs app-template and adapt it to your needs:
<re_irc> <paologentili> > the easiest way to get started with defmt, if your hal doesn't have an example using it, is to start from the knurling-rs app-template and adapt it to your needs:
<re_irc> I see, thanks. I'll give it a try.
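For orientation, a minimal sketch of what the knurling-rs app-template wires up: defmt logging over RTT instead of semihosting's hprintln (crate versions and the target HAL are assumptions here, not part of the template verbatim).

```rust
#![no_std]
#![no_main]

use cortex_m_rt::entry;
use defmt_rtt as _;   // global defmt transport: RTT
use panic_probe as _; // panic handler that reports through defmt

#[entry]
fn main() -> ! {
    defmt::info!("hello via defmt over RTT");
    loop {
        cortex_m::asm::wfi();
    }
}
```

The template also adds `-Tdefmt.x` next to `-Tlink.x` in the linker arguments and uses the `DEFMT_LOG` environment variable to pick the log level; the host-side tooling (probe-run, or newer probe-rs based tools) then decodes the RTT stream.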
<re_irc> <aja23> dngrs (spookyvision@{github,cohost}): I'm beginning to see why people don't like this. I'm getting build errors about an unresolved import despite having it declared in multiple cargo.tomls
<re_irc> <aja23> cool, so it works when the syntax is this:
<re_irc> [dependencies.cortex-m]
<re_irc> version = "*"
<re_irc> features = ["const-fn"]
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
<re_irc> <aja23> I figured out I can bypass these shenanigans by making the workspace Cargo.toml virtual, i.e. it doesn't specify any dependencies
<re_irc> <aja23> on a different tack, I bumped my rust version to 1.66.1 while attempting to deal with this and now I'm getting the following compilation error:
<re_irc> proc macro "entry" not expanded: Cannot create expander for c:\WorkingFolderTemp\lvl2serial-rustport\target\debug\deps\cortex_m_rt_macros-3ccf418285ef69be.dll: unsupported ABI "rustc 1.66.1 (90743e729 2023-01-10)"
<re_irc> has anyone else experienced cortex-m-rt macro errors with 1.66.1?
<re_irc> <Félix the Newbie> : It's 6x7, so roughly the same
<re_irc> <Nitin> Hi
<re_irc> I want to implement cross-core communication and data sharing between 2 cores (Cortex-M0 and M4).
<re_irc> For this, I have created the shared memory between the cores. I'm thinking of going with Atomic variables as they are safe to share between threads. But I'm not sure they are safe in Multiple cores.
<re_irc> Are there any better approaches for multi-core?
<re_irc> < (@jamesmunns:beeper.com)> Atomics should be safe across cores, though I've seen a lot of M4/M0 pairs where the M0 doesn't support atomic CAS operations (so things like compare_and_swap or fetch_add won't work)
<re_irc> < (@jamesmunns:beeper.com)> Sorta depends what data you plan to share, and how you plan to share it :)
<re_irc> <Nitin> That's true. For thumbv6m, only load and store operations are supported.
<re_irc> <Nitin> : I didn't understand this one
<re_irc> < (@jamesmunns:beeper.com)> So yeah, you'll probably need to be a bit careful of how you share the data. It's not too bad to have something like a shared flag that one core sets, and the other core clears, or basic ring buffers can be done with just load and store operations
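A hypothetical sketch of that shared-flag idea: one core sets it, the other clears it, using only atomic load/store (no CAS), so it also works on the thumbv6 (M0) side. The `.shared_ram` section is an assumption; both images' linker scripts would have to pin it to the same address, and since such sections are usually NOLOAD, one core should store(false) once at startup instead of relying on the static initialiser.

```rust
use core::sync::atomic::{AtomicBool, Ordering};

// Placed in a section both images map to the same RAM address (see linker notes below).
#[link_section = ".shared_ram"]
static DATA_READY: AtomicBool = AtomicBool::new(false);

/// Producer core: signal that new data is available.
fn signal() {
    DATA_READY.store(true, Ordering::SeqCst);
}

/// Consumer core: check the flag and clear it if it was set.
fn take() -> bool {
    let ready = DATA_READY.load(Ordering::SeqCst);
    if ready {
        DATA_READY.store(false, Ordering::SeqCst);
    }
    ready
}
```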
IlPalazzo-ojiisa has joined #rust-embedded
<re_irc> < (@jamesmunns:beeper.com)> But sharing data more generally will be a challenge. Some dual core packages like that will have some kind of mutex/semaphore hardware you could use to implement the "critical-section" crate's traits, which would make things easier.
<re_irc> <Nitin> I need to look into it. I'm using a PSoC6 µC, no idea if it has such stuff
<re_irc> <Nitin> : Currently I'm thinking of this approach
<re_irc> <Nitin> Do you know any examples for such implementations?
<re_irc> < (@jamesmunns:beeper.com)> Not off the top of my head. Multicore is less common in embedded rust (at least the examples I've seen), the RP2040 is probably the most common, it's dual-core cortex-m0+
<re_irc> <Nitin> : ah ok. I saw that implementation
<re_irc> < (@jamesmunns:beeper.com)> The biggest issue is that you need to do something more like "IPC"/inter process communication, because you're really running a separate program on each core
<re_irc> < (@jamesmunns:beeper.com)> so you can't really share a normal "static" between the two, like you would between the main function and an interrupt
<re_irc> <Nitin> My implementation is kind of different. I'm running 2 instances of RTIC, one on each core
<re_irc> < (@jamesmunns:beeper.com)> yup, totally fine! You just can't exactly share a "heapless" queue across the two programs, for example
<re_irc> <Nitin> Ok. Seems like the only way is to use IPC
<re_irc> < (@chemicstry:matrix.org)> well, you technically can use a "heapless" queue across two programs, but you need to make its internal data struct "repr(C)" so that it has an identical layout, and create a special RAM section in the linker script so that it ends up in the same RAM location. But then again, you can't send any non-"repr(C)" structs across the queue, because Rust can randomise layout. So yeah, it's tricky
<re_irc> < (@jamesmunns:beeper.com)> That's what I meant by "you can't exactly..." :)
<re_irc> < (@korken89:matrix.org)> If what leaves one core and arrives at the other has a stable binary representation, then it works, given you have hardware message passing
<re_irc> < (@richarddodd:matrix.org)> Is there a 'standard' way of communicating between two cores in a situation like this?
<re_irc> < (@korken89:matrix.org)> Generally 2 approaches, a hardware queue or via shared memory
<re_irc> < (@korken89:matrix.org)> As stated before, as long as you know where something is in memory and the layout is known (C interfaces more or less) it's not that complicated
<re_irc> < (@korken89:matrix.org)> One can have a look at "microamp", there shared memory was used to talk between RTICs running on multiple cores
<re_irc> < (@korken89:matrix.org)> And it did all the linker magic automatically
<re_irc> < (@korken89:matrix.org)> ("microamp" is a tool)
<re_irc> < (@korken89:matrix.org)> * PoC tool for multicore RTIC)
<re_irc> < (@richarddodd:matrix.org)> So you need 1) both cores to use the same location in memory for the SHM, 2) both cores to use the same memory layout, or share byte arrays, and 3) atomics to ensure the cores access the data correctly.
<re_irc> < (@korken89:matrix.org)> Seems about it
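As a sketch of point 1), both images could carve the shared region out of RAM in their memory.x. Every address and size below is a placeholder; the only requirement is that the SHARED_RAM region and the `.shared_ram` output section are identical in both linker scripts and loaded/zeroed by neither image.

```
/* Hypothetical memory.x fragment - identical in both images. */
MEMORY
{
  FLASH      : ORIGIN = 0x10000000, LENGTH = 256K
  RAM        : ORIGIN = 0x08000000, LENGTH = 120K
  SHARED_RAM : ORIGIN = 0x0801E000, LENGTH = 8K
}

SECTIONS
{
  /* NOLOAD: neither image initialises this region at startup. */
  .shared_ram (NOLOAD) : ALIGN(4)
  {
    *(.shared_ram .shared_ram.*);
  } > SHARED_RAM
} INSERT AFTER .bss;
```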
<re_irc> < (@korken89:matrix.org)> Synchronization is a whole can of worms by itself
<re_irc> < (@korken89:matrix.org)> Depending on how the cores are interlinked
<re_irc> < (@korken89:matrix.org)> But in the simple MCUs it's not really a problem as they don't have cache
<re_irc> < (@korken89:matrix.org)> Cortex-M0+ goes under the simple banner :)
<re_irc> < (@richarddodd:matrix.org)> Don't you get the guarantees you need with atomics, for example adding a barrier to ensure that a particular instruction is 'seen' on all cores before the next instruction is executed?
<re_irc> < (@korken89:matrix.org)> I think so, at least for Cortex-M
<re_irc> < (@korken89:matrix.org)> But I have no idea how it generalizes
<re_irc> < (@korken89:matrix.org)> "microamp" was tested mostly on a Cortex-R with a Cortex-M companion core
<re_irc> < (@korken89:matrix.org)> There, barriers were enough
<re_irc> < (@richarddodd:matrix.org)> I've seen some boards with multiple processors on them - I'm guessing in these cases you need hardware support for synchronization.
<re_irc> < (@korken89:matrix.org)> I've not tested one of these M7+M0 MCUs from NXP, would be cool to use them
<re_irc> <Nitin> : How can we ensure synchronization between cores in this case?
<re_irc> < (@korken89:matrix.org)> Give it a try with e.g. an "AtomicBool" and "SeqCst" ordering
<re_irc> < (@korken89:matrix.org)> See if it is visible "fast"
<re_irc> < (@korken89:matrix.org)> Could also be worth consulting the arch manual and see what guarantees the "dmb"/"dsb" instructions give
<re_irc> < (@korken89:matrix.org)> It was a bit too long ago I poked at "microamp" now :P
<re_irc> < (@korken89:matrix.org)> TLDR, memory ordering is complex :D
<re_irc> <ukrwin> Hello. Somebody tried to implement projects like OpenCat in Rust?
<re_irc> < (@richarddodd:matrix.org)> I've read that blog series, it seems like you need AMBA4 ACE to guarantee that the barrier is respected, otherwise although on a single core you're ok, the ordering is not guaranteed to propagate to other parts of the system. The example the blog post gives is DMA configuration being moved after the store that starts a transaction.
<re_irc> < (@richarddodd:matrix.org)> Unless you disable caching for a particular CPU. Yeah OK I'm confused and accept that it is complex 🤯
<re_irc> < (@richarddodd:matrix.org)> ukrwin: no but I really want to make something like it for schools to teach programming/electronics. I definitely can't commit time rn tho.
<re_irc> <ukrwin> : Thank you for your reply. I just finished the Discovery book with the Microbit v2, and I wanted to create something fun, like OpenCat, for my nephews. But I certainly don't have the skills to manage this in any short amount of time, which is why I asked.
<re_irc> < (@richarddodd:matrix.org)> I think the issue with robotics is that there is a lot of knowledge required beyond the programming/electronics. There's necessarily some complex mechanics as well.
<re_irc> < (@richarddodd:matrix.org)> And then possibly some machine learning.
<re_irc> <ukrwin> : Yeah, I agree. But it would be great to have such an example in the Rust embedded ecosystem.
<re_irc> < (@halfbit:matrix.org)> : you don't necessarily need atomics, but you do need to understand any cache flushing/invalidation work required prior to reading shared data
<re_irc> < (@halfbit:matrix.org)> e.g. the m7 core fires off an IPI to notify the other core (interrupt it); at that point, a shared message needs to be in memory that both can read, which is really the only absolute requirement
<re_irc> < (@korken89:matrix.org)> I think this will be your friend then: https://doc.rust-lang.org/core/sync/atomic/fn.fence.html :D
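A sketch of that suggestion: publish a payload, then issue an explicit fence so the flag can never be observed before the data, using only atomic loads and stores (so it also works on thumbv6). The `.shared_ram` placement is the same assumption as in the earlier sketch: both images must pin these statics to the same addresses.

```rust
use core::sync::atomic::{fence, AtomicBool, AtomicU32, Ordering};

#[link_section = ".shared_ram"]
static SHARED: AtomicU32 = AtomicU32::new(0);
#[link_section = ".shared_ram"]
static READY: AtomicBool = AtomicBool::new(false);

/// Producer core: write the payload, then make it visible before raising the flag.
fn send(value: u32) {
    SHARED.store(value, Ordering::Relaxed);
    fence(Ordering::Release); // payload becomes visible before the flag
    READY.store(true, Ordering::Relaxed);
}

/// Consumer core: if the flag is up, the payload is guaranteed visible after the fence.
fn recv() -> Option<u32> {
    if READY.load(Ordering::Relaxed) {
        fence(Ordering::Acquire); // flag seen, now the payload is visible too
        READY.store(false, Ordering::Relaxed);
        return Some(SHARED.load(Ordering::Relaxed));
    }
    None
}
```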
<re_irc> < (@halfbit:matrix.org)> this is how Zephyr manages to do SMP anyways, keeps all kernel structs in uncached memory (coherent), atomics aren't necessarily vital to that
<re_irc> < (@jamesmunns:beeper.com)> : Also, check that the shared memory is ACTUALLY coherent like that, which I wouldn't say is a given for CM4/CM0 combos.
<re_irc> < (@halfbit:matrix.org)> those openamp scenarios can be a bit quirky
<re_irc> < (@halfbit:matrix.org)> I haven't dealt with that myself directly, indirectly the riscv support for polarfire is partially smp, partially amp
<re_irc> < (@halfbit:matrix.org)> and I saw some of the work going towards that
<re_irc> < (@jamesmunns:beeper.com)> Yeah, I don't know how well-specified that is by the arm spec, so it's sorta vendor-specific how things will or won't work.
<re_irc> < (@halfbit:matrix.org)> like there's 4 identical riscv cores, then this supervisor core alongside those with fewer features (no atomics, single hart, supervisor only)
<re_irc> < (@halfbit:matrix.org)> the 4 smp cores share some cache I think and the supervisor doesn't, it's very quirky
<re_irc> < (@halfbit:matrix.org)> and of course the supervisor is the one that starts the others from what I recall
<re_irc> <dngrs (spookyvision@{github,cohost})> aja23: Depends on what you mean by full... if everything is a shared dependency then yes, everything
starblue has quit [Ping timeout: 264 seconds]
dequbed has quit [Quit: bye!]
dequbed has joined #rust-embedded
emerent has quit [Ping timeout: 252 seconds]
emerent has joined #rust-embedded
WSalmon has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
WSalmon has joined #rust-embedded
dc740 has joined #rust-embedded
dc740 has quit [Remote host closed the connection]
starblue has joined #rust-embedded
<re_irc> < (@jessebraham:matrix.org)> Is there going to be a new "riscv_rt" release? I'm getting errors now that "riscv@0.9.0" has been yanked ☹️
<re_irc> <aja23> Hi all, last quick check, is anyone else hitting issues with Rust 1.66.1 not being able to expand the entry and interrupt macros from cortex-m-rt?
<re_irc> < (@dkhayes117:matrix.org)> : The latest "riscv-rt" should already depend on "v0.10"
<re_irc> < (@jessebraham:matrix.org)> https://crates.io/crates/riscv-rt/0.10.0/dependencies
<re_irc> < (@jessebraham:matrix.org)> Says "^0.9" here
<re_irc> < (@dkhayes117:matrix.org)> Yeah, you're right. I'll get started on an update.
<re_irc> < (@jessebraham:matrix.org)> Thanks!
<re_irc> < (@9names:matrix.org)> aja23: Try building someone else's project. It's more likely that you've configured something wrong.
GenTooMan has quit [Ping timeout: 252 seconds]
<re_irc> < (@dkhayes117:matrix.org)> : "riscv-rt v0.11.0" has been published :)
<re_irc> < (@adamgreig:matrix.org)> aja23: might be worth a "cargo clean" if you haven't already
<re_irc> <aja23> Just cleaned all the packages individually and then the parent workspace as well; didn't help. I'll try walking back a version and see if that helps
<re_irc> < (@jessebraham:matrix.org)> : Thanks so much!
<re_irc> < (@dkhayes117:matrix.org)> You're welcome, and sorry for the breakage :(
<re_irc> < (@jessebraham:matrix.org)> It happens haha
starblue has quit [Ping timeout: 265 seconds]
GenTooMan has joined #rust-embedded
starblue has joined #rust-embedded
IlPalazzo-ojiisa has quit [Remote host closed the connection]