neceve has quit [Ping timeout: 256 seconds]
<re_irc> <@nihal.pasham:matrix.org> dngrs:matrix.org: Just woke up. Thanks, I'll look into `bytemuck`. But come to think of it
<re_irc> <@nihal.pasham:matrix.org> `from_raw_parts` might make more sense, considering a transmutation reinterprets bits of one type as another and may give us a different `len`
<re_irc> <@nihal.pasham:matrix.org> Example: reinterpreting a `slice of arrays` with `len 3`, i.e. `&mut [[u8; 512]]`, as `&mut [u8]` may give us a slice of bytes of `len 3` instead of `len 3*512`
<re_irc> <@nihal.pasham:matrix.org> I say *may, as I've not tested either
starblue has quit [Ping timeout: 252 seconds]
starblue has joined #rust-embedded
<re_irc> <@luojia65:matrix.org> I'm working with some SoC manufacturers in China to introduce (in the future) a new database-like format describing all peripherals, buses and cores under RISC-V, C-SKY, etc. They could conveniently export SoC description files from Verilog or Chisel source. I expect it to describe any peripheral with only a name and some parameters, instead of all registers and fields
<re_irc> <@luojia65:matrix.org> (Yes, there is already SVD, but it's not close enough to meet modular software requirements)
<re_irc> <@9names:matrix.org> luojia65:matrix.org: So like device tree?
<re_irc> <@nihal.pasham:matrix.org> nihal.pasham:matrix.org: So, as I suspected `from_raw_parts` works but `transmute` does not - [link to a working example](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=18ba8a77430517442feb6093590a570b) and I think this is safe too as it ticks all usage conditions
<re_irc> <@luojia65:matrix.org> Somehow, yes, it's similar to it
bpye has quit [Quit: The Lounge - https://thelounge.chat]
bpye has joined #rust-embedded
<re_irc> <@disasm-ewg:matrix.org> SiFive uses device tree to define a set of peripherals and their basic properties. At least it looks better than a brand-new unpopular standard for this :)
<re_irc> <@davidgoodenough:matrix.org> To be precise, Linux uses device tree for all CPUs that do not use ACPI, and SiFive have followed that rule. And device trees are used by other OSes in the Unix world, like the BSDs
<re_irc> <@disasm-ewg:matrix.org> I mean, they use device tree even for MCUs :)
<re_irc> <@disasm-ewg:matrix.org> It's just a format for describing peripherals
<re_irc> <@matoushybl:matrix.org> Zephyr also relies on it heavily
neceve has joined #rust-embedded
neceve has quit [Read error: Connection reset by peer]
crabbedhaloablut has quit [Remote host closed the connection]
crabbedhaloablut has joined #rust-embedded
<re_irc> <@dngrs:matrix.org> let's say I want to cycle through four DMA buffers, what's a good data structure to hold them? I was thinking a `heapless::Deque<Option<DmaBuf>, 4>`
<re_irc> <@dngrs:matrix.org> on a second thought I don't need the `Option` part
<re_irc> <@thebutlah:matrix.org> hi everyone, I have a naive question. Why do so many embedded libraries focus on being non-allocating? It seems as though every platform that has good rust support supports alloc, so why is it important for crates to not use alloc? It seems that the real consideration is only to avoid needing `std`, since that's the stuff that actually needs an OS
<re_irc> <@dirbaio:matrix.org> not using alloc is nice because now all your RAM usage is predictable at compile time
<re_irc> <@dirbaio:matrix.org> you can't have "out of memory" errors at runtime
<re_irc> <@newam:matrix.org> thebutlah:matrix.org: I could use alloc for most projects, but heap allocation is forbidden at my workplace, and it is a habit I got into.
<re_irc> <@newam:matrix.org> Alloc also makes execution time harder to predict, which can be a problem for meeting strict real-time requirements.
<re_irc> <@dirbaio:matrix.org> (except stack overflow, but there are tools to calculate max stack usage)
<re_irc> <@dirbaio:matrix.org> and it also avoids fragmentation
<re_irc> <@newam:matrix.org> also alloc is still sort of nightly (the OOM handler is), and a lot of people don't want to touch nightly toolchains.
<re_irc> <@newam:matrix.org> That all being said, you could do alloc. It is perfectly fine for a lot of applications. I think a lot of libraries are non-alloc because then more people can use them.
kehvo has quit [Quit: WeeChat 3.2.1]
<re_irc> <@dirbaio:matrix.org> or because their authors themselves want no-alloc :P
<re_irc> <@thebutlah:matrix.org> gotcha so it's kinda the same concept as regular rust libraries supporting no_std - it enables more use cases. Just as embedded users want rust libraries to support no-std, some rust users only want no-alloc libraries, because that way they can be sure of their memory usage, or because they need bounded latencies on their memory accesses.
<re_irc> <@thebutlah:matrix.org> For the realtime bounded latency argument, isn't the allocated memory on the same physical hardware as the static memory? It's all RAM at the end of the day. Is there a reason that the allocator-owned memory is non-deterministic whereas the static stuff isn't? Surely with a good choice of memory allocator you can opt for one that is deterministic in its latency
<re_irc> <@newam:matrix.org> thebutlah:matrix.org: that works until you need to handle OOM
<re_irc> <@dirbaio:matrix.org> I think the reason is malloc/free themselves can take a different amount of time depending on the heap state
<re_irc> <@dirbaio:matrix.org> depends on which allocator you use yeah
<re_irc> <@dirbaio:matrix.org> using a bump allocator you will OOM if you allocate memory in a loop
<re_irc> <@dirbaio:matrix.org> you need an allocator that can recycle memory
<re_irc> <@dngrs:matrix.org> My take on no alloc is: most of the time you don't notice it missing anyway
<re_irc> <@dngrs:matrix.org> And if you do, there's always heapless :V
<re_irc> <@dngrs:matrix.org> Or byte-slab etc
<re_irc> <@dirbaio:matrix.org> or atomic-pool
<re_irc> <@newam:matrix.org> I go for the big hammer approach and buy a micro with more RAM :D
kehvo has joined #rust-embedded
<re_irc> <@t0mh:matrix.org> Has anyone worked with the nrf5340? Trying to run the example at https://github.com/nrf-rs/nrf-hal/tree/master/examples/blinky-button-demo but getting an RTT error
<re_irc> <@newam:matrix.org> What error?
<re_irc> <@t0mh:matrix.org> Error Error attaching to RTT: RTT control block not found in target memory.
<re_irc> <@t0mh:matrix.org> - Make sure RTT is initialized on the target, AND that there are NO target breakpoints before RTT initalization.
<re_irc> <@t0mh:matrix.org> - For VSCode and probe-rs-debugger users, using `halt_after_reset:true` in your `launch.json` file will prevent RTT
<re_irc> <@t0mh:matrix.org> initialization from happening on time.
<re_irc> <@t0mh:matrix.org> - Depending on the target, sleep modes can interfere with RTT.
<re_irc> <@newam:matrix.org> Hmm, that one shouldn't occur. Try updating cargo-embed if it isn't the latest version?
<re_irc> <@adamgreig:matrix.org> does the code seem to run, besides rtt not working?
<re_irc> <@t0mh:matrix.org> cargo embed is on v0.12 which I think is the latest?
<re_irc> <@t0mh:matrix.org> And I don't think the code is running
<re_irc> <@t0mh:matrix.org> If I flash a 'secure mode' bootloader as described here https://github.com/nrf-rs/nrf-hal/tree/master/nrf5340-app-hal then I get the following error:
<re_irc> <@t0mh:matrix.org> WARN probe_rs::session > Could not clear all hardware breakpoints: ArchitectureSpecific(RegisterReadError { address: 12, name: "DRW", source: ArchitectureSpecific(FaultResponse) })
<re_irc> <@t0mh:matrix.org> Error failed attaching to target
<re_irc> <@t0mh:matrix.org> Caused by:
<re_irc> <@dirbaio:matrix.org> I've used nrf5340 successfully, but running everything in S mode (which nrf-hal doesn't support)
<re_irc> <@dirbaio:matrix.org> maybe the SPM messes up the debug connection somehow
<re_irc> <@dirbaio:matrix.org> can you paste the probe-run logs with `-vv`?
<re_irc> <@adamgreig:matrix.org> if the code doesn't seem to be running your real problem is probably before the debugger fails to find rtt, and actually is that programming or running isn't happening
kehvo has quit [Quit: WeeChat 3.3]
kehvo has joined #rust-embedded
<re_irc> <@t0mh:matrix.org> adamgreig: Yeah possibly. I'm using the example code referenced above, having changed out the GPIO pin numbers and the HAL used to the nrf5340 development board.
<re_irc> <@t0mh:matrix.org> Done some more digging, it seems the Rust code is never being called. The SPM outputs
<re_irc> <@t0mh:matrix.org> *** Booting Zephyr OS build v2.7.0-ncs1 ***
<re_irc> <@t0mh:matrix.org> Flash regions Domain Permissions
<re_irc> <@t0mh:matrix.org> 00 19 0x00000 0x50000 Secure rwxl
<re_irc> <@dirbaio:matrix.org> what do you have in your memory.x? it should match the flash/ram regions there
<re_irc> <@dirbaio:matrix.org> this looks wrong
<re_irc> <@dirbaio:matrix.org> SPM: NS MSP at 0xffffffff
<re_irc> <@dirbaio:matrix.org> SPM: NS reset vector at 0xffffffff
<re_irc> <@dirbaio:matrix.org> it seems it's not flashing, or flashing at the wrong location
<re_irc> <@t0mh:matrix.org> The memory.x is the one included with the hal https://github.com/nrf-rs/nrf-hal/blob/master/nrf5340-app-hal/memory.x
<re_irc> <@dirbaio:matrix.org> hm that should be fine
<re_irc> <@dirbaio:matrix.org> RAM doesn't match, but it's smaller than what the SPM grants so it should still work
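For illustration of what "matching" means here, a `memory.x` consistent with the SPM output above would put FLASH where the secure region ends (0x50000 per the SPM's flash-region line); the lengths and the RAM origin below are placeholders, not values from the chat or the HAL.

```text
/* Illustrative memory.x only — FLASH ORIGIN taken from the SPM output
   above (secure region 0x00000..0x50000); lengths/RAM are placeholders. */
MEMORY
{
  FLASH : ORIGIN = 0x00050000, LENGTH = 704K
  RAM   : ORIGIN = 0x20000000, LENGTH = 256K
}
```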
<re_irc> <@dirbaio:matrix.org> check why it's not flashing
<re_irc> <@dirbaio:matrix.org> it could be 2 things
<re_irc> <@dirbaio:matrix.org> - the nrf53 has a very annoying protection where on each boot the firmware has to enable the debug port; if it doesn't, it's bricked and you must mass-erase. This is per-core: if app unlocks but net doesn't, then net debug won't work
<re_irc> <@dirbaio:matrix.org> - probe-rs tries to initialize both cores (app+net) even if you're just flashing app. initializing net fails if it's locked or powered down
<re_irc> <@dirbaio:matrix.org> try flashing with probe-rs-cli
<re_irc> <@dirbaio:matrix.org> if that fails, you know it's a flashing issue
<re_irc> <@dirbaio:matrix.org> then remove everything related to the `net` core from here https://github.com/probe-rs/probe-rs/blob/master/probe-rs/targets/nRF53_Series.yaml
<re_irc> <@dirbaio:matrix.org> then try again
<re_irc> <@jamesmunns:beeper.com> dngrs:matrix.org: Byte slab works well for this, or heapless pool
<re_irc> <@jamesmunns:beeper.com> Ah, I see that was already suggested :)
<re_irc> <@jamesmunns:beeper.com> Bbqueue is potentially a good choice, for more raw streamy data