<re_irc> <@tmplt:matrix.org> Hello, I'd like to bring some attention to https://github.com/rust-embedded/itm/issues/36. In short, `itm-decode` is a reimplementation of `itm` with more features (timestamped packets, for example), more granular errors, and handling of synchronization packets. An `itmdump` equivalent is also shipped. The API is currently being refactored to mimic that of `itm` to follow Rust API guidelines.
<re_irc> <@adamgreig:matrix.org> the `itm` crate is not well maintained currently
<re_irc> <@adamgreig:matrix.org> not to mention hasn't had a release in forever
<re_irc> <@adamgreig:matrix.org> what would you like to do?
<re_irc> <@tmplt:matrix.org> Either I create a PR to `itm` that replaces its contents and a v0.4.0 is released, or cargo-rtic-scope/itm-decode replaces it wholesale (delete/archive the itm repo, move itm-decode into rust-embedded). I'd prefer the latter so that git history is retained for itm-decode; I'd rename it to `itm` and release v0.4.0 just the same.
<re_irc> <@tmplt:matrix.org> I mean to maintain it. But it's also a dependency for cargo-rtic-scope and probe-rs.
<re_irc> <@adamgreig:matrix.org> would you be interested in joining the wg's cortex-m team to help maintain it thereafter? (and possibly also the itm/etm functionality in the cortex-m crate...)
<re_irc> <@adamgreig:matrix.org> (being in the team doesn't come with any actual obligation to maintain things beyond a nagging sense of responsibility)
<re_irc> <@tmplt:matrix.org> Yes, I'd be interested. But I have not touched ETM at all; my work has been on the DWT, ITM and TPIU peripherals thus far.
<re_irc> <@adamgreig:matrix.org> re your two options above, both sound fine, perhaps with a link in the new readme to refer visitors to the old pre-0.4 archived itm repo
<re_irc> <@adamgreig:matrix.org> in theory it's also possible to do a git subtree merge and bring your existing git history into the current repo with a pr
<re_irc> <@adamgreig:matrix.org> hmm, do I mean subtree merge? I think I mean a different git thing
<re_irc> <@adamgreig:matrix.org> anyway you can add all your existing commits in a way that preserves all the history, but it might not be worth the bother
<re_irc> <@adamgreig:matrix.org> is your itm-decode crate ready for such a thing already, or would you want to do this later?
<re_irc> <@tmplt:matrix.org> I'll look into it. `itm-decode` will copy the `Stream<R>` API in any case, and if there turns out to be a regression it wouldn't hurt to have the pre-0.4 checkouts at hand.
<re_irc> <@tmplt:matrix.org> adamgreig: It'll be ready after some API refactoring. I'll create the PR when it's ready.
<re_irc> <@adamgreig:matrix.org> we'll still have the old repo renamed and archived anyway
<re_irc> <@adamgreig:matrix.org> well, it's easy to try it on a local copy, see how it looks I guess
<re_irc> <@adamgreig:matrix.org> you want something like `git clone itm; cd itm; git remote add itm-decode <url>; git fetch itm-decode; git merge --allow-unrelated-histories itm-decode/master`
<re_irc> <@adamgreig:matrix.org> then solve all the merge conflicts by checking out your version of the file, `git rm` any remaining original files
<re_irc> <@adamgreig:matrix.org> and you should end up with a repository that only contains your files, all with the same content as your repo, and all with their full git history from you
<re_irc> <@tmplt:matrix.org> adamgreig: Should we copy `itm` to `itm-deprecated` beforehand then?
<re_irc> <@adamgreig:matrix.org> if it doesn't work or you don't like it, just renaming the existing repo and moving yours is fine too
<re_irc> <@adamgreig:matrix.org> well, if the merge does work we can leave itm where it is and you'd pr your new commits into it
<re_irc> <@adamgreig:matrix.org> if it doesn't, we'll rename it to itm-old or whatever and then transfer your one in
<re_irc> <@adamgreig:matrix.org> either way we'll need a vote by the cortex-m team
<re_irc> <@adamgreig:matrix.org> (but i'm in favour, pending actually checking your new code in any detail)
<re_irc> <@tmplt:matrix.org> Gotcha, I'll refactor and fixup `itm-decode` and ping you when it's ready for a vote (presuming the re-implementation must be reviewed in detail).
<re_irc> <@adamgreig:matrix.org> 👍️ sounds good to me, thanks!
<re_irc> <@adamgreig:matrix.org> don't feel like you necessarily need to be exactly API-matching or anything either, use whatever api design you think is best
<re_irc> <@adamgreig:matrix.org> it's a bonus if it's easy to adapt from the current api, but it's quite possible better api designs have emerged in the 3 years since that one
<re_irc> <@dngrs:matrix.org> grantm11235:matrix.org: that might be an option! need to look into the driver details. Currently its api is centered around flushing an entire frame. (it's indeed as firefrommoonlight said: 96x64 RGB565)
<re_irc> <@firefrommoonlight:matrix.org> https://cdn-shop.adafruit.com/datasheets/SSD1331_1.2.pdf
<re_irc> <@firefrommoonlight:matrix.org> You could also update a smaller buffer as you pass words down the SPI pipe
<re_irc> <@firefrommoonlight:matrix.org> More complex code-wise, but might be able to solve your RAM issue
<re_irc> <@firefrommoonlight:matrix.org> or switch to an MCU with more ram
<re_irc> <@firefrommoonlight:matrix.org> Then flush only once you've passed each sub-buffer
<re_irc> <@firefrommoonlight:matrix.org> (Being able to keep the whole buffer in memory makes the code comparatively simple)
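A minimal sketch of that strip/sub-buffer approach, assuming a hypothetical `render_pixel` shader and a `flush_strip` hook standing in for the driver's address-window + SPI write calls (neither is a real ssd1331 API):

```rust
const WIDTH: usize = 96;
const HEIGHT: usize = 64;
const STRIP_HEIGHT: usize = 8; // 96 * 8 * 2 bytes = 1.5 KiB instead of 12 KiB for a full frame

/// Hypothetical per-pixel shader; stands in for whatever draws the frame.
fn render_pixel(x: usize, y: usize) -> u16 {
    ((x ^ y) as u16) << 5 // placeholder RGB565 pattern
}

/// Hypothetical hook that pushes one strip to the panel, e.g. by setting the
/// SSD1331 column/row address window and streaming the words over SPI.
fn flush_strip(_y0: usize, _strip: &[u16]) {
    // driver-specific: window = rows y0..y0+STRIP_HEIGHT, then write the data
}

fn draw_frame() {
    let mut strip = [0u16; WIDTH * STRIP_HEIGHT];
    for y0 in (0..HEIGHT).step_by(STRIP_HEIGHT) {
        for y in 0..STRIP_HEIGHT {
            for x in 0..WIDTH {
                strip[y * WIDTH + x] = render_pixel(x, y0 + y);
            }
        }
        flush_strip(y0, &strip);
    }
}
```

The trade-off is that anything spanning several strips has to be re-rendered per strip, which is where the extra code complexity comes from.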
<re_irc> <@firefrommoonlight:matrix.org> Check out the "Draw Line" and "Draw rectangle" commands for another approach
<re_irc> <@firefrommoonlight:matrix.org> That would solve the mem issue entirely depending on what you're drawing
<re_irc> <@firefrommoonlight:matrix.org> It really is nice being able to take advantage of the embedded-graphics lib though...
<re_irc> <@firefrommoonlight:matrix.org> which you need a full buffer for
<re_irc> <@adamgreig:matrix.org> i thought embedded-graphics did not require a full buffer (although life is easy if you have one)
<re_irc> <@firefrommoonlight:matrix.org> It might not! Could be a bad assumption; I've always done it with full buffer
<re_irc> <@adamgreig:matrix.org> it's a bit tricky if you can't draw single pixels I guess, the only required method for a draw target is draw_iter where you get a iter of pixel positions and colours and have to draw them
<re_irc> <@adamgreig:matrix.org> I believe it's very specifically designed to work without needing a local framebuffer
<re_irc> <@adamgreig:matrix.org> but yea, not sure about how easy it would be if you can't draw single pixels on your screen anyway...
<re_irc> <@firefrommoonlight:matrix.org> Nice
<re_irc> <@adamgreig:matrix.org> you might need a full local buffer anyway in that case? unless the screen supports read-back I guess but it would be super slow
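For reference, a sketch of a framebuffer-less target in embedded-graphics 0.7 terms: the only required method is `draw_iter`, so pixels can be forwarded straight to the panel. The `set_pixel` method here is hypothetical and stands in for whatever the driver does to write one pixel:

```rust
use embedded_graphics::{pixelcolor::Rgb565, prelude::*, Pixel};

/// Wraps the real display interface; no local framebuffer.
struct DirectDisplay {
    // SPI bus, DC pin, etc. would live here
}

impl DirectDisplay {
    /// Hypothetical: set a 1x1 address window and write one RGB565 word.
    fn set_pixel(&mut self, _x: u16, _y: u16, _color: Rgb565) {}
}

impl OriginDimensions for DirectDisplay {
    fn size(&self) -> Size {
        Size::new(96, 64)
    }
}

impl DrawTarget for DirectDisplay {
    type Color = Rgb565;
    type Error = core::convert::Infallible;

    fn draw_iter<I>(&mut self, pixels: I) -> Result<(), Self::Error>
    where
        I: IntoIterator<Item = Pixel<Self::Color>>,
    {
        for Pixel(point, color) in pixels {
            if point.x >= 0 && point.y >= 0 {
                self.set_pixel(point.x as u16, point.y as u16, color);
            }
        }
        Ok(())
    }
}
```

As noted above, pushing one pixel at a time over SPI is usually slow, which is why drivers tend to batch writes into an address window or keep a local buffer anyway.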
<re_irc> <@dngrs:matrix.org> adamgreig: e-g does not, but the ssd1331 crate [does](https://github.com/jamwaffles/ssd1331/blob/master/src/display.rs#L90)
<re_irc> <@dngrs:matrix.org> firefrommoonlight:matrix.org: absolutely an option. I just want to do everything I can get away with on my stm32f1 boards and keep the f4 ones for preciousssss projects at the moment. It's not like you can order any new ones 😬
<re_irc> <@dngrs:matrix.org> firefrommoonlight:matrix.org: not really an option - I'm pumping full frames of raw pixels over UART.
<re_irc> <@dngrs:matrix.org> at this point my easiest quick win by far is decreasing `defmt` buffer size, already started a knurling internal discussion about that :V
<re_irc> <@dngrs:matrix.org> ```rust
<re_irc> <@dngrs:matrix.org> #[stable(feature = "tau_constant", since = "1.47.0")]
<re_irc> <@dngrs:matrix.org> pub const TAU: f32 = 6.28318530717958647692528676655900577_f32;
<re_irc> <@dngrs:matrix.org> ```
<re_irc> <@dngrs:matrix.org> is the stdlib throwing shade or is this feature enabled by default 🤔
<re_irc> <@adamgreig:matrix.org> lol, was it complaining that you defined the constant that already existed?
<re_irc> <@grantm11235:matrix.org> Why do we have the error traits in e-h instead of just using `Into<ErrorKind>`?
<re_irc> <@grantm11235:matrix.org> i.e. replace `type Error: crate::spi::Error;` with `type Error: core::fmt::Debug + Into<crate::spi::ErrorKind>;`
<re_irc> <@grantm11235:matrix.org> I guess `AsRef` would be a better fit since it doesn't consume `self`
<re_irc> <@grantm11235:matrix.org> Actually, I want to require that `&Self::Error: Into<ErrorKind>`, but I'm not sure if that is possible
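A self-contained sketch of the two bounds being compared, with simplified stand-ins rather than the actual embedded-hal definitions:

```rust
use core::fmt::Debug;

/// Stand-in for a HAL-agnostic error category enum.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ErrorKind {
    Overrun,
    Other,
}

/// The trait-based approach: callers classify via `.kind()` on a borrow.
pub trait Error: Debug {
    fn kind(&self) -> ErrorKind;
}

/// Trait-based associated-type bound.
pub trait SpiWithErrorTrait {
    type Error: Error;
}

/// The `Into`-based alternative from the question. `Into<ErrorKind>` consumes
/// the error by value, which is why a borrow-friendly shape (`kind(&self)`, or
/// a bound on `&Self::Error`) is nicer for callers that still want to return
/// the concrete error upward after classifying it.
pub trait SpiWithInto {
    type Error: Debug + Into<ErrorKind>;
}
```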
<re_irc> <@firefrommoonlight:matrix.org> https://doc.rust-lang.org/core/f32/consts/constant.TAU.html
<re_irc> <@firefrommoonlight:matrix.org> Love it
<re_irc> <@firefrommoonlight:matrix.org> TIL Pi is AKA Archimedes' constant
<re_irc> <@dngrs:matrix.org> adamgreig: just found this by accident, not having any actual problems
<re_irc> <@disasm-ewg:matrix.org> esden:matrix.org: If svdtools works for you, then good, otherwise it should be easy to write a simple script that embeds additional peripherals (defined in separate SVDs) into another SVD
<re_irc> <@disasm-ewg:matrix.org> dngrs:matrix.org: Take a look at LiteX, it even generates SVD for you :)
<re_irc> <@jamesmunns:beeper.com> dngrs (spookyvisiongithub): talk to Jonas and Jorge
<re_irc> <@jamesmunns:beeper.com> Probe-run can do basic stack painting/canaries for you already to determine max stack usage
<re_irc> <@jamesmunns:beeper.com> adamgreig: Btw, what Adam was suggesting here is called "stack painting"
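As a standalone sketch of stack painting (not probe-run's actual implementation, which does the painting and scanning from the host over the debug probe): fill the stack region with a known pattern before the workload runs, then count how many words were overwritten. In firmware the region bounds would come from linker symbols; here it's just a slice:

```rust
const PAINT: u32 = 0xAAAA_AAAA;

/// Fill the (currently unused) stack region with the paint pattern,
/// done before the workload runs, e.g. very early at reset.
fn paint(stack: &mut [u32]) {
    for word in stack.iter_mut() {
        *word = PAINT;
    }
}

/// Worst-case stack usage in bytes. Stacks grow downward, so with index 0 as
/// the lowest address the untouched paint sits at the start of the slice;
/// everything after it was overwritten at some point.
fn max_usage_bytes(stack: &[u32]) -> usize {
    let untouched = stack.iter().take_while(|&&w| w == PAINT).count();
    (stack.len() - untouched) * 4
}
```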
<re_irc> <@jamwaffles:matrix.org> dngrs:matrix.org: Could you do a simple form of RLE? embedded-graphics provides ways of saying "fill this rectangle", which you could do for contiguous pixels of the same colour
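A sketch of that run-length pass over one scanline of raw RGB565 words; each emitted run could then become a 1-pixel-high fill-rectangle command (the `emit` callback is a placeholder for that driver call):

```rust
/// Collapse a scanline of RGB565 pixels into (color, run_length) pairs and
/// hand each run to `emit`, which would map it onto a fill-rectangle command.
fn rle_scanline(line: &[u16], mut emit: impl FnMut(u16, u16)) {
    let mut pixels = line.iter().copied();
    let mut current = match pixels.next() {
        Some(p) => p,
        None => return,
    };
    let mut run: u16 = 1;
    for p in pixels {
        if p == current {
            run += 1;
        } else {
            emit(current, run);
            current = p;
            run = 1;
        }
    }
    emit(current, run); // flush the final run
}
```

On a mostly black background this collapses most lines into a handful of runs.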
<re_irc> <@jamwaffles:matrix.org> Also, feel free to jump over to #rust-embedded-graphics:matrix.org for more graphics related goodness
<re_irc> <@dngrs:matrix.org> disasm-ewg:matrix.org: aye! have at some point at least opened tabs about all of these things 😬 just trying to stay on top of further developments. Also didn't know LiteX generates SVDs, that's pretty awesome!
<re_irc> <@dngrs:matrix.org> jamwaffles:matrix.org: that .... might actually do something interesting! definitely keeping it in mind
<re_irc> <@dngrs:matrix.org> (maybe even go one step further and send only non-black pixels - at least my current animations, while rather non-RLE-y, are against some black background)
<re_irc> <@dngrs:matrix.org> jamwaffles:matrix.org: speaking of, I've finally gotten around to creating a [PR](https://github.com/rahul-thakoor/embedded-graphics-web-simulator/pull/13) for my lingering speed improvements in embedded-graphics-web-simulator (which coincidentally speeds things up by using rects). And it got merged, wheeeee
<re_irc> <@dngrs:matrix.org> speaking about speed improvements, what's the go-to approach for porting code that does a ton of `sin()/cos()` to integer math? (I hear lookup tables aren't a good fit for current MCUs anymore)
<Lumpio-> They aren't?
<re_irc> <@dngrs:matrix.org> Maybe only true for desktop class CPUs? Definitely there though
<Lumpio-> Maybe on MCUs that have external flash memory..? but idk
<Lumpio-> I'd imagine a LUT is still pretty fast on most MCUs
<Lumpio-> But I wouldn't know, curious to know about alternatives too if they're faster
<re_irc> <@dngrs:matrix.org> Ok, gonna try!
<re_irc> <@dngrs:matrix.org> (I specifically meant LUTs for sin/cos)
<re_irc> <@dngrs:matrix.org> Actually, since my main operation is a cos(sin(a) + cos(b) + c) I might get away with lutting the entire thing... am quite low on ram though already. Maybe if I sacrifice precision
<Lumpio-> Three variables makes for a big LUT
<Lumpio-> If you want to do the whole thing
<re_irc> <@dngrs:matrix.org> Right, hence the precision--
<Lumpio-> Can't you put tables in flash
<re_irc> <@dngrs:matrix.org> Could maybe(!) do 5 bit each or so
<re_irc> <@dngrs:matrix.org> irc_libera_lumpio-:psion.agg.io: Worth a try!
<Lumpio-> That's where I'd put them on your average MCU because you usually have way more flash than RAM
<Lumpio-> But depends on the chip ofc
<re_irc> <@dngrs:matrix.org> STM32F1
<re_irc> <@dngrs:matrix.org> Have around 32kb flash left
<Lumpio-> But I mean you can calculate all that with just a 1/8 circle sine table for example
<Lumpio-> Because of symmetry and the fact that cos is just sin with an offset (or vice versa)
<Lumpio-> If you can change your units so that you work in units of 2pi (so 0 is 0 and 1 is 2pi) or something similar, the lookup will likely just be a couple masks and shifts
<Lumpio-> But even if they aren't you can probably do it decently fast
<kehvo> Further, you can do a parabolic approximation in code and only store the difference (error) from the actual sine function, which builds a rather precise LUT
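A sketch of the symmetry trick: keep only a quarter-wave table, measure angles in fixed steps per turn (1024 here, chosen arbitrarily), and the lookup reduces to a mask, a quadrant test and a table read. Table size and the Q15 output format are illustrative choices, and the table-fill uses `f32::sin` just for clarity; on-target it would be a prebuilt `static` in flash (or filled with libm/micromath):

```rust
/// 1024 angle steps per full turn; the table covers the first quarter (0..=256).
const STEPS: usize = 1024;
const QUARTER: usize = STEPS / 4;

/// Quarter-wave sine table in Q15 (0..=32767 over the first quarter turn).
fn build_table() -> [i16; QUARTER + 1] {
    let mut t = [0i16; QUARTER + 1];
    for (i, v) in t.iter_mut().enumerate() {
        let a = i as f32 * core::f32::consts::FRAC_PI_2 / QUARTER as f32;
        *v = (a.sin() * 32767.0) as i16;
    }
    t
}

/// sin(angle), where `angle` is in 1/1024ths of a turn; returns Q15.
fn sin_lut(table: &[i16; QUARTER + 1], angle: u32) -> i16 {
    let a = (angle as usize) & (STEPS - 1); // wrap to one turn (power-of-two mask)
    let (idx, neg) = match a / QUARTER {
        0 => (a, false),             // rising quarter:   sin(a)
        1 => (STEPS / 2 - a, false), // falling quarter:  sin(pi - a)
        2 => (a - STEPS / 2, true),  // third quarter:   -sin(a - pi)
        _ => (STEPS - a, true),      // fourth quarter:  -sin(2pi - a)
    };
    let v = table[idx];
    if neg { -v } else { v }
}

/// cos is just sin shifted by a quarter turn.
fn cos_lut(table: &[i16; QUARTER + 1], angle: u32) -> i16 {
    sin_lut(table, angle.wrapping_add(QUARTER as u32))
}
```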
<re_irc> <@dngrs:matrix.org> Right, symmetry
<Lumpio-> If you care more about memory efficiency than speed there's iterative solutions too. But depends on the task at hand of course.
<re_irc> <@dngrs:matrix.org> I even made use of it before and then forgot about it again 🙄 anyway the 3-vars-into-1 thing was also me hoping for a speedup from fewer lookups
<re_irc> <@dngrs:matrix.org> It's definitely speed bound ATM. <1 fps without fpu, 30 or so with (on an F411)
<Lumpio-> How many ops per frame is it
<Lumpio-> Approximately
<re_irc> <@dngrs:matrix.org> 5*96*64 of the nested sin cos thing if I'm not completely mistaken
<Lumpio-> Well that's a lot, what are you doing even
<re_irc> <@dngrs:matrix.org> this: https://www.shadertoy.com/view/XtlBWN
<re_irc> <@dngrs:matrix.org> line 22 is the hot spot
<re_irc> <@dngrs:matrix.org> and because of the loop in line 29 it gets executed 5 times (`detail = 5.0`)
<Lumpio-> Whoa, trippy
<re_irc> <@dngrs:matrix.org> right? I love it
<re_irc> <@dngrs:matrix.org> it's essentially penrose tiling
<Lumpio-> Now it makes more sense
<Lumpio-> The expression that is
<re_irc> <@dngrs:matrix.org> I also like this one a lot but it's too many pixels for a small mcu display: https://www.shadertoy.com/view/lllfW4
<re_irc> <@dngrs:matrix.org> have to cheat a little with the vertical seam in the center lower half because there is no true repetition ever
<Lumpio-> Hmm I'm now kind of curious to implement it myself :D Just for fun
<re_irc> <@dngrs:matrix.org> I can send you the rust version if interested (it has a few arithmetic errors ATM, probably because GLSL wraps/saturates differently)
<re_irc> <@dngrs:matrix.org> https://dpaste.org/foin
<re_irc> <@dngrs:matrix.org> `foin`, heh.
<Lumpio-> No I think I'm going to make an optimized version 8)
<re_irc> <@dngrs:matrix.org> irc_libera_kehvo:psion.agg.io: I've never heard about that approach, it does sound intriguing
<Lumpio-> welp finishing this is a bit annoying but
<Lumpio-> dngrs: Did you notice that sth and cth can be pre-calculated?
<Lumpio-> They're just sin(i * PI / detail) where i is a number from 1 to detail
<Lumpio-> You can just precalculate those into arrays of length of 5.
<Lumpio-> Then you only need one cos() per pixel
<Lumpio-> Or well, one cos() per detail level
<Lumpio-> But still it's 3x less!
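A sketch of that hoisting, with illustrative names (not the actual shader port; on-target the `sin`/`cos` would come from micromath or libm rather than std):

```rust
use core::f32::consts::PI;

const DETAIL: usize = 5;

/// sth/cth depend only on the detail index, so compute them once up front
/// instead of once per pixel per iteration.
fn precompute() -> ([f32; DETAIL], [f32; DETAIL]) {
    let mut sth = [0.0f32; DETAIL];
    let mut cth = [0.0f32; DETAIL];
    for i in 0..DETAIL {
        let th = (i + 1) as f32 * PI / DETAIL as f32;
        sth[i] = th.sin();
        cth[i] = th.cos();
    }
    (sth, cth)
}

/// Only one cos() per pixel per detail level remains in the hot loop.
fn shade_pixel(x: f32, y: f32, t: f32, sth: &[f32; DETAIL], cth: &[f32; DETAIL]) -> f32 {
    let mut acc = 0.0f32;
    for i in 0..DETAIL {
        acc += (x * cth[i] + y * sth[i] + t).cos();
    }
    acc / DETAIL as f32
}
```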
<re_irc> <@dngrs:matrix.org> irc_libera_lumpio-:psion.agg.io: I had not. nice one!
<re_irc> <@dngrs:matrix.org> that's what you get when writing shaders ... every pixel is fresh and there is no such thing as global state
<re_irc> <@dngrs:matrix.org> strictly speaking detail can be a float, you get interesting results with non integer values, but 5.0 is really where it's at for me personally
<Lumpio-> It's still an integer number of steps
<Lumpio-> Just with a scaling factor
<re_irc> <@dngrs:matrix.org> true
<re_irc> <@jamwaffles:matrix.org> dngrs:matrix.org: Nice! 1000x is an impressive speed boost lol
<re_irc> <@jamwaffles:matrix.org> dngrs:matrix.org: Take a look at the TGA format if you do go down that path - it has a really simple RLE impl with maybe some inspiration in `tinytga`.
<re_irc> <@dngrs:matrix.org> good point, why reinvent the wheel!
<re_irc> <@dngrs:matrix.org> jamwaffles:matrix.org: I might have been off by a factor of 10 but it definitely -is- a lot faster. Still trying to figure out why setting a single pixel is so abysmally slow in the first place though (after all, not every draw is composed of big rects). An adventure for another day…
<re_irc> <@jamwaffles:matrix.org> dngrs:matrix.org: We used [`micromath`](https://crates.io/crates/micromath) in embedded-graphics - seems to give a good balance between accuracy and perf.
<re_irc> <@esden:matrix.org> disasm-ewg:matrix.org: It is currently a good enough stopgap. But it would be even better if we could have an SVD-like YAML format to write definitions in, which could then be used to generate SVD files or even CSR modules for use in Verilog projects. That would bring closer parity between LiteX projects and plain Verilog projects (not everyone wants to buy into LiteX). LiteX could then also generate the YAML...
<re_irc> ... files. A long-term goal could then be for svd2rust to read the YAML files directly, removing the SVD indirection. We could definitely reuse a lot of the internal svdtools plumbing for that, but I am not sure when/if I'll get around to it. For now I think the most reasonable approach is to make it easier to generate SVD files from scratch using svdtools. That's not as much of a yak stack to deal with. :)
<re_irc> <@dngrs:matrix.org> jamwaffles:matrix.org: oh, that does have integer sin? Should have looked closer, as I'm using it already :o
<re_irc> <@jamwaffles:matrix.org> Oh, nah just `f32` unfortunately
<re_irc> <@jamwaffles:matrix.org> But I think it supports the `fixed` crate which might give even more perf
<re_irc> <@jamwaffles:matrix.org> (we use that too)
<re_irc> <@burrbull:matrix.org> esden:matrix.org: svd-rs (part of svd-parser) already have serde support
<re_irc> <@dirbaio:matrix.org> esden:matrix.org: I ran into annoying SVD limitations for embassy-stm32, and ended up creating a yaml format that IMO is way more readable. [this](https://github.com/embassy-rs/stm32-data/blob/main/data/registers/usart_v2.yaml) is a yaml for a peripheral. [this](https://github.com/embassy-rs/stm32-data/blob/main/data/chips/STM32F030R8.yaml) is a yaml for a chip (referencing the peripheral yamls) and...
<re_irc> ... [chiptool](https://github.com/embassy-rs/chiptool/) can convert from SVD to YAML, and generate Rust PACs from SVD or YAML
<re_irc> <@dirbaio:matrix.org> maybe that's more in line with what you had in mind
<re_irc> <@esden:matrix.org> burrbull:matrix.org: Great to hear! :)
<re_irc> <@esden:matrix.org> dirbaio:matrix.org: Neat! I will take a look!
<re_irc> <@dirbaio:matrix.org> it could convert from yaml to svd too, but it'd be lossy
<re_irc> <@disasm-ewg:matrix.org> esden:matrix.org: Dunno, I found SVD format really convenient for manual editing, but I used an editor that supports XML and also loaded the SVD schema file.
<re_irc> <@esden:matrix.org> dirbaio:matrix.org: That is the point... I think for interoperability it has to be able to generate SVD too, and just warn that it is losing info.
<re_irc> <@dirbaio:matrix.org> not *very* lossy though
<re_irc> <@dirbaio:matrix.org> most of my gripes were with derivedFrom which is hooooorrible
<re_irc> <@esden:matrix.org> disasm-ewg:matrix.org: I am not hand editing XML ...
<re_irc> <@esden:matrix.org> disasm-ewg:matrix.org: curious what editor you used, I tried like 10 of them, they all were utter garbage.
<re_irc> <@disasm-ewg:matrix.org> esden:matrix.org: Intellij IDEA Community
<re_irc> <@dirbaio:matrix.org> but also support for "cursed irregular arrays" 💩 https://github.com/embassy-rs/stm32-data/blob/44a04ef8bd5a8a44aa3cf62063f53d6034f56800/data/registers/dma_v2.yaml#L173-L178
<re_irc> <@esden:matrix.org> disasm-ewg:matrix.org: ahh ok... Intellij loves to completely take all my computer resources and run with them... that is why I am a bit biased against that stuff.
<re_irc> <@esden:matrix.org> But to be fair, when it is not completely bogging down my machine the tools have lots of good features.
<re_irc> <@esden:matrix.org> I just wish it was not so inefficient :(
<re_irc> <@disasm-ewg:matrix.org> Well, Firefox running on my machine consumes much more :D
<re_irc> <@firefrommoonlight:matrix.org> Yes..Jetbrains is unmatched for functionality and introspection/refactoring, but is a pig
<re_irc> <@esden:matrix.org> in any case.. I found writing yaml to be acceptable... originally I was just thinking of writing an SVD to yaml / yaml to SVD ... 1:1 converter... that would have been good enough...
<re_irc> <@esden:matrix.org> dirbaio:matrix.org: heh... fun stuff... :)
<re_irc> <@dirbaio:matrix.org> some ST engineer was drunk that night
<re_irc> <@esden:matrix.org> lol... I sometimes feel they have unlimited booze on tap ...
<re_irc> <@therealprof:matrix.org> esden:matrix.org: Sounds like a great place to work then. Welcome, BTW. 😉
<re_irc> <@tk:tastalk.co> YMMV but I recently started using CLion on my laptop with 16 GB RAM because in the largest workspace I use, its rust plugin manages to stay within the 3 GB that I allocated it (up from 2 GB), whereas RA will consume 5+GB without bound and push me into swap
<re_irc> <@tk:tastalk.co> Both are good, each just has interesting advantages/disadvantages