<diondokter[m]>
Ah, I do get the warning when actually using it in FFI
<ivmarkov[m]>
<ivmarkov[m]> "Comes from the RfL project..." <- Nightly Rust is not necessary anymore. The `no_std` portion of `pinned-init` 0.0.8 (just released) runs on stable.
jannic[m] has quit [Quit: Idle timeout reached: 172800s]
loki_val has quit []
crabbedhaloablut has joined #rust-embedded
dirbaio[m] has quit [Quit: Idle timeout reached: 172800s]
M9names[m] has quit [Quit: Idle timeout reached: 172800s]
AlexKoz[m] has joined #rust-embedded
<AlexKoz[m]>
Hi 👋🏻
<AlexKoz[m]>
I wrote a proposal for cargo. I am free right now and can make some patches for this. But before that, I need the opinion of the members of the working group and some advice. That would be great, so I am drawing your attention to the [issue](https://github.com/rust-lang/cargo/issues/14208).
<JamesMunns[m]>
Alex Koz. would you like me to add this to meeting agenda this week? If you'll be available tomorrow at 8pm CEST (about 24 + 5 hours from now), it might be good to present the issue and get any quick feedback!
<diondokter[m]>
Though this only works if your chip setup matches their memory map
<diondokter[m]>
Right, so what was going on was that I used memory address 0x3000000 for my RAM. This is aliased to 0x10000000 in the chip.
<diondokter[m]>
For the first one probe-rs doesn't have a memory region defined, but it does for the second one (even though they are the same)
<JamesMunns[m]>
I know this is asking you for more work, but I would super read a blog post about "the practicalities of developing for dual core", especially for heterogeneous cores (e.g. not RP2040)
<JamesMunns[m]>
like, what tooling is rough today, what libraries don't handle this well
<JamesMunns[m]>
even if you don't have solutions, I'd read the hell out of some condensed notes of all the branches you hit falling out of the tree :D
<diondokter[m]>
Good idea! Not sure if I have time for it, but I'll put it on the tracker :)
<diondokter[m]>
So far, the biggest things are:
<diondokter[m]>
- Rough edges in probe-rs
<diondokter[m]>
- What is the best way to make sure both cores agree a thing is in the same location in RAM?
birdistheword99[ has joined #rust-embedded
<birdistheword99[>
<JamesMunns[m]> "I know this is asking you for..." <- Seconded, this would be a great read, I am already a big fan of the tweede golf blog as there are some great rust articles on there!
<diondokter[m]>
birdistheword99[: Ah thanks!
<diondokter[m]>
There's gonna be a huge debugging story there soon haha
<JamesMunns[m]>
I'd love to see more "field notes" blog posts in general, I think they help a lot figure out what actually needs to be fixed and improved!
<diondokter[m]>
It's hard to do. I remember you keeping lab notes. But it's hard to strike a balance between quick and short, and giving enough background
<JamesMunns[m]>
Yep, for sure!
JomerDev[m] has joined #rust-embedded
<JomerDev[m]>
🤔 docs.rs seems to have gotten an update, it's now possible to see the feature flags and which additional features get enabled
<embassy-learner[>
JamesMunns[m]: LOL Thanks James, better I don't explain my error!! :)
<JamesMunns[m]>
:)
<cr1901>
JamesMunns[m]: What are my options for creating serialized postcard data where I don't know the length of the vec a priori? Is it theoretically possible to add a "this is a non-var-int" type that postcard goes back and rewrites with the correct size after serialization of all elements is done?
<cr1901>
(I lied; it's not actually a vec, it's "a stream of 1 million+ entries", and I don't want to hold them all in a vec at once)
<JamesMunns[m]>
is that the only thing you're serializing?
<JamesMunns[m]>
the short answer is "you can't do that" in postcard
<cr1901>
yes, only thing I'm serializing.
<JamesMunns[m]>
the slightly longer answer though is, you could serialize it as N structs; postcard has things like `take_from_bytes` which gives you the remaining bytes, and you could do something like (sorry if this doesn't look good on IRC):
<cr1901>
I am alright with the solution you're about to propose.
<cr1901>
(I remember us discussing prepending a version marker in the same way)
<JamesMunns[m]>
this won't work for, like, a field of a struct; there's no way to express it in serde+postcard
<JamesMunns[m]>
but if it's a single blob/stream where this is the only "kind" of item, you could make it work, by throwing away the "Vec" part, and just thinking of it as turning a stream of bytes into a stream of structs, and vice versa
<cr1901>
Yes, only kind of item
<cr1901>
streaming serialization/deserialization is a pain point it seems. Everyone really wants their data to fit in memory
<JamesMunns[m]>
yeah... I have more thoughts on it, but postcard was definitely written assuming that
<cr1901>
Obligatory "sorry if all it ever seems I do is bitch about postcard" :)
<JamesMunns[m]>
but tbh, if you don't have all the data at one time, then serde doesn't make as much sense, you're not JUST deserializing, you're also mashing in "framing" and "transfer" steps
<JamesMunns[m]>
You're exercising it, which is always fun to chat about!
<JamesMunns[m]>
but yeah, postcard works on frames of bytes. But if you have a stream of bytes, then you need framing to tell where one message ends and the next starts
<JamesMunns[m]>
and if you want a collection of frames, then you need to decide when one chunk ends, and the next starts
<cr1901>
That is fair, I want something between "serialization" and "an SQL database". Something seek-optimized but still not an SQL database :P.
<cr1901>
(Mainly b/c an SQLite database feels extreme for a seek-optimized array lmao)
<JamesMunns[m]>
yeah, seeking in postcard is a little harder, for sure.
<JamesMunns[m]>
using something like cobs for framing could make seeking easier, but if elements are variably sized, seeking also means fully deserializing just to stride the memory
<JamesMunns[m]>
JSON sometimes uses "line separation" to do this for framing, which works if your json doesn't contain any unescaped newlines
<JamesMunns[m]>
(this is very common for log streaming in JSON format)
<JamesMunns[m]>
the OTHER OTHER option, if you are willing to hand roll things, is the fact that postcard accepts non-canonical usize lengths
<cr1901>
or the usize is a separate item in non-canonical form?
<JamesMunns[m]>
not sure what you mean "deserialize iter"
<JamesMunns[m]>
but you could deserialize as a normal `Vec<YourStruct>`
<JamesMunns[m]>
the non-canonical thing is if you want "streaming serialization, non-streaming deserialization"
<cr1901>
Oooooh
<JamesMunns[m]>
you would have to "manually" do the serialization, e.g. make a vec, add 10 empty bytes, then for each struct extend the vec with the serialized bytes; at the end, write the non-canonical 10-byte usize into the placeholder, then send the whole vec to be deserialized later
<JamesMunns[m]>
then on the deserialize side, you could deserialize a normal Vec
<cr1901>
Hmmm
<cr1901>
Yea, maybe I should combine this with some sort of framing crate. E.g. if I want the 900000th entry, I don't want to parse 900000 entries beforehand lol
<cr1901>
I could deserialize the usize in the vec, and then skip ahead by that many entries times the serialized size
<JamesMunns[m]>
note that serialized size is PROBABLY not a fixed size!
<cr1901>
Shit.
<JamesMunns[m]>
yeah, downside of postcard
<cr1901>
You are absolutely correct. Good thing I'm talking out loud
<JamesMunns[m]>
:)
<JamesMunns[m]>
well I'm going on a walk, so you're on your own for now :D
* cr1901 nods
<cr1901>
have fun
<cr1901>
Maybe I do want SQLite w/ postcard. I'll think it over
<JamesMunns[m]>
yep, postcard would work well as a single entry binary blob!
geky[m] has joined #rust-embedded
<geky[m]>
cr1901: Could you add an array of offsets for each postcard entry? Then seek is cheap at the cost of ~4 bytes per struct
<JamesMunns[m]>
Yep, or make a linked list by prefixing each struct with its len
<geky[m]>
I guess that depends what type of seek you want, O(n) vs O(1) and all that jazz
<cr1901>
I don't think I can do the array of offsets approach because I don't know how much space I need for the array of offsets a priori. But the linked list approach prob works.
<cr1901>
Well, I guess what one could do is store the array of offsets at the end of the file, and the very last 10-byte usize in the file tells how much to seek from the end to find the array of offsets
<geky[m]>
You could also store the offset array backwards at the end of the file. This would work with amortized doubling as well though it may be a bit tricky.
<geky[m]>
It may be easier to just store two files, but I guess that depends on what other systems the files need to pass through
<cr1901>
Good thing I don't need to store this on a tape drive :P
<geky[m]>
I mean you might as well use a linked-list then, my understanding is reading/seeking on tape drives take roughly the same amount of time :P
<geky[m]>
If you want to get fancy you could upgrade your linked-list to a CTZ skip-list for O(log n) lookups and ~2x4 bytes per struct, but I don't know if it would be worth the complexity tradeoff
<geky[m]>
But then it would be strictly append-only
<geky[m]>
Oh, and if your filesystem is FAT maybe don't bother with any of this because FAT uses linked-lists anyways
<geky[m]>
Though I guess that's at the cluster level, so maybe? 🤔