ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
adamgreig[m] has quit [Quit: Idle timeout reached: 172800s]
korken89[m] has joined #rust-embedded
<korken89[m]> Hey diondokter I'm giving `sequential_storage` a try but am hitting a snag you might have worked around before. The STM32L4 I'm testing on does not allow for the `MultiwriteNorFlash` trait bound.... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/qwgFeBnKhZcoqUghuqMJtiTk>)
<korken89[m]> I think this issue will apply for any MCU with ECC on flash as the ECC part is immutable after the first write.
<diondokter[m]> <korken89[m]> "Hey diondokter I'm giving `..." <- > <@korken89:matrix.org> Hey diondokter I'm giving `sequential_storage` a try but am hitting a snag you might have worked around before. The STM32L4 I'm testing on does not allow for the `MultiwriteNorFlash` trait bound.... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/oViUqLUZTCvwiVugtndzrNKg>)
<korken89[m]> Alright, so it comes down to being able to clear an entire word, I'll try to find where this is used within the codebase and see if it can be relaxed.
<diondokter[m]> korken89[m]: ItemHeader erase is what you should be looking for
<korken89[m]> Thanks! 🚀
<korken89[m]> Alright, so it's the CRC field that needs the padding it seems so it can be written to 0.
<diondokter[m]> Yep, data crc (not the length crc)
<korken89[m]> Ah right, and then one would also need to split the current write so it does not touch already written data on erase_data. That seems like it should be it.
<diondokter[m]> Yup
<korken89[m]> 🚀
<diondokter[m]> Is all very doable. Biggest question is how do you turn it on?
<diondokter[m]> Feature flag? Well now your other flashes also have extra unneeded padding...
<diondokter[m]> So idk, there's no real other way I think...
<diondokter[m]> Well... Or there needs to be a new trait for the flash
<korken89[m]> I was thinking about a trait that is auto-implemented for MultiwriteNorFlash, or one that could be implemented for flash types that are more restrictive than MultiwriteNorFlash but still uphold this "word-clear" requirement.
<diondokter[m]> Or we could propose a new trait to the embedded-storage crate
<korken89[m]> That too, maybe it's not too specific a trait?
<korken89[m]> WordclearNorFlash
<diondokter[m]> Yep
<korken89[m]> Basically
<diondokter[m]> Super trait of WriteNorFlash and sub trait of MultiWriteNorFlash
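A rough sketch of the trait shape being floated here (the trait name is the one from this discussion; the actual sequential-storage PR may well differ):

```rust
use embedded_storage::nor_flash::{MultiwriteNorFlash, NorFlash};

/// NOR flash that allows overwriting an already-written word with all zeros,
/// even if arbitrary rewrites are not allowed (e.g. STM32L4 flash with ECC,
/// where the ECC bits are immutable after the first write).
pub trait WordClearNorFlash: NorFlash {}

/// Anything that supports arbitrary multi-writes trivially supports word
/// clears. A blanket impl like this blocks separate manual impls because of
/// trait coherence, so a real design might require explicit per-driver impls.
impl<T: MultiwriteNorFlash> WordClearNorFlash for T {}
```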
<korken89[m]> Thank you for the help!
<diondokter[m]> Would be a breaking change though
<korken89[m]> Indeed
<korken89[m]> It also adds alignment requirements on the write operation, in my case you must make sure the data CRC buffer is aligned in flash to a 64 bit boundary
<korken89[m]> So worst case is 8 bytes of padding with only 4 data bytes being used.
<korken89[m]> Better than not working though :D
<diondokter[m]> The ItemHeader is already always word-aligned
<korken89[m]> Awesome!
<diondokter[m]> It needs to, to support non-multiwrite flashes
<korken89[m]> Using NorFlash::WRITE_SIZE?
<diondokter[m]> It's got its own extension trait to define the word size
<diondokter[m]> Basically min(Write size, Read size)
<diondokter[m]> Wait no, max
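A sketch of the word-size rule diondokter describes, not the actual sequential-storage extension trait:

```rust
use embedded_storage::nor_flash::NorFlash;

// The effective "word" used for header alignment is the larger of the flash's
// read and write granularities.
const fn word_size<S: NorFlash>() -> usize {
    if S::READ_SIZE > S::WRITE_SIZE {
        S::READ_SIZE
    } else {
        S::WRITE_SIZE
    }
}
```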
<korken89[m]> 🚀
<korken89[m]> diondokter: Here is how I envisioned it https://github.com/tweedegolf/sequential-storage/pull/63 :)
<diondokter[m]> korken89[m]: Thanks! I'll take a look
<korken89[m]> ❤️
<diondokter[m]> Dang, why is default associated const stable, but default associated type not 😭
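For reference, the asymmetry being lamented:

```rust
trait Flash {
    const WORD_SIZE: usize = 4; // default associated const: fine on stable Rust
    // type Word = u32;         // default associated type: still needs
    //                          // #![feature(associated_type_defaults)]
}
```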
<zeenix[m]> James Munns: Hi. When implementing Schema for a struct type, is the `NamedType::name` value supposed to be the name of the field or of the type? If the latter, does it need to be globally or locally unique?
<JamesMunns[m]> NamedType is for a given type, not sure what you mean wrt uniqueness, it doesn't consider the module path AFAIK
<zeenix[m]> i'm defining the fields of the struct
<JamesMunns[m]> that would be a NamedValue, not a NamedType
<JamesMunns[m]> you're making a tuple instead of a struct
<zeenix[m]> is there an example somewhere of Schema implementation for a struct?
<JamesMunns[m]> you can derive the schema for any type, and print it out on the desktop to see what it gives you
<JamesMunns[m]> NamedType implements Debug
<JamesMunns[m]> You can do something like:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/iQJKNoVzlFmMAloTvOAoqlRk>)
<zeenix[m]> James Munns: Not for external types. This is a NewType struct wrapper for `time::OffsetDateTime`
<JamesMunns[m]> im saying, if you want to figure out how it works, you can just experiment to see what it should be
<JamesMunns[m]> like, copy and paste the type into a temp project, add derive, and print it out
<zeenix[m]> ah ok
<JamesMunns[m]> there are descriptions of what the various pieces are tho, btw
<JamesMunns[m]> an enum or struct becomes a NamedType, a struct's fields become NamedValues, and an enum's variants become NamedVariants
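A sketch of the "derive it and print it" experiment suggested above; the import path and derive feature depend on the postcard/postcard-rpc versions in use, and the struct is just an example type:

```rust
use postcard::experimental::schema::Schema; // path may differ per version

#[derive(Schema)]
struct Telemetry {
    temperature_c: f32,
    uptime_s: u32,
}

fn main() {
    // The derive produces an associated `SCHEMA: &'static NamedType`;
    // NamedType implements Debug, so this prints the NamedType/NamedValue tree.
    println!("{:#?}", Telemetry::SCHEMA);
}
```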
<zeenix[m]> James Munns: btw, is there a way to get more information when things go wrong? I'm currently fighting two issues: if I send this DateTime type to an endpoint, the firmware side simply ignores it (because of the wrong schema, as we just figured out). Then I get an error on the firmware side when it publishes an event, but the error is `()` so it doesn't give me any clue what went wrong
<JamesMunns[m]> not offhand, open to having defmt or other logs in error cases.
<JamesMunns[m]> IIRC if you send a request with a wrong schema it should send back an error
<zeenix[m]> interesting. Maybe I'm on older p-rpc
<JamesMunns[m]> are you using the define_dispatch! macro on the target side?
<zeenix[m]> yes
<JamesMunns[m]> > Then I've an error on the firmware side when it publishes an event but error is () so it doesn't give me any clue what went wrong
<JamesMunns[m]> AFAIK the only possible errors on publishing are that you don't have a large enough buffer to serialize into, or that the USB connection was dropped
<JamesMunns[m]> These should definitely have better (non-`()`) errors 😅
<zeenix[m]> JamesMunns[m]: (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/CVWmWWWLQpwMyYgGvooqpeJY>)
<JamesMunns[m]> The other thing to check is that your PC and MCU are both using the same error type, they need to match
<zeenix[m]> The event in question is quite large
<JamesMunns[m]> If you are using define_dispatch, IIRC you must be using https://docs.rs/postcard-rpc/latest/postcard_rpc/standard_icd/enum.WireError.html
<zeenix[m]> I am, yes
<JamesMunns[m]> Then errors should definitely come back. You could try adding logging to the RX worker to see why it isn't being sent back to the send_resp call
<zeenix[m]> oh, the buffers are very small indeed 🤦‍♂️
<JamesMunns[m]> send resp waits for EITHER the expected response, OR an error response with the same sequence number
<zeenix[m]> cool, will do that
<zeenix[m]> Thanks so much!
<JamesMunns[m]> On the host side we basically look for matches of the tuple `(sequence number, message schema)`, so when you send a request, we wait for:
<JamesMunns[m]> * `(seqno, Response Schema)`
<JamesMunns[m]> * `(seqno, Error Schema)`
<JamesMunns[m]> and whichever returns first is what gets responded to the caller. you should see this as either an Ok(resp) or Err(error) as the return value of send_resp
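An approximate host-side view of this (the method name matches the discussion, but the exact signature varies by postcard-rpc version; `client`, `PingEndpoint`, and `request` are placeholders):

```rust
// send_resp resolves to Ok or Err depending on which (seqno, schema) pair
// arrives from the device first.
match client.send_resp::<PingEndpoint>(&request).await {
    Ok(resp) => println!("response: {resp:?}"),    // (seqno, Response schema)
    Err(err) => eprintln!("error reply: {err:?}"), // (seqno, Error schema), e.g. a WireError
}
```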
<JamesMunns[m]> if you are getting NOTHING back (e.g. it hangs forever), then something is wrong, either in p-rpc, or in your setup.
<JamesMunns[m]> docs/fixes/logs all welcome based on whatever you find :)
<zeenix[m]> yeah, there is definitely something very wrong. I'm glad to hear that it's not intentional that the request is just dropped
<zeenix[m]> James Munns: also, IMO the buffer sizes shouldn't matter other than the speed of the transfer
<JamesMunns[m]> serde has no way to "pause" serialization
<JamesMunns[m]> if you want to serialize 256 bytes and only have a 128 byte buffer, there's no async way to pause, flush, and resume
<zeenix[m]> yeah serde isn't designed for streaming
<zeenix[m]> but the usual way is to receive the full message in a buffer and then deserialize it all at once
<JamesMunns[m]> Yeah, so in postcard rpc your buffers must be large enough to hold the largest serialized incoming/outgoing message.
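One way to derive that buffer size from the type instead of guessing, assuming postcard's experimental MaxSize support is available (the event type is invented for the example):

```rust
use postcard::experimental::max_size::MaxSize;
use serde::Serialize;

#[derive(Serialize, MaxSize)]
struct SensorEvent {
    samples: [u16; 64],
    sequence: u32,
}

fn encode(event: &SensorEvent, out: &mut [u8]) -> Result<usize, postcard::Error> {
    // POSTCARD_MAX_SIZE is the worst-case encoded length, so a buffer of that
    // size always fits the largest possible SensorEvent.
    debug_assert!(out.len() >= SensorEvent::POSTCARD_MAX_SIZE);
    Ok(postcard::to_slice(event, out)?.len())
}
```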
<diondokter[m]> Hey all, so `embedded-storage` NorFlash is a bit limited in what it can do and naïvely extending it won't work either.
<diondokter[m]> Here's a proposal of what we can do with it: https://github.com/rust-embedded-community/embedded-storage/issues/58
<diondokter[m]> I can spell it out better/more clearly, but first I want to know what people think of this approach and if they see the same problems
<diondokter[m]> Maybe I should bring this up in the weekly meeting. But I'm usually busy at that time...
ryan-summers[m] has quit [Quit: Idle timeout reached: 172800s]
pbsds3 has quit [Quit: The Lounge - https://thelounge.chat]
pbsds3 has joined #rust-embedded
<JamesMunns[m]> This is funny timing, but I was idly thinking about writing a simple filesystem, and did a brain dump this morning of some general facts around SPI flashes and how chips use them, I just published it in case it's useful for the current discussion:
<JamesMunns[m]> diondokter
<thejpster[m]> I thought of https://en.wikipedia.org/wiki/JFFS2, which links to LogFS, UBIFS, and YAFFS.
<diondokter[m]> JamesMunns[m]: (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/uARdFySTVXuhFHzSvEmAcQZM>)
<thejpster[m]> I don't think Linux cares if the flash is behind SPI or some memory bus - I think the Memory Technology Devices (MTD) layer abstracts it away.
<diondokter[m]> James Munns: You mention 'power-off' safe. I'll say that in practice when I implemented this for s-s, I found this to be very similar to async cancel safety
<JamesMunns[m]> yep!
<JamesMunns[m]> I think power-off write also might have cases of partial write, even down to the byte level, but it's probably not hugely different from being cancellation safe in general!
<JamesMunns[m]> I remember we talked about storing a "dirty" bit to detect if cancellation happened and you might have to recover, or treat the flash like you would on a clean power-on
<diondokter[m]> JamesMunns[m]: Yeah, ultimately I went a different way and use CRCs to detect these kinds of things (which also covers bit-flips)
<JamesMunns[m]> I was wondering how reasonable it would be to just not care about scattering various metadata and contents across the whole flash, to avoid having to keep a coherent tree of inodes or whatever. But it's less reasonable if you have to spend 30s scanning the whole flash at boot up time to re-assemble a coherent model of the FS.
<JamesMunns[m]> that's why I spent so much time looking at the speed/total access times
<diondokter[m]> Your post is very reasonable and covers things I've been thinking of too.... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/GRrOVDsXwcGGiLZjaLTHvJnN>)
<diondokter[m]> You're gonna need some metadata in any case. Page sizes, word alignment and appending to existing files are all PITAs
<JamesMunns[m]> Yeah, I was thinking about some kind of "stack machine" log, describing all the files, and maybe all the insertions/deletions in a fixed size instruction set, that you append-only to
<JamesMunns[m]> so like one "entry" would be say 16 or 32 bytes, plus some CRC or maybe even FEC data. You'd be able to run over the log in any order, and build internal state based on that
<JamesMunns[m]> (note: this is all a vague idea I'm not actually sure is feasible without way too much time or working ram)
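A loose illustration of such a fixed-size entry (the layout is invented for the example, not a real format):

```rust
#[repr(C)]
struct LogEntry {
    file_id: u16,      // which file this entry belongs to
    opcode: u8,        // e.g. create-file, append-extent, delete-file
    _reserved: u8,
    payload: [u8; 24], // opcode-specific data (extent range, name fragment, ...)
    crc: u32,          // CRC over the 28 bytes above
}
// 32 bytes per entry; scanning the log and replaying valid entries rebuilds
// the in-RAM model of the filesystem after boot or power loss.
```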
<diondokter[m]> Haha yeah, the devil is in the details
<JamesMunns[m]> yep, this is why I started writing down all the facts like that, so I can start seeing if the next steps/layers are reasonable based on the details :D
<JamesMunns[m]> The other reasonable option IMO is instead of having one filesystem that can do many things, have a couple crates that can do specific things
<JamesMunns[m]> like, you already can combine ekv (for settings that change decently often) and sequential storage for stuff like appended logs
<JamesMunns[m]> basically "each partition can only be used in a certain way" :p
<diondokter[m]> Yeah, that's fair. A simple FS could be a file per page on only Multiwrite devices
<JamesMunns[m]> but yeah, I don't need a filesystem very often for the kinds of things that I do, the most common cases are:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/JKLaQxKdZfEDRUCsufjVnARV>)
<diondokter[m]> JamesMunns[m]: (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/FWwXezZDwPthskJHFYZCoQXM>)
<diondokter[m]> An example would be the LoRa frame counters
<jr-oss> korken89[m], If you are using an STM32L4 with ECC flash, you probably also want an NMI handler to deal with ECC double errors (FLASH_ECCR.ECCD). Flash content can be corrupted when there is e.g. a power failure while writing. It can be quite annoying to find out that your device does not boot because reading the configuration from flash fails.
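A very rough sketch of such an NMI handler for an STM32L4-class part (register address and bit position taken from the L4 reference manual, so verify them for the exact chip; the recovery policy is application-specific):

```rust
use cortex_m_rt::exception;

const FLASH_ECCR: *mut u32 = 0x4002_2018 as *mut u32; // FLASH base + 0x18
const ECCD: u32 = 1 << 31; // ECC double-error detected (write 1 to clear)

#[exception]
fn NonMaskableInt() {
    // SAFETY: volatile access to a memory-mapped register.
    let eccr = unsafe { core::ptr::read_volatile(FLASH_ECCR) };
    if eccr & ECCD != 0 {
        // Clear the flag so the NMI doesn't fire again immediately; the
        // application should then treat the affected flash word as corrupted
        // (e.g. fall back to defaults instead of failing to boot).
        unsafe { core::ptr::write_volatile(FLASH_ECCR, eccr | ECCD) };
    }
}
```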
andar1an[m] has joined #rust-embedded
<andar1an[m]> When it comes to embedded systems, is there a way to use PXE or the network to deploy system code? Sorry if the wording is ignorant
<andar1an[m]> Would it be similar to how one may init a linux kernel?
<diondokter[m]> I mean, you could write a network-capable bootloader
K900 has joined #rust-embedded
<K900> Depends on how embedded
<K900> For small MCUs, definitely not
<andar1an[m]> Like no-std embedded
<K900> Yeah no
<K900> You could make your own thing but it's probably a bad idea
<diondokter[m]> K900: All new things are probably bad ideas :P
<andar1an[m]> Will save for future tinkering then when I understand bootloaders better
<andar1an[m]> I love building new things haha
<andar1an[m]> I generally don't think they are bad ideas, at least for learning
<K900> diondokter[m]: I mean, there are legitimate reasons why this would be a bad idea
<K900> Like not being able to set up the device without a network is usually suboptimal
<andar1an[m]> I mean using network to set up device
<K900> And the really small embedded use cases generally don't have enough resources to have complex software running on them
<K900> Complex enough to warrant netbooting it
<andar1an[m]> Not for small embedded
<diondokter[m]> Well yeah, that's obvious. But it really depends on the usecase! If it's only for the first setup, or if the device is useless without network anyways
<andar1an[m]> But for servers that run a specific embedded system instead of a general OS
<K900> Any no_std embedded is small
<K900> Anything that qualifies as a server will generally have UEFI anyway
<K900> So you can load iPXE
<K900> Or whatever
<andar1an[m]> So std embedded was probably better wording. I just want to learn no std eventually too
<diondokter[m]> Hmmm, you can make 16MB pi pico devices. You can do a whole lot of things with that!
<andar1an[m]> Ya, I am doing UEFI work now. But for servers, a general OS is not necessarily needed for FaaS-type binary deployment
<K900> std and embedded don't really go together
<K900> And for servers you most definitely want UEFI
<K900> Because it's a standard that your vendor is known compliant with
<andar1an[m]> Ya, just trying to imagine an alternative
<K900> And it's not like you'll be able to replace it anyway, at least on most hardware
<diondokter[m]> K900: What? Then what is ESP doing?
<danielb[m]> some happy accident
<K900> And then you'll want a hypervisor on top because you don't want a single deployment to take up the entire machine
<K900> diondokter[m]: Very cursed things, mostly
<andar1an[m]> K900: But I do
<K900> Then you probably want to look into unikernels
<K900> Those are kind of what you're describing
<diondokter[m]> Obviously everything has its pros and cons. And it's hard to say with this little info what a good or a bad choice is
<andar1an[m]> That is what I am exploring with what I call baremetal orchestration in my head
<K900> See, "bare metal orchestration" is just what we had before The Cloud
<andar1an[m]> Yes, and I like it
<K900> And the reason The Cloud won is specifically because it can allocate as much resources to a workload as is necessary
<K900> Which is most of the time much less than an entire machine
<K900> So you can do a lot more useful work with the same amount of machines
<andar1an[m]> Yes, but the alternative is scaling many small machines
<andar1an[m]> With singular purpose
<K900> That's really what cloud is all about - packing as much useful work as possible into a fixed amount of resources
<K900> andar1an[m]: That was also actually tried
<andar1an[m]> Cloud is about sharing those hosts for many customers too
<K900> Most recently with Scaleway's ill-fated tiny ARM boxes
<andar1an[m]> In a self-host env, that is not necessarily the best choice when considering complexity
<K900> They basically ran a giant pile of SBC class ARM boards in a rack
<K900> And you could rent out an entire board
<K900> In fact that was the only thing you could rent out
<andar1an[m]> Ya, I am waiting to receive a bunch of rpi blades to do similar
<K900> Because they were not physically capable of virtualization
<andar1an[m]> But even with a larger machine, in a self-host env, a hypervisor is not necessarily a bonus
<andar1an[m]> One can vary workloads across hardware instead of having general purpose hardware
<andar1an[m]> I'm really intrigued by that
<andar1an[m]> My current goal is to understand the process with a general-purpose OS and hardware, and to understand how to migrate to embedded systems with specific hardware
<andar1an[m]> I'm optimistic about this with what I have learned so far, but there are still a lot of unknown unknowns
<JamesMunns[m]> It's always a challenge to compete against the cost differences between specialized hw and lots of cheap general purpose hardware :)
<andar1an[m]> Yes, but even cheap general-purpose hw can serve specific-purpose software, especially with nano- and microservices. I think there is growing diversity in hardware, especially with the growth of RISC-V and ESP
<andar1an[m]> E.g. network hardware these days can do a lot more than what is traditional practice
<K900> If you mean specialized hardware as in actual hardware acceleration for specific operations, that use case is generally served by running an accelerator off to the side of a conventional CPU
<K900> Not shifting the entire workload to a custom built chip
<K900> There are use cases that warrant fully custom chips, but those are extremely rare, and extremely expensive
<andar1an[m]> That's why i find fpgas cool
<K900> FPGAs are one of the things that would qualify as an accelerator here, yes
<JamesMunns[m]> also, usually when an accelerator proves to be useful, it tends to move inside of a mass produced CPU
<JamesMunns[m]> we used to have accelerators for audio, video, and networking. These days, only discrete video cards have really survived
<JamesMunns[m]> and even then, only for higher end ones
<andar1an[m]> Yes, but there are physical and sustainability benefits that can potentially be gained from FPGAs
<JamesMunns[m]> maybe :)
<JamesMunns[m]> I see FPGAs used most often when the volumes are low
<JamesMunns[m]> in most other cases, you'd just turn the design into an ASIC, which is cheaper and more efficient per-unit, but has a large upfront tooling cost
<JamesMunns[m]> like 3d printed vs injection molding
<andar1an[m]> Yes, but if one can modify the purpose of hardware, there is potentially less need to produce more or buy more
<JamesMunns[m]> ehhhhhhh.
<JamesMunns[m]> yes, in general
<JamesMunns[m]> no, in practice
<JamesMunns[m]> most of the time you want to do something meaningfully different, the supporting hardware must change too
<JamesMunns[m]> like different cabling, or power requirements, etc.
<K900> I think the last time I've seen an FPGA in a widely shipping consumer product was Nvidia's G-Sync Module(tm) display scaler
<JamesMunns[m]> or it would make a card so expensive it'd be cheaper to replace it every 2-3 years
<K900> And that was literally strong-armed onto vendors by a monopoly
<K900> And lasted for like a year
<JamesMunns[m]> (like if your network card was $2000 instead of $50)
<K900> Before a royalty free spec came along and every other scaler vendor implemented it
<K900> And those things were stupidly expensive too
<JamesMunns[m]> I don't want to rain on any research you want to do andar1an!
<K900> Like, you could have an off the shelf display scaler for $5
<K900> And an FPGA for Nvidia's stuff was $100+
<JamesMunns[m]> but there's also usually a cost and reliability reason that "super modular" systems tend not to be as popular as "lightly customizable" systems
<andar1an[m]> JamesMunns[m]: You're not.
ejpcmac[m] has quit [Quit: Idle timeout reached: 172800s]
<JamesMunns[m]> FPGAs are super cool, especially for low volume manufacturing and prototyping!
<JamesMunns[m]> If you aren't making >1-10k units, often an FPGA is a much more reasonable choice than your own ASIC or CPU
<andar1an[m]> In my mind, innovation comes with diversity. Fun to explore. With RISC-V, FPGAs, IPv6, and a desire for more decentralization, things are evolving
<JamesMunns[m]> but once you do get above 10-100k units, paying the upfront cost for an asic starts making sense
<andar1an[m]> JamesMunns[m]: Ya, that's why I am speaking in the context of self-hosting for a single org
<JamesMunns[m]> anyway, interested to see what you build!
<andar1an[m]> Me too. Baby steps haha
<andar1an[m]> At least the journey is fun haha
<andar1an[m]> Do fpgas fall into realm of rust embedded?
<diondokter[m]> Yes if you're running Rust code on them or use Rust to interact with them. On their own? Not really
<K900> FPGAs generally aren't programmed in "normal" programming languages
<K900> You describe the target configuration in some sort of hardware description language or HDL
<andar1an[m]> I have found some repos that allow you to translate from Rust
<K900> And then you feed to a tool, usually vendor provided, that outputs _something_
<K900> That you then load onto the FPGA
towynlin[m] has quit [Quit: Idle timeout reached: 172800s]
<andar1an[m]> Would be cool to be able to do that over the network, like PXE.
<K900> On most FPGAs you don't generally do that from the FPGA itself
<K900> But from some other device it's connected to
<K900> But you could definitely connect one to a small computer and provision it that way
<andar1an[m]> Ya, would imagine a small embedded system connected to fpga
<andar1an[m]> Arduino has a product that has an fpga on board
<andar1an[m]> Been wanting to tinker, but that is future me
<K900> Really depends on how small and how embedded you mean here but in most cases anything that's fast enough to drive an FPGA and have a network connection can probably run Linux
<andar1an[m]> Ya. Just trying to think in terms of complexity there
<andar1an[m]> Maybe complexity can be reduced. Dk
JomerDev[m] has joined #rust-embedded
<JomerDev[m]> I can recommend the pico-ice as an fpga board to tinker with, it has a rp2040 in combination with an ice40 fpga, which can be programmed with completely open toolchains
burrbull[m] has quit [Quit: Idle timeout reached: 172800s]
<zeenix[m]> James Munns: turns out, I had a very good reason to use tuple: https://docs.rs/time/latest/src/time/serde/mod.rs.html#298
<JamesMunns[m]> tbh: I think you would be better off defining your own wire type, like CustomOffsetDateTime, and impling From/Into in both directions
<JamesMunns[m]> but yeah, it's definitely possible to create a schema for a struct in that shape :)
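A sketch of that wire-type approach; the field breakdown is illustrative, not what `time`'s serde impl emits:

```rust
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;

#[derive(Serialize, Deserialize)] // plus the postcard Schema derive in the real project
pub struct WireDateTime {
    /// Seconds since the Unix epoch.
    pub unix_timestamp: i64,
    /// UTC offset in whole seconds.
    pub offset_seconds: i32,
}

impl From<OffsetDateTime> for WireDateTime {
    fn from(dt: OffsetDateTime) -> Self {
        Self {
            unix_timestamp: dt.unix_timestamp(),
            offset_seconds: dt.offset().whole_seconds(),
        }
    }
}

// The other direction can fail (out-of-range values), so TryFrom fits better.
impl TryFrom<WireDateTime> for OffsetDateTime {
    type Error = time::error::ComponentRange;

    fn try_from(w: WireDateTime) -> Result<Self, Self::Error> {
        let offset = time::UtcOffset::from_whole_seconds(w.offset_seconds)?;
        Ok(OffsetDateTime::from_unix_timestamp(w.unix_timestamp)?.to_offset(offset))
    }
}
```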
<vollbrecht[m]> <K900> "Very cursed things, mostly..." <- if you say using libc is a cursed thing to do, than you probably should also not use the standard library on for example a linux or macos machine as its not that different.
<vollbrecht[m]> s/do/use/
<vollbrecht[m]> * if you say using libc is a cursed thing to use, than you probably should also not use the standard library on for example a linux or macos machine as its not that different to what is happening in the esp case.
<zeenix[m]> does this seem correct if my type is just `struct DateTime(OffsetDateTime);`:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/uFIjlqVgjgnWWFggfVTADgzP>)
<zeenix[m]> I really should debug why prpc is just ignoring errors of deserialization of requests
<JamesMunns[m]> seems reasonable, not 100% sure
<zeenix[m]> still doesn't work though. 😢
<JamesMunns[m]> I mean the actual schema doesn't really matter, as long as both sides agree on the hash
<JamesMunns[m]> like, it SHOULD be the same and deterministic, but the schema doesn't influence the actual ser/deserialization
<JamesMunns[m]> if both sides agreed on [1,2,3,4,5,6,7,8], then it'll still work
<zeenix[m]> I thought the serialization checks it
<zeenix[m]> so i'm more baffled now
<JamesMunns[m]> nah, all the schema stuff is calculated at compile time, and at runtime is just a bag of eight bytes that are included when sending, and checked when receiving
<JamesMunns[m]> it's sort of like a deterministically generated UUID per type
Darius_ has joined #rust-embedded
limpkin_ has joined #rust-embedded
korken89[m] has quit [*.net *.split]
danielb[m] has quit [*.net *.split]
Darius has quit [*.net *.split]
jistr has quit [*.net *.split]
dne has quit [*.net *.split]
limpkin has quit [*.net *.split]
fooker has quit [*.net *.split]
Ekho has quit [*.net *.split]
Amanieu has quit [*.net *.split]
SanchayanMaity has quit [*.net *.split]
edm has quit [*.net *.split]
mightypork has quit [*.net *.split]
Darius_ is now known as Darius
danielb[m] has joined #rust-embedded
korken89[m] has joined #rust-embedded
dne has joined #rust-embedded
jistr has joined #rust-embedded
fooker has joined #rust-embedded
Amanieu has joined #rust-embedded
SanchayanMaity has joined #rust-embedded
mightypork has joined #rust-embedded
edm has joined #rust-embedded
Ekho- has joined #rust-embedded
<thejpster[m]> today I am committing linker crimes and getting it to do maths for me
<thejpster[m]> Did you know Rust won't let you access the address of a value during constant evaluation? It's so rude. I mean, you only have to look a little bit into the future. You can store the literal address, but you cannot convert it to a u32 and you can't offset it by more than the size of the thing you are pointing at.
<thejpster[m]> All I want is a struct in .rodata where one field contains the difference in bytes between the address of object A and the address of object B.
<dirbaio[m]> yeah that's not possible to do. quite annoying :)
<dirbaio[m]> if objects are in the same struct you can use offset_of, but that's probably not your case...
<thejpster[m]> PROVIDE(delta = __start - __end); works :)
<dirbaio[m]> ahh hehehe
<thejpster[m]> provided you can put the objects into two unique sections, you can create a linker symbol at the start of each section
<thejpster[m]> it's probably UB to ask for addr_of!(delta), because apparently everything in this area is UB.
<thejpster[m]> But I'm doing it anyway.
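The pattern sketched in Rust (symbol names are placeholders). The linker script computes the difference with `PROVIDE(delta = __start - __end);`, and the Rust side only ever takes the symbol's address:

```rust
extern "C" {
    static delta: u8; // dummy type; the symbol's *address* is the value
}

fn delta_bytes() -> usize {
    // Never read through this pointer; only take the address the linker assigned.
    unsafe { core::ptr::addr_of!(delta) as usize }
}
```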
<JamesMunns[m]> It's Sunday, I'm out of energy to complain anyway
<dirbaio[m]> might be ok as long as you always keep it as a raw pointer, and never read from it
<thejpster[m]> it definitely works in practice, on this day with this toolchain.
<dirbaio[m]> experimentally-defined behavior 👁️
<thejpster[m]> it's literally how we qualified a toolchain
<dirbaio[m]> poke it with a stick, see if it breaks
<danielb[m]> dirbaio[m]: "try it and be sad" is my experience today
explodingwaffle1 has quit [Quit: Idle timeout reached: 172800s]
GuineaWheek[m] has quit [Quit: Idle timeout reached: 172800s]
cr1901 has quit [Read error: Connection reset by peer]
cr1901 has joined #rust-embedded