_florent_ changed the topic of #litex to: LiteX FPGA SoC builder and Cores / Github : https://github.com/enjoy-digital, https://github.com/litex-hub / Logs: https://libera.irclog.whitequark.org/litex
tpb has quit [Remote host closed the connection]
tpb has joined #litex
cr1901 has quit [Read error: Connection reset by peer]
Degi_ has joined #litex
Degi has quit [Ping timeout: 260 seconds]
Degi_ is now known as Degi
cr1901 has joined #litex
indy has quit [Read error: Connection reset by peer]
indy has joined #litex
CarlosEDP has quit [*.net *.split]
CarlFK has quit [*.net *.split]
novenary has quit [*.net *.split]
zyp has quit [*.net *.split]
zyp has joined #litex
novenary has joined #litex
CarlosEDP has joined #litex
CarlFK has joined #litex
TMM_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM_ has joined #litex
_franck_ has quit [Ping timeout: 265 seconds]
_franck_ has joined #litex
<leons> _florent_: I've looked into the Packetizer/Depacketizer situation a bit and realized that my fixes cause some unexpected behavior when no data qualifier is used.
<leons> At the same time, with your revert-commit applied on top -- even when no last_be data qualifier is present -- my tests break because one edge case causes the Packetizer to break pretty badly: when the first bus word is also the last, it reaches a state it never gets out of and just produces the same packet in a loop
Coldberg has quit [Ping timeout: 244 seconds]
<leons> But honestly speaking, I can't even wrap my head around what a useful implementation for a Packetizer or Depacketizer without a data qualifier should look like.
<leons> Let's assume a 5 byte header and 6 bytes data. That'll generate a 32-bit data stream of HHHH HDDD DDDX
<leons> When we have 8 bytes of data it'll look like HHHH HDDD DDDD DXXX
<leons> To avoid dropping bytes, a depacketizer would recover the first packet's data as DDDD DDXX, all fine
<leons> The second packet's data would also fit into two bus words, but because the depacketizer doesn't know that, it would recover DDDD DDDD XXXX
<leons> So the last word does not contain any useful data.
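The layout leons walks through can be reproduced with a small plain-Python helper (hypothetical sketch, not LiteX code) that maps a header plus payload onto fixed-width bus words:

```python
def stream_layout(header_len, data_len, bus_bytes=4):
    """Map a packet (header + payload) onto fixed-width bus words.

    'H' marks a header byte, 'D' a data byte, and 'X' padding in the
    final, partially filled word.
    """
    stream = "H" * header_len + "D" * data_len
    stream += "X" * ((-len(stream)) % bus_bytes)  # pad to a whole word
    return [stream[i:i + bus_bytes] for i in range(0, len(stream), bus_bytes)]

# 5-byte header, 6 bytes of data -> HHHH HDDD DDDX
print(stream_layout(5, 6))
# 5-byte header, 8 bytes of data -> HHHH HDDD DDDD DXXX
print(stream_layout(5, 8))
```

Without a data qualifier such as last_be, the depacketizer cannot tell how many trailing bytes of the last word are valid, which is exactly the ambiguity described above.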
AndrewD_ has joined #litex
<AndrewD_> What is the recommended way to read the clock domain for a sync?
<AndrewD_> I want to detect if a ClockDomainsRenamer has been applied
<leons> _florent_: What I can propose to do is to integrate your "revert" commit for the last/ready logic into the proper last_be handling code. Then it'll continue to work with test_packet.py, and I'm happy too given that it'll work correctly with last_be. But it won't work with the non-last_be cases in test_packet2.py, because they test cases which trigger the infinite-loop problem that currently also exists upstream
<leons> So it's really not an issue with the tests but indeed broken 🙂 I'd just comment out the respective tests
AndrewD_ has quit [Quit: Client closed]
<_florent_> leons: Sorry I just saw your messages here
<leons> _florent_: I just saw your commit moving it to liteeth 😀
<_florent_> I in fact created a local copy of Packetizer/Depacketizer in LiteEth directly (which now also uses them) to ease development
<leons> Yeah, it broke my hacky fix PR but that's okay 🙂
<_florent_> This way your tests are still passing (and mine is also passing in LiteX for the other cores).
<leons> Thanks, I think that makes things easier. My point still stands for the bugs in the LiteX packetizer though :). My test infrastructure uncovered some bugs (even without the last_be signal), so it might be worth looking into those in the future.
<_florent_> Yes sure, I want to work on that in the next days and try to switch LitePCIe to Packetizer/Depacketizer
<leons> Thank you! Looking forward to seeing the changes required to the Packetizer/Depacketizer.
<_florent_> leons: I think we can agree that the current last_be data qualifier gives us headaches and that there is clearly something to be done :)
<leons> Absolutely. This is easily one of the most complex things I have worked on for some time now. But it's great to recover from the PTSD for now and have a "works for me" version in LiteEth 🙂
<_florent_> leons: While I'm looking at this, if you have time, would you make sure we have good tests or designs exercising last_be in LiteEth that we could use to switch to the simpler data qualifier?
<leons> Happy to collaborate on trying to come up with something that's less painful
<leons> Well, I think the rewritten test_stream.py and test_packet.py exercise all known edge cases of last_be (that's why they take so long), and the infrastructure is pretty flexible about switching it out for a different one, so I think we're covered there
<_florent_> leons: I'll also prepare some hardware to be able to test it. I'm not sure I remember your test setup, can you describe it to me again?
<leons> Oh, and if it helps, this is a screenshot of the failure case I've seen with upstream Packetizer/Depacketizer:
<_florent_> Thanks
<acathla> The doc about CSRStorage says: "Size of the CSR register in bits. Can be bigger than the CSR bus width." But it doesn't say what the size limit is. It seems to use uint64_t in csr.h when the CSR is less than 64 bits and stops generating code if it's >64
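For what it's worth, splitting a wide CSR across the bus is just a ceil division; a plain-Python sketch (hypothetical helper, not LiteX's actual code) of how many bus accesses a wide register needs:

```python
def csr_word_count(csr_size, bus_width=32):
    # Number of CSR bus accesses needed to transfer the full register.
    return (csr_size + bus_width - 1) // bus_width

print(csr_word_count(64))   # 64-bit CSR on a 32-bit bus -> 2 accesses
print(csr_word_count(120))  # 120-bit CSR -> 4 accesses
```

A register wider than 64 bits would still split into bus words fine, but its value no longer fits the uint64_t return type used by the generated C accessors, which matches the observed cutoff.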
<acathla> I would like to make a module that controls just 5 neopixels: only send a color value, no complex effects. I already have the logic for one or two pixels using CSRStorage. I would like to use some BRAM or something else a little more optimized than many CSRs.
<acathla> I started with this code: https://www.hacknology.de/projekt/2020/neopixelar/ but it seems strange to store address and values in CSRs to access a memory...
<tpb> Title: Neopixelar oder 'Wie geht FPGA?' · hacKNology (at www.hacknology.de)
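Whatever storage is chosen, the contents for a fixed chain are just packed 24-bit-per-pixel words; the packing is simple bit concatenation (a plain-Python sketch of the WS2812 byte order, independent of LiteX):

```python
def pack_grb(colors):
    """Pack (r, g, b) tuples into the 24-bit-per-pixel value a
    WS2812/NeoPixel chain expects: green byte first, MSB first,
    first pixel in the chain in the most significant position."""
    word = 0
    for r, g, b in colors:
        word = (word << 24) | (g << 16) | (r << 8) | b
    return word

# Five pixels need 5 * 24 = 120 bits, more than a single 64-bit CSR
# accessor can carry, so a small Memory (or one CSR per pixel) is the
# natural fit.
print(hex(pack_grb([(255, 0, 0), (0, 255, 0)])))
```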
Coldberg has joined #litex
AndrewD has quit [Quit: Client closed]
TMM_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM_ has joined #litex
<zyp> acathla, how many colors do you need?
<zyp> I've also got a chain of 5 for indicators, and I've just set them up with 1-bit colors
_franck_0 has joined #litex
_franck_ has quit [Ping timeout: 244 seconds]
_franck_0 is now known as _franck_
cr1901 has quit [Read error: Connection reset by peer]
cr1901 has joined #litex