_florent_ changed the topic of #litex to: LiteX FPGA SoC builder and Cores / Github : https://github.com/enjoy-digital, https://github.com/litex-hub / Logs: https://libera.irclog.whitequark.org/litex
<tnt> A bit more detail tracing my PCIe issue: in the "bad" case, I sometimes get almost 5 ms (so basically forever at this timescale) between two consecutive litepcie_writel() calls
<tnt> In the "good" case, I observe an absolute max of 5 us between two consecutive accesses (so 1000x less!)
<tnt> There seems to be one such insanely slow access every 120 ms
<minute> hmm, looks like spiflash_read_status_register() is unreliable somehow
<minute> i have some code that erases flash sectors and polls spiflash_read_status_register(), and i can see that the erases are not complete, only the first ~half of the 4096 byte regions are erased
<minute> if i do the erase manually one 4096b sector at a time, it works
<zyp> have you checked that the erase size is correct for your flash?
<minute> zyp: yep, it's 25Q128JV
<minute> W25Q128JVS
<minute> zyp: i've now switched to the 64 KB erase command, and it's going much too quickly... i.e. there's something wrong with reading the busy flag
<minute> so i'm gonna printf debug the read status register 1 responses
<minute> ok so the existing code returns the second byte of the read status response, but the status is actually in the first byte
<zyp> the first byte is whatever is shifted in while the command byte itself is shifted out
<zyp> unless your approach is different from the others
<minute> zyp: then there's something weird going on with the timing perhaps
<minute> i'm getting 0x43/0x40, 0xff in the transfer buffer after an erase command
<minute> but also, they transfer the erase command followed by a zero byte for some reason
<minute> sorry, not true
<minute> they transfer the read status register command followed by a zero byte
<minute> ok that makes sense
<zyp> yes, spi is bidir, you shift two bytes in total, first byte is command out and dummy in, second byte is dummy out and status in
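zyp's two-byte model can be sketched in C. `spi_xfer_mock` and the function names are hypothetical stand-ins for the real SPI driver, but the byte layout matches the W25Q128JV datasheet: the first received byte is the dummy clocked in while the command shifts out, and the status lands in the second byte.

```c
#include <stddef.h>
#include <stdint.h>

#define CMD_READ_STATUS_1 0x05 /* W25Q128JV "Read Status Register-1" */
#define SR1_BUSY          0x01 /* bit 0: erase/write in progress */

/* Hypothetical full-duplex SPI transfer; a real implementation would
 * drive the flash. This mock mimics the flash's behaviour: the byte
 * received alongside the command is undefined (here 0xFF), and the
 * status only appears in the *second* received byte. */
static void spi_xfer_mock(const uint8_t *tx, uint8_t *rx, size_t len) {
    static uint8_t status = 0x00; /* pretend the flash is idle */
    rx[0] = 0xFF;                 /* dummy while command shifts out */
    if (len > 1 && tx[0] == CMD_READ_STATUS_1)
        rx[1] = status;           /* status on the 2nd byte */
}

/* Read SR1: shift out the command plus one dummy byte, return rx[1]. */
static uint8_t spiflash_read_status(void) {
    uint8_t tx[2] = { CMD_READ_STATUS_1, 0x00 };
    uint8_t rx[2] = { 0 };
    spi_xfer_mock(tx, rx, sizeof(tx));
    return rx[1]; /* NOT rx[0] -- that is the dummy byte */
}

static int spiflash_is_busy(void) {
    return spiflash_read_status() & SR1_BUSY;
}

/* Typical use after issuing a sector erase: poll until BUSY clears. */
static void spiflash_wait_ready(void) {
    while (spiflash_is_busy())
        ;
}
```

Returning `rx[0]` instead of `rx[1]` gives exactly the symptom described above: the dummy byte is read as the status, so the busy flag is garbage and erases appear to finish instantly.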
<minute> thanks for explaining... so it appears that sometimes things can get out of phase?
<zyp> it really shouldn't
<tnt> Is litescope "trigger_depth" the "pre-trigger" ?
<tnt> (i.e ideally I'd like to have the trigger centered, with N samples before trigger and N samples after the trigger)
<minute> ok, i've got this kintex-7 to load the litex bitstream from spi flash. it's very slow though, seems like ConfigRate is at 3 Mbps?
<minute> how can i set the bitstream ConfigRate for 7-series/vivado? bitgen_opt doesn't exist
<minute> i guess this has to go into the constraints
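Right — for 7-series under Vivado the bitstream options are design properties set from the XDC rather than bitgen flags. A minimal fragment (the exact CONFIGRATE value is a judgment call per UG470; it must stay within the flash's and board's timing limits):

```tcl
# Raise the master configuration clock (7-series default is ~3 MHz)
set_property BITSTREAM.CONFIG.CONFIGRATE 33 [current_design]
# Optionally widen the SPI data path if the flash and board wiring support it
set_property BITSTREAM.CONFIG.SPI_BUSWIDTH 4 [current_design]
```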
<minute> was there already any work done on getting DMA memory copying in linux? i.e. for scrolling in the framebuffer
<minute> at 150 MHz and 4 cores i'm getting random crashes during linux boot, now checking if that's due to smp or 150 vs 100 MHz
<minute> back to 100mhz but still at 4 cores, it gets to init but then > [ 17.008844] ln[69]: unhandled signal 11 code 0x1 at 0x2b66a0d4 in ld-linux-riscv32-ilp32.so.1[95b64000+21000]
<slagernate> anyone want to help me get a simple litex demo working? https://github.com/enjoy-digital/litex/issues/1441
<slagernate> ;)
<minute> yeah sigh smp appears broken somehow... linux works fine with only 1 cpu in the dtb, but if the 4 cores are active, init crashes
<minute> > [ 16.680009] init[1]: unhandled signal 11 code 0x1 at 0x00000000 in busybox[69355000+df000]
<minute> strangely, dual core works
<tnt> slagernate: huh ... how is that supposed to boot ?
<tnt> LRAM can't be initialized IIRC. And there is no flash ... so what do you expect the CPU to execute ?
<minute> not stable though @ 2 cores, udhcpc kills everything
<tnt> Or does it create ROM automatically ?
<slagernate> Where does it say LRAM can't be initialized? The memory usage guide for the nexus platform (https://www.latticesemi.com/view_document?document_id=52785) on page 77 says "In each memory mode, it is possible to specify the power-on state of each bit in the memory array. This allows the memory to be used as ROM if desired."
<slagernate> I mean, my assumption was that the target/lattice_crosslink_nx_evn.py file was already working and didn't have any egregious flaws
<slagernate> There are lattice_crosslink_nx_evn_[mem/rom].init files in the gateware directory --my assumption was that these were packed into the .bit file, but I could be terribly wrong (maybe gatecat can comment)
<tnt> Ok, my bad, I thought the LRAM had the same limitation as the SPRAM in the UP5K.
<tnt> But you're probably only the second, maybe third, person to try the crosslink nx ...
<slagernate> All good. Yeah, true, I guess [porting litex to the crosslink NX](https://github.com/enjoy-digital/litex/issues/618) should be reopened. CC tcal alanvgreen _florent_ See: https://github.com/enjoy-digital/litex/issues/1441. Going to try an external UART and get back to you tho
<minute> did something change in liteeth that breaks the linux driver?
<minute> system basically freezes now when interacting with eth0 and it used to work fine
<tcal> Yes, I do recall bringing up LiteX on an EVN board a couple of years ago, using an external UART. I was quite new at these things at the time. I remember it being somewhat confusing writing to flash, then resetting it using a button? Or power cycling it? Or possibly with another ecpprog option, writing the bitstream to RAM config storage. It's pretty blurry...
<tcal> slagernate: ^^
<minute> when interacting with liteeth from linux, after a while i get > [  288.331419] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
<slagernate> tcal I see. The native crosslinknx eval board uart isn't working, so I will try an external UART
<slagernate> To be clear, I can program the fpga well (the LED chaser output is working)
<tcal> Oh, I realized that my prior experience wasn't using the litex-boards target file, but an older version of building a LiteX SoC (anyone remember lxbuildenv.py?). We did use the litex-boards platform file for the EVN board though.
<tcal> Wait, another thing occurred to me...there is a Yosys issue that causes our current design on the LIFCL-17 to not work. https://github.com/YosysHQ/yosys/issues/3416
<slagernate> I'm actually using lattice radiant (LSE synthesis tool, not synplify pro)
<slagernate> but I suspect my setup will work once i try a different (external) UART
<slagernate> how has working with LIFCL-17 (CrosslinkNX FPGA) been for you tcal? Am planning to use it extensively
<tcal> Ah yes, I'd forgotten about the choices you have with Nexus. LiteX even supports a Yosys (synth) --> Radiant (P&R) flow. We haven't used Radiant synthesis for quite a while since there was an issue where it wouldn't let us use all of the EBRs. As for the LIFCL-17 itself, it's a good fit for our use case -- a pretty good balance of RAM, DSP blocks, and logic cells.
<slagernate> Glad to hear it's working acceptably well