_florent_ changed the topic of #litex to: LiteX FPGA SoC builder and Cores / Github : https://github.com/enjoy-digital, https://github.com/litex-hub / Logs: https://libera.irclog.whitequark.org/litex
tpb has quit [Remote host closed the connection]
tpb has joined #litex
NotHet has joined #litex
lexano has joined #litex
lexano has quit [Remote host closed the connection]
linear_cannon has quit [Quit: linear_cannon]
linear_cannon has joined #litex
Degi_ has joined #litex
Degi has quit [Ping timeout: 252 seconds]
Degi_ is now known as Degi
peepsalot has quit [Quit: Connection reset by peep]
NotHet has quit [Ping timeout: 250 seconds]
NotHet has joined #litex
NotHet has quit [Client Quit]
peepsalot has joined #litex
Guest7846 has joined #litex
<Guest7846> ERROR:SoCBusHandler:Region overlap between ethmac and csr:
<Guest7846> ERROR:SoCBusHandler:Origin: 0x00000000, Size: 0x00002000, Mode: RW, Cached: False Linker: False
<Guest7846> ERROR:SoCBusHandler:Origin: 0x00000000, Size: 0x00010000, Mode: RW, Cached: False Linker: False
<Guest7846> Sorry .. Have any of you seen this error for cpu-type set to None?
<Guest7846> This has just started to happen after I pulled the latest from Litex
andresmanelli has joined #litex
peepsalot has quit [*.net *.split]
linear_cannon has quit [*.net *.split]
zjason has quit [*.net *.split]
shorne has quit [*.net *.split]
tpw_rules has quit [*.net *.split]
awordnot has quit [*.net *.split]
RaYmAn has quit [*.net *.split]
andresmanelli has quit [Ping timeout: 268 seconds]
andresmanelli has joined #litex
peepsalot has joined #litex
zjason has joined #litex
linear_cannon has joined #litex
tpw_rules has joined #litex
shorne has joined #litex
awordnot has joined #litex
RaYmAn has joined #litex
FabM has joined #litex
FabM has joined #litex
FabM has quit [Changing host]
<_florent_> tnt: Great for the DDR4, I can double check the PCIe if you want.
<_florent_> Guest7846: I just pushed a workaround for --cpu-type=None --with-ethernet: https://github.com/enjoy-digital/litex/commit/a489dadfbc2a63cd6db48db6d0112f4a1845dff3
<_florent_> this case is a bit particular, I should spend some time improving it
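For reference, the configuration being discussed corresponds roughly to a litex-boards invocation like the one below (the kc705 target mentioned later in the log is used as an example; the exact target module name depends on the litex-boards version):

    python3 -m litex_boards.targets.xilinx_kc705 --cpu-type=None --with-ethernet --build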
TMM_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM_ has joined #litex
<tnt> _florent_: Yeah that'd be nice. I updated the platform/target files at https://people.osmocom.org/~tnt/stuff/adi/
<tpb> Title: Index of /~tnt/stuff/adi/ (at people.osmocom.org)
<tnt> Also using this over litepcie (I rebased it on top of master) https://github.com/ylm/litepcie/commit/c707b300d6c4aab366b0bd583f437b355c486e27
<_florent_> tnt: to test it, you loaded the bitstream and did a reboot of the machine or just a PCIe rescan?
<tnt> _florent_: I reloaded the bitstream, typed 'reboot' (which I guess is a warm reboot; it's remote so I can't do an off/on cycle) and then looked in lspci when it was back up.
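For reference, a PCIe rescan without a full reboot is usually done through sysfs along these lines (the device address is a placeholder, and whether a rescan is enough after a bitstream reload depends on how the BIOS/kernel allocated the BARs):

    echo 1 > /sys/bus/pci/devices/0000:04:00.0/remove   # drop the stale device
    echo 1 > /sys/bus/pci/rescan                        # re-enumerate the bus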
<tnt> However I'm not 100% sure there isn't something more fundamental wrong somewhere because I was never able to use the pcie clock as system clock ...
<_florent_> tnt: A reboot of the machine with the bitstream loaded is fine, so yes this could be related to the clock
<_florent_> tnt: is there an example design/bitstream with PCIe for this hardware? If so, it could be worth loading it and just seeing if you get something in lspci, just to verify the hardware/setup.
<tnt> _florent_: couldn't find one :/ The ADI bitstream doesn't have it AFAICT.
<_florent_> tnt: otherwise the LitePCIe integration seems fine
<tnt> Can you probe anything from the bios prompt ? Link state / attempts / ... whatever ?
indy has quit [Remote host closed the connection]
indy has joined #litex
<_florent_> but the support is not integrated in the BIOS, so you'll have to do manual mem_read
<_florent_> the reason is that on most of the PCIe designs I use, there is no CPU :)
<tnt> Hehe, yeah.
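For illustration, reading the LitePCIe registers from the BIOS prompt with the built-in mem_read command would look roughly like this (the address is a placeholder for wherever the core's CSRs land in the SoC's memory map):

    litex> mem_read 0x82000000 32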
<tnt> I wonder if my issue could stem from the fact lanes 0-3 are on Quad 225 but the ref clock comes in on Quad 224
<_florent_> It's possible to use a reference clock from another Quad, so if the design compiles fine it should be OK
<tnt> yeah, but that's the thing ... the commit above sets the options to use Quad 224 which is where the clock is but not the data lanes.
<_florent_> tnt: can you check the final IO report to see if the Clock pins are still the ones from the .xdc?
<tnt> I'll rebuild and double check the io report.
ilia__s2 has joined #litex
ilia__s has quit [Ping timeout: 260 seconds]
ilia__s2 is now known as ilia__s
<tnt> Ok, so .. the clk signal was at the right place, but the data was in quad 224 too, which is wrong. I changed the 224 to 225 in the .xci of litepcie and now the io report shows clk in 224 and data in 225, which is good.
<tnt> There were however some timing errors in sys_clk ?!? (not sure why those appeared).
<tnt> Tried the bitstream anyway and it did something! ... not what I was hoping for however: the machine never rebooted.
<_florent_> arf... Doing PCIe bringup on a remote machine is not the easiest path...
<leons> florent: still debugging my SFP issue. Can you confirm there are no extra steps required compared to getting a DAC working? 🙂
<DerekKozel[m]> It's a big pain. Our student project has been 100% remote. I think we'd have proposed something a bit different if that had been obvious at the start.
<leons> Okay that's good to know. Yeah, I've inverted the tx_disable polarity already accidentally haha
<leons> My other device does see power, but no carrier
<leons> Do you have any good idea on how to debug a missing/incorrect carrier? Maybe use an IBERT on another FPGA to see what's going on?
<tnt> Plot twist: The machine isn't crashed ... the network just isn't working. And lspci shows Xilinx device 9034.
<tnt> new pcie devices ... enp4s0 became enp5s0 ...
<tnt> LnkSta:Speed 8GT/s (ok), Width x4 (ok)
<mntmn> are the speedgrade_timings in litedram automatically selected? i mean, based on sth like (sys_clk_freq, "1:2")
<mntmn> (or "1:4")
<mntmn> hi _florent_ btw... i am quite comfortable with building my own bitfiles now
<mntmn> also i get the feeling that the factor "1:2" vs "1:4" is ignored when instantiating a ddr module
<_florent_> tnt: Good :)
<mntmn> yeah when i set the sys_freq to 200mhz and the memory module's rate to "1:2" i still get > 32-bit @ 1600MT/s (CL-11 CWL-8)
<mntmn> but i would expect 800MT/s
<tnt> _florent_: yup, looking at the sw side now :) Tx for the pointers.
<_florent_> tnt: otherwise, if you are planning to use the DDR4 as I/Q sample buffers, I worked on DRAM based FIFOs last week that could be useful for this:
<_florent_> This was not developed for SDR, but I'll probably also use it on SDR designs in the future
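A minimal sketch of what instantiating such a DRAM-backed FIFO could look like, assuming the LiteDRAMFIFO frontend and made-up widths/offsets (not taken from the discussion):

    from litedram.frontend.fifo import LiteDRAMFIFO

    # base/depth are a byte offset/size inside the DRAM, chosen arbitrarily here;
    # the write/read ports come from the SoC's LiteDRAM crossbar.
    self.submodules.dram_fifo = LiteDRAMFIFO(
        data_width = 64,
        base       = 0x0000_0000,
        depth      = 0x0100_0000,
        write_port = self.sdram.crossbar.get_port("write"),
        read_port  = self.sdram.crossbar.get_port("read"),
    )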
<_florent_> leons: You could possibly use LiteICLink to debug your SFP issue
<_florent_> this will allow you to run PRBS tests between 2 FPGAs or between 2 SFP ports
<tnt> _florent_: yeah, I saw that tweet and thought "Oh that's good timing, I might need that :D"
<_florent_> leons: you would just need to adapt one of the LiteICLink test targets: ex https://github.com/enjoy-digital/liteiclink/blob/master/bench/serdes/kcu105.py
<_florent_> mntmn: Hi, passing the speedgrade to the DRAM module is just used to compute the internal timings; the speed shown in the BIOS is computed differently
<mntmn> ahh ok
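As a quick sanity check of the numbers above (assuming the reported figure is simply DDR transfers per second, i.e. sys_clk × PHY ratio × 2):

    sys_clk_freq = 200e6
    print(sys_clk_freq * 4 * 2 / 1e6)  # 1:4 mode -> 1600.0 MT/s (what the BIOS shows)
    print(sys_clk_freq * 2 * 2 / 1e6)  # 1:2 mode ->  800.0 MT/s (what was expected)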
<mntmn> _florent_: what's a recommended way to include my locally changed litex board/target when building linux-on-litex-vexriscv? PYTHONPATH? (i.e. i want to be able to make changes to the board definition/config on the fly)
<_florent_> mntmn: if litex-boards is installed in develop mode, you can directly modify the platform/target
andresmanelli has quit [Read error: Connection reset by peer]
<_florent_> mntmn: otherwise, you can copy the files locally and modify the import in linux-on-litex-vexriscv's make.py
<mntmn> _florent_: ok, thanks
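For context, "develop mode" is just an editable install of the litex-boards checkout, e.g. (paths are placeholders):

    cd litex-boards && pip3 install --user -e .   # or: python3 setup.py develop --user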
<tpb> Title: root@ubuntu:/home/user/driver/user# ./litepcie_util infoFPGA identification: - Pastebin.com (at pastebin.com)
stfn has joined #litex
<stfn> does anyone know if it's possible to write to the NVCM of iCE40 using open source tools?
<tnt> It's not.
<tnt> There has been some work and some stuff documented but there isn't an end-to-end tested flow for that.
<stfn> would it be hard to reverse engineer?
<tnt> I think it's pretty much "known" ... but you'll probably still destroy a few parts working out the details and implementing it ...
<tnt> and ATM given stock ... I don't have a single part to waste
<leons> _florent_: That looks indeed very interesting, thanks for the pointers!
<stfn> I'll order 100 iCE40s, they are dirt cheap
<stfn> then just change them out on a dev board if it doesn't work out :)
<tnt> _florent_: I seem to have an endianness issue ...
<tnt> litepcie_readl(fd, CSR_IDENTIFIER_MEM_BASE + 4*i) >> 24
<tnt> I had to add the `>> 24' to get the correct value.
<tnt> I'm assuming the dma_test has the same problem hence why it's not doing too well.
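For context, the line above is from litepcie_util's FPGA identification dump; with the temporary byte-swap folded in, the loop looks roughly like this (a sketch only: fd, litepcie_readl() and CSR_IDENTIFIER_MEM_BASE come from the LitePCIe user-space library/generated headers, and the >> 24 should go away once the gateware endianness is fixed):

    char fpga_identifier[256];
    int i;
    for (i = 0; i < 256; i++)
        /* one ASCII character per 32-bit CSR word; with the mis-configured
           endianness the character lands in the top byte, hence the shift */
        fpga_identifier[i] = litepcie_readl(fd, CSR_IDENTIFIER_MEM_BASE + 4 * i) >> 24;
    fpga_identifier[255] = '\0';
    printf("FPGA identification: %s\n", fpga_identifier);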
stfn has quit [Ping timeout: 265 seconds]
<_florent_> tnt: strange, can you verify that csr_data_width is set to 32 and that the data-width of your PHY matches the one in your target?
<tnt> _florent_: I added assert self.csr_data_width == 32 and it went through, so csr_data_width seems OK. USPPCIEPHY is instantiated with data_width=128 and litepcie/phy/xilinx_usp_gen3_x4/pcie_usp.xci MODELPARAM_VALUE.AXI4_DATA_WIDTH=128, if that's what you're asking.
<_florent_> tnt: ok, this seems fine, is there anything specific with the Host?
<tnt> No, it's a bog-standard PC. https://www.gigabyte.com/Motherboard/B560M-AORUS-ELITE-rev-10/ motherboard with an i7-11700
<tpb> Title: B560M AORUS ELITE (rev. 1.0) Key Features | Motherboard - GIGABYTE Global (at www.gigabyte.com)
<tnt> board is plugged in the x16 slot normally used for GPU.
lexano has joined #litex
<_florent_> tnt: the endianness does not seem to be configured correctly in LiteX-Boards for PCIe/Ultrascale
<_florent_> I'm going to fix that
<tnt> yeah, atm I just do self.add_pcie(phy=self.pcie_phy, ndmas=1)
Guest48 has joined #litex
<tnt> (since it's just a simple target for litex-boards for now)
<_florent_> if you want to test while I'm doing that, you can force endianness to "little" here: https://github.com/enjoy-digital/litepcie/blob/master/examples/xcu1525.py#L76
<_florent_> sorry, wrong link
<tnt> Ok, trying that.
<tnt> _florent_: yeah, info returns the right string now and dma_test shows things :) ( https://pastebin.com/YKdwAP9j I assume the error on the first line is normal while bootstrapping )
<tpb> Title: $ ./litepcie_util dma_testDMA_SPEED(Gbps) TX_BUFFERS RX_BUFFERS DIFF ERRORS - Pastebin.com (at pastebin.com)
<_florent_> tnt: yes, this looks fine now, good
<tnt> \o/
<tnt> I'll do a bit of cleanup but then I guess next step is going to be adding I2C / SPI cores so I can talk to the various chips on the board that need configuring. And then JESD.
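A minimal sketch of what adding those cores typically looks like in a LiteX target, assuming the bit-banged I2C and SPIMaster cores from litex.soc.cores and hypothetical pad names (not taken from the board files discussed here):

    from litex.soc.cores.bitbang import I2CMaster
    from litex.soc.cores.spi import SPIMaster

    # "i2c"/"spi" are placeholder resource names for whatever the platform defines;
    # sys_clk_freq is the target's system clock frequency.
    self.submodules.i2c = I2CMaster(platform.request("i2c"))
    self.submodules.spi = SPIMaster(platform.request("spi"),
        data_width   = 32,
        sys_clk_freq = sys_clk_freq,
        spi_clk_freq = 1e6)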
<tnt> Can the memory training be run from the host through PCIe btw ?
<tnt> (I mean, I know it theoretically can. Just not sure if it's supported / if there are examples)
<_florent_> It could be, yes, but it's not supported currently
<_florent_> For your system, I would probably use LitePCIe and LiteDRAM as standalone cores (with 2 instances of LiteDRAM, each with its own CPU) and re-integrate this in a LiteX SoC
<_florent_> This would use a bit more resources for the extra CPUs, but resources almost come for free on these large devices
<_florent_> this would simplify the top level SoC
<tnt> How so ?
<tnt> I mean really I only need the CPU for the calibration... here it's nice for peek/poke through uart and see how stuff is running, but in the end, I don't even need the cpu to have access to the RAM (have it memory mapped).
<_florent_> With LiteDRAM/LitePCIe, you can generate standalone core with litedram/litepcie_gen and .yml configuration files
<tnt> Ah by "how so" I meant more "why do you think it's necessary / would help".
<_florent_> With LiteDRAM, you can also disable the CPU and expose a Wishbone interface instead to do the calibration: https://github.com/enjoy-digital/litedram/blob/master/examples/xcu1525.yml#L10
<_florent_> setting cpu to None
<_florent_> that's what is used by the Microwatt team for example to avoid the extra CPU
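For reference, the standalone flow is driven by a small .yml description passed to litedram_gen; a trimmed-down sketch (key names/values here are illustrative, see the linked xcu1525.yml for a real configuration):

    {
        # Core -----------------------------------------------------------
        "cpu":          "None",   # expose a Wishbone control port instead of an internal CPU
        "memtype":      "DDR4",
        "sys_clk_freq": "125e6",

        # User ports -------------------------------------------------------
        "user_ports": {
            "axi_0": {"type": "axi", "data_width": 256},
        },
    }

The core would then be generated with something like `litedram_gen my_core.yml` (the exact generator entry point may differ between versions).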
<_florent_> When the design becomes complex, using the standalone cores can help since it increases decoupling (the main SoC is "just" integrating a PCIe core, 2 DRAM cores, a JESD core with clear interfaces + extra peripherals/modules)
<_florent_> but each developer will probably have different preferences on this
<_florent_> For the extra CPUs, that's what is done heavily in Intel/Xilinx designs, embedding lots of soft-core CPUs in IPs
<tnt> Yeah, I think I'll have to see for myself when I start hitting something to have a better understanding.
<tnt> Right, I saw MIG is embedding a whole MicroBlaze ...
<_florent_> I'm not saying we should do the same :) but with LiteX we can choose which CPU is the best for each usage, so for LiteDRAM, you could use a small CPU: serv or VexRiscv Minimal
<tnt> yup, SERV is what I was thinking :)
TMM_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM_ has joined #litex
FabM has quit [Ping timeout: 245 seconds]
<Guest7846> Hi All, I have been seeing this error since the latest pull from Litex: "ERROR:SoCBusHandler:Region overlap between ethmac and csr:"
<Guest7846> Any idea what could be happening there?
<Guest7846> I have looked around soc.py, but nothing seems obvious
<Guest7846> INFO:SoCBusHandler:main_ram Region added at Origin: 0x40000000, Size: 0x40000000, Mode: RW, Cached: True Linker: False.
<Guest7846> INFO:SoCBusHandler:main_ram added as Bus Slave.
<Guest7846> INFO:SoCBusHandler:Allocating IO Region of size 0x00002000...
<Guest7846> INFO:SoCBusHandler:ethmac Region allocated at Origin: 0x00000000, Size: 0x00002000, Mode: RW, Cached: False Linker: False.
<Guest7846> INFO:SoCBusHandler:ethmac added as Bus Slave.
<Guest7846> Looks like ethmac of kc705 is getting added at 0x000000.. for some reason
<_florent_> Guest7846: Are you using the workaround I pushed earlier? If not, can you provide the command?
<leons> florent: after many days of trying to get the optical SFP+ path working, the cause is finally identified: the SFP+ module was broken.
<leons> I would’ve never expected the literally only non-custom designed/programmed part of my project to be the culprit
<mntmn> _florent_: got linux running.
<_florent_> leons: That's good you got it working, you probably learned new things during the process so the time was probably not completely lost... (at least that's the way I try to see things when such things happen to me :))
<_florent_> mntmn: That's great! If you want a faster boot, you should be able to netboot (with the images on the tftp server at 192.168.1.100 by default) or boot from the SDCard
<leons> That’s certainly true. For one I learned that my devboard is a horrible piece of hardware and I did get to read through some specs in the process :)
<mntmn> _florent_: ah good hint! i will set up netboot
<mntmn> funny that my computer with tftp has this ip already...
<_florent_> mntmn: For the SDCard boot, I think last time we tried together the SDCard was probably not formatted correctly
<_florent_> or maybe you could try with another
<mntmn> _florent_: ok i will try to add some debug statements to the sdcard loader
<mntmn> _florent_: the sdcard works in linux
<mntmn> (on the same device)
<_florent_> the process is the same: copy the images to the SDCard and it should boot
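For reference, "the images" are the files produced under linux-on-litex-vexriscv's images/ directory; copying them for either boot method is roughly as follows (TFTP root and mount point are placeholders):

    # netboot: serve images/ from a TFTP server reachable as 192.168.1.100
    sudo cp images/* /srv/tftp/
    # SDCard boot: copy the same files onto a FAT-formatted partition of the card
    sudo cp images/* /media/sdcard/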
kgugala has joined #litex
<mithro> _florent_ / kgugala: There was a student from Princeton that was porting https://parallel.princeton.edu/papers/micro19-gao.pdf to work with LiteDRAM -- any idea what happened with that?
<kgugala> haven't heard from him for a while
<_florent_> mithro: I also don't know, but IIRC Finde here could maybe have more info
<Finde> it was Fei himself who was porting it
<Finde> you can contact him directly to ask
<Finde> afaik he had something working but I don't remember the details
lexano has quit [Remote host closed the connection]
lexano has joined #litex
<tnt> How was pcie_support.v generated btw ? I can't see it in the output products of vivado when creating the IP.
<_florent_> tnt: it's one of the top-level files, with some custom logic to translate the "new" protocol used by Xilinx on AXI-Stream to standardized TLPs (which LitePCIe was already supporting)
<_florent_> tnt: the main differences between 7-Series PCIe PHY and Ultrascale are:
<_florent_> - different protocols on the AXI streams.
<_florent_> - Separate streams for requests/completions on Ultrascale.
lexano has quit [Remote host closed the connection]
lexano has joined #litex
zjason` has joined #litex
Guest48 has quit [Ping timeout: 256 seconds]
zjason has quit [Ping timeout: 252 seconds]
andresmanelli has joined #litex
andresmanelli has quit []
Guest4888 has joined #litex
mc6808 has joined #litex
<mc6808> I'm having issues with litescope recently: I can do an immediate capture, but if I try to use a trigger litescope_cli dies on an empty ethernet packet in "litex/litex/tools/remote/etherbone.py", line 306, in decode
<mc6808> header = list(ba[:etherbone_packet_header.length])
<mc6808> TypeError: 'int' object is not subscriptable
<mc6808> Anyone else have an experience like this? A few months ago litescope was extremely reliable for me but this started happening occasionally and now triggering is unusable.
<mc6808> FYI, not sure if this is just me for some reason? I'm using an ecpix5 board and there are some litescope timing warnings.
<mc6808> Warning: Max frequency for clock '$glbnet$eth_clocks_rx$TRELLIS_IO_IN': 106.77 MHz (FAIL at 125.00 MHz)