<_florent_>
Thanks for sharing, somlo, it looks interesting.
<_florent_>
somlo: BTW, I added install-configuration support to litex_setup to avoid installing all the CPUs for the standard installation (some of them were pretty large, and I don't necessarily want LiteX to compete with Vivado on this :))
<leons>
Is there any documentation on how to use the LiteDRAM native interface? I'm asking because I managed to build a core that appears to use it correctly (writes and reads work), but it has really awful performance compared to the BIST, so I'm wondering whether I'm doing something entirely wrong
<_florent_>
Hi leons, sorry, I indeed need to document it. The BIST provides an example to understand it, but I agree it's not enough :) It's basically a simplified AXI with one channel for commands, one for writes, and one for reads.
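A minimal sketch of what a single write on that interface can look like, driven by one shared FSM (command first, then data). Signal names follow LiteDRAM's native user-port convention (port.cmd / port.wdata / port.rdata); the surrounding SoC and port acquisition are assumed, so treat this as an illustration rather than a reference:

    from migen import Module, If
    from migen.genlib.fsm import FSM, NextState

    class SharedFSMWriter(Module):
        def __init__(self, port, addr, data):
            # One FSM drives both channels: present the command, wait for
            # it to be accepted, then present the data.
            self.submodules.fsm = fsm = FSM(reset_state="CMD")
            fsm.act("CMD",
                port.cmd.valid.eq(1),
                port.cmd.we.eq(1),        # we=1 selects a write
                port.cmd.addr.eq(addr),
                If(port.cmd.ready, NextState("DATA"))
            )
            fsm.act("DATA",
                port.wdata.valid.eq(1),
                port.wdata.data.eq(data),
                port.wdata.we.eq(2**len(port.wdata.we) - 1),  # all byte lanes
                If(port.wdata.ready, NextState("DONE"))
            )
            fsm.act("DONE")

As the discussion below shows, serializing the two channels like this is exactly what caps throughput; it is the simple version, not the fast one.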
<_florent_>
leons: If you can share some code/provide some waveforms, I could try to help
<leons>
_florent_: Thanks! I'm currently trying to investigate and reproduce this with a much simpler design to see where the problem lies
<_florent_>
If anyone here has a Kria KV260 Vision AI Starter Kit gathering dust and wants to sell it, feel free to contact me. I'm trying to get one so I can add LiteX support for it, and Zynq MPSoC support with ilia__s
<leons>
_florent_: This is the simplest DRAM benchmark I could come up with :) It's a bit faster than my other code, but still nowhere near the reported BIST values
<leons>
I'm calculating the effective speed as bytes_written / (cycle_count / clk_freq), in bytes/s
<leons>
This gives me approx. 1.6GiB/s, whereas the BIST shows a write speed of ~6.1GiB/s
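Restating that formula with explicit parentheses, since operator order matters here: effective bandwidth = bytes_written / (cycle_count / clk_freq). A small self-check in Python, with made-up numbers (leons' actual clock frequency and port width are not stated in the log):

    def effective_bandwidth(bytes_written, cycle_count, clk_freq_hz):
        seconds = cycle_count / clk_freq_hz  # elapsed wall-clock time
        return bytes_written / seconds       # bytes per second

    # Illustrative only: 1 MiB written in 16384 cycles (64 bytes/cycle)
    # at 100 MHz comes out to ~5.96 GiB/s.
    print(effective_bandwidth(2**20, 16384, 100e6) / 2**30)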
<_florent_>
leons: you have a shared FSM for the control and data paths, which already reduces efficiency by 50%. For maximum efficiency, you have to decouple the control and data paths (in a single cycle, LiteDRAM can accept the next command and the previous command's write data).
<_florent_>
So you can change this, and after that I would recommend looking at the waveforms, either in simulation with litex_sim --with-sdram or with LiteScope on hardware
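A hedged sketch of that decoupling: the command and write-data channels get independent handshakes, so the port can accept the next command while the previous write data is still being transferred. Names follow the same native-port convention as above; base, count, and data are illustrative parameters:

    from migen import Module, Signal, If

    class DecoupledWriter(Module):
        def __init__(self, port, base, count, data):
            cmd_idx  = Signal(32)  # write commands issued so far
            data_idx = Signal(32)  # write-data beats transferred so far
            self.comb += [
                # Command path: keep issuing write commands while any remain.
                port.cmd.valid.eq(cmd_idx != count),
                port.cmd.we.eq(1),
                port.cmd.addr.eq(base + cmd_idx),
                # Data path: advances independently of the command path.
                port.wdata.valid.eq(data_idx != count),
                port.wdata.data.eq(data),
                port.wdata.we.eq(2**len(port.wdata.we) - 1),
            ]
            self.sync += [
                If(port.cmd.valid & port.cmd.ready, cmd_idx.eq(cmd_idx + 1)),
                If(port.wdata.valid & port.wdata.ready, data_idx.eq(data_idx + 1)),
            ]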
<leons>
florent: oh, that makes sense!
<leons>
_florent_: oh wow. I think that just pushed it up to 7.9 GiB/s...? I'll need to validate that, but if that is indeed the case, that's amazing :) It does make sense though; I imagine it's way more efficient when it can properly pipeline commands and writes
<leons>
A more general question: if I have a component that writes a bunch of data to memory and occasionally needs to read a few bytes to search for the next free slot, should I use the same port or separate ports for writing and reading? Is there any significant resource-utilization or performance difference?
<_florent_>
leons: great. From a user perspective, using a separate port will probably be easier (read/write arbitration/multiplexing will be handled directly inside LiteDRAM)
<_florent_>
leons: I think you are using a relatively large FPGA, so the additional resource usage should be negligible (and, as said just before, you'd probably use the same amount of logic doing the arbitration externally anyway)
<leons>
_florent_: makes sense, thank you!
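In code, the two-port approach is just two get_port() calls on the LiteDRAM crossbar; get_port() is the crossbar's user-port API, and the mode values shown here are an assumption based on its common usage:

    # Inside a LiteX SoC with a LiteDRAM core available as self.sdram:
    write_port = self.sdram.crossbar.get_port(mode="write")
    read_port  = self.sdram.crossbar.get_port(mode="read")
    # Bulk writes stream on write_port while the occasional free-slot
    # lookups issue reads on read_port; LiteDRAM arbitrates internally.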
<somlo>
_florent_: (re. --config=full) -- makes sense, thanks!
<somlo>
_florent_: commit 33024402 (soc: raise an error if adding a SoCRegion with incoherent cache configuration) is killing Rocket builds
<somlo>
It complains that "rom region in io region, it can't be cached: origin: 0x10000000, size: 0x00020000, mode r, cached: true, linker: false"
<somlo>
it's true that I'm accessing the rom as uncacheable through the mmio port, but what's wrong with that, if it works? :)
<somlo>
maybe a warning instead of a flat-out error would be more helpful (unless we want to get fancy with marking regions like e.g. the rom as "cached-optional", which I'm definitely *not* suggesting :)
<somlo>
_florent_: and I noticed commit 5c278ae4, which fixes the issue by placing "rom" outside the "enforced range" for any of the cached/uncached policies -- thanks!
<leons>
Is the number of cycles from when a write transaction is accepted (wdata.ready & wdata.valid) to when it is committed to memory known/constant in LiteDRAM, or is there at least an upper bound?
<SpaceCoaster>
I have two devices (flash and PSRAM) on one SPI bus; they have different CS pins. Can I set up the signals to use both of them with LiteSPI?
<zyp>
IMO the easiest way to support multiple chip selects would be to simply make cs wider throughout the entire stack
<zyp>
that way the frontends would be responsible for selecting which device they want to talk to and the rest of the stack would just pass it on
<zyp>
which would allow e.g. a single MMAP frontend to pick the device based on the accessed address
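A hypothetical sketch of that idea (this is not the current LiteSPI API): make cs a one-hot multi-bit signal and let an MMAP-style frontend derive the selected device from the high address bits. All names and widths here are made up for illustration:

    from migen import Module, Signal

    class CSDecoder(Module):
        def __init__(self, addr, n_devices, device_size_bits=24):
            self.cs = Signal(n_devices)    # one-hot chip-select, one bit per device
            sel = addr[device_size_bits:]  # high address bits pick the device
            for i in range(n_devices):
                self.comb += self.cs[i].eq(sel == i)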
<SpaceCoaster>
Thanks, I will take a look. Is QSPI working for flash?
<zyp>
yes
<cr1901>
somlo: > by placing "rom" outside the "enforced range" for any of the cached/uncached policies
<cr1901>
Wait what, ROM isn't cached anymore?
<r4d10n[m]1>
Hello folks... On the Acorn CLE-215+, I was facing issues loading Linux, with the LiteX boot getting stuck at "Uploading Image file"...
<r4d10n[m]1>
I just recompiled the bitstream with Vivado 2019.1 and flashed it via the PCIe interface... the new bitstream loads fine (judging by the compile time in the info header)...
<_florent_>
if disabling write latency calibration works, I think I'll disable it by default on Artix7
<r4d10n[m]1>
_florent_: okay... will try right away and get back to you...
<r4d10n[m]1>
Another issue to report: while compiling buildroot afresh for the kernel/rootfs, I got an "Incorrect selection of kernel headers: expected 5.15.x, got 5.14.x" error. I had to do a make menuconfig and change the Custom Kernel Headers series to 5.14.x in the Toolchain menu...
<r4d10n[m]1>
_florent_: disabling write latency calibration helped... memtest is now OK...
<r4d10n[m]1>
however, I'm back to the old problem of loading the kernel Image... it's stuck at 3%, and the PC also hangs...
<_florent_>
r4d10n[m]1: are you using litex_term in safe mode (--safe)?
<r4d10n[m]1>
_florent_: nope
<_florent_>
r4d10n[m]1: ok, can you try it? (it will be slower but should work)
<r4d10n[m]1>
ok... trying now...
<r4d10n[m]1>
one thing I noticed is that when trying to use linux_2021_03_29.zip from the prebuilt bitstreams, the upload gets to 11% before getting stuck
<r4d10n[m]1>
_florent_: in safe mode, it has crossed 14% now... very slow...
<r4d10n[m]1>
my motherboard has two M.2 slots... the slot the Acorn is currently in shares lanes with a SATA SSD, so the BIOS reports it only works at x2... could that sharing be the issue?
<r4d10n[m]1>
_florent_: that worked! We have liftoff and Linux has booted :)
<r4d10n[m]1>
I'll try swapping the M.2 slots and report back... thanks for the support
<somlo>
_florent_, cr1901: "rom" is only uncached on Rocket; I should have been more precise, sorry for the false alarm :)
<somlo>
and it's now outside the policy-enforcement region used in the patch added by _franck_ (but again, only on Rocket :)