<_whitenotifier-3>
[glasgow] whitequark synchronize pull request #671: U13 function description updated. U13 has same function as U20 (was same as U21). - https://github.com/GlasgowEmbedded/glasgow/pull/671
<_whitenotifier-3>
[glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-671-f3501e97e153380658205d8ce282add1aa866156 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3>
[GlasgowEmbedded/glasgow] id43 2b73c79 - manual: revisions/revC3: U13 function description updated. U13 has same function description as U20 (was same as U21).
<purdeaandrei[m]>
am I doing something wrong here?
<purdeaandrei[m]>
nothing with the name clk_out_actual_value shows up in my vcd
<purdeaandrei[m]>
aah okay, I get it
<purdeaandrei[m]>
the signal's already there in the vcd, this works by generating a gtkwave "Save file"
<purdeaandrei[m]>
How can I put the clock domain clock signal in there?
<purdeaandrei[m]>
hmm, something's buggy with traces
<purdeaandrei[m]>
traces = [dut.o_stream.p.meta], will result in a RecursionError: maximum recursion depth exceeded
Wanda[cis] has joined #glasgow
<Wanda[cis]>
do you have a traceback for that?
<whitequark[m]>
probably; the facility has recently been the subject of a contribution that added {} syntax
<whitequark[m]>
please report an MCVE as an Amaranth issue
<Wanda[cis]>
oh, ugh, forgot about netsplit
mwkmwkmwk[m] has joined #glasgow
<mwkmwkmwk[m]>
do you have a traceback for that?
<purdeaandrei[m]>
Alright, so I wrote a latency-agnostic testbench, this is what it looks like when it's wrong, with a latency of 2. It's supposed to sample on the rising edge of clk_out__o, but samples on the falling edge instead:
<purdeaandrei[m]>
In my opinion it's best to think of this in terms of edges/changes. When you want to generate a change on an output, do `o_stream.valid=1`, with the change specified in `o_stream.p`. If you want to sample a signal, do `o_stream.valid=1` with `o_stream.p.i_en=1`. If you want to do both at the same time, do both at the same time.
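(For concreteness, a minimal testbench sketch of that convention using the Amaranth 0.5 async simulator API; only `o_stream.valid`, `o_stream.ready` and `o_stream.p.i_en` appear in this discussion, so the pin field name below is an assumption:)
```python
# Sketch only: produce a change on an output and request a sample in the same
# o_stream transfer. `clk` is an illustrative payload field name.
async def put_and_sample(ctx, dut, clk_value):
    ctx.set(dut.o_stream.p.clk, clk_value)  # the change to generate on the pin
    ctx.set(dut.o_stream.p.i_en, 1)         # also capture the input for this cycle
    ctx.set(dut.o_stream.valid, 1)
    await ctx.tick().until(dut.o_stream.ready)
    ctx.set(dut.o_stream.valid, 0)
```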
<mwkmwkmwk[m]>
... fuck's sake, I try to search for SPI and my browser autocompletes to "spiders georg"
<mwkmwkmwk[m]>
yeahh
<mwkmwkmwk[m]>
that's the same kind of thinking in terms of edges that makes DDR buffers so confusing
<mwkmwkmwk[m]>
the other possible reading is: `oe` means that I'll be driving this cycle; `i_en` means that I care about what the device will be driving this cycle; `oe && i_en` means loopback
<mwkmwkmwk[m]>
but anyway; a convention has been picked, and it does match how the DDRBuffer works for better or worse, so I'm just going to analyse this code according to it
<mwkmwkmwk[m]>
ok yeah this looks clearly wrong
<purdeaandrei[m]>
If the convention was that i_en means what the device will be driving this cycle, then the previous implementation was still problematic. This is because we should be thinking in terms of interface cycles, which could be much slower than FPGA cycles. If we generate an interface clock edge at the same time as we set i_en, and that interface clock edge causes the device to drive something, then we might not be giving it enough time for the driven signal to reach the FPGA.
<mwkmwkmwk[m]>
well yeah; the iostream would remember i_en and use it on the next edge to decide whether to send the captured data
<purdeaandrei[m]>
But then it would need to know which signal is an interface clock edge
<purdeaandrei[m]>
Or I guess it could assume that any stream transfer is a clock edge
<purdeaandrei[m]>
I don't know, I find the documented behavior more intuitive. (i.e. i_en refers to the past signal value)
<mwkmwkmwk[m]>
mhm
<mwkmwkmwk[m]>
it's a workable definition I think, just not the one I expected
<whitequark[m]>
it's definitely not the intent that each stream transfer is a clock edge
<whitequark[m]>
and the interface may not be clocked anyway (!)
<whitequark[m]>
the really useful thing about IOStreamer is that you can implement it on top of e.g. JTAG BSCAN
<whitequark[m]>
so ideally, while not a strict requirement, everything in Glasgow would use IOStreamer or similar
<whitequark[m]>
if that's the case, then Glasgow can be used as a remote programming tool, to program SPI flashes over SWD/GPIO or stuff like that
<whitequark[m]>
which I think would be really handy
<purdeaandrei[m]>
the skid test is a bit more confusing
<purdeaandrei[m]>
I don't just want to fix the asserts, cause then it might test less
<purdeaandrei[m]>
Also I think there might be more problems here... o_stream.ready depends on i_stream.ready combinatorially
<purdeaandrei[m]>
that feels wrong
<whitequark[m]>
it's not
<whitequark[m]>
that's how readiness works in most components that have a pass-through stream
<purdeaandrei[m]>
one can't send data, even if i_en is zero, if i_stream is busy
<whitequark[m]>
correct
<whitequark[m]>
that's how backpressure works with IOStreamer
<whitequark[m]>
the idea here is that whenever you can't process input for whatever reason (in Glasgow, because the IN FIFO is full), the whole thing stops
<whitequark[m]>
I do see your point about `i_en` being zero, but readiness can't depend on the value of `i_en`
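(The coupling being discussed, as a sketch; illustrative rather than the actual iostream.py source:)
```python
from amaranth import Module


def forward_ready(m: Module, o_stream, i_stream):
    # o_stream only accepts a command when the i_stream side can accept the
    # sample it may eventually produce; backpressure stalls the whole pipeline.
    m.d.comb += o_stream.ready.eq(i_stream.ready)
```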
<purdeaandrei[m]>
the skid buffer test is annoying me, I think I'll just rewrite it without trying to make sure that the original intent is maintained
<purdeaandrei[m]>
(this is for the latency=1 PR)
<purdeaandrei[m]>
I'll remove the check for combinatorial response to o_stream
<whitequark[m]>
as long as the behavior is well tested and the tests themselves are clear and well written I don't care exactly how the tests are written
<purdeaandrei[m]>
actually this is annoying, I don't want to write a test that will be thrown away when we change skid buffer behavior
<whitequark[m]>
feel free to change anything you don't like, I'm not married to that code
<purdeaandrei[m]>
is it okay if I do 1 PR that reduces latency and changes skid buffer behavior?
<whitequark[m]>
hm
<whitequark[m]>
in one commit?
<purdeaandrei[m]>
Or, have a very simplified skid buffer test in the latency=1 PR
<purdeaandrei[m]>
that doesn't test all corners of the skid buffer
<purdeaandrei[m]>
because we're about to change that behavior anyway
<whitequark[m]>
isn't there already a test in latency=1 PR? I don't understand
<whitequark[m]>
oh I see, the tests are broken
<purdeaandrei[m]>
I added my own test, but the existing tests were failing
redstarcomrade has joined #glasgow
redstarcomrade has quit [Changing host]
redstarcomrade has joined #glasgow
<purdeaandrei[m]>
the existing skid buffer test is still broken
<whitequark[m]>
right, I've seen you change one of them but I guess there were more
<purdeaandrei[m]>
the basic test I fixed
<whitequark[m]>
what you could do is to remove the skid buffer test entirely in the first commit, and then add a second commit which adds new behavior and tests
<whitequark[m]>
this way, each commit individually passes the testsuite, and the PR can be merged as a whole after being reviewed as a unit
<whitequark[m]>
I feel like the test you added has some issues with how it's written
<purdeaandrei[m]>
I wrote it to be generic
<whitequark[m]>
the names of the testbenches are inconsistent and unclear to me, it's not using the stream helper functions, and it's kind of hard to understand how the four testbenches interact
<purdeaandrei[m]>
to be latency-agnostic
<whitequark[m]>
the former two issues are with the style, the last one is with the substance
<whitequark[m]>
and I guess the substance is harder to understand if the style is inconsistent
<whitequark[m]>
aren't you testing that the value being sampled is the same as the value you put there previously? in which case a pair of testbenches (one outputting some sequence, one checking that sequence) and a `m.d.sync += sim_port.i.eq(sim_port.i)` would do the job
<whitequark[m]>
but maybe I'm missing some subtlety of it
<purdeaandrei[m]>
no, the outputs are unrelated, they're only there for looking at the waveforms visually
<purdeaandrei[m]>
I could remove the data output
<purdeaandrei[m]>
and keep only the clock output, and data input
<purdeaandrei[m]>
So just to clarify
<purdeaandrei[m]>
I have input_generator_tb
<purdeaandrei[m]>
that generates values
<purdeaandrei[m]>
I don't know which of these are going to be sampled, cause I'm not supposed to know the latency
<purdeaandrei[m]>
save_supposed_to_sample_values_tb() -- this looks at the interface clock positive edge; when that happens, that's when the IO is supposed to sample the signal, so it saves the value. This is the expected value.
<purdeaandrei[m]>
and I have a reader testbench, which reads from i_stream, and compares with the expected
<purdeaandrei[m]>
I'll edit it a bit to make things clearer...
<purdeaandrei[m]>
So clock edge 2 is where the o_stream transfer takes place, while clock edge 4 is where the i_stream transfer takes place. At which point i_stream samples "A", and whatever bit came before "A"
<purdeaandrei[m]>
i.e. these are the actual sampling points, as seen from the outside signal's perspective:
<purdeaandrei[m]>
That means that if there's a burst of N bits read, then there would need to be N // 2 + 1 transfers on i_stream, in order to read it
<purdeaandrei[m]>
(as long as the burst begins on the rising edge)
jstein has joined #glasgow
<purdeaandrei[m]>
Yeah, not intuitive at all
<purdeaandrei[m]>
A latency of 3 really feels wrong, cause then it would be sampling bits B and C
<purdeaandrei[m]>
(I wonder why the latency of 3 currently works with qspi)
<purdeaandrei[m]>
Alternatively we could have varying latency, in order to sample A and B, and that would make sense specifically for DDR buffers, but then it would be inconsistent with FFBuffer.
hl has quit [Remote host closed the connection]
hl has joined #glasgow
jstein has quit [Read error: Connection reset by peer]
<purdeaandrei[m]>
For the purpose of discussion, I think this would implement what's needed to prevent deadlock: https://github.com/GlasgowEmbedded/glasgow/pull/675/files Still need to do more detailed testing. Also, unfortunately this keeps the combinatorial path, but at least prevents deadlock.
<purdeaandrei[m]>
It's probably missing a condition of ```with m.If(i_en & self.i_stream.ready):``` in which case the skid buffer should be shifted, without incrementing/decrementing skid_at. Anyway let's see what happens with the SDR latency = 1 PR first...
redstarcomrade has quit [Read error: Connection reset by peer]
<mwkmwkmwk[m]>
I don't think this is necessary because, when the skid buffer is non-empty, the whole thing grinds to a halt
<purdeaandrei[m]>
In short: stream rules would allow the i_stream consumer to implement: m.d.comb += ready.eq(valid)
meklort_ has joined #glasgow
<purdeaandrei[m]>
But that would make the system not work at all
meklort has quit [Ping timeout: 248 seconds]
meklort_ is now known as meklort
<purdeaandrei[m]>
In addition to what's been discussed: I think in general, for any pass-through stream, combinatorial forwarding of the ready signal should only be okay if it doesn't prevent the pass-through stage from generating a valid signal. (The pass-through stage should be able to generate a valid signal either combinatorially from its input valid signal, or, if not combinatorially, then it should be able to take and register at least one transfer, so no deadlock happens.)
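(A sketch of that failure mode with illustrative names; none of this is the actual Glasgow code:)
```python
from amaranth import Module
from amaranth.lib import stream

m = Module()
o_stream = stream.Signature(8).create()  # commands into the pass-through stage
i_stream = stream.Signature(8).create()  # samples out of it, towards a consumer

# A consumer that is legal under the stream rules: only ready once it sees valid.
m.d.comb += i_stream.ready.eq(i_stream.valid)

# A pass-through stage that forwards readiness combinatorially, and that only
# produces i_stream.valid some time after an o_stream transfer has completed.
# Combined with the consumer above, nothing can ever make progress: deadlock.
m.d.comb += o_stream.ready.eq(i_stream.ready)
```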
<purdeaandrei[m]>
hmm, I wanted to write a test for DDR iostreamer, but I'm getting a "DDR buffers not supported in simulation" error
<whitequark[m]>
purdeaandrei: technical reasons; we didn't want to commit to the topology of posedge flop + negedge flop + mux because it wasn't totally clear that this is the best way to do it
<whitequark[m]>
(and pre-capture for negedge flop)
<whitequark[m]>
the options are that and a simulator process, but we don't have a way to hook up the process yet
<mwkmwkmwk[m]>
that and the whole "we have no way to have latency match whatever hw platform you end up using" issue
<whitequark[m]>
that also but we can just say that the simulated one has some arbitrary fixed latency
<whitequark[m]>
it'd still be quite useful
<purdeaandrei[m]>
I think I could do a context manager that temporarily overrides amaranth.lib.io.DDRBuffer
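(A minimal sketch of that override idea, assuming the replacement model exists under some hypothetical name like `SimDDRBuffer`:)
```python
import contextlib
import unittest.mock

import amaranth.lib.io


@contextlib.contextmanager
def override_ddr_buffer(replacement):
    """Temporarily swap out amaranth.lib.io.DDRBuffer, e.g. for a SimDDRBuffer."""
    with unittest.mock.patch.object(amaranth.lib.io, "DDRBuffer", replacement):
        yield
```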
<purdeaandrei[m]>
you mean like sub-half-cycle latencies?
<whitequark[m]>
no, full cycle
<whitequark[m]>
that's pretty cursed
<whitequark[m]>
you should probably make a subclass with a different elaborate implementation for the simulation case
<mwkmwkmwk[m]>
the problem with DDRBuffer is that the Lattice hardware DDR output buffer has 3 cycles of latency for some godsforsaken reason
<mwkmwkmwk[m]>
so we cannot commit to actually having well-defined latency.
<mwkmwkmwk[m]>
(and by "Lattice" I mean "Lattice and Gowin" because Gowin FPGAs are just Lattice FPGAs with the serial numbers filed off)
<purdeaandrei[m]>
yes, but the cursed override is what I'd still need to do, right?
<whitequark[m]>
no, you just use the subclass
<whitequark[m]>
(in IOStreamer)
<purdeaandrei[m]>
oh, so I then add a parameter to the constructor, to be able to override what class to use, right?
<mwkmwkmwk[m]>
no you just always use your new shiny DDRBuffer
<mwkmwkmwk[m]>
just make sure it falls back to the normal implementation in non-sim case.
<purdeaandrei[m]>
ah, okay, got it, that would work, but then iostream.py would essentially contain test code, wouldn't it be better to do something like this?
<benny2366[m]>
so you're arguing with Google, ok cool 🙂
<whitequark[m]>
yep! I have domain knowledge that you lack
<purdeaandrei[m]>
Okay, so the 3 cycles thing is not something I should worry about in the context of glasgow, it's only something that affects Amaranth in general
<whitequark[m]>
it doesn't matter what name you slap on the chip, it matters which team has originally developed the silicon
<benny2366[m]>
euhm yeah, unless you come with evidence your knowledge means nothing!
<whitequark[m]>
fuck off lol
<mwkmwkmwk[m]>
it's something that will likely be an issue at some point, given the revE plans
galibert[m] has joined #glasgow
<galibert[m]>
Your knowledge means nothing compared to the power of Cthulu!
<mwkmwkmwk[m]>
but... that's some time off in the future, and we'll probably have some better infra in place in amaranth at that point
<benny2366[m]>
aah is that how you treat people who challenge you, really mature. but this is the last thing i will say on this. also I would really really suggest that you change your docs then: "The SiliconBluePlatform class provides a base platform to support Lattice (earlier SiliconBlue) iCE40 devices."
<mwkmwkmwk[m]>
and exactly why would we change it?
<galibert[m]>
Altera earlier Intel earlier Altera?
<whitequark[m]>
that is how I treat people who waste my time and refuse to read a clear explanation that I provided twice in this chat
<whitequark[m]>
once you learn that life isn't about winning an argument, message me an apology and I'll unblock you
<galibert[m]>
So, never?
<whitequark[cis]>
it seems so
<galibert[m]>
He probably can't be seen losing an argument with a girrrrl
<whitequark[m]>
yes, it's a small part of why we don't currently provide a DDR buffer sim
<whitequark[m]>
(or at least there is no better one)
<whitequark[m]>
this method is kind of a historical artefact more than an intentional addition to the language
<whitequark[m]>
eventually we'd probably get some kind of SimulationPlatform and all of that code will break, which would cause some issues
<purdeaandrei[m]>
Actually I see amaranth checks:
```python
if isinstance(self._port, SimulationPort):
```
<purdeaandrei[m]>
would that be better?
<whitequark[m]>
that has slightly different semantics, but yes, I'd say go for it, it's a more robust check
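(A sketch of that subclass-plus-isinstance approach; the class name and the simulation branch body are assumptions, not the eventual Glasgow code:)
```python
from amaranth import Module
from amaranth.lib import io


class SimulatableDDRBuffer(io.DDRBuffer):
    def elaborate(self, platform):
        if not isinstance(self._port, io.SimulationPort):
            # Real hardware: defer to the normal DDRBuffer implementation.
            return super().elaborate(platform)
        # Simulation: build a behavioural model out of ordinary flops instead.
        m = Module()
        ...
        return m
```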
<purdeaandrei[m]>
Can I add a process or a testbench to a module to implement my simulation DDRBuffer?
<purdeaandrei[m]>
It's not clear to me how I can return non-synthesizable logic from elaborate()
<whitequark[m]>
you can't (I tried to explain it above, let me go over it again)
<whitequark[m]>
the two options are to reframe the buffer as synthesizable logic, or to add a process externally
<whitequark[m]>
the simulator doesn't let you return a process from elaborate, mainly because right now it's not well defined what happens with clock domains or eg ResetInserter
<whitequark[m]>
so it would be a footgun if it was allowed
twix has joined #glasgow
<whitequark[m]>
yes. that makes it less generic though
<whitequark[m]>
since you can't get at the buffer itself if you follow our coding guidelines (you should not modify the object being elaborated), it would have to be stashed inside the platform
<whitequark[m]>
meaning you'd need a custom simulation platform to go with the buffer
<whitequark[m]>
Glasgow actually used to do this before SimulationPort was upstreamed, look it up in git history how to do this
<whitequark[m]>
it's a lot of ceremony for something so simple, sadly, but that'd be the way to go
<whitequark[m]>
(specifically, look up the history for the right Fragment.get invocation)
<purdeaandrei[m]>
So if I go the synthesizable logic route, I'd need to introduce a new clock domain, right?
<whitequark[m]>
yes
<whitequark[m]>
clock domains are local to the module (since 0.6, in 0.5.1 used in Glasgow you need to use local=True)
<whitequark[m]>
so this is invisible to the outer logic
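(A sketch of that: a module-local negative-edge clock domain for the falling-edge half of the behavioural model, with local=True as needed on 0.5.1:)
```python
from amaranth import ClockDomain, ClockSignal, Module

m = Module()
# Falling-edge domain, private to this module, fed from the same clock as sync;
# flops added with m.d.neg then update on the falling edge of the sync clock.
m.domains += ClockDomain("neg", clk_edge="neg", local=True)
m.d.comb += ClockSignal("neg").eq(ClockSignal("sync"))
```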
<purdeaandrei[m]>
and at the same time the flipflop outputs a new value
<purdeaandrei[m]>
I wonder if that's something that can happen in real hardware
<purdeaandrei[m]>
i.e. can you even use a DDR buffer to generate a hazard-less signal?
<purdeaandrei[m]>
Maybe there are timing guarantees in the real FPGA, that the mux is glitchless and the clock always reaches the mux before the mux sees the flipflop changing its output
<whitequark[m]>
this glitch is the consideration that kept DDR buffers from being implemented upstream
<purdeaandrei[m]>
but is that actually the case?
<whitequark[m]>
I don't know if the FPGA functions the way the schematic does
<whitequark[m]>
I've seen Xilinx patents and they do it differently
<whitequark[m]>
but I don't know exactly what ice40 does because I haven't RE'd it from die shots
<whitequark[m]>
that said, I can tell you a way to work around this in the simulator
<whitequark[m]>
actually, hm. I'm not quite sure that what I'm thinking of can be made to work, but let's see
<whitequark[m]>
make a new module where all you do is m.d.comb += o.eq(i)
<whitequark[m]>
this is a buffer that adds an infinitesimal amount of time to the path it's on
<whitequark[m]>
one "delta cycle", per the terminology used in the Amaranth simulator internally
<galibert[m]>
it's not collapsed?
<whitequark[m]>
I think that should work, but since we've made some changes to the language that optimize the netlist a bit more in 0.5, I'm not completely sure
<whitequark[m]>
you can use fs_per_delta argument to write_vcd to visualize this
<whitequark[m]>
if you aren't using it already
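(Usage sketch, assuming an existing Simulator instance named `sim`:)
```python
# A nonzero fs_per_delta gives each delta cycle a visible width in the VCD.
with sim.write_vcd("iostream.vcd", fs_per_delta=1):
    sim.run()
```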
<purdeaandrei[m]>
it was already there
<whitequark[m]>
so the goal when building the DDRBuffer model would be to add a delay to the clock before it reaches the mux
<whitequark[m]>
a delay of one delta cycle should be sufficient
<purdeaandrei[m]>
I guess I don't care how it actually implements it, what I care about is: does it offer a glitchless guarantee?
<purdeaandrei[m]>
I mean do I need to make my FFBuffer implementation glitchless? Or do I need to make my testbench not sensitive to glitches?
<purdeaandrei[m]>
cause right now my testbench is sensitive, and thus fails
<purdeaandrei[m]>
but I could just make it not sensitive, and not care about the glitches on the output
<purdeaandrei[m]>
So I guess it sounds like the language may be allowed to optimize this out, so if it doesn't optimize it out now, it may optimize it out in the future
<purdeaandrei[m]>
So that doesn't sound very robust
<purdeaandrei[m]>
maybe I should just make my testbench non-glitch sensitive, even if real FPGAs guarantee glitchless-ness, and be done with it, cause then it won't fail with a future language optimization change
<purdeaandrei[m]>
I don't necessarily mind seeing the glitches in gtkwave, since my testcase is really there to just verify sampling time
<purdeaandrei[m]>
I'd still like to know if FPGAs guarantee glitchlessness in general, for my own knowledge though
<whitequark[m]>
correct
<whitequark[m]>
it does
<whitequark[m]>
btw, that schematic is verifiably wrong (as in I personally verified that it doesn't match hardware)
<whitequark[m]>
specifically, you cannot trigger the flops by toggling CLOCK_ENABLE
<whitequark[m]>
and keeping the clock at 1
<whitequark[m]>
so I think it was drawn by someone after the fact as an illustration, and bears little resemblance to what the device implements
<purdeaandrei[m]>
interesting
<whitequark[m]>
you can try it with your glasgow if you want
<whitequark[m]>
I thought that implementing it this way was batshit, because it would result in really obvious race conditions whenever CLOCK_ENABLE has glitches on it
<whitequark[m]>
and of course, it's just not implemented that way
<galibert[m]>
So not batshit for once?
<purdeaandrei[m]>
I think they do this kind of stuff sometimes, in silicon, in order to optimize clock tree power
<purdeaandrei[m]>
and I think they use timing analysis to prove it correct
<purdeaandrei[m]>
but I've never seen it with DDR
<purdeaandrei[m]>
I think it can only be done with simple flipflops
<whitequark[m]>
yes, I believe it's sometimes done
<whitequark[m]>
which is why I made sure to check what the silicon does instead of assuming one way or another
<whitequark[m]>
I would generally say this is a good idea, but in this specific case, consider that it would be valuable to test the QSPI controller with a simulated flash
<whitequark[m]>
however the glitches would totally break it
<whitequark[m]>
I guess there is a way to handle it, which is to add some logic to the simulated flash to filter out really short glitches
<whitequark[m]>
I don't like it, but I guess sometimes people do put glitch filters like that on clocks (eg the mandated I2C 50ns filter)
<whitequark[m]>
(I think it's mandated? I've seen it everywhere but I don't recall the spec)
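(A sketch of that kind of short-glitch filter, settling-counter style; names and the settle count are illustrative, not taken from any spec:)
```python
from amaranth import Elaboratable, Module, Signal


class GlitchFilter(Elaboratable):
    """Output follows the input only once it has been stable for `settle` cycles."""
    def __init__(self, settle=3):
        self.settle = settle
        self.i = Signal()
        self.o = Signal()

    def elaborate(self, platform):
        m = Module()
        count = Signal(range(self.settle + 1))
        with m.If(self.i == self.o):
            m.d.sync += count.eq(0)
        with m.Elif(count == self.settle):
            m.d.sync += [self.o.eq(self.i), count.eq(0)]
        with m.Else():
            m.d.sync += count.eq(count + 1)
        return m
```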
<_whitenotifier-3>
[glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-674-2b73c7948d64b6da1c70931f95e11e2a55354ea1 - https://github.com/GlasgowEmbedded/glasgow
<purdeaandrei[m]>
what would be the best way to write a test that automatically starts from a fresh state, and tests multiple stimuli? Should I reset the dut? or should I just re-instantiate everything?
Guest83 has quit [Client Quit]
<whitequark[m]>
latter
<whitequark[m]>
it could be multiple tests, with some behavior factored out
<purdeaandrei[m]>
I don't think I'd want to do 10000 test_ functions (in this specific artificial example)
<whitequark[m]>
yes, something like that
<whitequark[m]>
I wouldn't suggest 10k functions, that is of course too burdensome
Guest75 has joined #glasgow
Guest3 has joined #glasgow
redstarcomrade has joined #glasgow
redstarcomrade has joined #glasgow
Guest75 has quit [Client Quit]
Guest3 has quit [Client Quit]
Eli2| has joined #glasgow
Eli2_ has quit [Ping timeout: 248 seconds]
Attie[m] has joined #glasgow
<Attie[m]>
Catherine: there has been mention of a GPIO applet in the past... IIRC you're not against the idea - correct?
<whitequark[cis]>
am fine with the idea
<Attie[m]>
Great, thanks! I'll keep going with my scrappy implementation then
<Attie[m]>
a couple of follow up Q's...
<Attie[m]>
am I correct in thinking that "pin groups" all have a common OE? i.e: one can't be output with another input
<whitequark[cis]>
(brb)
<Attie[m]>
if so / due to this, I'm tempted to just grab all 16x I/Os, like the self test applet does. thoughts?
<Attie[m]>
2: am I correct in thinking a FIFO + state machine will be more performant / preferred to device registers? ("performant" when considering that Python is directly in the loop)
<whitequark[cis]>
no, applets shouldn't do platform.request for the pins (this won't even be portable across revA/revC) and the selftest applet should probably not even be an applet at all
<whitequark[cis]>
Attie[m]: no, with Amaranth 0.5 there is no requirement, you can slice a port group to instantiate several buffers (one per pin for example)
<whitequark[cis]>
Attie[m]: yes, it should probably be just a FIFO that grabs 16 bits and distributes them across buffers
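(A sketch of what that looks like with Amaranth 0.5 lib.io, slicing a port into per-pin buffers; this is illustrative, not a finished applet:)
```python
from amaranth import Module
from amaranth.lib import io


def add_pin_buffers(m: Module, port):
    """One bidirectional buffer per pin, so every pin gets its own o/oe/i."""
    buffers = []
    for index in range(len(port)):
        buffer = io.Buffer("io", port[index])
        m.submodules += buffer
        buffers.append(buffer)
    return buffers
```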
<Attie[m]>
ok, I'll review my side and come back
<whitequark[cis]>
port groups at this point are just a UI thing
<whitequark[cis]>
you get a GlasgowPort either way, and applets should use that abstraction with buffers
<whitequark[cis]>
otherwise the applet analyzer won't work (when we port it to Amaranth 0.5)
<whitequark[cis]>
and in general, that's the only abstraction I'm willing to stabilize at some later point
<Attie[m]>
I thought it was a glasgow-side restriction, that has Nx IO, and 1x OE commoned across them all
<Attie[m]>
(and agree / understood re stability)
<whitequark[cis]>
no
<whitequark[cis]>
Glasgow has individual OEs... that's why revA had those awful FXMA chips
<whitequark[cis]>
cause there weren't enough pins for individual OEs
<whitequark[cis]>
so for revC we had to switch to the BGA and per-bit level shifters
<Attie[m]>
ahh, this looks like a change since Amaranth 0.5 - awesome
<Attie[m]>
stale knowledge...
<whitequark[cis]>
yep! we put a loooot of work into lib.io
<Attie[m]>
looks very shiny! sorry for not keeping up
<whitequark[cis]>
yeah, that person really did not want to read
WilfriedKlaebe[m has joined #glasgow
<WilfriedKlaebe[m>
.oO( it's even in the fine article at https://en.m.wikipedia.org/wiki/ICE_(FPGA): »Lattice received the iCE brand as part of its 2011 acquisition of SiliconBlue Technologies.« )
skipwich has quit [Ping timeout: 246 seconds]
skipwich has joined #glasgow
skipwich_ has quit [Ping timeout: 252 seconds]
skipwich__ has joined #glasgow
skipwich has quit [Ping timeout: 252 seconds]
skipwich__ is now known as skipwich
<purdeaandrei[m]>
hmm, so, randomized testing revealed yet another bug in iostreamer
<purdeaandrei[m]>
with DDR buffer, so latency of 2
<purdeaandrei[m]>
if you launch 2 transfers with i_en=1 in two consecutive clock cycles
<purdeaandrei[m]>
and on the immediately following clock cycle you set i_stream.ready low, and then again on the following clock cycle you set i_stream.ready high
<purdeaandrei[m]>
then a sample will be lost
<whitequark[cis]>
(is this before or after your changes? I think my mental model only covers before)