whitequark[cis] changed the topic of #glasgow to: https://glasgow-embedded.org · digital interface explorer · https://www.crowdsupply.com/1bitsquared/glasgow · code https://github.com/GlasgowEmbedded/glasgow · logs https://libera.irclog.whitequark.org/glasgow · matrix #glasgow-interface-explorer:matrix.org · discord https://1bitsquared.com/pages/chat
tom has joined #glasgow
tom has quit [Client Quit]
<_whitenotifier-3> [glasgow] whitequark created branch update-lockfile - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] whitequark opened pull request #673: Update `pdm.min.lock` - https://github.com/GlasgowEmbedded/glasgow/pull/673
<_whitenotifier-3> [glasgow] whitequark commented on pull request #671: U13 function description updated. U13 has same function as U20 (was same as U21). - https://github.com/GlasgowEmbedded/glasgow/pull/671#issuecomment-2307958731
<_whitenotifier-3> [glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-671-c3fee747d7d2a19ed4afaa31cfa911b2a03c4831 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-673-c3fee747d7d2a19ed4afaa31cfa911b2a03c4831 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] github-merge-queue[bot] deleted branch gh-readonly-queue/main/pr-671-c3fee747d7d2a19ed4afaa31cfa911b2a03c4831 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [GlasgowEmbedded/glasgow] github-merge-queue[bot] pushed 1 commit to main [+0/-0/±1] https://github.com/GlasgowEmbedded/glasgow/compare/c3fee747d7d2...f3501e97e153
<_whitenotifier-3> [GlasgowEmbedded/glasgow] github-merge-queue[bot] f3501e9 - software: update `pdm.min.lock`.
<_whitenotifier-3> [glasgow] github-merge-queue[bot] deleted branch gh-readonly-queue/main/pr-673-c3fee747d7d2a19ed4afaa31cfa911b2a03c4831 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] whitequark closed pull request #673: Update `pdm.min.lock` - https://github.com/GlasgowEmbedded/glasgow/pull/673
<_whitenotifier-3> [glasgow] whitequark deleted branch update-lockfile - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] whitequark synchronize pull request #671: U13 function description updated. U13 has same function as U20 (was same as U21). - https://github.com/GlasgowEmbedded/glasgow/pull/671
<_whitenotifier-3> [glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-671-f3501e97e153380658205d8ce282add1aa866156 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [GlasgowEmbedded/glasgow] github-merge-queue[bot] pushed 1 commit to main [+0/-0/±1] https://github.com/GlasgowEmbedded/glasgow/compare/f3501e97e153...2b73c7948d64
<_whitenotifier-3> [GlasgowEmbedded/glasgow] id43 2b73c79 - manual: revisions/revC3: U13 function description updated. U13 has same function description as U20 (was same as U21).
<_whitenotifier-3> [glasgow] github-merge-queue[bot] deleted branch gh-readonly-queue/main/pr-671-f3501e97e153380658205d8ce282add1aa866156 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] whitequark closed pull request #671: U13 function description updated. U13 has same function as U20 (was same as U21). - https://github.com/GlasgowEmbedded/glasgow/pull/671
DragoonAethis has quit [Quit: hej-hej!]
DragoonAethis has joined #glasgow
<purdeaandrei[m]> what signal should I use in gtkwave to illustrate the actual port signal value?
<whitequark[m]> you're using `SimulationPort`, right?
<purdeaandrei[m]> yes
<whitequark[m]> that has a signal internally (three, actually), you can grab port.o and put it in traces= while doing write_vcd
<whitequark[m]> that's probably the most reliable/easiest way to put it on the screen
<purdeaandrei[m]> with sim.write_vcd("test.vcd", fs_per_delta=1, traces={"clk_out_actual_value": clk_port.o}):
<purdeaandrei[m]> am I doing something wrong here?
<purdeaandrei[m]> nothing with the name clk_out_actual_value shows up in my vcd
<purdeaandrei[m]> aah okay, I get it
<purdeaandrei[m]> the signal's already there in the vcd, this works by generating a gtkwave "Save file"
<purdeaandrei[m]> How can I put the clock domain clock signal in there?
<purdeaandrei[m]> hmm, something's buggy with traces
<purdeaandrei[m]> traces = [dut.o_stream.p.meta], will result in a RecursionError: maximum recursion depth exceeded
Wanda[cis] has joined #glasgow
<Wanda[cis]> do you have a traceback for that?
<whitequark[m]> probably, the facility has been recently the subject of a contribution that added {} syntax
<whitequark[m]> please report an MCVE as an Amaranth issue
<Wanda[cis]> oh, ugh, forgot about netsplit
mwkmwkmwk[m] has joined #glasgow
<mwkmwkmwk[m]> do you have a traceback for that?
<purdeaandrei[m]> Alright, so I wrote a latency-agnostic testbench, this is what it looks like when it's wrong, with a latency of 2. It's supposed to sample on the rising edge of clk_out__o, but samples on the falling edge instead:
<purdeaandrei[m]> And this is what it looks like corrected with the latency of 1:
<purdeaandrei[m]> hopefully that makes it very clear visually what was wrong
<purdeaandrei[m]> my testbench will continue to pass if latency changes in the future, as long as the sampling happens at the right point in time
<_whitenotifier-3> [glasgow] purdeaandrei opened pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674
<purdeaandrei[m]> So here's a PR fixing IOstreamer + my new testcase: https://github.com/GlasgowEmbedded/glasgow/pull/674
<purdeaandrei[m]> however the existing two IOStreamer testcases are now failing
<purdeaandrei[m]> still need to fix those
<mwkmwkmwk[m]> ... let me look at this
<mwkmwkmwk[m]> I wonder if the underlying problem is that the definition of i_en is suboptimal?
<mwkmwkmwk[m]> " a payload with the data captured at the same time as
<mwkmwkmwk[m]> the outputs were updated appears on `i_stream.p.i`."
<mwkmwkmwk[m]> or... not sure if suboptimal or confusing
<whitequark[m]> I thought that's unambiguous enough
<mwkmwkmwk[m]> but what it means is: `i_en` relates to *past* clock cycle's data, `o` and `oe` relate to *upcoming* clock cycle's data
<whitequark[m]> yeah
<whitequark[m]> this is what you want for non source synchronous interfaces, I think
<mwkmwkmwk[m]> is it?
<mwkmwkmwk[m]> hm
<whitequark[m]> yes, consider an SPI flash
<whitequark[m]> the worst case is that the time between your clock posedge and data update is 0
<whitequark[m]> so you want your capture window to end just before the posedge
<whitequark[m]> which is the behavior that I'm stating in the doc
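The convention being described here can be sketched as a toy pad model (plain Python with hypothetical names, not the real IOStreamer API): each `o_stream` transfer updates the pad on the next clock edge, while `i_en` captures the pad value as it was just *before* that edge, i.e. the past cycle's data.

```python
# Toy single-wire model of the convention discussed above (hypothetical
# names, not the real IOStreamer API): each transfer applies `o` to the
# pad after `latency` cycles, while `i_en` captures the pad value as it
# was just before the edge that applies this transfer's `o`.
def run_transfers(transfers, latency=1):
    pad = 0          # current pad value driven by the buffer
    pipeline = []    # models the output-path flop latency
    captures = []    # values that would appear on i_stream.p.i
    for o, i_en in transfers:
        if i_en:
            captures.append(pad)  # capture window ends before the edge
        pipeline.append(o)
        if len(pipeline) >= latency:
            pad = pipeline.pop(0)
    return captures

# toggle the pad while sampling every cycle: captures lag by one cycle
print(run_transfers([(1, 1), (0, 1), (1, 1)]))  # -> [0, 1, 0]
```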
<_whitenotifier-3> [glasgow] whitequark commented on pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674#issuecomment-2307990910
<purdeaandrei[m]> In my opinion it's best to think of this in terms of edges/changes. When you want to generate a change on an output, do `o_stream.valid=1`, with the change specified in `o_stream.p`. If you want to sample a signal, do `o_stream.valid=1` with `o_stream.p.i_en=1`. If you want to do both at the same time, do both at the same time.
<mwkmwkmwk[m]> ... fuck's sake, I try to search for SPI and my browser autocompletes to "spiders georg"
<mwkmwkmwk[m]> yeahh
<mwkmwkmwk[m]> that's the same kind of thinking in terms of edges that makes DDR buffers so confusing
<mwkmwkmwk[m]> the other possible reading is: `oe` means that I'll be driving this cycle; `i_en` means that I care about what the device will be driving this cycle; `oe && i_en` means loopback
<mwkmwkmwk[m]> but anyway; a convention has been picked, and it does match how the DDRBuffer works for better or worse, so I'm just going to analyse this code according to it
<mwkmwkmwk[m]> ok yeah this looks clearly wrong
<purdeaandrei[m]> If the convention was that i_en means what the device will be driving this cycle, then the previous implementation was still problematic. This is because we should be thinking in terms of interface cycles, which could be much slower than FPGA cycles. If we generate an interface clock edge at the same time as we set i_en, and that interface clock edge causes the device to drive something, then we might not be giving it enough time for the driven signal to reach the FPGA.
<mwkmwkmwk[m]> well yeah; the iostream would remember i_en and use it on the next edge to decide whether to send the captured data
<purdeaandrei[m]> But then it would need to know which signal is an interface clock edge
<purdeaandrei[m]> Or I guess it could assume that any stream transfer is a clock edge
<purdeaandrei[m]> I don't know, I find the documented behavior more intuitive. (i.e. i_en refers to the past signal value)
<mwkmwkmwk[m]> mhm
<mwkmwkmwk[m]> it's a workable definition I think, just not the one I expected
<whitequark[m]> it's definitely not the intent that each stream transfer is a clock edge
<whitequark[m]> and the interface may not be clocked anyway (!)
<whitequark[m]> the really useful thing about IOStreamer is that you can implement it on top of e.g. JTAG BSCAN
<whitequark[m]> so ideally, while not a strict requirement, everything in Glasgow would use IOStreamer or similar
<whitequark[m]> if that's the case, then Glasgow can be used as a remote programming tool, to program SPI flashes over SWD/GPIO or stuff like that
<whitequark[m]> which I think would be really handy
<mwkmwkmwk[m]> anyway, yeah, latency should be 1
<mwkmwkmwk[m]> for FFBuffer
<_whitenotifier-3> [glasgow] whitequark commented on pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674#issuecomment-2308000669
<mwkmwkmwk[m]> for DDRBuffer uhhh.
<whitequark[m]> thanks for the review, mwkmwkmwk (@_discord_729063422678794270:catircservices.org)!
<mwkmwkmwk[m]> what is the definition, anyway?
<mwkmwkmwk[m]> should I read it as "`i_en` applies to both clock edges at which we will be putting out new data for this transfer"?
<mwkmwkmwk[m]> I guess that's the only consistent reading
<mwkmwkmwk[m]> so I'll just go with this
<mwkmwkmwk[m]> for DDR, the latency should be 2, I think?
<mwkmwkmwk[m]> yeah 2
<mwkmwkmwk[m]> the latency doesn't differ between `i[0]` and `i[1]`, exactly because it is defined in terms of edges
<whitequark[m]> I'm not sure I understand that
<whitequark[m]> oh, I guess it's because the `i_en` definition is the same weird one that `DDRBuffer` itself currently uses?
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674
<mwkmwkmwk[m]> yup
<purdeaandrei[m]> So I fixed the basic test
<purdeaandrei[m]> the skid test is a bit more confusing
<purdeaandrei[m]> I don't just want to fix the asserts, cause it might test less then
<purdeaandrei[m]> Also I think there might be more problems here... o_stream.ready depends on i_stream.ready combinatorially
<purdeaandrei[m]> that feels wrong
<whitequark[m]> it's not
<whitequark[m]> that's how readiness works in most components that have a pass-through stream
<purdeaandrei[m]> one can't send data, even if i_en is zero, if i_stream is busy
<whitequark[m]> correct
<whitequark[m]> that's how backpressure works with IOStreamer
<whitequark[m]> the idea here is that whenever you can't process input for whatever reason (in Glasgow, because the IN FIFO is full), the whole thing stops
<whitequark[m]> I do see your point about `i_en` being zero, but readiness can't depend on the value of `i_en`
<whitequark[m]> actually
<whitequark[m]> I'm wrong
<whitequark[m]> you *could* make `o_stream.ready` combinationally depend on `i_stream.ready | ~o_stream.p.i_en`, it won't violate the [stream rules](https://amaranth-lang.org/docs/amaranth/latest/stdlib/stream.html#data-transfer-rules)
<whitequark[m]> having thought about it, I now have no opinion on whether i_en should be taken into account when forming o_stream.ready
<whitequark[m]> either option would work fine and I can't think of anything that would make one option clearly preferable over another
<whitequark[m]> if you have something like that, I'd like to hear it
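The readiness option being weighed here can be written out as a one-line truth table (plain Python with stand-in booleans, not the real Amaranth signals): `o_stream.ready = i_stream.ready | ~o_stream.p.i_en`.

```python
# Truth table for the readiness option discussed above (stand-in
# booleans, not the real Amaranth signals):
#   o_stream.ready = i_stream.ready | ~o_stream.p.i_en
def o_stream_ready(i_stream_ready, i_en):
    return i_stream_ready or not i_en

# Transfers that don't request a capture (i_en=0) are never blocked by a
# stalled i_stream; ones that do (i_en=1) must wait for its readiness.
for i_ready in (False, True):
    for i_en in (False, True):
        print(f"i_stream.ready={i_ready} i_en={i_en} "
              f"-> o_stream.ready={o_stream_ready(i_ready, i_en)}")
```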
meklort has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
meklort has joined #glasgow
<purdeaandrei[m]> I guess it's alright either way, was just thinking through it...
<whitequark[m]> I thought about it initially as any other pass-through component (consider e.g. the framer or deframer)
<whitequark[m]> but of course it has side effects, so that analysis is not entirely correct
<whitequark[m]> you could say that o_stream items without i_en are functionally discarded after their side effect is applied
marcus_c has quit [Ping timeout: 252 seconds]
<whitequark[m]> my concern would be that there could be an even more complex combinational path through i_en, and that could pessimize timing
<whitequark[m]> right now, QSPI just about works at 85 MHz, but to get there I had to add a pipeline stage to IOStreamer output
<whitequark[m]> this feels like something I don't want to complicate the readiness feedback in
cyrozap has quit [Remote host closed the connection]
cyrozap has joined #glasgow
hl has quit [Remote host closed the connection]
hl has joined #glasgow
miek has quit [Ping timeout: 252 seconds]
sam_w has quit [Read error: Connection reset by peer]
sorear has quit [Read error: Connection reset by peer]
_alice has quit [Read error: Connection reset by peer]
JimGM0UIN has quit [Read error: Connection reset by peer]
jdek has quit [Read error: Connection reset by peer]
kitaleth has quit [Ping timeout: 248 seconds]
lane has quit [Read error: Connection reset by peer]
_alice has joined #glasgow
JimGM0UIN has joined #glasgow
sorear has joined #glasgow
sam_w has joined #glasgow
jdek has joined #glasgow
ar-jan has quit [Quit: ZNC - https://znc.in]
Stary has quit [Quit: ZNC - http://znc.in]
ar-jan has joined #glasgow
DragoonAethis has quit [Quit: No Ping reply in 180 seconds.]
V has quit [Remote host closed the connection]
dne has quit [Read error: Connection reset by peer]
dne has joined #glasgow
purdeaandrei[m] has quit [Ping timeout: 252 seconds]
Wanda[cis] has quit [Ping timeout: 272 seconds]
V has joined #glasgow
account[m] has quit [Ping timeout: 252 seconds]
DragoonAethis has joined #glasgow
ari has quit [Ping timeout: 272 seconds]
miek has joined #glasgow
Stary has joined #glasgow
<whitequark[m]> I don't follow
<whitequark[m]> that restriction is on valid, not ready
<whitequark[m]> that message is confusingly phrased, can you rephrase it?
<whitequark[m]> it does, yes, the rule is there to prevent deadlocks
<whitequark[m]> but I don't understand this part
<whitequark[m]> but valid cannot depend on ready, yes?
<whitequark[m]> that rule in effect means that streams are push based
<whitequark[m]> i.e. a producer *must* present a valid payload whenever it's available, whether or not `ready` is asserted
<whitequark[m]> and then the payload will be stuck in the same state for all the time during which ready is low
<whitequark[m]> this means that IOStreamer can spy on i_en to see if it should set ready or not
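The push rule being described (the producer must present and hold a valid payload regardless of `ready`, and a transfer happens only when both are high) can be modeled in a few lines of plain Python with hypothetical names:

```python
# Tiny model of the push-based stream rule described above: the producer
# holds valid/payload steady; a transfer occurs on each cycle where both
# valid and ready are high.
def transfer_cycles(valid, ready):
    # returns the cycle indices at which a transfer happens
    return [t for t, (v, r) in enumerate(zip(valid, ready)) if v and r]

#        cycle:  0  1  2  3  4
valid = [1, 1, 1, 0, 1]
ready = [0, 0, 1, 1, 1]
# payload is stuck in the same state for cycles 0-2, until ready rises
print(transfer_cycles(valid, ready))  # -> [2, 4]
```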
Stary has quit [*.net *.split]
DragoonAethis has quit [*.net *.split]
jdek has quit [*.net *.split]
_alice has quit [*.net *.split]
Fridtjof has quit [*.net *.split]
V has quit [*.net *.split]
ar-jan has quit [*.net *.split]
sorear has quit [*.net *.split]
anuejn has quit [*.net *.split]
m42uko has quit [*.net *.split]
edf0 has quit [*.net *.split]
Xesxen has quit [*.net *.split]
mwk has quit [*.net *.split]
bgamari has quit [*.net *.split]
<whitequark[m]> oh, I see
<whitequark[m]> yes, you are right, I misunderstood you
Xesxen has joined #glasgow
<whitequark[m]> the logic around the skid buffer should allow up to 1 entry to be occupied even if i_stream.ready is low
marcus_c has joined #glasgow
<whitequark[m]> and thus have one more entry in the buffer
lane has joined #glasgow
sorear has joined #glasgow
anuejn has joined #glasgow
ar-jan has joined #glasgow
V has joined #glasgow
DragoonAethis has joined #glasgow
account[m] has joined #glasgow
purdeaandrei[m] has joined #glasgow
edf0 has joined #glasgow
mwk has joined #glasgow
bgamari has joined #glasgow
Stary has joined #glasgow
ari has joined #glasgow
Wanda[cis] has joined #glasgow
jdek has joined #glasgow
Fridtjof has joined #glasgow
_alice has joined #glasgow
<purdeaandrei[m]> as it is right now that logic only allows the entry to be occupied if the read is already in flight
ari has joined #glasgow
ari has quit [Changing host]
<purdeaandrei[m]> Also going back to the case when we never do i_en=1
<purdeaandrei[m]> therefore i_stream valid never goes high
<purdeaandrei[m]> therefore i_stream consumer is allowed to never set i_stream ready high
<purdeaandrei[m]> with current logic that would block iostreamer
<purdeaandrei[m]> but it shouldn't
kitaleth has joined #glasgow
<whitequark[m]> yep
<whitequark[m]> I think you've made a good motivation for the current behavior being wrong
<whitequark[m]> so I'd say let's change it
<_whitenotifier-3> [glasgow] whitequark reviewed pull request #674 commit - https://github.com/GlasgowEmbedded/glasgow/pull/674#discussion_r1729714150
<purdeaandrei[m]> the skid buffer test is annoying me, I think I'll just rewrite it without trying to make sure that the original intent is maintained
<purdeaandrei[m]> (this is for the latency=1 PR)
<purdeaandrei[m]> I'll remove the check for combinatorial response to o_stream
<whitequark[m]> as long as the behavior is well tested and the tests themselves are clear and well written I don't care exactly how the tests are written
<purdeaandrei[m]> actually this is annoying, I don't want to write a test that will be thrown away when we change skid buffer behavior
<whitequark[m]> feel free to change anything you don't like, I'm not married to that code
<purdeaandrei[m]> is it okay if I do 1 PR that reduces latency and changes skid buffer behavior?
<whitequark[m]> hm
<whitequark[m]> in one commit?
<purdeaandrei[m]> Or, have a very simplified skid buffer test in the latency=1 PR
<purdeaandrei[m]> that doesn't test all corners of the skid buffer
<purdeaandrei[m]> because we're about to change that behavior anyway
<whitequark[m]> isn't there already a test in latency=1 PR? I don't understand
<whitequark[m]> oh I see, the tests are broken
<purdeaandrei[m]> I added my own test, but the existing tests were failing
redstarcomrade has joined #glasgow
redstarcomrade has quit [Changing host]
redstarcomrade has joined #glasgow
<purdeaandrei[m]> the existing skid buffer test is still broken
<whitequark[m]> right, I've seen you change one of them but I guess there were more
<purdeaandrei[m]> the basic test I fixed
<whitequark[m]> what you could do is to remove the skid buffer test entirely in the first commit, and then add a second commit which adds new behavior and tests
<whitequark[m]> this way, each commit individually passes the testsuite, and the PR can be merged as a whole after being reviewed as a unit
<whitequark[m]> I feel like the test you added has some issues with how it's written
<purdeaandrei[m]> I wrote it to be generic
<whitequark[m]> the names of the testbenches are inconsistent and unclear to me, it's not using the stream helper functions, and it's kind of hard to understand how the four testbenches interact
<purdeaandrei[m]> to be latency-agnostic
<whitequark[m]> the former two issues are with the style, the last one is with the substance
<whitequark[m]> and I guess the substance is harder to understand if the style is inconsistent
<whitequark[m]> aren't you testing that the value being sampled is the same as the value you put there previously? in which case a pair of testbenches (one outputting some sequence, one checking that sequence) and a `m.d.sync += sim_port.i.eq(sim_port.o)` would do the job
<whitequark[m]> but maybe I'm missing some subtlety of it
<purdeaandrei[m]> no, the outputs are unrelated, are only there for looking at the waveforms visually
<purdeaandrei[m]> I could remove the data output
<purdeaandrei[m]> and keep only the clock output, and data input
<purdeaandrei[m]> So just to clarify
<purdeaandrei[m]> I have input_generator_tb
<purdeaandrei[m]> that generates values
<purdeaandrei[m]> I don't know which of these are going to be sampled, cause I'm not supposed to know the latency
<purdeaandrei[m]> save_supposed_to_sample_values_tb() -- this looks at the interface clock positive edge; when that happens, that's when the IO is supposed to sample the signal, so it saves the value, and this becomes the expected value
<purdeaandrei[m]> and I have a reader testbench, which reads from i_stream, and compares with the expected
<purdeaandrei[m]> I'll edit it a bit to make things clearer...
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674
<purdeaandrei[m]> I removed data_out, I renamed some things, and added docstrings to explain how things interact.
<purdeaandrei[m]> Do you mean stream_get and stream_put?
<purdeaandrei[m]> those are defined in test_qspi
<purdeaandrei[m]> should I just copy them over?
<whitequark[m]> that or import them
<whitequark[m]> I need to take care of something so I'll check this out later
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674
<purdeaandrei[m]> Alright, I switched it over to use helper functions, hopefully it's more readable now
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674
<purdeaandrei[m]> I've also simplified the skid buffer test
<purdeaandrei[m]> the new test now passes, and it would also pass with modifications we're planning for the skid buffer
<purdeaandrei[m]> So I guess PR #674 can be individually reviewed and merged as it is, and skid buffer modifications can go into its own PR
<_whitenotifier-3> [glasgow] purdeaandrei reviewed pull request #674 commit - https://github.com/GlasgowEmbedded/glasgow/pull/674#discussion_r1729742373
lane has quit [Ping timeout: 248 seconds]
DragoonAethis has quit [Quit: No Ping reply in 180 seconds.]
Wanda[cis] has quit [Ping timeout: 252 seconds]
account[m] has quit [Ping timeout: 272 seconds]
purdeaandrei[m] has quit [Ping timeout: 272 seconds]
anuejn has quit [Remote host closed the connection]
DragoonAethis has joined #glasgow
bgamari has quit [Quit: ZNC 1.8.2 - https://znc.in]
lane has joined #glasgow
bgamari has joined #glasgow
anuejn has joined #glasgow
account[m] has joined #glasgow
purdeaandrei[m] has joined #glasgow
Wanda[cis] has joined #glasgow
<purdeaandrei[m]> Does DDRBuffer instantiate an extra flip-flop that's not in the PIO block, for the falling edge output?
tec has quit [Quit: bye!]
tec has joined #glasgow
<purdeaandrei[m]> Looks like the answer to that question is YES
<purdeaandrei[m]> I think I agree
<purdeaandrei[m]> So clock edge 2 is where the o_stream transfer takes place, while clock edge 4 is where the i_stream transfer takes place. At which point i_stream samples "A", and whatever bit came before "A"
<purdeaandrei[m]> i.e. these are the actual sampling points, as seen from the outside signal's perspective:
<purdeaandrei[m]> That means that if there's a burst of N bits read, then there would need to be N // 2 + 1 transfers on i_stream, in order to read it
<purdeaandrei[m]> (as long as the burst begins on the rising edge)
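The N // 2 + 1 count can be sanity-checked with a toy model (hypothetical, not the real DDRBuffer behavior): the first transfer pairs a stale pre-burst edge with bit A, so the burst's bits straddle transfer boundaries.

```python
# Toy model of the misalignment described above: each i_stream transfer
# returns two sampled edges, but the first useful bit shares its
# transfer with a stale bit from before the burst ("-"), so N bits span
# N // 2 + 1 transfers (burst starting on a rising edge).
def ddr_read(bits):
    halves = ["-"] + list(bits)   # "-" marks an edge outside the burst
    if len(halves) % 2:
        halves.append("-")
    return [tuple(halves[i:i + 2]) for i in range(0, len(halves), 2)]

print(ddr_read("ABCD"))  # -> [('-', 'A'), ('B', 'C'), ('D', '-')]
```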
jstein has joined #glasgow
<purdeaandrei[m]> Yeah, not intuitive at all
<purdeaandrei[m]> A latency of 3 really feels wrong, cause then it would be sampling bits B and C
<purdeaandrei[m]> (I wonder why the latency of 3 currently works with qspi)
<purdeaandrei[m]> Alternatively we could have varying latency, in order to sample A and B, and that would make sense specifically for DDR buffers, but then it would be inconsistent with FFBuffer.
hl has quit [Remote host closed the connection]
hl has joined #glasgow
jstein has quit [Read error: Connection reset by peer]
<_whitenotifier-3> [glasgow] purdeaandrei opened pull request #675: gateware.iostream.IOStreamer: let o_stream transfer if i_stream not rdy - https://github.com/GlasgowEmbedded/glasgow/pull/675
<purdeaandrei[m]> For the purpose of discussion, I think this would implement what's needed to prevent deadlock: https://github.com/GlasgowEmbedded/glasgow/pull/675/files Still need to do more detailed testing. Also, unfortunately this keeps the combinatorial path, but at least prevents deadlock.
<purdeaandrei[m]> It's probably missing a condition of ```with m.If(i_en & self.i_stream.ready):``` In which case the skid buffer should be shifted, without incrementing/decrementing skid_at. Anyway, let's see what happens with the SDR latency=1 PR first...
redstarcomrade has quit [Read error: Connection reset by peer]
<mwkmwkmwk[m]> I don't think this is necessary because, when the skid buffer is non-empty, the whole thing grinds to a halt
<mwkmwkmwk[m]> and won't accept more data?
<purdeaandrei[m]> In short: stream rules would allow the i_stream consumer to implement: m.d.comb += ready.eq(valid)
meklort_ has joined #glasgow
<purdeaandrei[m]> But that would make the system not work at all
meklort has quit [Ping timeout: 248 seconds]
meklort_ is now known as meklort
<purdeaandrei[m]> In addition to what's been discussed: I think in general, for any pass-through stream, combinatorial forwarding of the ready signal should only be okay if it doesn't prevent the pass-through stage from generating a valid signal. (the pass-through stage should be able to generate a valid signal either combinatorially from its input valid signal, or, if not combinatorially, then it should be able to take and register at least one transfer, so no deadlock happens)
<purdeaandrei[m]> hmm, I wanted to write a test for DDR iostreamer, but I'm getting DDR buffers not supported in simulation error
<purdeaandrei[m]> why's that?
<whitequark[m]> purdeaandrei (@_discord_693328004947902465:catircservices.org) yeah
<whitequark[m]> purdeaandrei (@_discord_693328004947902465:catircservices.org) technical reasons; we didn't want to commit to the topology of posedge flop + negedge flop + mux because it wasn't totally clear that this is the best way to do it
<whitequark[m]> (and pre-capture for negedge flop)
<whitequark[m]> the options are that and a simulator process, but we don't have a way to hook up the process yet
<mwkmwkmwk[m]> that and the whole "we have no way to have latency match whatever hw platform you end up using" issue
<whitequark[m]> that also but we can just say that the simulated one has some arbitrary fixed latency
<whitequark[m]> it'd still be quite useful
<purdeaandrei[m]> I think I could do a context manager that temporarly overrides amaranth.lib.io.DDRBuffer
<purdeaandrei[m]> you mean like sub-half-cycle latencies?
<whitequark[m]> no, full cycle
<whitequark[m]> that's pretty cursed
<whitequark[m]> you should probably make a subclass with a different elaborate implementation for the simulation case
<mwkmwkmwk[m]> the problem with DDRBuffer is that the Lattice hardware DDR output buffer has 3 cycles of latency for some godsforsaken reason
<mwkmwkmwk[m]> so we cannot commit to actually having well-defined latency.
<purdeaandrei[m]> Oh, t1, and t2 from this image are different from platform to platform? https://amaranth-lang.org/docs/amaranth/latest/stdlib/_images/io/ddr-buffer.svg
<mwkmwkmwk[m]> yes.
<mwkmwkmwk[m]> (and by "Lattice" I mean "Lattice and Gowin" because Gowin FPGAs are just Lattice FPGAs with the serial numbers filed off)
<purdeaandrei[m]> yes, but the cursed override is what i'd still need to do right?
<whitequark[m]> no, you just use the subclass
<whitequark[m]> (in IOStreamer)
<purdeaandrei[m]> oh, so I then add a parameter to the constructor, to be able to override what class to use, right?
<mwkmwkmwk[m]> no you just always use your new shiny DDRBuffer
<mwkmwkmwk[m]> just make sure it falls back to the normal implementation in non-sim case.
<purdeaandrei[m]> ah, okay, got it, that would work, but then iostream.py would essentially contain test code, wouldn't it be better to do something like this?
<whitequark[m]> no, it's worse, because that parameter is effectively noise
<purdeaandrei[m]> hmm? where does 3 cycles come from?
<whitequark[m]> nobody should care about it as there is no reason to inject the dependency
<whitequark[m]> the hardware has 3 flops in the output path
<purdeaandrei[m]> are the ice40 docs incorrect?
<whitequark[cis]> ice40 is not lattice
<purdeaandrei[m]> there are these posedge and negedge flops in the PIO:
<whitequark[m]> ice40 is not lattice
<whitequark[m]> it's siliconblue
<purdeaandrei[m]> ah, sorry
<whitequark[m]> lattice is ecp* and nexus
<whitequark[m]> yeah, we refer to FPGA families by the original developer
benny2366[m] has joined #glasgow
<benny2366[m]> https://www.latticesemi.com/iCE40 it think you might be wrong
<whitequark[m]> nope, I'm right
<benny2366[m]> so you're arguing with Google, ok cool 🙂
<whitequark[m]> yep! I have domain knowledge that you lack
<purdeaandrei[m]> Okay, so the 3 cycles thing is not something I should worry about in the context of glasgow, it's only something that affects Amaranth in general
<whitequark[m]> it doesn't matter what name you slap on the chip, it matters which team has originally developed the silicon
<benny2366[m]> euhm, yeah, unless you come with evidence your knowledge means nothing!
<whitequark[m]> fuck off lol
<mwkmwkmwk[m]> it's something that will likely be an issue at some point, given the revE plans
galibert[m] has joined #glasgow
<galibert[m]> Your knowledge means nothing compared to the power of Cthulu!
<mwkmwkmwk[m]> but... that's some time off in the future, and we'll probably have some better infra in place in amaranth at that point
<benny2366[m]> aah, is that how you treat people who challenge you? really mature. but this is the last thing i will say on this. also I would really really suggest that you change your docs then: "The SiliconBluePlatform class provides a base platform to support Lattice (earlier SiliconBlue) iCE40 devices."
<mwkmwkmwk[m]> and exactly why would we change it?
<galibert[m]> Altera ealier Intel earlier Altera?
<whitequark[m]> that is how I treat people who waste my time and refuse to read a clear explanation that I provided twice in this chat
<whitequark[m]> once you learn that life isn't about winning an argument, message me an apology and I'll unblock you
<galibert[m]> So, never?
<whitequark[cis]> it seems so
<galibert[m]> He probably can't be seen losing an argument with a girrrrl
<whitequark[m]> yes, it's a small part of why we don't currently provide a DDR buffer sim
<galibert[m]> The variable latency?
<whitequark[m]> currently, yes
<whitequark[m]> (or at least there is no better one)
<whitequark[m]> this method is kind of a historical artefact more than an intentional addition to the language
<whitequark[m]> eventually we'd probably get some kind of SimulationPlatform and all of that code will break, which would cause some issues
<purdeaandrei[m]> Actually I see amaranth checks `if isinstance(self._port, SimulationPort):`
<purdeaandrei[m]> would that be better?
<whitequark[m]> that has slightly different semantics, but yes, I'd say go for it, it's a more robust check
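The "subclass that falls back" pattern suggested earlier, combined with the isinstance check quoted here, has roughly this shape (SimulationPort and DDRBuffer below are self-contained stand-ins, not imports of the real amaranth.lib.io API):

```python
# Generic shape of the fallback pattern discussed above, with stub
# classes standing in for the real amaranth.lib.io API: the buffer
# elaborates a behavioral model when its port is a simulation port, and
# falls back to the hardware primitive otherwise.
class SimulationPort:
    pass

class DDRBuffer:
    def __init__(self, direction, port):
        self._port = port

    def elaborate(self, platform):
        return "hardware DDR primitive"

class SimulatableDDRBuffer(DDRBuffer):
    def elaborate(self, platform):
        if isinstance(self._port, SimulationPort):
            # posedge flop + negedge flop + mux, expressed as plain
            # synthesizable logic the simulator can execute
            return "behavioral DDR model"
        return super().elaborate(platform)

print(SimulatableDDRBuffer("io", SimulationPort()).elaborate(None))
print(SimulatableDDRBuffer("io", object()).elaborate(None))
```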
<purdeaandrei[m]> Can I add a process or a testbench to a module to implement my simulation DDRBuffer?
<purdeaandrei[m]> It's not clear to me how I can return non-synthesizable logic from elaborate()
<whitequark[m]> you can't (I tried to explain it above, let me go over it again)
<whitequark[m]> the two options are to reframe the buffer as synthesizable logic, or to add a process externally
<whitequark[m]> the simulator doesn't let you return a process from elaborate, mainly because right now it's not well defined what happens with clock domains or eg ResetInserter
<whitequark[m]> so it would be a footgun if it was allowed
<whitequark[m]> yes. that makes it less generic though
<whitequark[m]> since you can't get at the buffer itself if you follow our coding guidelines (you should not modify the object being elaborated), it would have to be stashed inside the platform
<whitequark[m]> meaning you'd need a custom simulation platform to go with the buffer
<whitequark[m]> Glasgow actually used to do this before SimulationPort was upstreamed, look it up in git history how to do this
<whitequark[m]> it's a lot of ceremony for something so simple, sadly, but that'd be the way to go
<whitequark[m]> (specifically, look up the history for the right Fragment.get invocation)
<purdeaandrei[m]> So if I go the synthesizable logic route, I'd need to introduce a new clock domain, right?
<whitequark[m]> yes
<whitequark[m]> clock domains are local to the module (since 0.6, in 0.5.1 used in Glasgow you need to use local=True)
<whitequark[m]> so this is invisible to the outer logic
<mwkmwkmwk[m]> you'd want to use `ClockSignal(self.i_domain)` (or `o_domain`) as source, but yes
<purdeaandrei[m]> Hmm, so I have a testcase, but I'm getting glitches on the output:
<purdeaandrei[m]> it's when this mux switches so to speak:
<purdeaandrei[m]> and at the same time the flipflop outputs a new value
<purdeaandrei[m]> I wonder if that's something that can happen in real hardware
<purdeaandrei[m]> i.e. can you even use a DDR buffer to generate a hazard-less signal?
<purdeaandrei[m]> Maybe there's timing guarantees in the real FPGA, that the mux is glitchless and the clock always reaches the mux, before the mux sees the flipflop changing its output
<whitequark[m]> this glitch is the consideration that kept DDR buffers from being implemented upstream
<purdeaandrei[m]> but is that actually the case?
<whitequark[m]> I don't know if the FPGA functions the way the schematic does
<whitequark[m]> I've seen Xilinx patents and they do it differently
<whitequark[m]> but I don't know exactly what ice40 does because I haven't RE'd it from die shots
<whitequark[m]> that said, I can tell you a way to work around this in the simulator
<whitequark[m]> actually, hm. I'm not quite sure that what I'm thinking of can be made to work, but let's see
<whitequark[m]> make a new module where all you do is m.d.comb += o.eq(i)
<whitequark[m]> this is a buffer that adds an infinitesimal amount of time to the path it's on
<whitequark[m]> one "delta cycle", per the terminology used in the Amaranth simulator internally
<galibert[m]> it's not collapsed?
<whitequark[m]> I think that should work, but since we've made some changes to the language that optimize the netlist a bit more in 0.5, I'm not completely sure
<whitequark[m]> you can use fs_per_delta argument to write_vcd to visualize this
<whitequark[m]> if you aren't using it already
<purdeaandrei[m]> it was already there
<whitequark[m]> so the goal when building the DDRBuffer model would be to add a delay to the clock before it reaches the mux
<whitequark[m]> a delay of one delta cycle should be sufficient
<purdeaandrei[m]> I guess I don't care how it actually implements it, what I care about is: does it offer a glitchless guarantee?
<purdeaandrei[m]> I mean do I need to make my FFBuffer implementation glitchless? Or do I need to make my testbench not sensitive to glitches?
<purdeaandrei[m]> cause right now my testbench is sensitive, and thus fails
<purdeaandrei[m]> but I could just make it not sensitive, and not care about the glitches on the output
<purdeaandrei[m]> So I guess it sounds like the language may be allowed to optimize this out, so if it doesn't optimize it out now, it may optimize it out in the future
<purdeaandrei[m]> So that doesn't sound very robust
<purdeaandrei[m]> maybe I should just make my testbench non-glitch sensitive, even if real FPGAs guarantee glitchless-ness, and be done with it, cause then it won't fail with a future language optimization change
<purdeaandrei[m]> I don't necessarily mind seeing the glitches in gtkwave, since my testcase is really there to just verify sampling time
<purdeaandrei[m]> I'd still like to know if FPGAs guarantee glitchlessness in general, for my own knowledge though
<whitequark[m]> correct
<whitequark[m]> it does
<whitequark[m]> btw, that schematic is verifiably wrong (as in I personally verified that it doesn't match hardware)
<whitequark[m]> specifically, you cannot trigger the flops by toggling CLOCK_ENABLE
<whitequark[m]> and keeping the clock at 1
<whitequark[m]> so I think it was drawn by someone after the fact as an illustration, and bears little resemblance to what the device implements
<purdeaandrei[m]> interesting
<whitequark[m]> you can try it with your glasgow if you want
<whitequark[m]> I thought that implementing it this way was batshit, because it would result in really obvious race conditions whenever CLOCK_ENABLE has glitches on it
<whitequark[m]> and of course, it's just not implemented that way
<galibert[m]> So not batshit for once?
<purdeaandrei[m]> I think they do this kind of stuff sometimes, in silicon, in order to optimize clock tree power
<purdeaandrei[m]> and I think they use timing analysis to prove it correct
<purdeaandrei[m]> but I've never seen it with DDR
<purdeaandrei[m]> I think it can only be done with simple flipflops
<whitequark[m]> yes, I believe it's sometimes done
<whitequark[m]> which is why I made sure to check what the silicon does instead of assuming one way or another
<whitequark[m]> I would generally say this is a good idea, but in this specific case, consider that it would be valuable to test the QSPI controller with a simulated flash
<whitequark[m]> however the glitches would totally break it
<whitequark[m]> I guess there is a way to handle it, which is to add some logic to the simulated flash to filter out really short glitches
<whitequark[m]> I don't like it, but I guess sometimes people do put glitch filters like that on clocks (eg the mandated I2C 50ns filter)
<whitequark[m]> (I think it's mandated? I've seen it everywhere but I don't recall the spec)
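A behavioural model of such a spike filter, in plain Python over a list of samples (one sample per simulation step; a 50 ns figure would set `min_len` relative to your sample rate — names and shape are illustrative, not any particular library's API):

```python
def deglitch(samples, min_len):
    """Suppress pulses shorter than min_len samples: the output only changes
    state once the input has held the new value for min_len consecutive
    samples. Note this adds min_len samples of latency to every real edge."""
    out = []
    cur = samples[0] if samples else 0
    run = 0
    for s in samples:
        if s == cur:
            run = 0
        else:
            run += 1
            if run >= min_len:
                cur = s
                run = 0
        out.append(cur)
    return out
```

A one-sample glitch on an otherwise low line disappears, while a held edge passes through after the filter delay.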
<_whitenotifier-3> [glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-674-2b73c7948d64b6da1c70931f95e11e2a55354ea1 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [GlasgowEmbedded/glasgow] github-merge-queue[bot] pushed 1 commit to main [+0/-0/±2] https://github.com/GlasgowEmbedded/glasgow/compare/2b73c7948d64...81653bbcf021
<_whitenotifier-3> [glasgow] whitequark closed pull request #674: Fix iostreamer - https://github.com/GlasgowEmbedded/glasgow/pull/674
<_whitenotifier-3> [glasgow] github-merge-queue[bot] deleted branch gh-readonly-queue/main/pr-674-2b73c7948d64b6da1c70931f95e11e2a55354ea1 - https://github.com/GlasgowEmbedded/glasgow
<purdeaandrei[m]> I'm not sure
<_whitenotifier-3> [glasgow] purdeaandrei opened pull request #676: gateware.iostream.IOStreamer: fix bug for incorrect sampling DDR inputs - https://github.com/GlasgowEmbedded/glasgow/pull/676
<purdeaandrei[m]> Okay, I have my testbench and DDR latency fix here: https://github.com/GlasgowEmbedded/glasgow/pull/676
altracer[m] has joined #glasgow
<altracer[m]> I3C relies on I2C targets not seeing 12.5 MHz bitrates
<altracer[m]> it could be physical tRC for the real device... do you need >133MHz QuadSPI?
<_whitenotifier-3> [glasgow] whitequark reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730036822
<purdeaandrei[m]> This testbench is a bit hairy to read, but it makes it robust to glitches... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/gDigdsoNDDOxoNkwoDiFMGUA>)
<_whitenotifier-3> [glasgow] whitequark reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730037387
<_whitenotifier-3> [glasgow] whitequark reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730037638
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #676: gateware.iostream.IOStreamer: fix bug for incorrect sampling DDR inputs - https://github.com/GlasgowEmbedded/glasgow/pull/676
<_whitenotifier-3> [glasgow] purdeaandrei reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730039026
<_whitenotifier-3> [glasgow] purdeaandrei reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730039031
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #676: gateware.iostream.IOStreamer: fix bug for incorrect sampling DDR inputs - https://github.com/GlasgowEmbedded/glasgow/pull/676
<_whitenotifier-3> [glasgow] purdeaandrei reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730039466
<_whitenotifier-3> [glasgow] whitequark reviewed pull request #676 commit - https://github.com/GlasgowEmbedded/glasgow/pull/676#discussion_r1730040419
<_whitenotifier-3> [glasgow] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-676-81653bbcf021d720b356317a33b3383640889426 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [GlasgowEmbedded/glasgow] github-merge-queue[bot] pushed 1 commit to main [+0/-0/±2] https://github.com/GlasgowEmbedded/glasgow/compare/81653bbcf021...579ff961d9f2
<_whitenotifier-3> [GlasgowEmbedded/glasgow] purdeaandrei 579ff96 - gateware.iostream.IOStreamer: fix bug for incorrect sampling DDR inputs
<_whitenotifier-3> [glasgow] github-merge-queue[bot] deleted branch gh-readonly-queue/main/pr-676-81653bbcf021d720b356317a33b3383640889426 - https://github.com/GlasgowEmbedded/glasgow
<_whitenotifier-3> [glasgow] whitequark closed pull request #676: gateware.iostream.IOStreamer: fix bug for incorrect sampling DDR inputs - https://github.com/GlasgowEmbedded/glasgow/pull/676
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #675: gateware.iostream.IOStreamer: let o_stream transfer if i_stream not rdy - https://github.com/GlasgowEmbedded/glasgow/pull/675
<_whitenotifier-3> [glasgow] whitequark reviewed pull request #675 commit - https://github.com/GlasgowEmbedded/glasgow/pull/675#discussion_r1730044263
<_whitenotifier-3> [glasgow] purdeaandrei reviewed pull request #675 commit - https://github.com/GlasgowEmbedded/glasgow/pull/675#discussion_r1730046782
<purdeaandrei[m]> what would be the best way to write a test that automatically starts from a fresh state, and tests multiple stimuli? Should I reset the dut, or should I just re-instantiate everything?
<whitequark[m]> latter
<whitequark[m]> it could be multiple tests, with some behavior factored out
<purdeaandrei[m]> something like this?
<purdeaandrei[m]> I don't think I'd want to do 10000 test_ functions (in this specific artificial example)
<whitequark[m]> yes, something like that
<whitequark[m]> I wouldn't suggest 10k functions, that is of course too burdensome
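The suggested structure could look like this with stdlib unittest (the `Counter` stand-in DUT and all names are made up; the point is the factored-out helper that re-instantiates the DUT, plus `subTest` to avoid thousands of `test_` functions):

```python
import unittest

class Counter:
    """Stand-in DUT (the real one would be an Amaranth module under sim)."""
    def __init__(self):
        self.value = 0
    def step(self, inc):
        self.value += inc

class CounterTestCase(unittest.TestCase):
    def run_stimulus(self, increments):
        # Factored-out behavior: every call builds a fresh DUT, so each
        # stimulus starts from reset state instead of reusing stale state.
        dut = Counter()
        for inc in increments:
            dut.step(inc)
        return dut.value

    def test_single(self):
        self.assertEqual(self.run_stimulus([3]), 3)

    def test_many(self):
        # subTest keeps this one test function while still reporting each
        # stimulus separately on failure.
        for n in range(10):
            with self.subTest(n=n):
                self.assertEqual(self.run_stimulus([1] * n), n)
```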
Attie[m] has joined #glasgow
<Attie[m]> Catherine: there has been mention of a GPIO applet in the past... IIRC you're not against the idea - correct?
<whitequark[cis]> am fine with the idea
<Attie[m]> Great, thanks! I'll keep going with my scrappy implementation then
<Attie[m]> a couple of follow up Q's...
<Attie[m]> am I correct in thinking that "pin groups" all have a common OE? i.e: one can't be output with another input
<whitequark[cis]> (brb)
<Attie[m]> if so / due to this, I'm tempted to just grab all 16x I/Os, like the self test applet does. thoughts?
<Attie[m]> 2: am I correct in thinking a FIFO + state machine will be more performant / preferred to device registers? ("performant" when considering that Python is directly in the loop)
<whitequark[cis]> no, applets shouldn't do platform.request for the pins (this won't even be portable across revA/revC) and the selftest applet should probably not even be an applet at all
<whitequark[cis]> Attie[m]: no, with Amaranth 0.5 there is no requirement, you can slice a port group to instantiate several buffers (one per pin for example)
<whitequark[cis]> Attie[m]: yes, it should probably be just a FIFO that grabs 16 bits and distributes them across buffers
<Attie[m]> ok, I'll review my side and come back
<whitequark[cis]> port groups at this point are just a UI thing
<whitequark[cis]> you get a GlasgowPort either way, and applets should use that abstraction with buffers
<whitequark[cis]> otherwise the applet analyzer won't work (when we port it to Amaranth 0.5)
<whitequark[cis]> and in general, that's the only abstraction I'm willing to stabilize at some later point
<Attie[m]> I thought it was a glasgow-side restriction, that has Nx IO, and 1x OE commoned across them all
<Attie[m]> (and agree / understood re stability)
<whitequark[cis]> no
<whitequark[cis]> Glasgow has individual OEs... that's why revA had those awful FXMA chips
<whitequark[cis]> cause there weren't enough pins for individual OEs
<whitequark[cis]> so for revC we had to switch to the BGA and per-bit level shifters
<Attie[m]> ahh, this looks like a change since Amaranth 0.5 - awesome
<Attie[m]> stale knowledge...
<whitequark[cis]> yep! we put a loooot of work into lib.io
<Attie[m]> looks very shiny! sorry for not keeping up
<whitequark[cis]> no worries
<purdeaandrei[m]> Is there an idiomatic way I can assert my testcase finishes in N cycles, better than something like this?... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/kSoLonNxrxADuQXBStxWlcNp>)
<purdeaandrei[m]> or would it be better to just allow the testcase to run forever?
<whitequark[m]> maybe .run_until()?
<whitequark[m]> otherwise I think that's an OK way to handle it
<purdeaandrei[m]> sorry, actually sim.add_testbench(tb, background=True) cause otherwise it would always fail
mabl[m] has joined #glasgow
<mabl[m]> <whitequark[m]> "maybe .run_until()?" <- But that doesn't tell me the state of the testbenches right?
<whitequark[cis]> <mabl[m]> "But that doesn't tell me the..." <- yeah, testbenches don't really have a state
<whitequark[cis]> to Amaranth that is
<Attie[m]> after some reading, it seems I've fallen into the same trap - io.Buffer has only one OE for all signals
<Attie[m]> if I want separate OE for each signal, should I just use one io.Buffer per signal / pin?
<Wanda[cis]> yup
<Attie[m]> ok! thanks :)
<whitequark[cis]> we had an RFC about changing that and rejected it
<Attie[m]> fair - i think it makes sense, just caught me out
<whitequark[cis]> yeah, we might want to have a quick note on that in the docs
<Attie[m]> i'm using glasgow script, and finding that i need to do a FIFO read before the writes "take effect"...
<Attie[m]> it works fine / immediately using glasgow repl
<Attie[m]> any ideas?
<whitequark[cis]> yes
<whitequark[cis]> you need .flush()
<whitequark[cis]> the REPL does this automatically as a courtesy
<Attie[m]> got it
<Attie[m]> so if I'm making a "GPIOInterface", should I build that flush into the applet?
<Attie[m]> (rather than expect a script user to call it)
<whitequark[cis]> yeah, just put it into .set() or whatever
<Attie[m]> cool
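The wrapper might then look like this (hypothetical names throughout; the real Glasgow interface objects differ — the point is only that `set()` performs the flush itself, the way the REPL's implicit flush does):

```python
import asyncio

class GPIOInterface:
    def __init__(self, lower):
        # `lower` is assumed to expose async write() and flush(),
        # like the FIFO interface an applet talks to.
        self._lower = lower

    async def set(self, value):
        await self._lower.write(bytes([value]))
        # Flush here so a script user never has to remember to.
        await self._lower.flush()
```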
<Attie[m]> thoughts welcome
<whitequark[cis]> Attie[m]: I'll have a lot of feedback when I get to it
<Attie[m]> 🙈
<Attie[m]> when you're ready, thanks
<_whitenotifier-3> [glasgow] attie opened pull request #677: Implement a GPIO applet, for controlling the pins directly from Python - https://github.com/GlasgowEmbedded/glasgow/pull/677
<whitequark[cis]> yeah, that person really did not want to read
WilfriedKlaebe[m has joined #glasgow
<WilfriedKlaebe[m> .oO( it's even in the fine article at https://en.m.wikipedia.org/wiki/ICE_(FPGA): »Lattice received the iCE brand as part of its 2011 acquisition of SiliconBlue Technologies.« )
<purdeaandrei[m]> hmm, so, randomized testing revealed yet another bug in iostreamer
<purdeaandrei[m]> with DDR buffer, so latency of 2
<purdeaandrei[m]> if you launch 2 transfers with i_en=1 in two consecutive clock cycles
<purdeaandrei[m]> and on the immediately following clock cycle you set i_stream.ready low, and then again on the following clock cycle you set i_stream.ready high
<purdeaandrei[m]> then a sample will be lost
<whitequark[cis]> (is this before or after your changes? I think my mental model only covers before)
<_whitenotifier-3> [glasgow] purdeaandrei opened pull request #678: F iostreamer sample lost bugfix - https://github.com/GlasgowEmbedded/glasgow/pull/678
<_whitenotifier-3> [glasgow] purdeaandrei commented on pull request #678: IOstreamer sample lost bugfix - https://github.com/GlasgowEmbedded/glasgow/pull/678#issuecomment-2308574285
<_whitenotifier-3> [glasgow] purdeaandrei synchronize pull request #675: gateware.iostream.IOStreamer: let o_stream transfer if i_stream not rdy - https://github.com/GlasgowEmbedded/glasgow/pull/675
<_whitenotifier-3> [glasgow] purdeaandrei reviewed pull request #675 commit - https://github.com/GlasgowEmbedded/glasgow/pull/675#discussion_r1730173854