cr1901 changed the topic of ##yamahasynths to: Channel dedicated to questions and discussion of Yamaha FM Synthesizer internals and corresponding REing. Discussion of synthesis methods similar to the Yamaha line of chips, Sound Blasters + clones, PCM chips like RF5C68, and CD theory of operation are also on-topic. Channel logs: https://libera.irclog.whitequark.org/~h~yamahasynths
<cr1901> ejs: Have you ever 3d printed an ISA bracket?
<ejs> no. only MCA
<cr1901> Ahhh I see...
<cr1901> My AST RAMpage 286 did not come with a bracket. It is a full length card. I haven't had to touch it in a while.
<cr1901> I have learned the hard way tonight that the RAMpage is extremely sensitive to touch
<cr1901> And have gotten numerous crashes while trying to adjust my expansion cards :o
<cr1901> (maybe if I made a bracket, I wouldn't crash the machine several times)
<cr1901> ejs: Also, FWIW... I'm running diagnostics... I got a "corrected by ECC" alert on my hard disk. I didn't know MFM drives _had_ ECC on them.
<ejs> interesting
<cr1901> I thought they'd use a CRC for detection, like floppies
<cr1901> Though I guess you COULD use a CRC for error correction, since a CRC can give a Hamming distance >= 3.
<cr1901> I wonder why floppies don't bother doing correction...
<cr1901> ejs: Did you get as far as finding the CRC in your RLL project?
<tpw_rules> i don't think you can use a CRC as error correction
<ejs> it wasn't consistent. i could decode the metadata header but not the data sector
<NiGHTS> whitequark sur Twitter : "beyond some point yes, but I think you could go a long way by using better line codes, adding ECC, and using narrower tracks with servo control… https://t.co/CRVwDwsH5v"
<tpw_rules> does wq know the etymology of "cylinder"?
<cr1901> tpw_rules: The idea behind error correction is that you tack on extra bits to your data bits to increase the Hamming distance between possible valid pieces of data
<tpw_rules> well i guess if you additionally restructure the CRC to cover less bits
<cr1901> if one bit gets screwed up, you map it to the valid piece of data with the least Hamming distance to what you received. CRCs aren't optimized for this, but there's no reason why you can't do the same mapping.
<cr1901> ejs: Ahhh I see. Was mainly curious if you saw something resembling a CRC
<sorear> any distance > 2 error detecting code can be turned into an ECC with a brute force search for nearby valid codewords. the trick is that only some of them can do this sub-exponentially (see also McEliece cryptosystem where a ECC is disguised as a non-special checksum)
<ejs> oh it definitely had a CRC, i just never figured it out
<cr1901> sorear: Yea, that's a better explanation than mine :D. My question is basically "how do you optimally search for the valid codeword if you know the extra data bits are a CRC"?
<cr1901> Things like Hamming(7,4) and Reed Solomon use some cool properties to make the search quick
<cr1901> But Idk how to do it for a CRC
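The Hamming(7,4) shortcut cr1901 alludes to can be sketched in a few lines of Python (an illustration added here, not from the log): if the parity-check matrix's columns are chosen to be the binary numbers 1 through 7, the syndrome of a received word is literally the 1-based index of the flipped bit, so no search is needed at all.

```python
# Illustration of the Hamming(7,4) syndrome trick: the parity-check
# matrix H has the numbers 1..7 (in binary) as its columns, so the
# syndrome of a word with one flipped bit *is* that bit's 1-based index.
H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

def syndrome(word):
    """3-bit syndrome (list of 0/1) of a 7-bit word (list of 0/1)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Flip the single bit the syndrome points at, if it is nonzero."""
    s = syndrome(word)
    idx = s[0] + 2 * s[1] + 4 * s[2]   # read the syndrome as a number
    if idx:
        word[idx - 1] ^= 1             # columns are 1-indexed
    return word
```

With this column ordering the parity bits land at positions 1, 2, and 4 (the powers of two) and data at 3, 5, 6, 7, which is the classic presentation of the code.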
<sorear> the syndrome (computed CRC ^ stored CRC) is the discrete exponential of the flipped bit (for 1 flipped bit), so you need a discrete logarithm, realistically a lookup table
<sorear> if it's a CRC8 it can only recognize 255 different errors which is a problem if you have 4096 bit sectors
<cr1901> 512 bit sectors
<cr1901> tpw_rules: This is a shameless plug, but you might like this mini thread by me: https://twitter.com/cr1901/status/1385730725656371206
<NiGHTS> William D. Jones sur Twitter : "Finally (re-)learned something I've been meaning to do for a while: How error correcting codes work. Short version: * Add at least 3 extra bits (1 or 0, doesn't matter) to data payload * Hamming distance between each valid data payload becomes at least 3. * A 1-bit error >>"
<tpw_rules> many of those 255 errors would not be single bit errors though right
<sorear> or you could do the dlog iteratively (1 iteration per bit, but only if there was an error in the first place ... each iteration is a shift, XOR-if-carry, and equality test)
<sorear> well 255 different nonzero syndromes each corresponding to (2^512 - 1) / 255 errors
<sorear> restricting to 1-bit errors you still have 2 possible errors for each syndrome (the pattern repeats every 255 bits)
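sorear's lookup-table decoder can be sketched in Python. The polynomial (the low byte of x^8 + x^4 + x^3 + x^2 + 1, a primitive choice) and the 64-bit message length are assumptions for illustration, not anything taken from the hardware under discussion:

```python
# Sketch of single-bit error correction with a CRC-8 syndrome table.
# POLY is the low byte of x^8 + x^4 + x^3 + x^2 + 1, a primitive
# polynomial, so single-bit syndromes stay distinct for up to 255 bits.
POLY = 0x1D
N_DATA = 64                      # illustrative message length in bits
N = N_DATA + 8                   # codeword length including the CRC

def crc8(bits):
    """Bit-at-a-time CRC-8 (zero init, no final XOR) over a 0/1 list."""
    reg = 0
    for b in bits:
        msb = (reg >> 7) ^ b
        reg = ((reg << 1) & 0xFF) ^ (POLY if msb else 0)
    return reg

def encode(data):
    """Append the 8 remainder bits so the whole codeword CRCs to 0."""
    r = crc8(data)
    return data + [(r >> (7 - i)) & 1 for i in range(8)]

# The CRC is linear, so the syndrome of a received word with one
# flipped bit equals the CRC of the error pattern alone: tabulate them.
TABLE = {}
for pos in range(N):
    e = [0] * N
    e[pos] = 1
    TABLE[crc8(e)] = pos
assert len(TABLE) == N           # no collisions at this length

def correct(word):
    """Fix at most one flipped bit in place; return its index or None."""
    s = crc8(word)               # computed CRC ^ stored CRC, in effect
    if s == 0:
        return None              # no error detected
    pos = TABLE.get(s)
    if pos is not None:          # the "discrete logarithm" via lookup
        word[pos] ^= 1
    return pos
```

With an 8-bit CRC the table tops out at 255 usable bit positions, which is sorear's point about why it can't cover a 4096-bit sector.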
<cr1901> I forget if syndrome decoding refers to "any technique used to reduce the size of the received word => codeword LUT" or "the linear algebra trick that reveals the bit position in Hamming(7,4)"
<cr1901> sorear: I vaguely remember what a syndrome is, but see above^
<sorear> it's both
<sorear> but I also defined it in this usage above
<sorear> (the CRC *is* a linear code, although not Hamming)
<cr1901> Ahhh... I'll have to reread the wiki pages and your explanation :D. And TIL!
<cr1901> https://www.pdp8online.com/mfm/code/mfm/mfm_read_util_doc.html balrog: Is this the person who makes that Beagle MFM reader?
<cr1901> using the PRUs
<balrog> yes
<cr1901> The docs indeed say "MFM uses CRC as an error correction"
<balrog> oh, 3blue1brown did a very good pair of videos on how hamming codes work fwiw
<cr1901> I'll watch it tonight to refresh. The Wiki pages on syndrome decoding and error correction are also surprisingly decent
<sorear> > 32 bit or longer polynomials may be usable for ECC even if the original controller only used them for CRC. The quality of the polynomial chosen determines the miss-correction probability. Most 32 bit polynomials specify 5 bit correction though some say they can be used for up to 11 bit correction.
<tpw_rules> over how much data
<sorear> a sector
<tpw_rules> it seems weird to me that 32 bits additional could correct 5 bits out of 4096
<sorear> apparently floppies were actually a 16-bit CRC, not 8-bit?
<tpw_rules> i guess 4096-D is pretty sparse
<cr1901> Yes
<cr1901> 16-bit CCITT
<cr1901> well, see the Glasgow doc
<sorear> tpw_rules: the sphere packing limit requires 2^32 > (4096 choose 5)
<tpw_rules> but that's not true?
<tpw_rules> unless google calculator is lying to me
<sorear> no, it's not, so it would only be over a shorter codeword
<NiGHTS> glasgow/__init__.py at 99b81aac282f02ec3f3cda9566d929e30f1ecef4 · GlasgowEmbedded/glasgow · GitHub
<cr1901> I've declared intellectual bankruptcy on trying to understand the 10 billion ways to represent the same CRC
<cr1901> So the way I remember CRCs are: The CRC is the remainder of dividing a bitstring (message) by a smaller bitstring (polynomial), where all borrows from subtraction are ignored. Thus subtraction == addition.
<cr1901> Because subtraction == addition, appending the remainder to your message bitstring and then dividing by the polynomial will give you remainder 0
<cr1901> something like that, right?
<cr1901> M = P*Q + R. M = Message, P = CRC Polynomial, Q = Quotient, R = Remainder, and addition ignores carries
<cr1901> Thus M + R is "concatenating the remainder", and P*Q + R + R = P*Q + 2*R = P*Q + 0 = P*Q
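cr1901's M = P*Q + R identity can be checked mechanically with carryless arithmetic. This is a sketch added for illustration; the message and the 5-bit divisor x^4 + x + 1 are arbitrary toy values, not the polynomials discussed above:

```python
# GF(2) polynomials packed into Python ints: "division where all
# borrows are ignored" is long division with XOR as subtraction.

def gf2_divmod(m, p):
    """Divide m by p over GF(2); return (quotient, remainder)."""
    q = 0
    dp = p.bit_length()
    while m.bit_length() >= dp:
        shift = m.bit_length() - dp
        q |= 1 << shift
        m ^= p << shift            # subtraction == addition == XOR
    return q, m

def gf2_mul(a, b):
    """Carryless (XOR) multiply of two GF(2) polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

msg = 0b110101101                  # arbitrary 9-bit message
poly = 0b10011                     # x^4 + x + 1, a toy CRC polynomial
shifted = msg << 4                 # make room for the 4 check bits
q, r = gf2_divmod(shifted, poly)
assert gf2_mul(poly, q) ^ r == shifted     # M = P*Q + R, with + as XOR
codeword = shifted | r                     # append the remainder
assert gf2_divmod(codeword, poly)[1] == 0  # P*Q + R + R = P*Q: CRC is 0
```

The final assert is exactly the "appending the remainder gives remainder 0" property: adding R a second time cancels it, because addition and subtraction coincide over GF(2).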
<cr1901> sorear: My q is... how can you relate discrete logs to M = P*Q + R?
<sorear> we assumed that the codeword has exactly one error. so in polynomial space the error is something like x^100 for an error at bit position 100, and the syndrome is x^100 (mod P)
<cr1901> So x^100 would come from actual_message - desired_message, or actual_message ^ desired_message
<cr1901> So you'd calculate x^100 (mod P), that will get you a value between 0 and P - 1, and you can calculate the bit position of the error in a LUT
<cr1901> with P entries
<sorear> yes
<sorear> or you can just loop from 0 to 100, won't take too long
<cr1901> >If the CRC is n bits, then the total protected message length (i.e. including the CRC itself) must be less than 2^n bits. 
<NiGHTS> Error Correction Using the CRC | SpringerLink
<cr1901> I said 512 bits... I meant bytes
<cr1901> sorear: I apologize, I meant 4096 bits, I can't multiply
<cr1901> But 4096 + 16 bits is still less than 2^16
<cr1901> So all single bit errors should be corrected?
<sorear> yes
<sorear> as long as the CRC polynomial is primitive
<cr1901> right I vaguely remember that too
<cr1901> >(10:35:35 PM) sorear: or you can just loop from 0 to 100, won't take too long
<cr1901> Can you elaborate on what you mean by "looping"? Flipping every bit from 0 to 100 until the CRC check becomes 0?
<sorear> yes, but you can optimize that a bit
<cr1901> Keep in mind that in my particular case, this is a hard disk controller from 1984
<cr1901> It's not going to be fast, yet it _claims_ to have done ECC
<sorear> start with the syndrome "x^100", multiply it by x repeatedly until it becomes a known value "x^528" or an iteration limit is reached
<cr1901> And I'm guessing a LUT is prohibitive, as well as "iterating over the received data repeatedly"
<sorear> the multiply is just a single LFSR step
<cr1901> 528 = 512 + 16?
<cr1901> Hmmm
<cr1901> sorear: Cool, I'll have to chew on this until I understand it :P
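sorear's table-free search might look like this in Python (a sketch under assumptions: exponent-style bit numbering where bit j of the sector contributes x^j, and the CRC-16/CCITT polynomial from the floppy discussion rather than the WD controller's 32-bit one):

```python
# Iterative "discrete log": the syndrome of a 1-bit error at bit j is
# x^j mod P. Multiply by x (one LFSR step each time) until we reach
# the fixed value x^(N-1) mod P; the step count then reveals j.
POLY = 0x11021        # x^16 + x^12 + x^5 + 1 (CRC-16/CCITT)
N = 528               # 512 data bits + 16 check bits, as in the chat

def lfsr_step(s):
    """Multiply s by x modulo the CRC polynomial: one LFSR shift."""
    s <<= 1
    if s & 0x10000:
        s ^= POLY     # reduce the degree-16 overflow term
    return s

def x_pow(k):
    """x^k mod P, computed by k LFSR steps starting from 1."""
    s = 1
    for _ in range(k):
        s = lfsr_step(s)
    return s

def locate_error(syndrome):
    """Index of the flipped bit (0 = last bit of the sector), or None
    if no single-bit error explains the syndrome."""
    target = x_pow(N - 1)          # syndrome of an error in bit N-1
    s = syndrome
    for steps in range(N):
        if s == target:
            return (N - 1) - steps
        s = lfsr_step(s)
    return None
```

The CCITT polynomial is not primitive (it has an x + 1 factor), but the period of x modulo it is 32767, far more than 528 bits, so the match here is unambiguous; and as noted in the chat, each iteration is only a single LFSR step, which is cheap even for a 1984-era controller.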
<cr1901> Page 11
<cr1901> ECC polynomial is X^32 + X^28 + X^26 + X^19 + X^17 + X^10 + X^6 + X^2 + 1
<cr1901> It also refers to "5 bits of error correction"
<cr1901> sorear: PDF page 87 gives an overview if you're curious. It's not a complete datasheet sadly: https://usermanual.wiki/Document/1983WesternDigitalComponentsCatalog.266017478/view
<NiGHTS> 1983_Western_Digital_Components_Catalog 1983 Western Digital Components Catalog
q3k is now known as q3k|h
<cr1901> https://twitter.com/gatecatte/status/1421932659979296771 "Catgirls? In my retro tech potpourri channel? It's more likely than you think."
<NiGHTS> gatecat 🏳️‍🌈 sur Twitter : "… "
q3k|h is now known as q3k