<widlarizerEmilJT>
Looks like they had some usecase at some point for per-machine-stable fingerprints in the mix and then things got changed but they left that stuff in
<whitequark[cis]>
widlarizerEmilJT: no it's incompetence (see above)
<widlarizerEmilJT>
Boo
<widlarizerEmilJT>
So they had machines with bad RNG engines? I wonder what that story is
<whitequark[cis]>
no i don't think they knew you can ask OS for randomness
<Wanda[cis]>
no, they have never tried using "RNG engines"
<whitequark[cis]>
on linux you use getrandom() or maybe read from /dev/urandom; on macos you use arc4random() i think; on windows it's CryptGenRandom maybe?
<whitequark[cis]>
i'm not actually sure what you use on Windows
<Wanda[cis]>
it's C++, isn't it? it just has something in stdlib
<whitequark[cis]>
RtlGetRandom or ProcessPrng
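A minimal sketch of the "just ask the OS" approach being described, using getrandom(2) on Linux (glibc ≥ 2.25); the wrapper name `fill_random` is made up for illustration, and the other platforms would use the calls named above instead:

```c
#include <sys/types.h>
#include <sys/random.h>
#include <stddef.h>
#include <stdint.h>

/* Fill buf with len bytes of OS-provided randomness; returns 0 on success.
   getrandom(2) with flags=0 draws from the same pool as /dev/urandom and
   only blocks until the pool has been initialized once at boot. */
int fill_random(uint8_t *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = getrandom(buf + got, len - got, 0);
        if (n < 0)
            return -1;          /* real code would retry on EINTR */
        got += (size_t)n;
    }
    return 0;
}
```

On macOS the body would be a single `arc4random_buf(buf, len);`, which cannot fail.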
<widlarizerEmilJT>
oh. Booooooooo
<Wanda[cis]>
(though, having mentioned C++ stdlib random stuff, I'd like to call out a) the C++ standard for not actually mandating any sort of good randomness, b) mingw for making use of this optionality)
<widlarizerEmilJT>
Wikipedia says /dev/random is in linux since 1994
<whitequark[cis]>
/dev/random should never be used
<whitequark[cis]>
/dev/urandom is about that old though
<widlarizerEmilJT>
Well, I learned something new. I thought random was the correct one to use due to the idea of running out of entropy
<whitequark[cis]>
the whole "running out of entropy" business is complete nonsense no cryptographer backs
<whitequark[cis]>
it's just the kernel devs' misunderstanding of crypto
<whitequark[cis]>
it's possible to have _no_ entropy (real), which is a different condition from being "low on entropy" (fake)
<whitequark[cis]>
but it's very rare to genuinely be in that situation, and in most cases it means you're on an embedded device which should have saved 256 bits from /dev/urandom on the previous boot somewhere in NVRAM, or something like that
<whitequark[cis]>
(there's a really annoying case here where you're generating an ssh key on first boot. in this case you kind of just need an entropy source, you can't really avoid it)
<whitequark[cis]>
consider that a typical stream cipher is a PRNG that you XOR with your data. if you could "run out of entropy" then we would have had to count "remaining entropy in a key of a stream cipher", which nobody does because it's not a real thing
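A toy illustration of that point (NOT a real cipher — the keystream generator here is a plain xorshift64, chosen only to make the structure visible): a keyed PRNG whose output is XORed with the data, like a stream cipher. The 64-bit key is all the "entropy" there is, yet the keystream can be extended indefinitely, and nobody tracks "remaining entropy" in it:

```c
#include <stdint.h>
#include <stddef.h>

static uint64_t ks_state;

static void ks_init(uint64_t key) { ks_state = key ? key : 1; }

/* One keystream byte from a xorshift64 step (insecure; illustration only). */
static uint8_t ks_next(void) {
    ks_state ^= ks_state << 13;
    ks_state ^= ks_state >> 7;
    ks_state ^= ks_state << 17;
    return (uint8_t)ks_state;
}

/* XOR buf with the keystream derived from key. Because XOR is its own
   inverse, the same call both encrypts and decrypts. */
void stream_xor(uint8_t *buf, size_t len, uint64_t key) {
    ks_init(key);
    for (size_t i = 0; i < len; i++)
        buf[i] ^= ks_next();
}
```

Running it twice with the same key round-trips the data; at no point does any counter of "entropy consumed" appear.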
<galibert[m]>
well, a stream cipher xor data is not a one-time-pad. /dev/random kinda pretends to be one
<whitequark[cis]>
you also have a misunderstanding of how entropy works
<galibert[m]>
do I?
<whitequark[cis]>
why else would you make that statement?
<galibert[m]>
entropy is the amount of information you need to reproduce the stream
<galibert[m]>
for a stream cypher, it's the size of the key. For a one-time-pad, it's the size of the pad
<whitequark[cis]>
you're aware that /dev/random and /dev/urandom are implemented using a stream cipher, right?
<galibert[m]>
/dev/random tries to keep the amount of information needed equal to (at least) the size of the output. /dev/urandom doesn't
<whitequark[cis]>
/dev/random blocks for no reason, /dev/urandom doesn't
<whitequark[cis]>
this is a more accurate description
<galibert[m]>
There is a reason. It's a stupid reason, but it's mathematically real. It's just stupid
<whitequark[cis]>
it's not a real reason
<whitequark[cis]>
there is no cryptographic property that depends on or benefits from this behavior
<whitequark[cis]>
therefore it's not real
<whitequark[cis]>
it implements an algorithm that computes a useless number and then uselessly blocks if the number is too small. this has no bearing on any actual behavior (except for the above-mentioned case of "the initial state is deterministic")
<galibert[m]>
The initial state is always deterministic
<galibert[m]>
If you can get random stuff in the initial state, then your starting entropy is not zero
<galibert[m]>
Doesn't change the fact that using /dev/random is batshit stupid, of course
<galibert[m]>
I'm not defending it
<Wanda[cis]>
then maybe stop discussing it
<galibert[m]>
'kay
<Wanda[cis]>
"well, actually"ing about pointless bullshit is not particularly useful
<galibert[m]>
gonna go to bed anyway, goodnight you two
<galibert[m]>
or three
<widlarizerEmilJT>
Gn gn
<whitequark[cis]>
galibert[m]: this is not even true, have you read random.c?
<whitequark[cis]>
random_init_early is going to initialize the state to something arch_get_random_*longs returns
<widlarizerEmilJT>
Semantics about the word real is irrelevant, if it's a useless metric that builds a misleading interface programs should avoid using, that's all that matters
<whitequark[cis]>
this is actually a pretty funny little function
<whitequark[cis]>
someone on discord is looking at the LCG used in cadical and it seems to be somehow even worse than you'd think
<ignaloidas>
whitequark[cis], having looked around in bleeding-edge SAT solvers, that's far from the worst of it, there's a ton of mess in most of them, there's fairly few that I trust the code quality of, and then there are others that somehow have way better performance
<ignaloidas>
cadical is one of the better ones if you'd believe it
<whitequark[cis]>
oh i absolutely would
<whitequark[cis]>
they really don't know what they're doing with the RNG but at least they're trying, and it has documentation
<ignaloidas>
I mean, they need the RNG to be fast, and the quality of the randomness is kinda secondary
<whitequark[cis]>
seeding an LCG with what could be consecutive numbers disqualifies the output from being called "random"
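A sketch of why consecutive seeds are bad for an LCG: the first outputs from seeds s and s+1 differ by exactly the multiplier a (mod m), so "different" seeds yield strongly correlated streams. The constants below are the classic "minimal standard" Lehmer ones, not cadical's:

```c
#include <stdint.h>

#define LCG_A 48271u
#define LCG_M 2147483647u   /* 2^31 - 1 */

/* One step of a Lehmer-style LCG: state <- a * state mod m. */
uint32_t lcg_next(uint32_t *state) {
    *state = (uint32_t)(((uint64_t)*state * LCG_A) % LCG_M);
    return *state;
}
```

For small seeds s and s+1, the first outputs are a*s and a*(s+1), exactly 48271 apart — an obvious linear relation, which is why output seeded this way shouldn't be called "random".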
<ignaloidas>
after all, just shuffling the order of clauses and randomly renaming literals has been shown to absolutely destroy the performance of most SAT solvers
<ignaloidas>
whitequark[cis], seeding doesn't matter, it's just there to not get stuck on a particular decision
<ignaloidas>
like truly, the quality of randomness is of the lowest priority, it just needs to take a roughly different path on every reset
<whitequark[cis]>
if it is "of the lowest priority" then how come that file spends so much effort getting better randomness? what you say conflicts with the comments and implementation
<whitequark[cis]>
it's just nonsense, the entire thing should be replaced with a single getentropy call
<whitequark[cis]>
which also has the advantage of not asking for random platform-specific stuff for no reason
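A sketch of the suggested replacement — one getentropy(3) call (POSIX, available on macOS and glibc ≥ 2.25) instead of hand-rolled per-platform seeding; the function name `random_seed` is made up for illustration:

```c
#include <unistd.h>
#include <stdint.h>

/* Return a 64-bit seed drawn from the OS entropy source. */
uint64_t random_seed(void) {
    uint64_t seed;
    if (getentropy(&seed, sizeof seed) != 0)
        seed = 1;   /* fallback; real code would handle the error properly */
    return seed;
}
```

getentropy is limited to 256 bytes per call, which is far more than any RNG seed needs.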
<ignaloidas>
oh, for why bitwuzla does this I have zero clue
<ignaloidas>
cadical just takes a seed for its LCG after all
<whitequark[cis]>
the snippets above are from cadical itself
<whitequark[cis]>
bitwuzla mostly doesn't do anything cursed
<ignaloidas>
there is some consideration when you're doing distributed solving for seeds to not correlate with each other because you want each thread to explore a different search space
<whitequark[cis]>
yeah i'm going to send them a PR
<whitequark[cis]>
i don't feel like arguing about randomness again so it will just mix in the result of getentropy
<whitequark[cis]>
$ file build/src/main/bitwuzla
<whitequark[cis]>
build/src/main/bitwuzla: WebAssembly (wasm) binary module version 0x1 (MVP)
<whitequark[cis]>
i built it
<whitequark[cis]>
in only 2 hours, nice
<whitequark[cis]>
but i was like 8-10 hours worth of angry i think
<whitequark[cis]>
okay, let's try to actually use it
<cr1901>
I know that abc will never run on a BE platform, but I remember compiling yosys (and thus abc) for 32-bit for a while (before, say, 2019)... did abc get even less portable since?
<Wanda[cis]>
cr1901: you misunderstand, abc doesn't work on *64-bit* platforms
<cr1901>
abc.exe: PE32+ executable for MS Windows 5.02 (console), x86-64 (stripped to external PDB), 11 sections
<Wanda[cis]>
this is because abc doesn't like some 64-bit pointer values. specifically, ones that have bits 16-31 all set to 0.
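A sketch of the condition being described (a hypothetical predicate, not abc's actual code): a 64-bit pointer whose bits 16–31 are all zero, which for uniformly distributed pointers is a 1-in-65536 event:

```c
#include <stdint.h>

/* Returns 1 if bits 16-31 of the pointer value are all zero --
   the pointer shape that triggers the failure described above. */
int pointer_triggers_bug(const void *p) {
    return (((uintptr_t)p >> 16) & 0xFFFF) == 0;
}
```

An allocator that tends to hand out addresses with zeros in that range (as macOS's apparently does, or asan's) makes the failure far more likely than 1/65536.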
<cr1901>
Guess I'm using abc at my own peril
<Wanda[cis]>
macos allocator happens to give out such pointers significantly more often, hence why it has the most failures
<cr1901>
Ahhh, okay. So I have a 1/65536 chance (uniformly distributed) of abc dying for no reason
<Wanda[cis]>
and if you try to compile abc with asan on linux, it will fail pretty much deterministically
<cr1901>
Interesting... TIL
<Wanda[cis]>
not because of actual asan violation, mind you; it's because of how asan allocator gives out addresses.
<Wanda[cis]>
at least that's just an abort. the big-endian platforms get silent miscompilations instead.
<Wanda[cis]>
that one was fun to debug
<cr1901>
Oh I remember that. You were... well, not happy
<Wanda[cis]>
(I have taken one look at the relevant code and decided I don't want anything to do with it, then just declared that yosys shall simply not support big-endian platforms)
<cr1901>
(I'm just happy you give more than a glance at BE machines)
<cr1901>
Anyways, I don't have a Mac machine, and never seen abc crashes on Linux (I'm not even sure I have the 32-bit libc/c++ stuff installed on that machine). But good to know there's... issues...
<Wanda[cis]>
see, this is one of many reasons why we'd rather spend months learning how to write LUT mapping and logic optimization code than call abc from prjunnamed
<cr1901>
Because you get to watch your PowerPC go brr?
<Wanda[cis]>
I was going for "elementary reliability", but yeah sure why not
<cr1901>
PowerPC is elementary reliability... it's why NASA still uses it
<cr1901>
(TM)
<cr1901>
Look I'm procrastinating debugging a not-working FPGA stream that actually will require me to read a datasheet, so don't mind me
<Wanda[cis]>
heh
<Wanda[cis]>
... one day I'm definitely going to make prjunnamed self-hosting on that xc2vp30 (lol that 1GiB of RAM is going to murder me)
<whitequark[cis]>
it works
<whitequark[cis]>
the wasmed bitwuzla
<whitequark[cis]>
it's waaaaay slower than z3
<ignaloidas>
fwiw I think a decent portion of the SMT slowness is the fact that most of the expressions are kept as that - expressions, and they end up getting duplicated when used multiple times
<whitequark[cis]>
(wasm) real 0m55.435s
<whitequark[cis]>
(natv) real 0m49.017s
<whitequark[cis]>
so the wasm solver is a little bit slower, but the native one is 2.5 times slower than Z3 too
<whitequark[cis]>
ignaloidas: for sure
<whitequark[cis]>
I think the entire current SMT verifier should be scrapped and reimplemented in a way that isn't terrible
<cr1901>
Really glad I didn't bother updating from boolector then for RV stuff
<cr1901>
(yes I actually have to measure. But 2.5 times slower than z3 does not sound like a promising sample)
<cr1901>
I compiled the wasm version w/ minisat b/c it was easiest to build; cadical is the best by far, but minisat was acceptable, especially when I used multiple processes for multiple problems
<cr1901>
Only experts can simp, apparently
<whitequark[cis]>
cadical builds fine for wasm with some patches
<whitequark[cis]>
I have the patches, they could be submitted upstream
<whitequark[cis]>
i killed cvc5 at 1min17s
<whitequark[cis]>
it's pretty amusing that nobody here likes Z3 but it is thus far the fastest
<ignaloidas>
hmm, maybe it needs a theory set?
<whitequark[cis]>
actually hold on
<whitequark[cis]>
it's not doing anything
<whitequark[cis]>
I'm confused
<whitequark[cis]>
no, nevermind, it's pegging a core for 2 minutes even with (set-logic BV)
<ignaloidas>
do QF_BV
<ignaloidas>
no quantifiers are used right now
<whitequark[cis]>
oh right
<whitequark[cis]>
that does not seem to be helping anything
<whitequark[cis]>
well, i believe the results in front of me
<cr1901>
whitequark[cis]: FWIW, I actually do like z3 (it has a neat Python API and good online tutorial). Just my own experience is usually, emphasis usually, (did I mention usually?) yices and boolector are faster than z3 for bmc stuff. And boolector is the only usable solver for RV formal stuff, but yices is fine on most other problems.
<whitequark[cis]>
oh right! i could also try yices
<whitequark[cis]>
3min20s and counting for cvc5
<whitequark[cis]>
I think the difference is that we are not doing RV formal
<cr1901>
I'm still shocked at how bad yices and z3 are on that one specific problem- boolector can do the whole thing in 30 minutes; I killed yices (and z3) after 12 hours
<whitequark[cis]>
okay, I don't think it's gonna complete after 6 minutes
<whitequark[cis]>
let's try yices-smt2!
<whitequark[cis]>
real 0m24.898s
<whitequark[cis]>
slightly slower than z3 but an actual contender
<whitequark[cis]>
you end up with a logic loop otherwise
<ignaloidas>
hmm, where?
<ignaloidas>
might be misunderstanding how dff currently works I guess
<whitequark[cis]>
well between the input and the output
<whitequark[cis]>
the "past" value for the dff is its state
<whitequark[cis]>
the "current" value for the dff is the next state
DemiMarieObenour has joined #prjunnamed
<DemiMarieObenour>
<whitequark[cis]> "wow, it reorders messages https:..." <- Matrix has no total order of messages. This is an unavoidable consequence of rooms not being tied to a particular server. However, if messages sent by a single session are reordered this is either a spec bug or an implementation bug.
<whitequark[cis]>
I think all of us are on matrix.org?
<ignaloidas>
oh, ok, I was misunderstanding how DFF is working
<whitequark[cis]>
yeah, me, Wanda, and galibert are all on matrix.org, so I have no idea why or how the messages would be reordered
<DemiMarieObenour>
whitequark[cis]: Sounds like a bug to me then.
<DemiMarieObenour>
Or an intentional decision to trade off total ordering for performance.
<whitequark[cis]>
yes, Matrix is probably the worst chat system I've ever used
<whitequark[cis]>
I don't need someone to tell me that they make bad decisions; I am exposed to bad decisions of the Matrix people every day
<DemiMarieObenour>
Matrix is what you get if you prioritize decentralization above absolutely everything else.
<DemiMarieObenour>
If you are willing to require a central server you get a ton of nice things that are provably impossible otherwise.
<whitequark[cis]>
I am aware
<whitequark[cis]>
I also don't care.
<DemiMarieObenour>
Is there some way I am misunderstanding you?
<whitequark[cis]>
you're explaining things I already know to me, in an apparent attempt to justify Matrix's obvious deficiencies
<whitequark[cis]>
I don't need that
<whitequark[cis]>
I do not care at all about the technical details of a system that fails me in its most basic tasks every day
<DemiMarieObenour>
Ah, then I was misunderstanding you.
<DemiMarieObenour>
Matrix is a terrible chat system.
<whitequark[cis]>
sure is
<DemiMarieObenour>
honestly I would just prefer something centralized
<whitequark[cis]>
I'm basically fine with IRC
<whitequark[cis]>
I used to be less fine with IRC because I wanted things like "mobile client that works"
<whitequark[cis]>
but Matrix is so bad that I changed my mind, I don't think IRC is so bad any more
<DemiMarieObenour>
To me, the fatal flaw in IRC is lack of message history.
<DemiMarieObenour>
XMPP might be the best of both worlds, but I’ve never used it.
<whitequark[cis]>
I've used XMPP extensively since I was a child
<whitequark[cis]>
it's... awful
<DemiMarieObenour>
How?
<whitequark[cis]>
well it's extensible, which means that (a) there are lots of extensions you absolutely need to have a decent experience, and (b) half the clients don't implement it
<whitequark[cis]>
for example, "sending a message to two places but ringing a notification in only one" has been an unsolved problem for an incredibly long time
<whitequark[cis]>
"tunneling OTR over XMPP" was indescribably painful and I would often roll back to plaintext because it would just be fucked
<whitequark[cis]>
MPOTR was never a thing but I think somebody may have ported the Signal protocol on top of XMPP by today
<whitequark[cis]>
file transfer never worked, I would often send a friend a message starting with Qlpo... instead
<whitequark[cis]>
which is a tbz2|base64
<whitequark[cis]>
because the way file transfer worked was so incredibly braindead it was like 100 times more efficient to base64 the same file and send it as a message or as several
<whitequark[cis]>
I think they solved it by now, with an extension, that half the clients don't implement
<whitequark[cis]>
message history was an unsolved problem but I've heard there's an extension now
<whitequark[cis]>
you get the idea
<whitequark[cis]>
oh, also on desktop lots of the XMPP clients used libpurple
<whitequark[cis]>
libpurple is like, notoriously insecure. "segfaults if you fuzz it for a minute" insecure
<whitequark[cis]>
so all the people who would use OTR over libpurple and believe them to be protected from MITM (including myself) were kind of delusional about it
<whitequark[cis]>
it's definitely not "best of both worlds"; at best I think it is something like "about as bad as Matrix but for different reasons and with different historical tradeoffs"
<whitequark[cis]>
I guess the servers are better: ejabberd (in erlang) actually reliably and quickly delivers your messages, federation just worked for the most part, it didn't consume ungodly amounts of memory like both synapse and conduit do
<whitequark[cis]>
if you want to try using XMPP today, try Conversations, if you have an Android phone anyway
<whitequark[cis]>
I don't know what XMPP client one would use on desktop today. Kopete probably still exists?
<DemiMarieObenour>
whitequark[cis]: Conduit being a memory hog is… an unpleasant surprise.
* DemiMarieObenour
hopes for something to do for chat what prjunnamed will hopefully do for FPGA tooling
<whitequark[cis]>
I've talked to somebody who has operational experience with conduit that convinced me to probably never use it
<DemiMarieObenour>
😢
<whitequark[cis]>
which was disappointing
<whitequark[cis]>
I reran all of the tests with (set-logic QF_BV) and nothing changed
<whitequark[cis]>
I feel like the conclusion of this is just that we should stick with Z3?
<whitequark[cis]>
at least for the time being
<mupuf>
whitequark[cis]: did you give soju a try? it is a relatively new IRC bouncer. With chat history and shared read location
<mupuf>
goguma is a compatible client for the phone. it isn't fantastic but it is more standardized than quassel was
<mupuf>
I still miss quassel's UI on my desktop though
<mupuf>
instead, I have a pinned tab with the gamja web client
<mupuf>
soju supports uploading files also. But just like for xmpp, client support is extremely spotty
<mupuf>
it's nice to see new protocol extensions being developed to modernize irc
<mei[m]>
<whitequark[cis]> "so the wasm solver is a little..." <- feels like it might've been a good idea to try that first
<mei[m]>
<whitequark[cis]> "I used to be less fine with..." <- have you tried quassel?
cr1901 has quit [Read error: Connection reset by peer]
cr1901 has joined #prjunnamed
<whitequark[cis]>
<mei[m]> "feels like it might've been a..." <- oh, yes. everybody seemed really confident though that Z3 is the worst option
catlos has joined #prjunnamed
<catlos>
on the smt solver topic, my experience is that for model checking stuff yices2 is often fastest but it's GPL, bzla and btor are reasonable but the syntax-y part of bzla is not super fast so if you are submitting stuff as text input/the problems are syntactically large but semantically easy to solve it struggles a lot. it is better on syntactically smaller but hard problems
<catlos>
in bounded model checking z3 has been about the slowest of the big solvers but this probably varies depending on how you use it
<whitequark[cis]>
yes, i can see the parser to be a bottleneck in this case
<whitequark[cis]>
unfortunately, given that the SMT solvers are all bad in different ways, i don't see myself committing to non-smt2 representations
<catlos>
there is also the case that if you are submitting queries that have a lot of bit level operations, i wouldn't be surprised if some of the solvers have very different performance characteristics to when the operations are more consistently word level
<catlos>
(which would get worse using smtlib2 textual format, consider the obvious memory difference between an and gate in an aig representation vs smtlib or whatever)
<catlos>
s/memory difference/parsing complexity
<whitequark[cis]>
it's a tradeoff between execution speed and implementation complexity, yeah
<galibert[m]>
One interesting point for z3 is that there is a rust binding though
<whitequark[cis]>
I don't particularly want to do another AIG lowering just in the verifier, and verification taking some constant factor more seems like a fine exchange
<whitequark[cis]>
galibert[m]: I specifically avoided using it so that other SMT solvers could be measured against it (also to avoid having non-Rust dependencies which would make Wasm builds harder)
<catlos>
yeah I think this is pretty reasonable for the verifier, particularly as I guess verified rewriting is mainly for testing anyway right
<whitequark[cis]>
I think there are cases where you would want to use LEC every step of the way, mainly ASICs
<whitequark[cis]>
but for day to day FPGA work you probably will not
<catlos>
oh also on (set-logic QF_BV), generally setting logic is largely just for the parser to reject operations outside of the set logic. I know bzla for example always uses a solver for all supported theories internally (which is how there were performance issues for stuff related to quantifiers before even with the logic set to QF_BV)
<DemiMarieObenour>
catlos: Personally I would turn it on for production builds or if I suspected a toolchain bug and leave it off the rest of the time.
<whitequark[cis]>
yeah i don't know why setting logic is supposed to change anything, although cvc5 does complain if you do not