cfbolz changed the topic of #pypy to: #pypy PyPy, the flexible snake | IRC logs: and | so many corner cases, so little time
lritter has joined #pypy
lritter has quit [Quit: Leaving]
_0az3 has quit [Quit: afk]
_0az3 has joined #pypy
slav0nic has joined #pypy
<mgorny> mattip, cfbolz: redis(-py) seems to be another package affected by the spawn problem -- here we can't pickle _thread.Lock being part of the client
<mgorny> i suppose the only solution is to have all consumers using redis + multiprocessing force "fork" method, correct?
<mgorny> though i suppose passing threading.Lock() to a forked process is kinda weird
<mgorny> maybe they should switch to multiprocessing.Lock instead?
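[editor's note: a minimal sketch of the pickling problem discussed above, runnable on a POSIX system; the redis-py specifics are omitted. It shows that a threading.Lock cannot be pickled (so a "spawn" child can't receive one) and demonstrates the workaround mgorny mentions of explicitly requesting the "fork" start method.]

```python
import multiprocessing
import pickle
import threading

# threading.Lock wraps _thread.lock, which has no pickle support; under
# the "spawn" start method every object sent to the child must pickle.
try:
    pickle.dumps(threading.Lock())
    print("picklable")
except TypeError as exc:
    print("not picklable:", exc)

# Workaround: explicitly request the "fork" start method (POSIX only),
# so children inherit parent state instead of unpickling arguments.
ctx = multiprocessing.get_context("fork")
with ctx.Pool(2) as pool:
    print(pool.map(abs, [-1, -2]))  # [1, 2]
```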
<cfbolz> mgorny: and they simply don't work on mac, I assume?
<cfbolz> I don't know, the real solution would be to find this bug :-(
<cfbolz> but I am quite stuck with that at the moment
<mgorny> cfbolz: probably not, i find it hard to test since it relies on a working redis instance and their test suite relies on docker
<cfbolz> right
<mgorny> after resolving locks, now lambdas are a problem
<mgorny> basically a lot of work to make redis client class serializable, and i'm not even sure if they should really be doing that
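[editor's note: a sketch of the lambda problem mentioned above. Pickle serializes functions by reference (module plus qualified name); a lambda's name `<lambda>` cannot be looked up again, so pickling it fails, and so does passing it to a "spawn" child.]

```python
import pickle

# Lambdas pickle by reference, not by value; the lookup of "<lambda>"
# fails, so any client object holding one is unpicklable too.
try:
    pickle.dumps(lambda x: x + 1)
    print("picklable")
except Exception as exc:
    print("not picklable:", type(exc).__name__)
```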
<cfbolz> mgorny: basically the fork default lets projects be a little bit lazy. because some unpicklable things will be around in the child process from the fork
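[editor's note: cfbolz's point illustrated as a small runnable sketch, POSIX only. With "fork" the child gets a copy of the parent's memory, so an unpicklable threading.Lock is simply available there; nothing is serialized.]

```python
import multiprocessing
import threading

lock = threading.Lock()  # unpicklable, but inheritable via fork

def child():
    with lock:  # works: the lock was inherited, never pickled
        print("child got the inherited lock")

if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")  # POSIX only
    p = ctx.Process(target=child)
    p.start()
    p.join()
    print("exit code:", p.exitcode)  # 0
```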
<mattip> redis-py does support macOS, but maybe that specific feature is disabled?
<mattip> there is this stale and closed issue
<mgorny> mattip: i don't grep darwin|macos|osx
<mgorny> maybe they're just ignoring the test failures
<mattip> they are not running macOS tests
<mattip> but the issue I pointed to was from macOS
<mattip> I think unilaterally turning on spawn instead of fork is problematic
<mgorny> yes
<mattip> if we leave fork as the default, does the hang show up in places outside compileall?
<mgorny> but the deadlock is also bad :-(
<mgorny> i mean, i can just switch gentoo to use serial compileall on pypy3.9 but i'm afraid this could also affect other multiprocessing uses
<cfbolz> particularly since compileall is a relatively "simple" use of multiprocessing
<cfbolz> ie it does not use threads itself
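[editor's note: for reference, a minimal parallel compileall invocation from Python. With workers > 1, compileall farms files out to a concurrent.futures.ProcessPoolExecutor, which is the multiprocessing code path under discussion; the default workers=1 compiles serially, the workaround mgorny mentions for Gentoo.]

```python
import compileall
import pathlib
import tempfile

# Compile a throwaway module with two worker processes; workers=1
# (the default) would compile everything serially in-process.
with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "mod.py").write_text("x = 1\n")
    ok = compileall.compile_dir(tmp, workers=2, quiet=1)
print("success:", ok)
```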
otisolsen70 has joined #pypy
<mgorny> i've filed with somewhat detailed explanation
<mattip> mgorny: thanks
jacob22_ has quit [Ping timeout: 256 seconds]
otisolsen70_ has joined #pypy
otisolsen70 has quit [Ping timeout: 256 seconds]
otisolsen70 has joined #pypy
otisolsen70 has quit [Client Quit]
otisolsen70_ has quit [Ping timeout: 240 seconds]
jacob22_ has joined #pypy
<cfbolz> hmmmm
<cfbolz> I wonder what happens if I revert the multiprocessing code to that of the 3.8 stdlib
<cfbolz> (I mean I don't want to ship that. but is the deadlock still there)
<cfbolz> ok, I tried to basically bisect CPython commits to see which ones hang
<cfbolz> unfortunately, when I back that commit out, it still deadlocks :-(
<mattip> did anything change in _multiprocessing?
<cfbolz> good question
<cfbolz> anyway, if I go back to *before* that commit, the deadlock is definitely gone
lritter has joined #pypy
greedom has joined #pypy
greedom has quit [Remote host closed the connection]
greedom has joined #pypy
<mattip> ahh, cool
<mattip> so it is something after that commit
<cfbolz> yeah, maybe
greedom has quit [Remote host closed the connection]
slav0nic has quit [Ping timeout: 240 seconds]