<cfbolz>
Guest60: I don't think it's necessarily about the virtualizables, but yes, I agree, eventually it produces good traces
jcea has joined #pypy
<mgorny>
fyi, pybind11 added an assert for the GIL being held, and it fails when pypy3 (.9, .10) calls their class destructors on interpreter exit; but i don't know if it's a bug in pypy or in their code: https://github.com/pybind/pybind11/issues/4748
jcea has quit [Quit: jcea]
jcea has joined #pypy
jcea has quit [Quit: jcea]
jcea has joined #pypy
jcea has quit [Ping timeout: 246 seconds]
<mattip>
I don't know if PyPy makes any promises about the order of destructors and releasing the GIL, but I would assume destroying the GIL is supposed to happen after type destructors
Guest60 has joined #pypy
<Guest60>
Is there any place that explains `function_threshold` more specifically than: 'number of times a function must run for it to become traced from start'?
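For context, the usual reading of that one-line description is a per-function entry counter: once a function has been entered `function_threshold` times (the default in PyPy is 1619, which matches the repetition count reported below), the JIT traces it from its start rather than waiting for a hot loop. A toy model of that bookkeeping, purely illustrative and not PyPy's actual machinery (the class and all names here are made up):

```python
# Toy model of a call-count threshold, loosely analogous to the JIT's
# ``function_threshold`` parameter. The real JIT counts entries of
# functions in the *interpreted* program, not Python calls like this.

class HotnessCounter:
    """Mark a function for tracing "from the start" once it has run
    often enough."""

    def __init__(self, function_threshold):
        self.function_threshold = function_threshold
        self.counts = {}
        self.traced = set()

    def enter(self, func_name):
        # In this toy model a negative threshold means "never trace
        # from the start", mirroring the function_threshold=-1 usage
        # discussed below.
        if self.function_threshold < 0:
            return False
        n = self.counts.get(func_name, 0) + 1
        self.counts[func_name] = n
        if n >= self.function_threshold and func_name not in self.traced:
            self.traced.add(func_name)
            return True   # would trigger tracing from the function start
        return False

counter = HotnessCounter(function_threshold=3)
events = [counter.enter("is_even") for _ in range(4)]
# only the third entry crosses the threshold
assert events == [False, False, True, False]
```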
<Guest60>
(motivation: I have been playing with even-odd via two bytecoded mutually recursive inner functions, both with tail-recursion optimisation and as straight recursion. The code works under pypy for small numbers of repetitions, and with the JIT turned off for large numbers of repetitions. If the JIT is enabled with the default settings, the tail-recursive variant succeeds for small numbers of repetitions and segfaults for larger ones (1619+), while the straight-recursive variant always succeeds, but takes ~2x as long as with the JIT off. 1619 made me try `function_threshold=-1`, which causes the tail-recursive variant to always succeed, and the straight recursion to complete in the same amount of time as without JIT)
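The guest-program shape described above is roughly the following, shown here as a plain-Python sketch; in the actual experiment these are bytecode functions run by an RPython interpreter, and all names are made up:

```python
# Two mutually recursive even/odd checkers: one tail-recursive variant
# and one straight-recursive variant.

def is_even_tail(n):
    # Tail-recursive: the recursive call is itself the returned
    # expression, so a tail-call-optimising interpreter can replace
    # the current frame instead of stacking a new one.
    if n == 0:
        return True
    return is_odd_tail(n - 1)

def is_odd_tail(n):
    if n == 0:
        return False
    return is_even_tail(n - 1)

def is_even_straight(n):
    # Straight recursion: the call's result is stored and then
    # returned, so each frame stays live until the result comes back.
    if n == 0:
        return True
    r = is_odd_straight(n - 1)
    return r

def is_odd_straight(n):
    if n == 0:
        return False
    r = is_even_straight(n - 1)
    return r

assert is_even_tail(100) and not is_even_tail(101)
assert is_even_straight(100) and not is_even_straight(101)
```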
<Guest60>
(as usual for me, ^^^ is an rpython question, not a pypy question)
Guest60 has quit [Quit: Client closed]
<cfbolz>
Guest60: function_threshold does not apply to your interpreter
<cfbolz>
because it's tail recursive
<cfbolz>
it does not map language recursion to interpreter recursion
Guest60 has joined #pypy
<Guest60>
cfbolz: that's what I would have thought also, but the behaviour definitely changes when passing `function_threshold=-1` to `jit.set_user_param`.
<Guest60>
(I'll try to add a branch tomorrow if you'd be willing to play with it)
<cfbolz>
You need to find the source of the segfault though. Does it work without jit?
<Guest60>
yes. it works with `off` as a user param, and it works with function_threshold=-1 as a user param
<Guest60>
it also works with 1600 repetitions, but not with 1620.
<Guest60>
no, wait, 1620 works but 1621 segfaults
<Guest60>
last things in the PYPYLOG before the segfaults are:
[1a251af987fd] jit-backend}
compiled new entry bridge
[1a251af9b7c0] {jit-log-opt-loop ...
[1a251b00bdd8] jit-log-opt-loop}
[1a251b01019b] jit-tracing}
Segmentation fault: 11
<Guest60>
no wait, never mind ... those are present also in the PYPYLOG of the non segfaulting run
<Guest60>
but when it doesn't segfault, the jit-backend-counts are:
entry 1:1619
TargetToken(140689581032032):1619
TargetToken(140689581032288):807878
entry 2:162
bridge 140689581017424:1417
entry 4:0
<Guest60>
which looks to me like 0 counts for the last log-opt-loop (loop 4)
<Guest60>
additional evidence that implicates `function_threshold` is that if I set it to 10, I get the segfault after only 11 reps, not 1621.
<Guest60>
sorry for the dump ... have to sign off for the evening