cfbolz changed the topic of #pypy to: #pypy PyPy, the flexible snake | IRC logs: and | Matti: I made a bit of progress, the tests now only segfault towards the end
tsraoien has quit [Ping timeout: 252 seconds]
<mattip> fijal: any idea why the rpython/jit/backend/aarch64/test/ tests are failing on macos?
<mattip> the annotator thinks a custom trace hook in the GC can recursively call the GC
<fijal> ugh?
<fijal> anto hooks? somehow? magically?
<fijal> I can reproduce at least, let's have a look
tsraoien has joined #pypy
tsraoien has quit [Ping timeout: 245 seconds]
lritter has joined #pypy
<fijal> mattip: well, the problem is that malloc is not removed in this particular way of invoking it
<fijal> I'm not sure how to enable the backendopt in zrpy_gc_tests
<fijal> an alternative would be to rewrite the wrapper code in llexternal in a way that does not use *args
<fijal> probably a good idea anyway
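A minimal sketch of the idea fijal mentions, with invented helper names (this is not the actual llexternal code): a `*args` wrapper produces a variable-arity call whose signature later passes cannot see through, while generating a fixed-arity wrapper keeps the call explicit.

```python
# Hypothetical illustration only, not PyPy's real llexternal wrapper.
def wrap_variadic(func):
    def wrapper(*args):
        # arity is unknown here until runtime
        return func(*args)
    return wrapper

def wrap_fixed(func, nargs):
    # build "def wrapper(a0, a1): return func(a0, a1)" for a known arity
    argnames = ", ".join("a%d" % i for i in range(nargs))
    src = "def wrapper(%s):\n    return func(%s)\n" % (argnames, argnames)
    ns = {"func": func}
    exec(src, ns)
    return ns["wrapper"]

add2 = wrap_fixed(lambda x, y: x + y, 2)
```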
<fijal> mattip: should be fixed
<mattip> thanks
leshaste has quit [Ping timeout: 240 seconds]
<fijal> mattip: it's a "how the fuck did it ever work before" kind of scenario
leshaste has joined #pypy
lritter has quit [Quit: Leaving]
<arigato> unsure how to fix it properly. I guess just by getting rid of the global 'fire_in_another_thread'
<arigato> maybe replacing it with a single flag in each ExecutionContext
<arigato> which would be set when *either* ec._signals_enabled *or* ec.w_async_exception_type is not None
<arigato> ah no, it's wrong too
<arigato> it would rearm the ticker whenever we switch to the main thread, which is not desired
<arigato> so maybe two checks: if (fire_in_another_thread and ec.signals_enabled) or ec.w_async_exception_type
<arigato> i.e. don't touch fire_in_another_thread or signals_enabled, and just add another test for ec.w_async_exception_type
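The two-part check arigato describes can be sketched as below; the attribute names follow the discussion, but `ExecutionContext` here is a stand-in class for illustration, not PyPy's real one.

```python
# Stand-in for PyPy's ExecutionContext, for illustration only.
class ExecutionContext:
    def __init__(self, signals_enabled=False, w_async_exception_type=None):
        self.signals_enabled = signals_enabled
        self.w_async_exception_type = w_async_exception_type

fire_in_another_thread = False  # global flag, set to interrupt the main thread

def should_fire(ec):
    # leave fire_in_another_thread and signals_enabled untouched, and
    # just add a second test for ec.w_async_exception_type
    return ((fire_in_another_thread and ec.signals_enabled)
            or ec.w_async_exception_type is not None)
```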
<cfbolz> arigato: thanks, I see
<cfbolz> arigato: yeah, there are various tricky cases here
<cfbolz> arigato: I was even wondering, what happens if the main thread is blocked in a lock, will the current mechanism manage to deliver a signal to it?
<arigato> it depends on the blocking lock, the answer is no in python2.7 and yes in python3.x (both cpython and pypy)
<cfbolz> arigato: right, I see
<cfbolz> arigato: and for the signal logic a single flag is the right thing, because we will interrupt the main thread only once even if many threads request an interruption
<cfbolz> ?
<arigato> well, several reasons
<arigato> but yes
<cfbolz> arigato: would you still put the new logic into the signal module? Conceptually it belongs to thread I suppose
<cfbolz> (I still wonder whether we can only have a single check on the ec for both conditions)
<arigato> unclear, because the old condition fires for any thread that has signals_enabled() (normally just the main thread, but not always)
Atque has quit [Ping timeout: 268 seconds]
<cfbolz> arigato: ah, does that mean that currently interrupting the main thread might interrupt another thread with signals enabled?
<arigato> yes, "signals enabled" means basically "behaves like the main thread"
<arigato> I don't know, you can kill that pypy-only feature if you like
<arigato> it was done for STM
<cfbolz> arigato: no, it's fine
<arigato> just saying, it would be much simpler then
<cfbolz> Hm, how?
<arigato> the condition can then be encoded in a single flag on the ec, and instead of setting 'fire_in_another_thread' we set the main thread's ec's flag
<arigato> and get rid of 'fire_in_another_thread'
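A minimal sketch of that simpler scheme: no global at all, just a per-ec flag that both interruption paths set. `pending_interrupt` and the helper names are invented for illustration.

```python
# Stand-in classes and names; only the flag-setting scheme mirrors the chat.
class ExecutionContext:
    def __init__(self):
        self.pending_interrupt = False
        self.w_async_exception_type = None

main_ec = ExecutionContext()

def interrupt_main():
    # replaces setting the global 'fire_in_another_thread'
    main_ec.pending_interrupt = True

def set_async_exception(ec, w_type):
    ec.w_async_exception_type = w_type
    ec.pending_interrupt = True  # the same flag covers both conditions
```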
<arigato> (the app-level interface is in __pypy__.thread.signals_enabled/_signals_enter/_signals_exit)
<cfbolz> I see
<cfbolz> I thought that was only to also disable signals in the main thread
<cfbolz> arigato: maybe I should also quickly check what cpython does
<cfbolz> Thanks a lot for your help! I'll try a second attempt
<arigato> still checking...
Atque has joined #pypy
<arigato> yes, it seems that __pypy__.thread.* is the only API that can enable or disable signals, and we never do it from interp-level, and the API is not used as far as I can tell
<arigato> not used inside the default or the py3.8 branch
<arigato> only in pypy\module\__pypy__\test\
<cfbolz> Ok
<cfbolz> Of course you never know which projects use weird internal APIs of ours
<arigato> yes, though in this case you have to abuse this API quite a lot to disable signals in the main thread, because it's mostly exposed through a "with" handler that enables signals in a non-main thread
<cfbolz> arigato: makes sense
<arigato> google doesn't find any direct users of "_signals_exit"
<cfbolz> good, I think
<cfbolz> so the idea is: get rid of fire_in*; when interrupting the main thread, set a flag on the main thread ec directly; also set the same flag when setting ec.w_async_exception_type; on a thread switch we check the flag, then fire an action if it's set; the action fires for w_async_exception_type in general and additionally for signals when on the main thread
Atque has quit [Remote host closed the connection]
Atque has joined #pypy
tsraoien has joined #pypy
jcea has joined #pypy
<fijal> should I comment on the issue?
<fijal> this is probably the classic case of "large growing container is bad for our GC"
<fijal> cfbolz: have you ever tried not caching MIFrames? to see what happens?
<cfbolz> fijal: it's on my list, yes. if you stop caching them, you can also make them small and not store the constants
<hexology> what's the overhead like nowadays for numpy compared to cpython? and what would be a good way to benchmark it? i might actually have a use case for pypy at work soon!
<mattip> hi hexology. We have adopted the C-API and vanilla NumPy rather than continuing with our re-implementation, numpypy
<mattip> but there is a move afoot to make NumPy compatible with HPy, which would speed it up considerably on PyPy
<hexology> mattip: right, i didn't mean numpypy specifically. i had heard in the past that the c-api (and therefore calls to various numpy routines) had more overhead compared to cpython
<hexology> i've been watching hpy with interest!
<hexology> in my case, i might have a situation where i have to do a lot of numpy operations in somewhat of a "hot" code path, so measuring the overhead of calling numpy code would be useful to me
<mattip> yes, there is a significant overhead to using the c-api in PyPy
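One way to measure the per-call cost hexology asks about: time many small NumPy operations with `timeit` and compare the numbers between CPython and PyPy. The specific operations below are placeholders; substitute the ones from your own hot path.

```python
import timeit
import numpy as np

a = np.arange(1000, dtype=np.float64)

def small_ops():
    # many tiny calls: per-call (C-API) overhead dominates here,
    # not the actual array arithmetic
    return a.sum() + a.mean() + a.dot(a)

n = 10000
elapsed = timeit.timeit(small_ops, number=n)
per_call_us = elapsed / n * 1e6
print("%.2f microseconds per iteration" % per_call_us)
```

Running the same script under both interpreters gives a direct estimate of the extra overhead PyPy's C-API emulation adds per call.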
Atque has quit [Ping timeout: 268 seconds]
Atque has joined #pypy
jcea has quit [Ping timeout: 272 seconds]
tumbleweed has quit [Ping timeout: 272 seconds]
tumbleweed has joined #pypy
j4at has joined #pypy
j4at has quit [Read error: Connection reset by peer]
mannerism has quit [Ping timeout: 272 seconds]
mannerism has joined #pypy
_whitelogger has joined #pypy
rwb has quit [Ping timeout: 244 seconds]
lritter has joined #pypy
rwb has joined #pypy
Dejan has quit [Quit: Leaving]
Atque has quit [Ping timeout: 268 seconds]
Atque has joined #pypy
Atque has quit [Remote host closed the connection]
Atque has joined #pypy