<olliemath>
I assume that's turning off some specialisation in json.loads somehow?
<olliemath>
But the code in lib/python/3/json looks unremarkable - am I missing something?
<olliemath>
Ah - I found it: _pypyjson - hmm
<cfbolz>
olliemath: yes, sorry
<cfbolz>
olliemath: hard to fix in general
<olliemath>
Yes, the only improvement I can think of is to still specialise when the cls passed to json.loads is the real built-in JSONDecoder
<olliemath>
You can set your own loader on Flask but it will call loads(s, cls=whatever)
<olliemath>
I imagine they'll still call it this way even if I get a patch in so they don't subclass (their subclass only adds a docstring - subclassing seems somewhat unnecessary)
<cfbolz>
olliemath: given enough interest it's also possible to do a lot of work to improve it
<olliemath>
I'm interested in helping improve it - I'm just grepping some other web frameworks to see whether they do similar subclassing or whether it's a Flask-only thing
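For context, a minimal sketch of the pattern under discussion (the subclass name here is invented; Flask's real decoder differs):

    import json

    class FlaskStyleDecoder(json.JSONDecoder):
        """Mirrors the Flask pattern: a subclass adding nothing but a docstring."""

    data = '{"a": [1, 2, 3]}'

    # Fast on PyPy: with no cls= the specialised _pypyjson decoder handles it.
    json.loads(data)

    # Slow on PyPy: any cls=, even a trivial subclass, falls back to the
    # generic pure-Python decoder machinery.
    json.loads(data, cls=FlaskStyleDecoder)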
<arigo>
cfbolz: looking now. of course if we change details, like move "TLS.entered -= 1" before "del seeing[seeingkey]", then the test passes
<cfbolz>
arigo: yes :-(
<arigo>
I'm surprised that you managed to put all these prints in the same function without affecting that
<cfbolz>
arigo: I tried ten variants
<cfbolz>
E.g. debug_prints break it
<arigo>
"ah" OK
<cfbolz>
arigo: but you would agree that this looks Very Wrong?
<arigo>
definitely
<arigo>
I guess "--jit off" fixes it?
<cfbolz>
arigo: yes
<cfbolz>
arigo: I could try a more recent nightly, but that could possibly just hide the bug
<arigo>
I'm trying a build I made 3 days ago
<cfbolz>
Maybe I should at least try lldebug
<arigo>
in my opinion it looks like a register being overwritten at a random point
<cfbolz>
arigo: yes, I was wondering something like that
<cfbolz>
arigo: or a guard failure is broken a very tiny bit
<cfbolz>
arigo: so what's the tool here? rr?
<arigo>
so it occurs in an older call to codepoints_in_utf8
<arigo>
ah, that may be a hint: it's a call that occurs with value == "object too deeply nested to unmarshal"
<arigo>
so maybe related to the stack overflow
<cfbolz>
arigo: aaaaah
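For reference, that message comes from marshal's fixed nesting limit rather than from a C-level crash; a CPython analogue on the dumps side (the message quoted above is the unmarshal variant):

    import marshal

    # marshal refuses structures nested beyond a fixed depth (2000 in CPython)
    o = []
    for _ in range(3000):
        o = [o]

    try:
        marshal.dumps(o)
    except ValueError as e:
        print(e)   # object too deeply nested to marshal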
<arigo>
(to answer your question, yes, maybe rr would help, but it's quite some effort)
<cfbolz>
arigo: yeah, well, I wonder what happens if the stack overflow is raised at the start of the finally block
<cfbolz>
E.g. in the dict write
<arigo>
ah, wrong level, I'm trying to add fprintf in the C sources for interp_marshal.c
<arigo>
but that's not what we're looking for
<cfbolz>
arigo: yes, it's a bit confusing, it's really an interpreted test for marshal
<arigo>
and codepoints_in_utf8("object too deep") occurs because there's the space.newtext() in interp_marshal, interpreted
<cfbolz>
arigo: right
<arigo>
ah yes, here's a theory
<arigo>
del dict[(id1, id2)]
<arigo>
calls the hash on the tuple in RPython
<arigo>
this is certainly recursive so it might raise an rpython RuntimeError("stack overflow")
<arigo>
so the original RuntimeError is masked with another one that is almost identical, and the program proceeds as usual except the "del dict" did not occur
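A rough sketch of that masking pattern (errors are raised explicitly here for illustration; the RPython mechanics differ, and RPython has no exception chaining):

    seeing = {}

    def hash_like_operation(key):
        # stands in for the recursive tuple hash inside 'del seeing[key]';
        # under a real stack overflow this call itself overflows
        raise RecursionError("stack overflow (raised during cleanup)")

    def work():
        key = (1, 2)
        seeing[key] = True
        try:
            raise RecursionError("stack overflow (the original)")
        finally:
            hash_like_operation(key)   # masks the original exception
            del seeing[key]            # never reached: the entry leaks

    try:
        work()
    except RecursionError as e:
        print(e)              # the cleanup's error, not the original
        print(e.__context__)  # CPython at least chains the original
        print(seeing)         # {(1, 2): True} - the del never happened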
<cfbolz>
arigo: right, I was wondering
<cfbolz>
arigo: that sounds incredibly difficult to fix
<arigo>
if the jit is involved, it might only be because safe_equal() is usually jitted as a function, but when 'x == y' ends up raising an exception we exit the jitted code and build more stuff on the stack
<arigo>
so the first time we check for a stack overflow, we'll get it
<cfbolz>
arigo: ah
<cfbolz>
arigo: or it's 'only' because the JIT has somewhat different stack usage
<arigo>
stack overflows should not be possible while inside the JIT itself
<arigo>
there's logic to silence them
<arigo>
ah, a possible way to fix it would be to do something similar to what we do for running out of memory in the GC:
<cfbolz>
arigo: sure, but jit stuff could be on the stack leading the error to happen in a slightly different place
<arigo>
(yes)
<cfbolz>
arigo: we would really like a slightly smaller example
<arigo>
in the GC we throw an rpython MemoryError, but then we let the program allocate a bit more (from a reserve)
<arigo>
similarly, when we get an rpython stack overflow, we could "enter reserve mode" and let the program consume a bit more of the stack without raising another stack overflow immediately
<cfbolz>
arigo: right
<cfbolz>
arigo: and then a second stack overflow is a sort of hard error?
<cfbolz>
It's a bit bad that this was silent
<arigo>
maybe we'd need to detect "at this point the stack overflow was handled, because the stack has shrunk a lot"
<arigo>
"so we restore the normal limit"
<cfbolz>
arigo: right
<cfbolz>
arigo: that's similar in the gc?
<arigo>
it all looks like hacks and workarounds, but yes, I don't see any clean solution
<arigo>
I think the GC doesn't have that part of the logic
<arigo>
the expectation there is really that the program will finish soon, because if you get a random MemoryError you'd better log things and exit
<arigo>
it's even more asynchronous than a stack overflow, at least in pypy
<cfbolz>
Ok
<cfbolz>
arigo: kind of true for stack overflows too
<cfbolz>
Maybe not 'quit'
<cfbolz>
But yes, reduce the stack a lot
<arigo>
yes, but once you reduce the stack a lot, you can continue running, like tests do
<cfbolz>
Yeah
<cfbolz>
I still would like a small reproducer somehow
<arigo>
(and reducing the stack a lot is the default behavior, so that makes more sense)
<cfbolz>
arigo: this is entirely random and unrelated, but it seems the way that other VMs use gmp in their garbage collected languages is via this low level interface: https://gmplib.org/manual/Low_002dlevel-Functions
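For the curious, the appeal of the mpn layer is that the caller owns all the memory, so a GC-managed language can keep the limb arrays on its own heap. A minimal ctypes sketch, assuming a typical 64-bit Linux (the soname, the __gmpn_ symbol prefix behind the mpn_ macros, and unsigned-long limbs are all assumptions):

    import ctypes

    gmp = ctypes.CDLL("libgmp.so.10")
    Limb = ctypes.c_ulong                 # mp_limb_t on 64-bit Linux

    n = 2
    a = (Limb * n)(2**64 - 1, 1)          # little-endian limbs: 2*2**64 - 1
    b = (Limb * n)(1, 0)                  # 1
    r = (Limb * n)()                      # result buffer, caller-allocated

    add_n = getattr(gmp, "__gmpn_add_n")  # mpn_add_n is a macro for this symbol
    add_n.restype = Limb                  # returns the carry limb
    carry = add_n(r, a, b, n)
    print(list(r), carry)                 # [0, 2] 0  ->  the value 2 * 2**64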
iwkse has joined #pypy
<iwkse>
hi, having an issue with pypy 3.8 and virtualenv. When I create a virtual environment, it doesn't install pip correctly. Running pip --version, I get ModuleNotFoundError: No module named 'pip._vendor.packaging'
<iwkse>
when I manually install pip using get-pip.py with the pypy binary, it installs the modules in the pypy3.8 directory instead of the virtualenv directory
<iwkse>
I'm using the pypy3.8 tarball from the website
<mattip>
hi iwkse. Can you try pypy -m venv venv_name rather than using the distro-provided virtualenv?
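For reference, the equivalent done programmatically with the stdlib venv module (with_pip=True bootstraps pip via ensurepip; run it with the tarball's pypy3.8 binary, not the distro python):

    import venv

    venv.create("venv_name", with_pip=True)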
<iwkse>
mattip: hi, sure
<mattip>
what platform are you on?
<iwkse>
debian linux
<mattip>
are you running this as a normal user or as root?
<iwkse>
normal user
<mattip>
cool
<iwkse>
pypy doesn't give any output, weird
<iwkse>
pypy3 yes
<iwkse>
I will delete the former venv
<mattip>
ok, there should be all the following in the bin directory: pypy, pypy3, pypy3.8, python, python3, python3.8
<mattip>
but maybe you have a different pypy in your path before the tarball's bin dir
<iwkse>
yes
<mattip>
hmm. I get confused when I try to let the $PATH decide what python to run, unless I know I am inside a virtualenv or conda env
<iwkse>
I think there's some alias making a mess
<iwkse>
mattip: great, it works nicely with pypy -m venv :-)
<iwkse>
thank you
<mattip>
cool. We changed our directory layout for pypy3.8, which means we now have a layout much more like cpython
<mattip>
some of the tools like virtualenv have not caught up yet
<iwkse>
oh ok, understood
<iwkse>
I think it's because I have an issue with alembic - I'm checking if it's related
<iwkse>
basically when I run a migration upgrade, it says all is fine but doesn't create any table
<iwkse>
it's a test migration file that creates a table
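For reference, a minimal sketch of such a migration file using alembic's op API (the table name and revision ids are invented):

    from alembic import op
    import sqlalchemy as sa

    revision = "abc123"
    down_revision = None

    def upgrade():
        op.create_table(
            "test_table",
            sa.Column("id", sa.Integer, primary_key=True),
            sa.Column("name", sa.String(50)),
        )

    def downgrade():
        op.drop_table("test_table")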
<iwkse>
will try again
<mattip>
cool. Does PyPy speed things up for you?
<iwkse>
mattip: yeah
<iwkse>
it's quite nice!
<mattip>
iwkse: would love to hear more but it is late here. What still doesn't work as well as you would like, what does work well, ...
<iwkse>
mattip: sorry, was afk. Sure! I'll provide feedback - I've been using only pypy for some time now, and performance is an interesting point
<iwkse>
mattip: alembic works nicely. It was just a typo in the migration file