cfbolz changed the topic of #pypy to: #pypy PyPy, the flexible snake | IRC logs: and | hacking on TLS is fun, way more fun than arguing over petty shit, turns out
<mattip> in the 9 years since the format-width-parsing code was written for PyUnicode_FromFormatV, no one tried a format string with a width
<mattip> like %.100R ?
<mattip> I must be missing something trivial
<cfbolz> mattip: also, there are no tests for it either then ;-)
<fijal> cfbolz: can I merge it or do you want to have another look?
<cfbolz> fijal: go ahead
* cfbolz is teaching in a bit
<fijal> cool
<mattip> it is "tested" because it is used internally, but only for %s. The others are indeed not tested
<hexology> if anyone is interested i just switched my neovim config to use pypy 3.8 instead of cpython as the "python3 provider". everything appears to work fine!
<LarstiQ> hexology: I wouldn't mind seeing that config
<hexology> LarstiQ: you just have to set `g:python3_host_prog` to the right interpreter, as long as that interpreter has the 'pynvim' package from pypi installed. in my case, i did it in a venv, and have a script to automatically set up the venv (eg. if i move to a new computer)
<hexology> as you can see i have my own function to join filesystem paths, but you can just do it with vim string concatenation
<hexology> tldr `pypy3 -m venv ./venv && ./venv/bin/pip install -U pynvim` and then point `g:python3_host_prog` at `/full/path/to/venv/bin/python`
<LarstiQ> I'm really behind with my vim config I see
<hexology> does pyqt5 run under pypy? i saw this blog post from approximately forever ago but i'm not sure if the situation has changed
<hexology> LarstiQ: heh, the ecosystem has advanced _very_ rapidly. neovim is pushing things along
<cfbolz> hexology: ctismer is working on it!
<cfbolz> But still a way to go
<hexology> happy someone is working on it! it'd be interesting to see how something like spyder performs under pypy
<hexology> jupyterlab appears to work fine
<hexology> hm, the host pypy system isn't coming up as an available kernel, but maybe that's just jupyter, i haven't used it in a while
<ctismer> hexology: Yes, working like crazy on it. There are some crucial bugs with signals which I still don’t understand. But I hope to get far enough for a first version this year.
<hexology> that's great to hear
<ctismer> hexology: a Mandelbrot example runs 10 times faster with PyPy, but the GUI code disabled. Running the GUI version is probably not slower. I just need correct signals….
<hexology> pypy is generally lighter on memory usage than cpython too, right?
<hexology> i wonder if qtile will run under pypy
<hexology> not currently on a linux machine so i can't test
<LarstiQ> hexology: really depends on usage, pypy has a higher starting memory usage but is more efficient with e.g. large lists (unboxing \o/)
<hexology> makes sense. is it like the jvm, where it needs some "warmup" time?
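A rough way to see the warmup effect hexology is asking about (a sketch only; absolute numbers and thresholds vary by machine and interpreter, and under CPython the timings stay roughly flat):

```python
import time

def busy_loop(n):
    # A simple numeric loop: cheap for a tracing JIT once compiled.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Time the same call repeatedly; under PyPy the first few runs include
# tracing/compilation overhead, later runs reuse the compiled trace.
timings = []
for run in range(5):
    start = time.perf_counter()
    result = busy_loop(1_000_000)
    timings.append(time.perf_counter() - start)

print(result)  # same answer every run
print(["%.4fs" % t for t in timings])
```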
<LarstiQ> hexology: may I introduce you to ;)
<Corbin> Yes. But also, JITs in general do not predictably always warm up; see for an examination.
<hexology> oh and by the way, jupyter appears to run fine even with the fancy new built-in debugger
<Corbin> Ha, thinking the same thing.
<hexology> thanks Corbin LarstiQ
<LarstiQ> Corbin: I can never think of benchmarks in the same way again
<LarstiQ> hexology: fancy!
<hexology> currently installing numpy & friends to see what kind of machine learning stuff i can get away with before i break something
<hexology> fascinating article btw
<hexology> it seems like the best benchmark would be adding some instrumentation to a real, existing application and then testing the application under different loads
<hexology> what i can say for sure is that in my personal experience pypy is super fast for scripts that need to process large streams of text, beating the pants off luajit and only ~2x slower than nim
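The "large streams of text" workload hexology describes is typically a pure-Python line/split loop like the hypothetical sketch below; this is exactly the kind of hot path a tracing JIT speeds up, while staying interpreted on CPython:

```python
import io
from collections import Counter

def count_words(stream):
    # Hot pure-Python loop over a text stream: the sort of code where
    # PyPy's JIT tends to show large wins on big inputs.
    counts = Counter()
    for line in stream:
        for word in line.split():
            counts[word.lower()] += 1
    return counts

sample = io.StringIO("the quick brown fox\nthe lazy dog\n")
counts = count_words(sample)
print(counts.most_common(1))  # [('the', 2)]
```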
<hexology> although that's a bit unfair because cpython also beats the pants off luajit :P
<hexology> alas. numpy apparently has not been updated to support pypy 3.8 and failed to build
<hexology> AssertionError: would build wheel with unsupported tag ('pp37', 'pypy37_pp73', 'macosx_10_7_x86_64')
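The tuple in that AssertionError is a wheel compatibility tag triple: (python tag, ABI tag, platform tag), per the PEP 427 wheel filename layout. A small sketch of how those three fields sit in a wheel filename (the numpy filename here is illustrative, not a real release artifact):

```python
def parse_wheel_tags(filename):
    # Wheel filenames follow PEP 427:
    # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    python_tag, abi_tag, platform_tag = parts[-3:]
    return python_tag, abi_tag, platform_tag

tags = parse_wheel_tags(
    "numpy-1.21.4-pp37-pypy37_pp73-macosx_10_7_x86_64.whl")
print(tags)  # ('pp37', 'pypy37_pp73', 'macosx_10_7_x86_64')
```

So the build backend produced a wheel tagged for PyPy 3.7 (`pp37`), which is why it is rejected as "unsupported" on a PyPy 3.8 interpreter.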
<fijal> hexology: I think that's a non-fatal AssertionError? but I'm not sure
<hexology> it was definitely fatal in my case
<hexology> it also happened under 3.7, not sure what that's about
<hexology> i'll try again with the latest pip setuptools and wheel, just in case
<hexology> ok, numpy worked with the latest of those, under 3.7
<hexology> no warnings/errors
<mattip> fwiw, conda provides binary packages for ~1000 libraries on 3.7 for macOS and linux, about 600 for windows
<mattip> conda create -n mypypyvenv pypy
<mattip> conda activate mypypyvenv
<mattip> conda install the-world
<mattip> PyPI wheels are harder because we need to work with each project to convince them to upload
<hexology> yep i am very familiar with conda
<hexology> i used it extensively for data science
<hexology> however this is interesting: the issue seems to arise when building pep 517 build deps for pandas
<hexology> but not when building numpy on its own
<hexology> is this possibly an issue where pandas is pinning its deps to some strange version of numpy?
<hexology> it does appear to be nonfatal, but i'm not really sure what it's falling back to
<hexology> looks like it's just falling back to pandas 1.3.3 instead of 1.3.4
<hexology> doesn't seem promising
<hexology> should i file a pandas bug report?
<hexology> numpy and matplotlib however appear to work just fine
<Corbin> I wonder which benchmarks are being used here. I'm guessing that Oracle is not going to welcome a fair and open comparison!
<hexology> hah, probably not
<hexology> note on the above: numpy apparently consistently fails to build, but only when being used in a pep 517 build backend context
<hexology> i'm not sure if that's a pypy problem, a pip problem, a setuptools problem, or a numpy problem
<hexology> i had the same issue with pandas as with scipy. numpy throws that assertion error and building the wheel fails
<hexology> i'll try with --no-use-pep517
<hexology> aha, that worked! so something is wrong in the pep 517 machinery. should i file a bug report with pip?
<cfbolz> phlebas: do you happen to know?
<cfbolz> Corbin: Tim (phlebas) who works on GraalPython is here in the channel, he's a pypy dev too
<Corbin> cfbolz: Oh, excellent. Hopefully they're allowed to talk about it.
<cfbolz> Corbin: what's the context of that diagram? some blog post?
<phlebas> cfbolz: is the geomean of the "meso" benchmarks in our github repo. running those same benchmarks on pypy gives sth like 9.1 speedup on our benchmark server. so pypy is a bit better. main reason for not mentioning it here is that we're really targeting Java users that need Python integration
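The geomean phlebas mentions is the standard way to summarize per-benchmark speedup ratios into one number; a minimal sketch:

```python
import math

def geomean(speedups):
    # Geometric mean: the usual average for ratios like speedups,
    # since it treats a 2x win and a 2x loss symmetrically.
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

print(geomean([2.0, 8.0]))            # ≈ 4.0
print(round(geomean([10.0, 8.5, 9.0]), 2))
```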
<phlebas> and those basically have the option of using Jep, Jython, or Graal
<Corbin> cfbolz: their marketing page currently on HN's front page.
<cfbolz> phlebas: that's reasonable (it's not like we have graalpython on
<Corbin> phlebas: Cool, thanks! FWIW I'm really happy to see excellent JIT work and yet another Python implementation doing well; I'm just skeptical of Oracle-published benchmarks since Oracle has a track record of forbidding honest comparisons of their products with other offerings.
<phlebas> why is it on HN? it's not like there was a release... hmm.. anyway, i never read HN, only when someone sends me some link 😂
<hexology> graalpython can generate "drop-in" ahead-of-time compiled binaries, right? (without needing to install a python runtime at least)
<Corbin> I was *also* interested in comparing the benchmark selection to what has, simply because I think that it's interesting to know what Oracle's customers want.
<hexology> that could be really interesting for server deployments
<hexology> phlebas: sometimes people just post stuff and sometimes people upvote it :P
<hexology> or even just useful for distributing & packaging command line tools
<phlebas> hexology: we can generate self-contained binaries, but for Python code that's very experimental and not documented, because it's quite early days
<fijal> it contains quite a few errors :/
<cfbolz> fijal: any really bad ones?
<fijal> I don't know, the article is a bit all over the place, did not read the whole thing
<cfbolz> the wrong thresholds at the beginning is super minor imo
<fijal> yeah
<fijal> haven't found any major ones
<fijal> but really just skimmed it
<cfbolz> haha, so "quite a few" means "more than one"? ;-)
<fijal> "more than one in very short section" ;-)
<mattip> something broke in doc builds
<mattip> something about the versions of the software involved, but what?
<mattip> docutils?
<mattip> yup, pinning docutils to 0.11 fixes it