cfbolz changed the topic of #pypy to: #pypy PyPy, the flexible snake | IRC logs: and | hacking on TLS is fun, way more fun than arguing over petty shit, turns out
stkrdknmibalz has joined #pypy
danchr has quit [Quit: ZNC -]
danchr_ has joined #pypy
the_drow has quit [Ping timeout: 265 seconds]
the_drow has joined #pypy
stkrdknmibalz has quit [Ping timeout: 246 seconds]
stkrdknmibalz has joined #pypy
stkrdknmibalz has quit [Excess Flood]
<ceridwen> Trying to get pypy3+numpy working on Ubuntu and found this: . I have no idea what the Debian people are thinking either. I'm using the Ubuntu PPA to get an up-to-date version of PyPy but it seems like imports are trying the cpython packages. Is there a known workaround for Debian's questionable decisions or do I need to figure one out for myself?
<LarstiQ> ceridwen: that bugreport doesn't seem relevant at all
<LarstiQ> ceridwen: It's quite simple: install pypy, (optionally) make a virtual env, pip install numpy
<LarstiQ> that's it
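[editor's note: LarstiQ's recipe, spelled out as shell commands. This is a sketch, not from the log: the binary name `pypy3` and the env path `~/pypy-env` are assumptions and may differ depending on how the PPA installs PyPy.]

```shell
# Assumes the PPA installed a `pypy3` binary on PATH; adjust the name/path if not.
pypy3 -m venv ~/pypy-env
~/pypy-env/bin/pip install numpy
# Sanity check: numpy should import from the env's own site-packages,
# not from any CPython packages installed via apt.
~/pypy-env/bin/python -c "import numpy; print(numpy.__file__)"
```

Using a venv sidesteps the Debian packaging issue discussed above: pip builds numpy against the PyPy in the env, so no CPython C-extension packages are picked up.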
<mattip> tumbleweed: that is to be expected, no? The latest docs version is the development one, which is 7.3.7
<Corbin> Ugh. I can't think of anything nice to say, so I'm not going to leave a comment yet, but it's just so frustrating to experience his constant distortion of reality.
<Corbin> Dunno what the worst part is. Might be that he intentionally forgets the jargon "tracing" so that he can emphasize his disdain for JIT research, or how he misattributes Dropbox's failure to deploy PyPy.
<mattip> tumbleweed: ahh, on the 3.7 and 3.8 release branches. Thanks, fixing. It will be in rc3
<LarstiQ> Corbin: oof
<LarstiQ> I'll have to read this when I'm in a more receptive mood
Atque has joined #pypy
<tumbleweed> ceridwen: yeah, nothing in Debian is building C-extensions for pypy3 yet (and there's no sane way to express the dependencies for it)
<tumbleweed> basically, that setup works great for pure-python, but breaks down for C extensions
jacob22 has quit [Quit: Konversation terminated!]
Atque has quit [Quit: ...]
Dejan has joined #pypy
<Dejan> So... CPython 3.10 is out... Time to do some serious pattern matching...
<Dejan> :D
<cfbolz> Corbin, LarstiQ: I don't know, was an OK insight into their thinking
<LarstiQ> cfbolz: that's good to hear, scheduled for reading during my busride in the evening
<cfbolz> Not that I agree with everything, but in some sense perfectly reasonable
<fijal> cfbolz: can you summarize their thinking or should I read the article?
<cfbolz> fijal: the usual stuff: extensions, pypy too complicated and not always fast, lots of details on bytecode quickening
<fijal> I quite like bytecode quickening from an aesthetics perspective
<fijal> but I really wonder how far can you go
<fijal> I'm pretty sure you get to "too complicated" very quickly
<fijal> like in a way, yes, you avoid the code generation, but you have to do everything else
<fijal> and that "everything else" is what takes time
<fijal> time/complexity
arigo_ is now known as arigato
<cfbolz> fijal: agreed
<cfbolz> But that's also the cool part, somehow
<cfbolz> They have to do all the actual hard work
<fijal> arigato: hello
<cfbolz> Meaning at the end, adding a jit is easy 😉
<fijal> yeah sure
<fijal> I mean, I would totally enjoy doing that, if you ask me, so I can see the appeal
<fijal> like type-specializing bytecode is probably fun
<fijal> and probably at some point you are like "fuck I would like to have a proper JIT please"
<cfbolz> Yes
<cfbolz> But you can get like half of a good jit
<cfbolz> For less effort
<fijal> well, you get a fast interpreter, which is worth something
<cfbolz> So it's another data point on the effort/effect plane
<fijal> like, we could use a fast interpreter that understands maps etc. in PyPy for sure
<marmoute> A fast interpreter is better than what Python provides
<fijal> marmoute: depends how fast, I think it's a bit of an open question
<cfbolz> There's research with really good numbers
<fijal> on something size of python?
<cfbolz> On Python
<fijal> cool
<fijal> so yeah, maybe we should actually do the same
<cfbolz> Up to a few times faster on some benchmarks
<cfbolz> fijal: for warmup purposes, yes
<cfbolz> But tons of work
<fijal> yeah sure, if someone finds money I'm willing to work on it, I think
<cfbolz> fijal: in some sense we did a lot of the hard work already
<fijal> yeah
<fijal> I don't think it's *that* hard
<fijal> but I don't think I can do that in my free time either
<cfbolz> Yeah
<Hodgestar> At some point if your type specialized bytecodes are small enough, they are assembler, or at least an IR for one.
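[editor's note: a toy sketch of the bytecode quickening being discussed, invented for illustration (this is not PyPy or CPython code). Instructions are handler functions stored in a mutable code list; a generic ADD patches its own slot to a type-specialized handler after observing int operands, and the specialized handler deoptimizes back on a type miss.]

```python
def generic_add(code, stack, i):
    b = stack.pop(); a = stack.pop()
    if type(a) is int and type(b) is int:
        code[i] = int_add      # quicken: later runs of this slot skip the checks
    stack.append(a + b)

def int_add(code, stack, i):
    b = stack.pop(); a = stack.pop()
    if type(a) is not int or type(b) is not int:
        code[i] = generic_add  # type miss: deoptimize to the generic handler
    stack.append(a + b)

def run(code, *args):
    # A one-opcode "interpreter loop": each slot's current handler executes once.
    stack = list(args)
    for i, op in enumerate(code):
        op(code, stack, i)
    return stack.pop()

code = [generic_add]
print(run(code, 2, 3))        # 5; the slot is now int_add
print(code[0] is int_add)     # True
print(run(code, "a", "b"))    # ab; the slot deoptimized back
```

The "fuck I would like to have a proper JIT" point is visible even here: the specialized handler still pays dispatch and stack traffic per instruction, which is exactly what code generation would eliminate.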
<arigato> fijal: hi
<fijal> arigato: ok, so have you looked at StackNew I added?
<fijal> that's one question, and question number two is about a mess in sendall in
<arigato> no
<fijal> essentially we do memoryview (to look into stuff) which copies strings to old memory (which is bad, it's worse than making a copy mostly, as we found out in RPython)
<fijal> I was wondering if we can do pinning, so that works
<fijal> so say
<fijal> with pinned(bytes) as b: memoryview(b) would not copy (maybe)
<arigato> memoryview forces a copy of the string into the old generation, and that may be actually a bad idea? then try first to not do that and already make a copy?
<arigato> s/already/always I guess
<fijal> right, but right now it's a bit of a mess
<fijal> do you think that pinning applevel objects is a good idea?
<fijal> I got absolutely lost by buffers, ffi buffers and memoryviews, I must say
<arigato> I'm not sure I see the semantics you want
<cfbolz> Yeah, well, I think it makes sense to think about the eventual goal: passing a bytes to a cffi call without copying, right?
<fijal> the idea would be to have socket.sendall(bytes) and make it work
<cfbolz> And the precise buffer technology for that needs to be determined
<fijal> right now it works for normal socket, but not for ssl socket
<arigato> memoryview(b) cannot just take a pointer to a young object because it happens to be pinned now; you don't know for how long it will be pinned
<fijal> and the reason why not is because we can't do it at applevel I think
<arigato> so what exact cffi call do you want to be non-copying (or at least not forcing any object to be old)?
<fijal> I want to be able to pass bytes (actually a pointer into bytes) somewhere to C
<fijal> the way it works in RPython
<arigato> I think that if you pass a bytes to a cffi call, Things Occur and we need to look there in more details, but I'm not sure it is related to app-level pinning
<fijal> but we don't pass bytes to cffi call here
<fijal> because we are in sendall, so we need to slice bytes somehow
<fijal> that's a part of the problem
<arigato> so you'd like to pass an object that says "a pointer to this bytes' data, plus this offset"?
<arigato> and using memoryview for that is bad for performance, I can see that
<fijal> yeah
stkrdknmibalz has joined #pypy
<arigato> maybe calling from app-level "memoryview(b, ...)" should be more lazy if b is really a bytes:
<arigato> you just record the arguments and don't do anything more at first
<arigato> then we can write special cases in cffi
<fijal> ok, there is a bit more to that
<fijal> like, we need to encode whether the function takes the pointer and stores it somewhere or not
<fijal> because right now if you pass a pointer it's assumed can be stored somewhere
<fijal> so the question is whether we would like an obscure cffi extension like that?
<arigato> OK, passing a memoryview to a cffi call should give a pointer that remains valid as long as the memoryview is alive, right
<fijal> right, so we can't look directly into bytes in the nursery
<arigato> maybe we should just be explicit and have "ffi.offset_in_string(b, offset)"
<arigato> with the same guarantees as when passing a bytes directly
<fijal> but that does not solve my problem I think
<fijal> because while write() does not store the pointer somewhere, we don't know
<fijal> sorry have to run
<arigato> let's make it clear and call it ffi.pointer_inside_bytes_valid_only_for_the_call_duration(b, offset)
<arigato> in rpython you can just pin the string for the duration of the call
<fijal> you still have to carefully choose which function you are calling, but yes, there isn't an abstraction layer in between
<fijal> after all this is just C, so you are responsible for everything
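[editor's note: the sendall pattern under discussion, sketched at app level with an invented FakeSocket standing in for a real (ssl) socket. It shows the part that is cheap today: slicing a memoryview advances through the bytes without copying the payload. The expensive part fijal and arigato are debating (PyPy's memoryview forcing the string out of the nursery, and pinning as an alternative) happens below this level and is not visible here.]

```python
class FakeSocket:
    """Stand-in for a socket whose send() only takes part of the buffer."""
    def __init__(self):
        self.received = bytearray()

    def send(self, data):
        chunk = bytes(data[:4])   # pretend the OS accepted only 4 bytes
        self.received += chunk
        return len(chunk)

def sendall(sock, data):
    view = memoryview(data)       # window over the bytes, no payload copy
    while view:                   # an empty memoryview is falsy
        sent = sock.send(view)
        view = view[sent:]        # re-slicing the view also doesn't copy

sock = FakeSocket()
payload = b"hello world, this needs several send() calls"
sendall(sock, payload)
print(bytes(sock.received) == payload)   # True
```

The `ffi.offset_in_string(b, offset)`-style helper arigato floats would serve the same "pointer into the middle of a bytes" role as `view[sent:]` here, but with explicit lifetime guarantees for the C side.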
<mattip> 7.3.6 rc2 is up on, once the cdn caches update to I will send out an email
<mattip> there still may be an rc3, but I want to test virtualenv to get fixed,
<mattip> and then virtualenv will work with the file layout in pypy3.8
jacob22 has joined #pypy
<arigato> fijal: yes
<tumbleweed> mattip: can you push tags for rc2?
<mattip> I thought I did in 5abebcca2477, bc2aac6233f9, 2bc81c16fffc
<tumbleweed> hrm, so they are
<tumbleweed> that was some user-error, clearly. maybe I forgot to hg up
<tumbleweed> ah, no, it was sillier than that, I was searching for rc1
<mgorny> hmm, i see test regressions in pypy2.7; but dunno what yet, still in progress
<mgorny> i mean in rc2 compared to rc1
<mgorny> nevermind, it was probably my mistake
dmalcolm__ has quit [Ping timeout: 252 seconds]
dmalcolm has joined #pypy
<cfbolz> mattip: yay!
<fijal> arigato: does that work for multiple threads? e.g. can't we move the object after releasing the GIL but before using write() from another thread?
<mattip> my bad, venv was broken and not passing stdlib tests. So rc3 will be coming soon
<mattip> the breakage was only when using the --copies flag, but it prevents merging a PR for virtualenv, which is needed so virtualenv works with pypy3.8
danchr_ has quit [Quit: ZNC -]
danchr_ has joined #pypy