cfbolz changed the topic of #pypy to: #pypy PyPy, the flexible snake | IRC logs: and | the pypy angle is to shrug and copy the implementation of CPython as closely as possible, and to stay out of design decisions
<LarstiQ> mattip: can the various 7.3.10 speedups be summarized as being X faster than 7.3.9, or too micro/dependent on usage?
<mattip> maybe we went from 4.7x to 4.8x of cpython3.7.6, according to
<mattip> but that is only one number. There was also a speedup of the interpreter (without the JIT), but I don't know how to quantify that
<LarstiQ> right
<mattip> maybe 15-20%? The permalink feature is not working, but if you go to,
<mattip> turn off all the exes except cpython 3.7.6, pypy3.9-64 PyPy 7.3.8, pypy3.9-64 latest (3 exes), and
<mattip> set normalization to pypy3.9-64 PyPy 7.3.8, and horizontal,
<mattip> you can see the difference
<LarstiQ> I was momentarily confused by 7.3.10 being in between cpython and 7.3.8, but that makes sense for a jitless speedup
<LarstiQ> quite noticeable!
<nimaje> did some benchmarks just not run in some configurations? or why do I not see a bar for pypy for scimark_* with only those interpreters?
nimaje has quit [Quit: WeeChat 3.6]
nimaje has joined #pypy
<mattip> nimaje: I see the benchmarks for pypy, just not for cpython3.7.6. Did you set the Normalization: pypy3.9-64 PyPy 7.3.8 ?
<nimaje> no, it is set to None, and if I just have pypy3.9 and just the scimark_* benchmarks I see no bars for jitless pypy runs. Does normalization maybe divide by 0 there and just put the chosen interpreter at 1?
<nimaje> If I normalize by some jitless pypy I only see bars for jitless pypys at 1 and no bars for other pypys
<quotemstr> I'm a bit confused about the interaction of rffi and lltype --- both seem to have primitives for defining structs, for example
<quotemstr> When would I want to use lltype.Struct and when would I want to use rffi.CStruct?
<mattip> ll is for low-level, and rffi is supposed to be slightly more convenient to use.
<mattip> The truth is there is little documentation around the design decisions there, you can use whatever is most convenient for you
<mattip> (the surest way to get a right answer on the internet is to brazenly state a wrong one)
<mattip> although take what I say with a grain of salt
<fijal> traditionally lltype was designed for low level RPython and rffi for calling C
<fijal> so rffi has more C-level types (like rawmalloc, etc.) while lltype has more RPython types
<quotemstr> Sure. Thanks
<quotemstr> It seems like lltype has fewer "layers" for when I don't need to integrate with C
<quotemstr> I'm trying to use rmmap to read a file header described as a packed struct; trying to figure out how I can convince the lltype.Struct machinery not to insert padding
<quotemstr> Not sure it's even possible
<quotemstr> (The generated C code would have to describe the struct as packed, AIUI, and doesn't know how to do that)
<fijal> you can cheat and declare it as "long" but then unpack the pieces yourself
<fijal> there might be a better plan, but I don't know
<fijal> maybe see how struct.pack does packed structs, but chances are it just calculates the offsets itself
<quotemstr> Yeah. I was hoping to access fields as attributes. I'll just do it the hard way
<fijal> you can also do tricks for that
<fijal> but I'm not sure it's worth it
<quotemstr> Ah. I think a good compromise is to subclass lltype.Struct, generate N single-byte raw fields like the fixed array type, then use _adtmeths to provide getters and setters
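[editor's note: fijal's point that the struct module computes the offsets itself can be sketched in plain CPython. The "<" prefix fixes a little-endian layout with no padding between fields, which is what reading a packed file header requires; the header layout and field names here are hypothetical, not the ones quotemstr is parsing.]

```python
import struct

# Hypothetical packed header: 4-byte magic, u16 version, u32 payload size.
# "<" means little-endian and, crucially, standard size with NO padding.
HEADER_FMT = "<4sHI"

def parse_header(buf):
    """Unpack a packed header from the start of buf (bytes or mmap)."""
    magic, version, size = struct.unpack_from(HEADER_FMT, buf, 0)
    return magic, version, size

data = struct.pack(HEADER_FMT, b"MAGC", 3, 4096)
# 4 + 2 + 4 == 10: a native/aligned format ("@") could be larger.
assert struct.calcsize(HEADER_FMT) == 10
assert parse_header(data) == (b"MAGC", 3, 4096)
```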
<quotemstr> Why would I be getting an "arithmetic not supported on <UINT>, its size is too small" error trying to call raw_storage_getitem_unaligned(field_type, p, offset)?
<quotemstr> And even if I use a 64-bit type for the getitem and zero for offset, the generated value is zero. Running untranslated works fine
<mattip> you need to use widen() to convert it
<quotemstr> Oh
<quotemstr> I didn't even see widen and went straight for inspecting the field type, getting the normalized type, and manually casting :-(
<quotemstr> I feel like a better error message when failing to do arithmetic on smaller types would have been helpful :-)
<quotemstr> Just out of curiosity: why doesn't arithmetic on smaller types work? In C, we automagically just promote everything to int