cfbolz changed the topic of #pypy to: #pypy PyPy, the flexible snake | IRC logs: and | Matti: I made a bit of progress, the tests now only segfault towards the end
tsraoien has quit [Ping timeout: 255 seconds]
lritter has quit [Quit: Leaving]
tsraoien has joined #pypy
tsraoien has quit [Ping timeout: 268 seconds]
leshaste has quit [Remote host closed the connection]
leshaste has joined #pypy
jcea has quit [Ping timeout: 255 seconds]
j4at has joined #pypy
dustinm has quit [Quit: Leaving]
dustinm has joined #pypy
Atque has quit [Write error: Connection reset by peer]
Atque has joined #pypy
Atque has quit [Remote host closed the connection]
Atque has joined #pypy
j4at has quit [Ping timeout: 276 seconds]
otisolsen70 has joined #pypy
<antocuni> speaking of webassembly, I just found this (although it looks a bit old):
<antocuni> a webassembly interpreter in RPython
<antocuni> it would be a nice experiment to integrate it with pypy and JIT through it
j4at has joined #pypy
j4at_ has joined #pypy
j4at has quit [Ping timeout: 272 seconds]
<j4at_> Trained a very basic proof-of-concept AI to do in-runtime JIT parameter tuning, only 7 parameters (threshold, function_threshold, trace eagerness, decay, trace_limit, max_retrace_guards). Got a x1.2 speed-up in sqlalchemy_declarative and sqlalchemy_imperative; the AI was trained on them, so that's kinda cheating, but it's a proof of concept. I also ran them together instead of separately to make the code longer, and then used the geometric mean. And I used only
<j4at_> It got a x2 speed-up when I trained it first, but the tests took only 2 secs (4 secs without the speed-up), and it didn't do well in longer tests, so I had to retrain it to do well in the long run.
<j4at_> the information provided by `get_stats_snapshot()`!
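A minimal sketch of what this in-runtime tuning could look like, assuming PyPy's `pypyjit` module. The `set_param()` string syntax and `get_stats_snapshot()` are real PyPy APIs, but the `apply_params`/`geometric_mean` helpers, the candidate values, and the `counters` attribute read below are assumptions for illustration, not j4at's actual trained model:

```python
import math

# pypyjit only exists on PyPy; degrade to a no-op elsewhere.
try:
    import pypyjit
except ImportError:
    pypyjit = None

def apply_params(threshold, function_threshold, trace_limit):
    """Apply one candidate JIT configuration at runtime (hypothetical helper)."""
    if pypyjit is not None:
        pypyjit.set_param("threshold=%d" % threshold)
        pypyjit.set_param("function_threshold=%d" % function_threshold)
        pypyjit.set_param("trace_limit=%d" % trace_limit)

def snapshot_counters():
    """Read JIT statistics to feed back into the tuner (empty off PyPy)."""
    if pypyjit is None:
        return {}
    snap = pypyjit.get_stats_snapshot()
    # assumption: the snapshot exposes a `counters` mapping
    return dict(snap.counters)

def geometric_mean(times):
    """Combine per-benchmark timings the way described above."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

# Example: score two benchmark runs under one (made-up) candidate configuration.
apply_params(threshold=1039, function_threshold=1619, trace_limit=6000)
score = geometric_mean([1.8, 3.2])  # illustrative timings for the two benchmarks
```

Running the benchmarks together and scoring with the geometric mean, as described above, keeps one benchmark's absolute runtime from dominating the objective.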
<fijal> j4at_: note that "cheating" here might be ok
<fijal> you often are in a setup where you can train the parameters on something, but you are going to be running it repeatedly
<fijal> like, your tests or your *cough* production
<fijal> j4at_: JIT tests are a good target btw
<fijal> there is also a very interesting data point: sqlalchemy is something I thoroughly failed to optimize away, it seems to me that it's a crazy target for the JIT
<j4at_> fijal: yeah, probably. I was going to use bm_mdp too, but noticed that it takes a lot of time because of this?
<j4at_> So a lot of those benchmarks are hard to optimize because of "bugs".
<j4at_> I mean the ones pypy is doing badly at.
<j4at_> I decided that for now, stopping retracing and tuning JIT parameters is enough; the hooks don't expose enough to do inlining. I'm planning to use `sys.settrace()` and the pypyjit hooks to gather information.
<j4at_> But it needs more hooks exposed: `set_before_compile_hook()`, `stop_retracing_in_a_loop()`, `enable_retracing_in_a_loop()`, and `get_stats_snapshot()` but with stats for every loop/guard! All of them should work with built-in functions too.
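The hooks proposed above do not exist yet; what can be gathered today is roughly the following, assuming PyPy's existing `pypyjit.set_compile_hook()` (a real PyPy API, though the exact attributes on the info object passed to the callback vary by version; `jitdriver_name` is an assumption here):

```python
# Collect a record of every compiled loop/bridge via PyPy's existing
# compile hook. Outside PyPy the hook is simply never installed.
compiled = []

def on_compile(info):
    # info describes the freshly compiled loop; which attributes it carries
    # is PyPy-version-specific, so fall back to a placeholder.
    compiled.append(getattr(info, "jitdriver_name", "<unknown>"))

try:
    import pypyjit
    pypyjit.set_compile_hook(on_compile)
except ImportError:
    pass  # not running under PyPy
```

A `before compile` variant and per-loop retracing control would need support on the PyPy side, which is what the requested hooks would add.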
<nimaje> isn't the "cheating" there similar to profile guided optimisation?
<j4at_> yes. But I want to make it generalize and be in-runtime ;)
<j4at_> If I can make it get a x1.2 speed-up consistently in all tests, that will be a huge boost
j4at_ is now known as j4at
Atque has quit [Quit: ...]
<j4at> I would appreciate it if anyone wants to implement the hooks mentioned above :)
<j4at> at least the stop-retracing one.
<fijal> I can probably help you implement them?
<fijal> I don't think I have the capacity to look into that today
<j4at> fijal: Yeah take your time, I appreciate your help :)
j4at has quit []
tsraoien has joined #pypy
jcea has joined #pypy
<mgorny> > i386 is not supported anymore, skip tests
<mgorny> i get the point but still a bit sad-face
jcea has quit [Ping timeout: 272 seconds]
<cfbolz> mgorny: oh, where is that?
<mgorny> cfbolz: the tip of default branch
<cfbolz> mgorny: that's only for mac os and i386
<mgorny> oh
<cfbolz> linux and x86 is safe for a bit longer ;-)
otisolsen70 has quit [Quit: Leaving]
slav0nic has quit [Ping timeout: 276 seconds]
Dejan has quit [Quit: Leaving]
j4at has joined #pypy