philtor has quit [Remote host closed the connection]
GenTooMan has joined #yosys
GenTooMan has quit [Excess Flood]
GenTooMan has joined #yosys
emeb has quit [Ping timeout: 250 seconds]
emeb_mac has quit [Ping timeout: 268 seconds]
emeb_mac has joined #yosys
emeb_mac has quit [Ping timeout: 250 seconds]
emeb_mac has joined #yosys
emeb has joined #yosys
emeb has quit [Quit: Leaving.]
philtom has joined #yosys
msh has joined #yosys
<msh>
is there a good benchmark for nextpnr speed anywhere? I'm trying to improve routing1 speed, which takes a couple of hours for an ECP5 85k, but it's hard to know if I'm improving things (I think I am, though) since runtimes jump around so much between runs
<killjoy>
This is like the perpetual question.
<killjoy>
In my experience, you find something that seems to run consistently, and you work on improving that, but with many of these algorithms it's next to impossible to tell sometimes.
<killjoy>
And especially with pnr software.
<killjoy>
Since it usually is just "place randomly, route, see if you meet timing."
emeb_mac has quit [Quit: Leaving.]
<tnt>
I thought it was about trying to improve runtime, not the QoR.
<tnt>
The only way is to average ... a lot ... like doing 100s of runs over 100s of designs.
<killjoy>
That's how I've done it before too.
<killjoy>
But msh asked about "a good benchmark," which implies 1 benchmark, and most of the tools I've used keep churning if the QoR isn't high enough.
<killjoy>
Which is why it takes so long.
<tnt>
well "1 benchmark" can be a script running 100s of designs with 100s of seeds ... that's often what benchmarks actually are, aggregates of a lot of smaller problems.
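A multi-seed benchmark of the kind tnt describes could be sketched roughly like this. This is a minimal sketch, not an existing tool: the design names, runtimes, and the commented `nextpnr-ecp5` invocation are illustrative assumptions; only the aggregation logic is concrete.

```python
import statistics
from math import exp, log

def aggregate(runtimes):
    """runtimes: dict mapping design name -> list of per-seed runtimes (s).

    Each runtime would in practice come from timing a run such as
    (hypothetical invocation):
        nextpnr-ecp5 --85k --json design.json --seed N
    Returns per-design means plus a geometric mean across designs, so
    one slow design doesn't dominate the overall score.
    """
    means = {d: statistics.mean(ts) for d, ts in runtimes.items()}
    geomean = exp(statistics.mean(log(m) for m in means.values()))
    return means, geomean

# Made-up numbers: two designs, three seeds each.
means, score = aggregate({
    "picosoc": [812.0, 790.5, 845.2],
    "blinky":  [3.1, 2.9, 3.3],
})
```

Tracking the geometric mean across many (design, seed) pairs is one way to get a single number that moves less between runs than any individual runtime does.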
<tnt>
I know there was a beginning of this with running a picosoc design a bunch of times ... not sure where that repo is.
<tnt>
but I haven't seen anything "pre-made" / "ready-to-use" with a lot of diversity or anything like that. That's yet TBD.
<jix>
one trick that sometimes works for optimizing algorithms with such runtime behaviour is to find some quantity X (e.g. iterations of something) that you can measure, that correlates with runtime, and that is (mostly) invariant wrt the optimization you're doing... then you use time per X as the target to optimize for
<jix>
for example in a SAT solver you have similar problems, the smallest changes cascade to chaotic runtime differences, but if you want to tune the unit propagation (inner loop of a solver) you can use propagations / sec and get something that's more stable than the overall runtime
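jix's time-per-X trick can be sketched like this. It's a toy sketch: the inner loop and its step counter are stand-ins for whatever quantity the real solver or router exposes (e.g. unit propagations); the point is only that you normalize elapsed time by a work count instead of optimizing raw runtime.

```python
import time

def run_inner_loop(work_items):
    """Toy stand-in for a solver/router inner loop: count how many
    'propagation'-like steps run, alongside wall-clock time."""
    steps = 0
    t0 = time.perf_counter()
    for item in work_items:
        # pretend each item costs a data-dependent number of steps
        for _ in range(item):
            steps += 1
    elapsed = time.perf_counter() - t0
    return steps, elapsed

steps, elapsed = run_inner_loop([1000] * 100)
# Time per step is the stable target: total runtime varies chaotically
# with the input, but ns/step mostly reflects inner-loop efficiency.
ns_per_step = elapsed / steps * 1e9
```

Comparing ns/step before and after a change filters out the chaotic part of the runtime that comes from the search taking a different path.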
<msh>
ok yep. benchmarking inner loops sounds like it would help
GenTooMan has quit [Ping timeout: 250 seconds]
<msh>
is a couple of hours in the normal range of runtimes for nextpnr?
<gatecat>
it shouldn't be, but things aren't perfect as far as congestion is concerned
<gatecat>
you can try with router2 (`--router router2`) but tbh it's a WIP and more likely to make things worse atm
GenTooMan has joined #yosys
GenTooMan has quit [Excess Flood]
GenTooMan has joined #yosys
<msh>
gatecat ok will give it a try tomorrow
<gatecat>
thanks, if you can link your design somewhere I can also take a look and keep it in mind when I work on optimisations