beneroth changed the topic of #picolisp to: PicoLisp language | The scalpel of software development | Channel Log: https://libera.irclog.whitequark.org/picolisp | Check www.picolisp.com for more information
aw- has joined #picolisp
rob_w has joined #picolisp
<beneroth> Regenaxer, how do you determine optimal block size for index trees? picking 2/4/5 based on node count?
<Regenaxer> On the number of nodes, but more importantly on the estimated key size
<Regenaxer> If the keys are possibly long strings, not many entries fit into a node
<Regenaxer> and if very long, a node would take more than one block which would be inefficient
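[Editor's note: a rough Python sketch of the sizing trade-off Regenaxer describes — longer keys mean fewer entries fit per index node. The block size and per-entry overhead below are illustrative assumptions, not PicoLisp's actual database layout.]

```python
# Sketch of the trade-off: for a fixed block size, the longer the keys,
# the fewer entries fit into one B-tree node. Numbers are assumptions
# for illustration, not pil's real block format.

BLOCK_SIZE = 256        # assumed block size in bytes
ENTRY_OVERHEAD = 16     # assumed bytes per entry for value + child links

def entries_per_node(avg_key_bytes):
    """Estimate how many entries fit in one block for a given key size."""
    return max(1, BLOCK_SIZE // (avg_key_bytes + ENTRY_OVERHEAD))

for key_len in (8, 32, 120, 300):
    print(key_len, "->", entries_per_node(key_len), "entries per node")
```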
<aw-> hi
<aw-> what happened to picolisp.com ?
<Regenaxer> Hi aw-! It was down yesterday
<Regenaxer> Is it not up?
<aw-> it's up
<aw-> the design changed
<Regenaxer> ah!
<Regenaxer> This was Erik
<aw-> there's a huge blob of code at the top of the page
<Regenaxer> He also announced it on the list
<aw-> no idea what it is or what it does
<Regenaxer> oh
<aw-> why is it there?
<Regenaxer> haha, indeed
<Regenaxer> Strange
<aw-> that should be changed to a simple Hello World or something
<Regenaxer> Looks more like an error
<Regenaxer> Seems code from Vip though
<aw-> haha yeah
aw- has quit [Ping timeout: 240 seconds]
rob_w has quit [Quit: Leaving]
Hunar has joined #picolisp
<Hunar> Hello Regenaxer :)    .. I saw this yesterday http://john.freml.in/teepeedee2-vs-picolisp (the website is currently down but my phone has a local cached version) he starts by complimenting picolisp then begins the critique .. one of them is that picolisp has "some of the worst bignum performance imaginable" .. is that true? the lines before that
<Hunar> statement are, picolisp "has different (more compact) object representation" I don't understand how that results in bad performance? shouldn't compact be faster?
<Regenaxer> Hi Hunar!
<Regenaxer> I thought bignum performance is not so bad
<Regenaxer> but who knows, perhaps he has something better
<Regenaxer> With objects he perhaps means OOP symbols?
<Hunar> Is the page also down for you? I'll post it in ix.io if it is
<Regenaxer> Concerning bignums: Many years ago I compared with Java's Bignums, and pil was faster
<Regenaxer> I try
<Regenaxer> Hangs
<Regenaxer> Is it John Fremlin or similar?
<Regenaxer> I somehow know the name
<Regenaxer> "John Fremlin's Blog" or so
<Hunar> yes
<Hunar> Here it is on ix.io  http://ix.io/3PRo
<Regenaxer> yes, 2009
<Regenaxer> I think I know
<Hunar> So back in 2009 picolisp's bignum was bad?
<Regenaxer> About the same as now
<Regenaxer> Now a bit faster cause of 64 bits
<Regenaxer> I don't think "worst performance imaginable" :)
<Regenaxer> we can always imagine something worse
<Hunar> T :) that was a very emotional statement against picolisp
<Regenaxer> We should try
<Hunar> What would be a bignum test? just */+= with big numbers?
<Regenaxer> I have a test against Python for non-bignums: http://ix.io/3PRr
<Regenaxer> Python takes three times as long
<Hunar> So we are not the "worst" :D
<Regenaxer> but this is short nums
<Regenaxer> How to make a bignum test in Python?
<Regenaxer> Mom, phone
<Hunar> I'm not sure, i'll think
<Hunar> I will try to avoid */  since I have a feeling that he used */ and got bad performance
<Regenaxer> done
<Regenaxer> */ is not especially bad
<Regenaxer> Mainly a division
<Regenaxer> and (*/ 3 4 2) is faster than (/ (* 3 4) 2)
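[Editor's note: for readers unfamiliar with pil's `*/` — it multiplies, then divides, rounding the result, in one primitive call. A Python sketch of the equivalent, assuming non-negative operands for simplicity:]

```python
def mul_div(a, b, c):
    """Python sketch of PicoLisp (*/ a b c): multiply, then divide with
    rounding to the nearest integer. Assumes non-negative operands."""
    return (a * b + c // 2) // c

print(mul_div(3, 4, 2))   # like (*/ 3 4 2) -> 6
```

The point in the chat stands: doing this as one primitive saves a function-call overhead compared to nesting `*` inside `/`.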
<Hunar> Interesting :)
<Hunar> I'm also testing something now, it'll take about 20 minutes or more
<Regenaxer> yes, simply because it is one function call overhead less
<Regenaxer> There is no shorter test?
<Hunar> I want to know how much do math operations take on bignums in pil and python
<Regenaxer> yes, me too
<Hunar> I generated two 300 digit prime numbers from a website, then tested these 100000000 times
<Hunar> pil: + 16s     - 20s     * 1m41s     / 3m2s     % 4m18s
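[Editor's note: a Python sketch of this kind of per-operation bignum benchmark. The operands and iteration count are stand-ins — Hunar's actual 300-digit primes and 100000000-iteration setup are not in the log.]

```python
import time

# Stand-in ~300-digit odd numbers (NOT Hunar's primes) and a much
# smaller iteration count, just to show the measuring method.
a = 10**299 + 7919
b = 10**299 + 104729

def bench(op, n=100_000):
    """Time n applications of a binary operation on the two bignums."""
    t0 = time.perf_counter()
    for _ in range(n):
        op(a, b)
    return time.perf_counter() - t0

for name, op in [("+", lambda x, y: x + y),
                 ("*", lambda x, y: x * y),
                 ("/", lambda x, y: x // y),
                 ("%", lambda x, y: x % y)]:
    print(name, round(bench(op), 3), "s")
```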
<Regenaxer> There is misc/bigtest in pil
<Hunar> I'll start python now
<Hunar> really :(
<Regenaxer> bigtest does some heavy random tests
<tankf33der> I have tested picolisp bignum against many languages, btw.
<tankf33der> Found one multiplication bug in dlang.
<Regenaxer> oh!
Hunar has quit [Quit: Client closed]
<Regenaxer> in Dlang bignums?
<tankf33der> yeap.
<tankf33der> Fixed.
<Regenaxer> You reported it and they fixed it?
Hunar has joined #picolisp
<tankf33der> Yes.
<Regenaxer> cool
<beneroth> really cool
<Regenaxer> Did you also measure times?
<Regenaxer> It seems in Java
<tankf33der> Nope.
<Regenaxer> Cause Hunar wanted to compare bignum performance (see above "imaginable")
<Hunar> My results are ready :(
<Hunar> pil: + 16s     - 20s     * 1m41s     / 3m2s     % 4m18s
<Hunar> python: + 15s     - 16s     * 2m31s     / 1m0s     % 37s
<Hunar> java: + 7s     - 9s     * 1m13s     / 37s     % 45s
<Hunar> Only * was faster in pil than python, everywhere else pil was worse
<Regenaxer> yes, but a few percent
<Regenaxer> How did you test in pil?
<Hunar> What is the technical reason? How can Python be faster :) It's always slower in every situation (for me at least)
<Regenaxer> Pil uses cells, not a linear array
<Regenaxer> So it is not optimal for bignums
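[Editor's note: a toy illustration of the cells-vs-array point — when each digit of a bignum lives in its own linked node, arithmetic chases pointers instead of scanning a contiguous buffer. This is a conceptual sketch only, not pil's actual cell layout or digit width.]

```python
# Toy cell-based bignum: a chain of nodes, one limb per node,
# least significant limb first. Conceptual sketch, not pil's layout.

LIMB_BITS = 30
MASK = (1 << LIMB_BITS) - 1

class Cell:
    __slots__ = ("limb", "next")
    def __init__(self, limb, nxt=None):
        self.limb, self.next = limb, nxt

def to_cells(n):
    """Split a non-negative int into a chain of LIMB_BITS-wide cells."""
    head = Cell(n & MASK)
    cur, n = head, n >> LIMB_BITS
    while n:
        cur.next = Cell(n & MASK)
        cur, n = cur.next, n >> LIMB_BITS
    return head

def add_cells(x, y):
    """Add two cell chains, carrying limb by limb; returns a plain int
    for easy verification. Note every step dereferences node pointers."""
    result = carry = shift = 0
    while x or y or carry:
        s = (x.limb if x else 0) + (y.limb if y else 0) + carry
        result |= (s & MASK) << shift
        carry = s >> LIMB_BITS
        shift += LIMB_BITS
        x = x.next if x else None
        y = y.next if y else None
    return result
```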
<Hunar> my test was like this http://ix.io/3PRL
<Regenaxer> ok
<Hunar> I once tested some stuff on the bus just for fun.. I found out that (for N num) is faster than (do num) which is weird :/
<Regenaxer> yes, strange, 'for' also does some binding
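[Editor's note: measuring loop-construct overhead like this is easy to get wrong. A Python analogue of the method using `timeit`, comparing Python's own `for` against a `while` countdown — this only illustrates how to measure, not pil's `for`/`do` themselves.]

```python
import timeit

# Compare two loop constructs the way Hunar compared pil's
# (for N num) and (do num): time many runs of an empty-body loop.

def with_for(n):
    for _ in range(n):
        pass

def with_while(n):
    while n:
        n -= 1

for fn in (with_for, with_while):
    t = timeit.timeit(lambda: fn(100_000), number=100)
    print(fn.__name__, round(t, 4), "s")
```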
clacke has quit [Read error: Connection reset by peer]
Regenaxer has quit [Ping timeout: 256 seconds]
Regenaxer has joined #picolisp
Hunar has quit [Quit: Client closed]
aw- has joined #picolisp