beneroth changed the topic of #picolisp to: PicoLisp language | The scalpel of software development | Channel Log: https://libera.irclog.whitequark.org/picolisp | Check www.picolisp.com for more information
rob_w has joined #picolisp
<tankf33der> hi all
<tankf33der> abu[7]: how to load the content of variable mike into register r4 on ppc64?
<abu[7]> Hi tankf33der! Let me check my old ppc64 sources
<tankf33der> thank you.
<abu[7]> I think it is "ld 4 mike"
<abu[7]> I don't find any example code
<abu[7]> ie no old ppc64 build
<tankf33der> ~ # clang -c code.s && objdump -d code.o
<tankf33der> code.s:7:1: error: too few operands for instruction
<tankf33der> ld 4, fn
<tankf33der> never mind, i will google anyway
<abu[7]> perhaps it is "ld 4 mike@got(2)"
<abu[7]> not sure
<abu[7]> depends on the ABI
<abu[7]> got(2) is the offset in the global offset table
<abu[7]> iirc all global accesses are relative to that
<abu[7]> (position-independency)
<Nistur> mornin' all
<abu[7]> Hi Nistur!
<Nistur> hello!
<Nistur> just getting my soldering iron out to replace the control board in my physical penti keyboard :P
<abu[7]> 👍
<Nistur> I have to rewrite the firmware for it, because I used a non-standard Arduino before, which worked great... until it didn't. I can't get it now, so I got a better Arduino... and the code isn't compatible
<DKordic> beneroth: Comparing functions is equivalent (perhaps nothing more than different wording) to universal quantification.
<Nistur> it loooooks like the only thing I really need to rewrite is the debounce code
<DKordic> Hi everyone.
<abu[7]> Hi DKordic!
<Nistur> hullo DKordic
<beneroth> Good Morning all :)
<abu[7]> Cheers beneroth!
<beneroth> DKordic, well, data can be viewed as functions, so everything can be viewed as functions
<beneroth> abu[7], when in a piece of code (e.g. a sequence/cascade of validations, (and ...)) you can use either @ or a variable name (e.g. Var), do you use Var or @? Is it a question of style (maybe depending on what's more readable in the specific case), or would @ give some minuscule performance benefit because it might be in a register / the CPU cache, or is it purely a style question?
<DKordic> beneroth: I don't follow. Generally speaking (as You did) that is _exactly_ Combinators.
<beneroth> it's a question about what you do and what you recommend doing as a habit. Even if it makes a difference for interpretation speed, the difference would hardly matter except maybe accumulated over a long program, I'd think.
<abu[7]> I use *always* @ if possible
<beneroth> DKordic, yeah. I think I agree with your earlier statement, but I'm not sure if you mean it as "fascinating property" or "makes a difference in application".
<beneroth> abu[7], that's a statement :)
<abu[7]> :)
<beneroth> why exactly? :D
<abu[7]> It is faster and smaller
<abu[7]> '@' is bound anyway in a function
<beneroth> it's surely faster during reading, I know.
<abu[7]> aka 'use'
<beneroth> ah is it?
<beneroth> ah makes sense
<abu[7]> reading but also execution time
<beneroth> so dynamic lookup is always ensured to be short/fast
<beneroth> thats the point?
<abu[7]> T
<beneroth> grokked. thanks!
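A minimal sketch of the style just discussed (findCustomer, authorized? and process are hypothetical names): each condition in the and-chain leaves its result in '@', so no explicit binding is needed for the intermediate value.
   # Hypothetical validation cascade: each condition's result lands in '@'
   (de handleKey (Key)
      (and
         (findCustomer Key)      # hypothetical: returns an object or NIL
         (authorized? @)         # hypothetical: returns that object when allowed
         (process @) ) )         # '@' is still the object (returned by 'authorized?')

   # Roughly equivalent, but with an extra binding frame on the stack:
   (de handleKey2 (Key)
      (let Obj (findCustomer Key)
         (and Obj (authorized? Obj) (process Obj)) ) )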
<abu[7]> 'let' is expensive
<DKordic> As an excellent example, True, False, 1 and (+ 1) are _Unknown_, therefore we must agree which Parameter they are.
<beneroth> yeah, that would be another question. I often have cases where I have to bind/initialize maybe 1-2 variables plus several others which I initialize to NIL but wouldn't need to, e.g. @-pattern variables for match. When do you switch from a single let to a let + use?
<abu[7]> perhaps 2
<abu[7]> One can be at the end of let
<abu[7]> (let (A 1 B 2 C)
<beneroth> abu[7], so you're saying the cost of let is not only its presence, but that the length of the binding list also quickly makes a (small but real) difference?
<beneroth> yeah right, implicit NIL - faster to read and smaller
<abu[7]> Yes, binding structures on the stack
<beneroth> ok, then I'll try to improve my habits regarding @ and let/use :))
<abu[7]> and unbinding in the end
<abu[7]> :)
<DKordic> Excellent question. Fascinating property is the first symptom of madness ;) . It is very practical first of all because it could be a clear statement of the problem.
<beneroth> abu[7], thanks for your insights, I appreciate this very much. My understanding of the pil VM is not deep enough to answer this myself, and benchmarking it is also quite a tricky hassle :D
<abu[7]> No worries :)
<beneroth> DKordic, I don't think madness, just more focus on research (gaining knowledge) than application (using knowledge/information models to gain an advantage in reaching your goals, aka a slight advantage in outcompeting other life forms in the struggle for neg-entropy)
<beneroth> abu[7] :)
<beneroth> DKordic, an optimal strategy (judged over time X) requires a balance of both activities. But where the balance lies is a quite hard and use-case-specific optimization problem.
<beneroth> abu[7], one example I have is a (let (S 'initial-state-for-status <many NIL initializations>) ...), which I will rewrite into a (use (...) (let S 'initial-state-for-status ...))
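A sketch of that rewrite (handle, @Head and @Tail are made-up names): 'use' only saves and restores the pattern variables which 'match' will set anyway, while 'let' binds just the one value that actually needs initialization.
   # Before: a single 'let', where most bindings are only initialized to NIL
   (de handle (Lst)
      (let (S 'init  @Head NIL  @Tail NIL)
         (when (match '(@Head stop @Tail) Lst)
            (setq S (cons @Head @Tail)) )
         S ) )

   # After: 'use' merely saves/restores the pattern variables ('match' sets
   # them itself), and 'let' binds only the status variable
   (de handle2 (Lst)
      (use (@Head @Tail)
         (let S 'init
            (when (match '(@Head stop @Tail) Lst)
               (setq S (cons @Head @Tail)) )
            S ) ) )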
<DKordic> Good point. I will need some time for further examples.
<abu[7]> beneroth, good
<beneroth> DKordic, both can be mixed up and be hard to differentiate without many samples and a bigger picture :D
<beneroth> abu[7] :)
<beneroth> abu[7], am I correct in thinking that the length of the symbols used in (state) doesn't matter during state evaluation, only during reading (and of course re-using already-interned symbols saves memory anyway)?
<abu[7]> Exactly
<beneroth> abu[7], the question is basically: state does (==) and not (=), right?
<abu[7]> T
<abu[7]> = would compare names
<beneroth> T
<beneroth> and with = the length of names obviously makes a difference; shorter names will be faster to compare, especially if multiple candidates share the same prefix
<abu[7]> yeah
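A small REPL-style illustration of that difference (the symbol names are arbitrary): '==' checks pointer identity, which is what 'state' relies on, while '=' compares transient symbols name by name, which is where the length matters.
   : (setq A "running"  B (pack "runn" "ing"))  # same name, two distinct transient symbols
   -> "running"
   : (== A B)          # pointer identity, independent of name length
   -> NIL
   : (= A B)           # compares the names character by character
   -> T
   : (== 'running 'running)   # interned symbols are unique, so identity holds
   -> T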
<beneroth> btw. I've found the biggest cause for slowing down my picolisp programs (especially server programs when replying to a client request)
<beneroth> it's the logging, to stdout and/or a file. Commenting out / removing logging statements easily makes a very big difference.
<abu[7]> I/O in general
<beneroth> ofc the logging to a file could be optimized by using a fd from (open) instead of (out 'str ...), but there will still be some cost, especially when multiple processes use the same file
<beneroth> abu[7] T
<beneroth> I mean, it's obvious, but I was still quite surprised to see how big the difference is when you measure it. With extensive logging, processing one request costs hundreds of milliseconds; without it, tens.
<beneroth> that compounds into differences which really matter if one cares about performance (both server<->client, but more importantly usability, the apparent reaction times of the software for a human user)
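A minimal sketch of the optimization mentioned above (file name and function names are hypothetical; whether the descriptor appends to an existing file or needs explicit positioning depends on (open)'s mode, see the reference): keep one descriptor per process instead of re-opening the log file for every line.
   # Open the log file once per process and keep the descriptor
   (setq *LogFd (open "app.log"))

   (de log (Msg)
      (out *LogFd                    # redirect to the already-open fd
         (prinl (stamp) " [" *Pid "] " Msg) ) )

   # versus opening (and closing) the file for every single line:
   (de logSlow (Msg)
      (out "+app.log"                # '+' appends, but opens/closes each time
         (prinl (stamp) " [" *Pid "] " Msg) ) )

   # NB: with several processes writing to the same file, short single-line
   # writes (and possibly locking, e.g. via 'ctl') are still advisable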
<Nistur> ok, new controller board wired up. buttons just outputting numbers 1-6 for now: 123456
<Nistur> now I have to rewrite the firmware
<Nistur> ok, don't have the chords working fully yet... but I can type: sein
<Nistur> so... that's something :D
<Nistur> that's most of the code working again though
<Nistur> oh, wait, I know what the problem is: I just need to detect the rising edge of the buttons. I'm not triggering the keyboard print when a button is released, I'm triggering it every tick
<Nistur> woops
rob_w has quit [Remote host closed the connection]
gahr_ has joined #picolisp
<beneroth> DKordic, the first link, LMDB: I think this is a solution which in practice hardly matters; the problem they state usually doesn't exist in real applications. Other databases do practically the same or have other ways to mitigate that issue. E.g. pilDB handles this A) by partly ignoring it and trusting the filesystem cache to solve it, which it usually does (and then a solution like LMDB is just additional bloat and additional overhead with
<beneroth> corresponding performance impacts), and B) pilDB has quite optimized caching anyway, solely because of the implicit lazy-loading and active event-driven cache invalidation
gahr has quit [Ping timeout: 260 seconds]
<beneroth> DKordic, the second link is not really about performance but about concurrency when logging, on top of a performance-optimized logging implementation. And it solves this by just adding some more context information to the logged data. So no difference to how this works in picolisp; it's a use-case-specific problem with a very obvious solution. They just provide a library for Python, where in Lisp the programmer is more likely to implement the
<beneroth> essentially same solution by hand (because the implementation costs are lower in Lisp, things are more often solved ad hoc with a direct implementation instead of having a prevalence of libraries)
<beneroth> DKordic, the core of the issue I described is related to the things in the first link, but their solution will not make a difference in usual practice, because what it does is already done (in a somewhat more indirect, worse way, but with less overhead) by the filesystem anyway.
<beneroth> DKordic, my understanding is that the issue I described (the performance cost of logging) is primarily the actions to open/flush/close the file, plus the checks to ensure that the current process is allowed to write to the file and that no other process is writing to it right now (concurrency, locking). Both links do nothing about that.
z4k4ri4 has quit [Quit: WeeChat 4.5.1]
<beneroth> DKordic, the mainstream way to solve issues in practice (performance issues, or any issues, including the need for features/functionality - now even more so with LLMs, which make this much more accessible/searchable) is just to replace libraries/software components until something sticks. My preferred approach is to gain a deeper understanding of the specific problem and hence maybe make a more informed pick of a solution. I guess my approach has more overhead
<beneroth> in the general case (it requires more time spent on research and learning) but fewer risks (a lower probability of picking solutions which come with possibly-improbable but extreme disadvantages, black swans), so I think my strategy is more stable and reliable in the long term.
z4k4ri4 has joined #picolisp
<beneroth> both strategies obviously work and YMMV, it's also a matter of taste and preference - then again, a dev who doesn't care/mind about not having control doesn't use picolisp; picolisp is the antithesis of their preferences (nowadays that is called "vibe coding" xD)
<beneroth> essential vibe coding example: how it started: https://xcancel.com/leojr94_/status/1900767509621674109 and how it is going: https://xcancel.com/leojr94_/status/1901560276488511759 (twitter links)
corecheckno has quit [Remote host closed the connection]
corecheckno has joined #picolisp
corecheckno has quit [Remote host closed the connection]
078AA6DO6 has joined #picolisp
047ABD9UT has joined #picolisp
ello_ has quit [Ping timeout: 252 seconds]
ello has quit [Ping timeout: 265 seconds]
<DKordic> I am quite certain ""AI"" is _a_ W/Rorschach_test . I think You only provided more evidence. It is an insult on the level of Reddit, Facebook, and whatever. I will have no part in spreading their... Brain-Rot.
<DKordic> Ed Zitron (Better Offline @ YouTube) calls it Rot-Com. bigboxSWE @ YouTube was absolutely right: ""tech twitter is [...]"".
<DKordic> Twitter is worse than I thought.
<beneroth> DKordic, oh yes, Twitter is. But I still read selected knowledgeable people on there (using proxies like nitter.net or xcancel.com). I don't read tech people there though.
<beneroth> DKordic, about the "AI" or more specifically LLM field and hype - yep, that is the main and largely correct picture, I think so too. Though I must say I actually used it for some research questions (where I know that material and discussions are available in great quantity) and was surprised how well it works; for a limited subset of questions and use cases it really works very well as a natural-language search interface. I was surprised. But it is very
<beneroth> limited and easily misused and misinterpreted, and I don't have much use for it in my work. I think it brings more harm than good for most people and human society in general.
<beneroth> I also think we have largely reached the limit of the LLM algorithmic approach. There will be some more gains in minification/resource optimization, like the Chinese showed recently, and maybe we will see some more field-specific achievements based on better (training) data preprocessing (almost all of OpenAI's success is based on their superior data pre-processing, not on any "secret AI sauce"). But I do think that development has already reached a new
<beneroth> local maximum, and the next jump will need a combination/synthesis with very different algorithms/approaches (maybe by combining it with old-style symbolic reasoning and finding some new ways to deduce that from data - humans clearly have abilities there we have no clue about yet)
<beneroth> Using LLMs for coding is even now not working for anything bigger than a few thousand lines, depending on complexity even less; the "attention span" of the LLMs (transformer size/cache) is just too small. And in any case there are some inherent hard limits on the type of work this approach can achieve. Most likely this will only increase the demand for senior software devs, as the LLM coding hype is somewhat obstructing the already thin
<beneroth> pipeline/evolution path from beginner to senior.
<DKordic> I wonder whether ""AI"", like the JavaScript ""language"" that it is, wouldn't cause maximal damage were it any worse or better _at this moment_? It will be updated, and modernized if needed.
<beneroth> I think it will become worse before it gets better. The problem with AI is that it doesn't have to work well for people to believe that it does, and then they delegate work/decisions to it, with bad consequences.
<beneroth> it will certainly lead to worse javascript, haha :D
<beneroth> (worse uses of it)
<beneroth> The so-called "AI alignment" problem is also in essence the same problem as "human alignment" and "alignment and compatibility of big company/big money interests with human civilization"
<abu[7]> Sounds like Homoeopathy. Become worse before it gets better ;)
<DKordic> [ROFL] that is poetic.
<beneroth> nah. homeopathy is based on a much worse theory - the two main parts are A) the medieval belief that illness is caused by "evil miasma/fumes coming out of the ground" (basically leaking up from the Christian hell underground), and B) logical deductions based on the false axiom that "small amounts of something have the opposite effect of big amounts of the same, and it becomes more powerful the smaller the amount is" (leading to the use of diluted poisons to heal
<beneroth> something which has similar symptoms as the poison in larger quantities)
<beneroth> the irony is that homeopathy is older than science-based medicine, so compared to the beginnings of what became "western medicine" (which often started as: just remove blood until the illness goes away) it worked better, because that bloodletting and other weird ideas did more damage, while homeopathy ideally just had zero effect.
<beneroth> it is also a big irony that most people who like homeopathy hate vaccination, when vaccination is highly compatible with the concepts behind homeopathy and the modern founder of homeopathy was a fan of vaccination.
<beneroth> abu[7], do you know about the spread/use of homeopathy in Japan and Asia?
<beneroth> because I would imagine that homeopathic "globuli" (which are mostly or exclusively milk sugar)
<beneroth> are a bit more problematic for most Asian people compared to people with Indo-European roots
<beneroth> brb
beneroth has quit [Quit: Leaving]
<abu[7]> No idea (Asia)
<abu[7]> Masaso says there is none in Japan
beneroth has joined #picolisp
<beneroth> back
<beneroth> abu[7], I see, makes sense
<beneroth> greetings to her :)
<abu[7]> Thanks :)
abu[7] has left #picolisp [#picolisp]
abu[7] has joined #picolisp
<Nistur> sein rad flog zum pkw
<Nistur> success!