jackdaniel changed the topic of #commonlisp to: Common Lisp, the #1=(programmable . #1#) programming language | Wiki: <https://www.cliki.net> | IRC Logs: <https://irclog.tymoon.eu/libera/%23commonlisp> | Cookbook: <https://lispcookbook.github.io/cl-cookbook> | Pastebin: <https://plaster.tymoon.eu/>
doyougnu- has joined #commonlisp
recordgroovy has joined #commonlisp
doyougnu- has quit [Remote host closed the connection]
xlarsx has joined #commonlisp
<doomduck> being new to this, what do people use for the regular functional list/sequence manipulation? I know there's mapcar, but I'm imagining things like stuff in LINQ with all the where/first/flat-map/etc. ... I mean I know how to write these myself, it's more like "is there a nice/popular library?" ... not exactly sure how to search for quicklisp stuff
doyougnu- has joined #commonlisp
<aeth> a lot of it winds up in LOOP
<aeth> Not always functional but e.g. (loop :for i :from 0 :below 20 :collect i) or (loop :for i :from 0 :below 20 :sum i)
<_death> series is an interesting library
SAL9000 has quit [Ping timeout: 265 seconds]
xlarsx has quit [Ping timeout: 268 seconds]
<fe[nl]ix> aeth: I compiled that on my laptop, so not Ubuntu 18.04 but I want to test the build
<fe[nl]ix> aeth: src/runtime/sbcl.extras is statically linked to libfixposix and openssl-1.1.1l with openSUSE patches
doyougnu has quit [Ping timeout: 265 seconds]
doyougnu- has quit [Ping timeout: 265 seconds]
<doomduck> aeth: is it recommended to learn loop instead of composable tools? it feels very ad hoc, but maybe that's just me being a noob; not sure if i'm a fan of a big magic macro
<verisimilitude> It's always preferable to use a standard function than some library, at least for me.
<verisimilitude> It's not fun to audit some code and see a single usage of some library where LOOP would've sufficed.
<fe[nl]ix> doomduck: learn loop because it's in the standard. its deficiencies won't matter until you want to write and maintain large amounts of code
<aeth> doomduck: The big messy monoliths of LOOP and FORMAT don't have to be composable, but they can be used that way.
rurtty has quit [Quit: Leaving]
rurtty has joined #commonlisp
<aeth> FORMAT can take in an arbitrary stream, while LOOP can be used as I just used it to return values
<aeth> So you can mix-and-match them with other things even though some people probably (ab)use them as an all-in-one solution
<aeth> for a toy example that probably isn't useful: (defun mapeven (function list) (loop :for item :in list :for evenp := t :then (not evenp) :when evenp :collect (funcall function item)))
<doomduck> hmm fair points, okay, I'll sacrifice a few braincells to LOOP :<
<aeth> (mapeven #'/ '(1 2 3 4 5)) => (1 1/3 1/5)
<aeth> note as I defined it, it's by even NTH index, not by even elements
<aeth> if you're writing composable functions, you're probably defining them from fairly simple LOOPs
<verisimilitude> (defun mapeven (function list)
<verisimilitude> (loop :for item :in list :by 'cddr :collecting (funcall function item)))
<verisimilitude> Oh, I missed an improvement.
<aeth> I guess mapodd is harder :-p
<verisimilitude> (DEFUN MAPEVEN (FUNCTION LIST)
<verisimilitude> (LOOP :FOR ELT :IN LIST :BY 'CDDR :COLLECTING (FUNCALL FUNCTION ELT)))
<verisimilitude> (DEFUN MAPODD (FUNCTION LIST) (MAPEVEN (CDR LIST)))
<verisimilitude> Oh, again I made a mistake.
<verisimilitude> (DEFUN MAPODD (FUNCTION LIST) (MAPEVEN FUNCTION (CDR LIST)))
dra_ has quit [Ping timeout: 264 seconds]
xlarsx has joined #commonlisp
<doomduck> speaking of this, how do I do something like (and ,@(list t t nil)) but not inside a macro? like I have something that gives me a list of truthy things, and I just want to `and` them, and I can't (apply and things) because it's not a function
* edgar-rft now wants an Apple Mapod
<aeth> (defun mapfizzbuzz (function list) (loop :for item :in list :for i :from 1 :for fizz := (zerop (mod i 3)) :for buzz := (zerop (mod i 5)) :for result := (funcall function item) :when fizz :collect result :into fizzes :when buzz :collect result :into buzzes :unless (or fizz buzz) :collect result :into rest :finally (return (values fizzes buzzes rest))))
<aeth> (mapfizzbuzz #'identity (loop :for i :from 1 :to 15 :collect i))
<verisimilitude> Use EVERY, doomduck.
<doomduck> oh nice, thanks!
xlarsx has quit [Ping timeout: 252 seconds]
<doomduck> can I somehow do that in a loop so I don't have to do (every p (loop for x in xs collect (thing x)))?
<doomduck> I guess that's kinda a silly question considering I literally just asked how to get a "more composable loop" lol
<doomduck> but now that I have loop already I'm tempted to use it :P
<verisimilitude> (LOOP :FOR ELT :IN LIST :THEREIS (THING ELT))
<aeth> and for completeness
<aeth> (mapfizzbuzz (lambda (x) (when (zerop (mod x 3)) (format t "Fizz")) (when (zerop (mod x 5)) (format t "Buzz")) (unless (or (zerop (mod x 3)) (zerop (mod x 5))) (format t "~D" x)) (write-char #\Space) x) (loop :for i :from 1 :to 15 :collect i))
Oladon has quit [Quit: Leaving.]
<doomduck> verisimilitude: hmm I'm not sure if that does the same thing, this (loop for x in '(1 2 3 4 5) thereis (progn (print x) (evenp x))) just prints 1 2 and returns T, even tho clearly not evenp for every element
<verisimilitude> It's important to be aware of THEREIS, regardless.
<verisimilitude> If only even numbers should be printed, then use an IF.
pjb has joined #commonlisp
<doomduck> oh what I want is basically (every evenp '(1 2 3))
<doomduck> I mean (every #'evenp '(1 2 3))
<doomduck> except in my case the list is generated by a loop, hence my question how to include that in the loop to avoid nesting
<doomduck> ha found it! (loop for x in '(1 2 3 4 5) always (progn (print x) (evenp x)))
<doomduck> the printing is irrelevant, I just added it to better understand wtf it's doing :D
<doomduck> I guess the very nice thing about using `always` in place of `every` is avoiding the ugly LAMBDA
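A quick side-by-side of the LOOP termination clauses that came up above and their sequence-function equivalents (all standard Common Lisp; a minimal sketch, not from the log itself):

```lisp
;; ALWAYS is the in-loop counterpart of EVERY: T only if the test
;; holds for every element, exiting early on the first failure.
(loop :for x :in '(2 4 6) :always (evenp x))   ; => T
(every #'evenp '(2 4 6))                       ; => T

;; THEREIS is the counterpart of SOME: it returns the first non-NIL
;; test result and stops, so it does NOT check every element --
;; which is why it gave the surprising result above.
(loop :for x :in '(1 2 3) :thereis (evenp x))  ; => T
(some #'evenp '(1 2 3))                        ; => T

;; NEVER is the counterpart of NOTANY.
(loop :for x :in '(1 3 5) :never (evenp x))    ; => T
```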
gateway2000 has quit [Remote host closed the connection]
<doomduck> I think I'm sold on LOOP now
<_death> (collect (mapping ((x (scan-range :from 1 :upto 15)) (fb (mask (scan-range :from 14 :by 15))) (f (mask (scan-range :from 2 :by 3))) (b (mask (scan-range :from 4 :by 5)))) (cond (fb "Fizzbuzz") (f "Fizz") (b "Buzz") (t x))))
<_death> maybe more series-y would be (collect (mapping ((x (scan-range :from 1 :upto 15)) (b (expand (mask (scan-range :from 4 :by 5)) (series "Buzz"))) (f (expand (mask (scan-range :from 2 :by 3)) (series "Fizz")))) (if (and f b) (concatenate 'string f b) (or f b x))))
gateway2000 has joined #commonlisp
yottabyte has quit [Quit: Connection closed for inactivity]
Lycurgus has joined #commonlisp
Lycurgus has quit [Quit: Exeunt juan@acm.org]
tyson2 has quit [Remote host closed the connection]
Devon has joined #commonlisp
<jcowan> doomduck: You can always define λ instead, if you think it's prettier
Devon has quit [Ping timeout: 268 seconds]
Josh_2 has quit [Ping timeout: 265 seconds]
goober has quit [Quit: WeeChat 3.6]
Oladon has joined #commonlisp
Devon has joined #commonlisp
rurtty has quit [Quit: Leaving]
dra_ has joined #commonlisp
hosk1 has joined #commonlisp
terrorjack has quit [Quit: The Lounge - https://thelounge.chat]
terrorjack has joined #commonlisp
hosk1 is now known as goober
Josh_2 has joined #commonlisp
eddof13 has joined #commonlisp
asarch has joined #commonlisp
causal has joined #commonlisp
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
eddof13 has joined #commonlisp
waleee has quit [Ping timeout: 248 seconds]
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
asarch has quit [Quit: Leaving]
xlarsx has joined #commonlisp
dra_ has quit [Remote host closed the connection]
bilegeek has joined #commonlisp
pjb has quit [Ping timeout: 260 seconds]
Oladon has quit [Quit: Leaving.]
eddof13 has joined #commonlisp
xlarsx has quit [Ping timeout: 252 seconds]
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<beach> So I guess we decided that (NTHCDR 0 234) => 234 is conforming behavior, yes?
<hayley> I missed that conversation, but it seems reasonable. The HyperSpec suggests no type checking is done in that situation.
<beach> It suggests that no type checking needs to be done, yes.
cosimone has joined #commonlisp
attila_lendvai has joined #commonlisp
xlarsx has joined #commonlisp
Cymew has joined #commonlisp
aartaka has joined #commonlisp
xlarsx has quit [Remote host closed the connection]
xlarsx has joined #commonlisp
aartaka has quit [Ping timeout: 246 seconds]
aartaka has joined #commonlisp
aartaka has quit [Ping timeout: 265 seconds]
aartaka has joined #commonlisp
skeemer has quit [Remote host closed the connection]
genpaku has quit [Read error: Connection reset by peer]
genpaku has joined #commonlisp
prokhor has joined #commonlisp
recordgroovy has quit [Ping timeout: 260 seconds]
recordgroovy has joined #commonlisp
xlarsx has quit [Remote host closed the connection]
aartaka has quit [Ping timeout: 265 seconds]
aartaka has joined #commonlisp
recordgroovy has quit [Ping timeout: 264 seconds]
recordgroovy has joined #commonlisp
attila_lendvai has quit [Quit: Leaving]
shka has joined #commonlisp
attila_lendvai has joined #commonlisp
bilegeek has quit [Quit: Leaving]
kg7ski- has joined #commonlisp
kg7ski has quit [Ping timeout: 240 seconds]
razetime has joined #commonlisp
xlarsx has joined #commonlisp
cosimone has quit [Remote host closed the connection]
cosimone has joined #commonlisp
lisp123 has joined #commonlisp
pjb has joined #commonlisp
xlarsx has quit [Remote host closed the connection]
xlarsx has joined #commonlisp
makomo has joined #commonlisp
xlarsx has quit [Ping timeout: 268 seconds]
chimp_ has joined #commonlisp
Psybur has quit [Ping timeout: 264 seconds]
MajorBiscuit has joined #commonlisp
_cymew_ has joined #commonlisp
aeth_ has joined #commonlisp
aeth has quit [Quit: ...]
xlarsx has joined #commonlisp
makomo has quit [Quit: WeeChat 3.5]
makomo_ has quit [Ping timeout: 248 seconds]
xlarsx has quit [Remote host closed the connection]
xlarsx has joined #commonlisp
xlarsx has quit [Ping timeout: 246 seconds]
aartaka has quit [Ping timeout: 264 seconds]
aartaka has joined #commonlisp
aartaka has quit [Ping timeout: 268 seconds]
aartaka has joined #commonlisp
aeth has joined #commonlisp
aeth_ has quit [Quit: ...]
xlarsx has joined #commonlisp
makomo_ has joined #commonlisp
jmdaemon has quit [Ping timeout: 250 seconds]
jeffrey has joined #commonlisp
King_julian has joined #commonlisp
epony has joined #commonlisp
verisimilitude has quit [Ping timeout: 268 seconds]
xlarsx has quit [Remote host closed the connection]
xlarsx has joined #commonlisp
King_julian has quit [Ping timeout: 248 seconds]
King_julian has joined #commonlisp
xlarsx has quit [Ping timeout: 265 seconds]
xlarsx has joined #commonlisp
<phantomics> What's the difference under the hood between doing C-c C-k to compile a file and doing C-x C-e on an expression in Emacs with Slime? When doing the former, the compiler will print all the available style warnings. When doing the latter, it will print much less, usually only the serious warnings.
<phantomics> I also find that C-x C-e is much faster to compile in certain cases than C-c C-k, even if I'm comparing compiling a file with just the one expression in it to doing a C-x C-e on that expression
zups has quit [Quit: WeeChat 3.6]
zups has joined #commonlisp
rurtty has joined #commonlisp
<phoe> I usually do C-c C-c to compile-and-load single forms
pve has joined #commonlisp
<_death> C-x C-e simply sends the string representation of the form to swank, which reads and evaluates it
<phoe> but I think the difference is that C-c C-c creates a new file under the hood, copypastes the single form into it, compile-and-loads it, C-c C-k does that with the whole file, and C-x C-e, what _death said
<_death> I tend to use C-M-x, which is like C-x C-e but has a special case for defvar.. I almost never use C-c C-k, and sometimes use C-c C-l which just loads the file
<phantomics> C-c C-c works well, thanks
<_death> I use C-c C-c only when optimizing stuff, to get the notes
<phantomics> Any idea why there would be a massive difference in the time taken to compile via sending a string to swank with C-x C-e and doing C-c C-c? A difference of 19 sec vs. 3 sec
<phantomics> C-x C-e is faster, to be specific
<_death> eval doesn't necessarily compile stuff, even on sbcl
<phantomics> Ok so it's loading some cached stuff that was already compiled
<phantomics> Although when I've made changes to stuff in the material to be compiled, I've never not had those changes manifest when doing C-x C-e
<_death> no, it interprets some simple cases
<lisp123> _death: whats the special defvar case for c-m-x?
<phantomics> So it interprets rather than compiling? I'm wondering whether doing a full compilation compiles many dependencies of the compiled code, hence the long time taken
<lisp123> phantomics: usually there are fasl files already created I thought, so the dependencies shouldn't recompile?
<phantomics> I would think so
<phantomics> Are there any recommended ways to profile compilation and find the hot spots?
<splittist> Any suggestions for interesting codebases using postmodern I could read to learn from?
<_death> phantomics: you can profile compilation the same way you profile ordinary code (say with sb-sprof)
<splittist> phantomics: the sbcl manual has a chapter on it: http://www.sbcl.org/manual/index.html#Profiling
<lisp123> _death: Interesting
<lisp123> Any CMUCL users still left?
<lisp123> Getting a linux soon, so thinking of migrating over
cosimone has quit [Ping timeout: 265 seconds]
xlarsx has quit [Remote host closed the connection]
xlarsx has joined #commonlisp
doyougnu has joined #commonlisp
xlarsx has quit [Remote host closed the connection]
<phantomics> Thanks _death and splittist, when I do (sb-sprof:with-profiling (:loop nil :show-progress t :max-samples 100 :report :flat) ...) it returns no samples and prints the message "No sampling progress; run too short, sampling frequency too low, inappropriate set of sampled threads, or possibly a profiler bug." This is even when the actual elapsed time is around 19sec
<phantomics> Maybe because the code generated by the compilation includes an (eval ...) form?
<_death> what does the "..." contain
<phantomics> It contains (april-load (with (:space numeric-lib-space)) (asdf:system-relative-pathname :april-lib.dfns.numeric "numeric.apl"))
<phantomics> This loads the numeric.apl file and compiles it using April
<_death> it should contain the code to compile the form that takes time to compile... like a call to compile-file or compile
<_death> april-load probably does work at compile-time
doyougnu has quit [Remote host closed the connection]
tyson2 has joined #commonlisp
lisp123 has quit [Remote host closed the connection]
<phantomics> Interestingly, the actual April compilation, as much as I can measure, is relatively fast even after deleting the relevant .fasl file. Compiling that numeric.apl file without running the compiled code takes only 606 cycles according to (time), there must be something wrong with that but the time I perceive is still under a second
<phantomics> When I just return the compiled code, no (eval) happens either, so I don't know why it measures so little activity, I know compiling all that APL code takes more than 606 cycles
<_death> first you probably don't want max-samples 100..
<_death> then, you may try both compiling and calling or loading
King_julian has quit [Remote host closed the connection]
<_death> the default sampling interval is 10ms, so 100 samples is 1s
aartaka has quit [Ping timeout: 268 seconds]
waleee has joined #commonlisp
King_julian has joined #commonlisp
aartaka has joined #commonlisp
<phantomics> max-samples of 100000 still returns 0 samples, whether I just compile or compile and load
<phantomics> Even up to 10 mil samples returns nothing, it doesn't take that long to compile, I perceive a time of about 19s to compile and load it but it's not reflected, really strange
<phoe> what if you time compilation and loading separately?
<phantomics> When I (time) the compilation, it was just taking 606 cycles, measured time was flat 0 of course
<_death> well, difficult to know the issue without more context.. like the full with-profiling form.. also note that the default sampling mode is :cpu .. maybe try with :time if you have some sleeps or weird I/O
<phoe> phantomics: does the issue still happen in the terminal, to rule out any emacs/slime/sly interaction?
<phantomics> I tried timing compilation and loading, and despite my perceived several seconds it tells me 144,628 cycles elapsed and 0 seconds
<phantomics> I'll try from terminal
<phantomics> Terminal performance is almost identical, looks like it makes no difference
<phoe> can you pastebin the terminal stuff?
<phantomics> Here: https://dpaste.com/8FNLB2SAA
<phantomics> Using :mode :time helped, though it's still weird. It returned just 33 samples with several seconds of time I perceived
<phoe> does april work at macroexpansion time?
<phoe> this will measure runtime performance of already macroexpanded code
<phoe> try (time (macroexpand `(april-load ...))) maybe
<phantomics> And 84% of the time was spent in "foreign function syscall", there's also some threading stuff in there so I think what's measured is from lparallel
<phantomics> Yes, (april) is a macro, that's probably it
eddof13 has joined #commonlisp
<phoe> then you need to measure macroexpansion time
<phantomics> That's the ticket, when I measure (macroexpand `(april-load ...)) with the compile-only option, I get 0.316 seconds of real time, 1,190,337,574 processor cycles. That's more like it
<phoe> goodie
<phantomics> But it's still weird... when I do sb-sprof:with-profiling of the macroexpansion, it's very fast, even if I'm doing compile and run
<phoe> how do you call it?
<phantomics> The macroexpansion takes 0.6 sec, and the compiler outputs some useful info, but when I compile and run without macroexpanding, the profiler returns nothing and I perceive ~19sec
<phantomics> I'll paste
<_death> I already mentioned that you need a call to compile or compile-file..
<_death> like (compile nil '(lambda () (april-load ...)))
<_death> or if you want to measure both compilation and loading, (funcall (compile ...))
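Spelling out _death's suggestion: a sketch assuming SBCL with sb-sprof available, with a hypothetical cheap form standing in for the actual april-load call from the discussion:

```lisp
(require :sb-sprof)

;; Run COMPILE inside the profiler so the samples cover the work done
;; at compile time, not just the (possibly fast) runtime of the result.
;; FUNCALL on the compiled function additionally measures "loading"/running it.
(sb-sprof:with-profiling (:mode :cpu :report :flat)
  ;; stand-in for the real slow-to-compile form
  (funcall (compile nil '(lambda () (loop :repeat 1000000 :sum 1)))))
```

Without the explicit COMPILE, the form is macroexpanded and processed before the profiler's body runs, which is consistent with the zero-sample results reported above.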
<phantomics> Ok, trying that...
jeosol has quit [Quit: Client closed]
<phantomics> Ok, that's better, 3664 samples
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<phantomics> Lots of time spent in gethash, puthash, sb-c::walk... sounds like that's what I need to look at
<phantomics> Thanks _death and phoe
atgreen has quit [Ping timeout: 246 seconds]
cosimone has joined #commonlisp
<jcowan> minion: memo to lisp123: when C-M-x is asked to evaluate a defvar, it overrides the normal behavior of ignoring any changes to it, and treats it like defparameter or setf.
<minion> Remembered. I'll tell lisp123 when he/she/it next speaks.
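For reference, the DEFVAR behavior the memo refers to (standard Common Lisp semantics, independent of the Emacs override):

```lisp
(defvar *x* 1)
(defvar *x* 2)        ; *x* is already bound: the new initial value is ignored
*x*                   ; => 1

(defparameter *y* 1)
(defparameter *y* 2)  ; DEFPARAMETER always re-assigns
*y*                   ; => 2
```

C-M-x treats a re-evaluated DEFVAR like the DEFPARAMETER case, which is handy during interactive development.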
yottabyte has joined #commonlisp
King_julian has quit [Ping timeout: 250 seconds]
King_julian has joined #commonlisp
thuna` has joined #commonlisp
doyougnu has joined #commonlisp
razetime has quit [Ping timeout: 265 seconds]
atgreen has joined #commonlisp
razetime has joined #commonlisp
szkl has joined #commonlisp
doyougnu has quit [Remote host closed the connection]
doyougnu has joined #commonlisp
cosimone` has joined #commonlisp
cosimone has quit [Ping timeout: 268 seconds]
cosimone` has quit [Ping timeout: 265 seconds]
King_julian has quit [Remote host closed the connection]
aartaka has quit [Ping timeout: 250 seconds]
aartaka has joined #commonlisp
Lycurgus has joined #commonlisp
eddof13 has joined #commonlisp
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
townsfolkPravda has joined #commonlisp
waleee has quit [Ping timeout: 250 seconds]
gxt has quit [Remote host closed the connection]
gxt has joined #commonlisp
eddof13 has joined #commonlisp
random-nick has joined #commonlisp
fittestbits has joined #commonlisp
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
gxt has quit [Ping timeout: 258 seconds]
gxt has joined #commonlisp
fittestbits has left #commonlisp [#commonlisp]
razetime has quit [Ping timeout: 248 seconds]
razetime has joined #commonlisp
irc_user has joined #commonlisp
gxt has quit [Ping timeout: 258 seconds]
gxt has joined #commonlisp
Lycurgus has quit [Quit: Exeunt juan@acm.org]
eddof13 has joined #commonlisp
Catie has joined #commonlisp
razetime has quit [Ping timeout: 252 seconds]
Cymew has quit [Ping timeout: 246 seconds]
razetime has joined #commonlisp
aartaka has quit [Ping timeout: 268 seconds]
razetime has quit [Ping timeout: 268 seconds]
tyson2 has quit [Remote host closed the connection]
razetime has joined #commonlisp
cosimone has joined #commonlisp
dorem has joined #commonlisp
<dorem> hi ! is there some way to load a system with ASDF without compiling, only evaluating the expressions? I mean if one is using a CL implementation with an interpreter ?
<beach> Why do you care about the difference?
<beach> Even a system with no compiler must supply a COMPILE-FILE function. What it does is then not compile to native code, of course.
<dorem> because I want to get all the code interpreted, not compiled
<beach> And why would you want that?
<dorem> to analyze the interpreted code internals
<beach> Also, what implementation are you using? Some implementations don't even have an interpreter.
<Bike> what internals?
<dorem> beach: ECL
<dorem> Bike: how are interpreted functions made and its parts
<dorem> in runtime
razetime has quit [Ping timeout: 252 seconds]
<beach> I know ECL has a bytecode interpreter, but I don't know whether it has a direct interpreter. Maybe jackdaniel can help us.
<Bike> you're probably better off reading the manual for that. ECL's manual has a whole section on the interpreter.
thuna` has quit [Quit: brb]
<Bike> unless ecl has some kind of usable interface for inspecting interpreted code, i don't think interpreting code is going to tell you much about how the interpreter works
<dorem> Bike: ok, I'm doing that but is it possible to configure ASDF to do that in general, in case I want to do the same for other implementations with an interpreter?
razetime has joined #commonlisp
<jackdaniel> you may call disassemble on a bytecompiled function
<jackdaniel> but mind that the code compiled to bytecode is still compiled (i.e. not interpreted); the bytecodes compiler performs a so-called "minimal compilation"
<Bike> asdf is flexible enough that you can probably get it to use LOAD instead of COMPILE-FILE, but i don't know if there's any way to do that built in
<Bike> if i really wanted to look at interpreted functions i'd probably eschew looking at whole systems anyway, and just eval or load smaller things myself
<jackdaniel> if you want to enforce using bytecodes compiler, then do (ext:install-bytecodes-compiler)
<beach> ASDF would have to use some implementation-specific functionality to make sure the code is not compiled, because there is no standard functionality to control that.
<jackdaniel> and if you are interested in the implementation of both the compiler and the interpreter of bytecodes, check out files src/c/compiler.d and src/d/interpreter.d
<Bike> so i guess load-source-op?
<beach> dorem: You could also ask here for general principles of interpretation vs compilation.
<dorem> Bike: that's nice! thanks a lot!
<jackdaniel> I think that they want to see how ecl's bytecodes interpreter works, that's why they have confused interpretation and compilation (it is just a guess though)
razetime_ has joined #commonlisp
razetime has quit [Ping timeout: 265 seconds]
<beach> dorem: But there is no guarantee that load-source-op will create interpreted code.
<beach> As far as I know, ASDF then just uses LOAD on the source, but that can still result in compilation.
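The load-source-op route mentioned above, as a sketch (`:my-system` is a placeholder; as beach notes, whether the loaded code ends up interpreted is still implementation-dependent):

```lisp
;; Load a system from source via LOAD, skipping COMPILE-FILE.
(asdf:operate 'asdf:load-source-op :my-system)

;; On ECL, to avoid native compilation entirely, switch to the
;; bytecodes compiler first (per jackdaniel above):
;; (ext:install-bytecodes-compiler)
```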
<dorem> thanks a lot Bike, jackdaniel, beach! I think that link is what I was looking for, let's see how much interpreted code can create for my projects ;)
<dorem> *it can create
<beach> Speaking of which, does the standard require minimal compilation even for interpreted code?
<jackdaniel> afair it doesn't
<beach> Do you have a reference for that?
<jackdaniel> but I don't have a reference
<beach> Ah.
<beach> Thanks. I think that supports your suggestion.
<Bike> requiring minimal compilation would pretty much rule out using a conventional interpreter, which seems like the kind of restriction the standard usually avoids
<jackdaniel> sure
karlosz has joined #commonlisp
dorem has left #commonlisp [#commonlisp]
waleee has joined #commonlisp
frgo has quit []
frgo has joined #commonlisp
tyson2 has joined #commonlisp
makomo_ has quit [Ping timeout: 268 seconds]
karlosz has quit [Quit: karlosz]
karlosz has joined #commonlisp
waleee has quit [Ping timeout: 250 seconds]
MajorBiscuit has quit [Ping timeout: 264 seconds]
cage has joined #commonlisp
razetime_ has left #commonlisp [https://quassel-irc.org - Chat comfortably. Anywhere.]
Dynom_ has joined #commonlisp
makomo has joined #commonlisp
Dynom_ is now known as Guest6660
<Josh_2> Good Morning :sunglasses:
makomo_ has joined #commonlisp
aartaka has joined #commonlisp
irc_user has quit [Quit: Connection closed for inactivity]
tyson2 has quit [Remote host closed the connection]
tyson2 has joined #commonlisp
Lycurgus has joined #commonlisp
doyougnu has quit [Ping timeout: 252 seconds]
tyson2 has quit [Remote host closed the connection]
doyougnu has joined #commonlisp
tyson2 has joined #commonlisp
Everything has joined #commonlisp
chimp_ has quit [Ping timeout: 265 seconds]
karlosz has quit [Quit: karlosz]
karlosz has joined #commonlisp
jeosol has joined #commonlisp
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
tyson2 has quit [Remote host closed the connection]
eddof13 has joined #commonlisp
cosimone has quit [Ping timeout: 252 seconds]
Lycurgus has quit [Quit: Exeunt juan@acm.org]
Lord_of_Life_ has joined #commonlisp
Lord_of_Life has quit [Ping timeout: 260 seconds]
Lord_of_Life_ is now known as Lord_of_Life
son0p has quit [Ping timeout: 268 seconds]
ec has quit [Remote host closed the connection]
<doomduck> are there any API docs that are more modern than the hyperspec? something searchable and written for human consumption
<doomduck> the hyperspec gives me PTSD flashbacks from reading the C++ spec lol
yottabyte has quit [Quit: Connection closed for inactivity]
<jackdaniel> minion: tell doomduck about pcl
<minion> doomduck: direct your attention towards pcl: pcl-book: "Practical Common Lisp", an introduction to Common Lisp by Peter Seibel, available at http://www.gigamonkeys.com/book/ and in dead-tree form from Apress (as of 11 April 2005).
<jackdaniel> minion: tell doomduck about paip
<minion> doomduck: paip: Paradigms of Artificial Intelligence Programming. More about Common Lisp than Artificial Intelligence. Now freely available at https://github.com/norvig/paip-lisp
<jackdaniel> these are not api docs, but they are very good books that teach common lisp
<doomduck> oh I am reading PCL
<doomduck> I just wanted something like https://docs.rs/ where I can type a thing and get search results quickly
<jackdaniel> hyperspec is basically a copy of the standard - standards usually are dense
<jackdaniel> try l1sp.org
<jackdaniel> mind that the second letter is a digit "one"
<doomduck> hmm, that does the search part, but still links to the hyperspec :D
<jackdaniel> and a few other places, yes
<jackdaniel> also implementations have their own documentation that often supplements the standard and documents extensions
<doomduck> hmm, to be specific I'm just trying to figure out how to use LOOP with the #3d-vectors package, as in trying to iterate between two vectors by adding a third one
cage has quit [Remote host closed the connection]
<doomduck> something like (loop for x from (vec 0 0) to (vec 10 0) by (lambda (i) (v+ i (vec 1 0))))
<doomduck> but ofc this doesn't work
barrybridgens_ has joined #commonlisp
<jackdaniel> beats me, maybe 3dvectors has its own documentation
<doomduck> it does, but I don't see any looping in there, I guess I'll just write this as "loop/break" or whatever the mechanism is for looping manually
<doomduck> the thing is I can't iterate with a number because I'm basically moving along a vector direction until a condition stops being true
<jackdaniel> you may always loop over indexes; (loop for x from 0 to 10 by (lambda () (v+ x i-dont-really-understand-this-part)))
<jackdaniel> (loop for x from 0 below 10 until condition do (whatever x))
<jackdaniel> etc etc
<doomduck> yeah, tho the problem is I don't know how many iterations, I need to do something like (loop until (something) do (...))
<jackdaniel> that's not a problem, (something) is a very valid test form given that the function is defined
<doomduck> ohhh lol, I didn't realize UNTIL was something LOOP actually implements
<jackdaniel> loop and format are arguably the most messy operators in common lisp
verisimilitude has joined #commonlisp
_cymew_ has quit [Ping timeout: 265 seconds]
<doomduck> loop in particular feels very very overengineered for a "simple composable" language hehe
<doomduck> but I can see the utility in learning it, tho I'm not sure _how much_ one should actually learn to not create too arcane things
<jackdaniel> it is basically a domain specific language for iteration
<jackdaniel> it often comes handy; for more arcane things I'm sometimes building DO-SOMETHING macros on top of either DO or LOOP (whichever is better)
<doomduck> what I find confusing about it the most is the inconsistency with other things, as in (let ((x 3)) ...) but then (loop with x = 3 ...), I'd kinda expect (loop with (x 3) ...)
<jackdaniel> many people praise iterate, there were also a few other libraries proposed (like for) - said libraries are meant to be extensible
<jackdaniel> (n.b loop is often extensible in practice, but standard doesn't give any provisions towards that goal)
<phoe> doomduck: loop is more like english than it is like lisp
aartaka has quit [Ping timeout: 268 seconds]
<aeth> the problem I have with iterate is that it isn't loop and people know loop
recordgroovy has quit [Quit: leaving]
aartaka has joined #commonlisp
<doomduck> hmm I've seen iterate in the cookbook but so far skipped it since it feels like I should probably learn enough loop to hate it first
<aeth> you could fix loop mostly just by adding a bunch of parentheses, e.g. (do-loop (:with x := 3) ...)
<jackdaniel> as I've said, LOOP is a domain specific language; if you want something more "natural" from s-expression perspective then you may use the macro DO
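The same iteration written both ways, to show what the DO alternative looks like next to LOOP (standard Common Lisp; just a sketch):

```lisp
;; LOOP version: English-like keyword clauses
(loop :with acc := '()
      :for i :from 0 :below 5
      :do (push (* i i) acc)
      :finally (return (nreverse acc)))  ; => (0 1 4 9 16)

;; DO version: bindings, step forms, and the end test are all
;; explicit s-expressions, closer to LET in shape
(do ((i 0 (1+ i))
     (acc '() (cons (* i i) acc)))
    ((= i 5) (nreverse acc)))            ; => (0 1 4 9 16)
```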
<aeth> the only things that wouldn't trivially switch into plist tails in a "Lispified loop" are the ones that require multiple words
<aeth> the compound prepositions
<aeth> e.g. being the hash-value of
cosimone has joined #commonlisp
doyougnu has quit [Read error: Connection reset by peer]
barrybridgens_ has quit [Quit: Leaving]
<_death> doomduck: one should learn CL as much as one can
pjb has quit [Read error: Connection reset by peer]
rogersm has joined #commonlisp
pjb has joined #commonlisp
<pjb> doomduck: (loop :for x := (vec 0 0) :then (v+ x (vec 1 0)) :until (equalp x (vec 10 0)) :do …)
<doomduck> _death: I guess what I wanted to say "is this one of those things where knowing too much is detrimental?" ... reminds me of my CS studies and spending all this time learning about algos that I'd never want to use later because they're too fancy to be used
<_death> doomduck: can't say I understand such an attitude.. then again my "studies" were always informal
<jackdaniel> there is a quote which says (more or less, paraphrasing) that you can learn lisp in three days, but only if you do not know C beforehand - in that case it takes three years
Everything has quit [Quit: leaving]
<aeth> pjb: in practice it would probably look more like this: (loop :for v :of-type vec2 := (vec2 0f0 0f0) :then (vec+-into! v v (vec2 1f0 0f0)) :until (vec= v (vec2 10f0 0f0)) :do (print v))
rogersm has quit [Quit: Leaving...]
<aeth> since something like a vec+/v+ would allocate a new one while something like a vec+-into! wouldn't. Also vec= would run = on each item, assuming matching types
<pjb> Well, in practice if you don't modify more than one coordinate of the vector, you'd just loop on it, to avoid the vec= test.
<_death> or like this (loop for v = #C(0 0) then (+ v #C(1 0)) until (= v #C(10 0)) ...)
<aeth> well, yes, that's probably more efficient, but only works on vec2
<pjb> eg. more something like (loop :for x :from 0 to 10 collect (vector x 0)) #| --> (#(0 0) #(1 0) #(2 0) #(3 0) #(4 0) #(5 0) #(6 0) #(7 0) #(8 0) #(9 0) #(10 0)) |#
<aeth> in practice, vec3 is the most common
waleee has joined #commonlisp
<aeth> pjb: yes, that too
<pjb> Even computing x y and z in the loop and building the vector just before needed might be more efficient.
<doomduck> _death: I'm not sure if it's an attitude thing ... I was excited when I was learning it, it's now later that I'm realizing it was wasted time, because the things are too intricate to be ever used in any context
<aeth> doomduck: a lot of undergrad stuff is just learning-to-learn, though
<doomduck> aeth: the tricky part with this I guess is (vec= ...), tho I'm not sure how this behaves exactly with vec, but since the fields are floats, I'd assume it could just loop forever?
<doomduck> aeth: yeah, I did masters and phd too tho :D undergrad was very useful, the rest was not
aartaka has quit [Ping timeout: 265 seconds]
<doomduck> does 3d-vectors use this?
<aeth> doomduck: Custom generics on types rather than classes (which permits dispatch on arrays and numbers that aren't necessarily their own classes, e.g. single-float arrays of length 2)
<aeth> it's kind of slow, though... needs more optimization
<aeth> Since vector code cares about performance they might just give up being generic entirely.
<aeth> You kind of see that in my use of vec2 instead of vec
<aeth> doomduck: idk what 3d-vectors does, I was just using my own vector library, which atm is only single-float (since making it also work on double-float requires using a macro to basically double everything... doable, but it's work)
<aeth> I'd probably just call them dvecs and only be somewhat generic
<doomduck> ah okay, I guess I'm still a little unsure about how I want to do things, and being a bit careful with comparisons since I have int based coords stored in vec's, but then I compare them with this lol (defun vclose (a b) (< (vdistance a b) 0.1)) ... 0.1 being "good enough" since I use it for ints only
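[editor's note] A sketch of one way around the epsilon threshold for integer-valued coordinates (`vx`/`vy` are hypothetical accessors, not from any particular library):

```lisp
;; Hypothetical sketch: when the stored floats are really integers,
;; round to exact integers and compare with =, instead of a
;; "close enough" distance cutoff.
(defun grid= (a b)
  (and (= (round (vx a)) (round (vx b)))
       (= (round (vy a)) (round (vy b)))))
```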
<aeth> int vecs are so uncommon I don't bother with those and most libraries probably wouldn't, either
<aeth> they're nastier
<doomduck> I'm making a grid based game and constantly converting from pixel to grid coords, so things being vectors kinda makes it easier I guess, but it's definitely not optimal --- mainly doing it because it keeps things concise
<aeth> integers are either bignums (no optimizations) or fixnums of varying sizes (which isn't quite true because some implementations also optimize ([un]signed-byte 64) in places even though those aren't fixnum)
<aeth> But you have to be careful (e.g. LOGAND or MOD) to keep them within that size. And you can't really do most meaningful vector operations on them because things like SQRT will just turn them into floats anyway. And things like / will turn them into rationals. So you'd basically need to implement your own (slow) fixed point math or something.
<aeth> so vector libraries don't bother with it.
<aeth> Plus, oh, wow, that type explosion would be awful. [un]signed-byte 1,8,16,32,64 and maybe (implementation-specific) fixnum, too (including perhaps positive-fixnums).
<doomduck> heh this is scary stuff
<aeth> so they may be VECTORs but they're not VECs (which is what most people call mathematical, rather than programming, vectors)
<aeth> well, graphics/shader APIs probably still support that sort of thing
<aeth> Plus, do you wrap them (LOGAND/MOD) or do you clamp them (MAX and MIN)?
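[editor's note] The two strategies aeth names, sketched concretely for an (unsigned-byte 8):

```lisp
;; Sketch: keeping a result inside (unsigned-byte 8).
;; Wrapping discards high bits; clamping pins to the range ends.
(defun wrap-u8 (x)
  (logand x #xFF))          ; e.g. 300 -> 44

(defun clamp-u8 (x)
  (max 0 (min x 255)))      ; e.g. 300 -> 255
```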
<_death> doomduck: I guess we could go on about "wasted time" (a perspective that may shift as life happens; not easy for humans to fully appreciate intervals of their lives, etc.) or "used in any context" (may be an exaggeration, could be just not useful to you; things that were historically thought useless or impractical turned out differently later on; some people place value on such things either way, etc.) but it'd be offtopic.. away for a bit
<doomduck> aeth: I guess the upside at least for me is that in the context of graphics/games a lot of this simplifies to "my values need to be small anyway"
Oladon has joined #commonlisp
shka has quit [Read error: Connection reset by peer]
<aeth> yes, a trap that many people go down is to write, for instance, a library that properly solves all of the issues I described. So now you have a vector math library on [un]signed-byte 1,8,16,32,64 and fixnum and positive-fixnum, complete with your solutions for operations that might otherwise turn it into another type and a configurable way to wrap (MOD/LOGAND) or clamp (MAX and MIN), etc. etc. etc.
<aeth> Forgetting to actually make the application.
<aeth> And then you might change your requirements and use double-floats or single-floats anyway.
<aeth> or not need 95% of the vector operations you implemented
<aeth> (my own vector math library doesn't do much... I don't need it to)
<aeth> (in fact, it's not a separate library and I don't want other people to use it... this allows me to completely change what it does to suit my own needs without breaking compatibility)
<aeth> I think there are more vector math libraries than engines/frameworks because that's where most people stop.
<aeth> For me, though, it was slightly different. As I said, I kept my vector stuff minimal. Unfortunately, I put too much effort into an entity component system with an elaborate series of macros to query it. So that's what I have instead of a game engine. Although tbf it really does affect almost all of the code of the engine so I probably would've needed it, anyway, so my real issue is one of time.
Guest6660 has quit [Quit: WeeChat 3.6]
SAL9000 has joined #commonlisp
aartaka has joined #commonlisp
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
aartaka has quit [Ping timeout: 260 seconds]
<doomduck> yeah I usually don't even attempt to make something a library, tho I do like having a solid foundation to build on, even if it has fewer features - but it's tricky to find that in ways that are both "easy to use" and "actually extensible" --- having tried quite a few engines/languages so far it always ends up not being very nice, but most languages also struggle with extensibility
<doomduck> at the language level I mean, like for example extension methods in C# or extension traits in Rust help a lot, but when it's a language that doesn't have any ways to extend the syntax it becomes super annoying to do anything
eddof13 has joined #commonlisp
<Josh_2> Well
<Josh_2> guess what language lets you extend the syntax easily? :sunglasses:
<aeth> to be fair, CL doesn't really let you extend +, -, /, *, etc.
<aeth> but on the other hand, they're namespaced so it's not as bad as it could be
<jackdaniel> to be fair + in CL are symbols (i.e not part of the syntax), so you may simply shadow them
cosimone has quit [Remote host closed the connection]
<phoe> just go the same way you would with the extensible sequence protocol
<phoe> (defclass quaternion (number) ...)
<aeth> I'd say that the real issue here is that specialization-store is the only real way to implement a vec+ or vec:+ and it's third party, one person's effort, and not particularly fast if you don't DECLARE everything
aartaka has joined #commonlisp
karlosz has quit [Quit: karlosz]
son0p has joined #commonlisp
aartaka has quit [Ping timeout: 265 seconds]
aartaka has joined #commonlisp
<aeth> The other issue that you'll probably run into with vector-related code is the lack of an "array-of-structs" data structure that can look like, say, [ svec3 | dvec4 | bit || svec3 | dvec4 | bit || ... ]
<aeth> You can make three parallel arrays or, if the types match, you can make a 2D array (even if some of the rows are a bit too large since the array row length has to match)
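[editor's note] A minimal sketch of the two portable workarounds aeth describes, for records of the shape [ x | y | flag ] (field names made up for illustration):

```lisp
;; 1. Parallel arrays: one specialized array per field,
;;    so each field keeps its unboxed element type.
(defstruct records
  (xs    (make-array 100 :element-type 'single-float))
  (ys    (make-array 100 :element-type 'single-float))
  (flags (make-array 100 :element-type 'bit)))

;; 2. A 2D array, usable only when every field has the same type
;;    (rows padded to the widest record if needed).
(defparameter *rows* (make-array '(100 2) :element-type 'single-float))
(defun row-y (i) (aref *rows* i 1))
```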
<phoe> or, eww, use foreign memory for that and do your own typing
phantomics has quit [Ping timeout: 252 seconds]
<phoe> (if you're lucky with cffi maybe even not do your own typing)
<aeth> seems inconvenient when a lot of people (maybe even most?) don't even use CFFI arrays anymore since you can use static-vectors or a similar library to make bilingual arrays.
<aeth> but that afaik doesn't help with 2D arrays (thus forcing e.g. matrices to be 1D if you want to feed them to GPUs) or structs (while if an array-of-structs solution was sufficiently robust, that gets you a bilingual CFFI data structure, too... just use length 1 and potentially have some nonsense "padding" slots)
<doomduck> aeth: you mean tightly packed array-of-structs right? since doing this with a regular CL struct just means heap allocated objects?
<pjb> given that lisp has object identity for all its values, it doesn't make much sense to allocate structs in array slots.
<pjb> You could end up with a lot of garbage retained for just a slot or two in an array!
<pjb> Instead, use lisp abstraction tools to implement your own arrays as you wish them.
<pjb> ie. don't use aref, use myref, etc.
<aeth> pjb: fill-pointer
<aeth> except it wouldn't be an array so you wouldn't use AREF... it would just be another sequence type that's similar to an array with a fill-pointer that may or may not be adjustable and that also may or may not be pinned (so you could give a pointer to the CFFI)
<aeth> doomduck: correct... either for efficiently porting over code from other languages, or for creating something that the CFFI can use
<aeth> I don't know Fortran, but I wouldn't be surprised if such a thing could greatly speed up Fortran->CL compilation, for instance.
<Shinmera> doomduck: 3d-vectors works just fine even if your stuff is mostly integers rather than floats.
<Shinmera> I started on a rewrite at some point that would support integer based vectors as well, but that was years ago and idk when I'll have time.
Oladon has quit [Quit: Leaving.]
<aeth> pjb: it wouldn't be too hard to implement your own array types on top of this low-level abstraction, but it's basically impossible to (efficiently) implement this on top of CL arrays (although you could probably abuse structs and make each "slot" an index... or even translate "array-of-structs" to the definitely-doable-in-CL "structs-of-arrays" transparently, but that wouldn't help with CFFI and might be
<_death> fortran is all about parallel arrays
<aeth> worse in performance)
phantomics has joined #commonlisp
thawes has quit [Quit: Konversation terminated!]
<aeth> _death: but is it all about parallel arrays in that it's the only way to do it (like in CL) or that it's the preferred way to do it?
dra has joined #commonlisp
<aeth> at any rate, Fortran->CL is the past (but maxima is a significant user of CL)... more efficient communication with graphics APIs in the data formats that they expect would be the main reason, I'd think
<_death> aeth: it's the way preferred by some (most? all? :).. check out http://web.archive.org/web/20220923021052/http://www.the-adam.com/adam/rantrave/st02.pdf
<aeth> Any kind of low level data structures would have to be aimed at interoperability, such as networking, parsing/writing binary data files, and sending things to graphics drivers (even a LispOS would have to send data in the formats that GPUs actually expect).
attila_lendvai has quit [Ping timeout: 268 seconds]
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jeffrey has quit [Ping timeout: 265 seconds]
son0p has quit [Remote host closed the connection]
molson has quit [Remote host closed the connection]
aartaka has quit [Ping timeout: 252 seconds]
aartaka has joined #commonlisp
random-nick has quit [Ping timeout: 265 seconds]
atgreen has quit [Ping timeout: 260 seconds]
White_Flame has quit [Remote host closed the connection]
Lord_of_Life_ has joined #commonlisp
White_Flame has joined #commonlisp
aartaka has quit [Ping timeout: 265 seconds]
Lord_of_Life has quit [Ping timeout: 265 seconds]
Lord_of_Life_ is now known as Lord_of_Life
zyni-moe has joined #commonlisp
aartaka has joined #commonlisp
zyni-moe has quit [Client Quit]
townsfolkPravda has quit [Quit: townsfolkPravda]
aartaka has quit [Ping timeout: 265 seconds]
<verisimilitude> Trying to write efficient code in Common Lisp is mostly a waste of time, beyond using proper data structures and algorithms along with some minor declarations. Most people seem to pretend writing efficient SBCL code be the same thing, but it's not.
<hayley> It helps that CL does not have a cost model, so there is no portable sense of what code is fast. But this arguably is true regardless of language; anyone could make a contrived machine taking the piss out of your model.
<verisimilitude> Yes.
<verisimilitude> Most languages have some basic ideas of cost, however, whereas Common Lisp really has nothing.
<verisimilitude> ``Lisp programmers know the value of everything and the cost of nothing.''
pve has quit [Quit: leaving]
<aeth> verisimilitude: hardware's hardware
<verisimilitude> Software's software.
<doomduck> does the runtime pause the world while EVAL-ing things in the REPL? at least from what I understand the REPL runs things in a separate thread, but I don't do anything in a thread-safe way?
<phoe> doomduck: there's no pausing
<doomduck> like, how come I can just eval things while my game runs and nothing goes poof?
<aeth> something that's efficient on SBCL probably can be efficient on VSMLTDCL if you took the effort to write it
<aeth> might not be the best way to write it for some exotic architecture, though, if such things exist in the future
<phoe> doomduck: very late binding and good language design
<phoe> verisimilitude: in practice one can disregard most of what you said and write optimized lisp code anyway
<phoe> for a given combination of compiler and architecture, sure, but it's not really different from the situation in the C world
<doomduck> phoe: huh, so the C/++ problem of "touch a variable while another thread is in the middle of reading a few of its bits" doesn't exist in CL?
<doomduck> as in are variables read atomically?
<phoe> the more man-years you put in a compiler, the more efficient it becomes
<phoe> doomduck: it exists, and you still need to design threading in a meaningful way
<hayley> Depends on your definition of "bits".
<phoe> but most common things like "replace a function binding" or "replace a variable binding" are atomic enough to work out of the box
<aeth> importantly, you can write optimized numerical code that will more or less work as expected if the implementation is actually using the hardware representation of numbers
<aeth> because there aren't really that many
<hayley> There isn't a memory model for CL, but consensus is that if you read a place, you aren't going to see "tearing": you'll see the entirety of some value that was stored.
<aeth> sure, CL floats aren't specified to be IEEE floats, but in practice they almost certainly are because that's what the hardware uses so you're just going to make your implementation both harder to write and slower if you don't use it
son0p has joined #commonlisp
<phoe> a bit more complex stuff like redefining classes might actually make a tiny pause in all threads afaik, but it's still smooth enough to not be noticeable during development
<hayley> This property does not hold for reading multiple places; then you need some other way to get atomicity (locking, transactions, a clever lock-free algorithm, etc).
<aeth> and so what if your code breaks on CLISP if it runs on the other 10 implementations
<phoe> (and you don't usually redefine classes in ready software, unless you're doing magic)
<aeth> you can't write fully portable code unless you want to limit your arrays to 1024 in size
<phoe> nowadays the whole "cost of nothing" adage stuff is, in its usefulness, as good as "lisp is slow because it's interpreted", "lisp is slow because it's garbage collected", "lisp is dead" and what else; practice simply treats all these words as non-existent and does its own stuff
<phoe> maybe it held some truth some time ago, but the world has moved on, hardware has gotten a bit better, compilers got much better
<phoe> and it's not written like Common LISP anymore
<aeth> it's funny because by 1990's standards everyone's like "lol isn't CL so slow and high level wow"
<aeth> but by 2022's standards, performance is a huge reason why people choose CL vs similar languages that usually don't focus on performance
<phoe> it's funny because by 90s standards CL standard library had batteries included
<phoe> but yeah, the world has moved on
<aeth> As much as I enjoy CL, I could've settled on a bunch of similar or not-so-similar languages that all had various performance issues, in part because they're more opinionated
<aeth> The main thing CL has going for it is performance
<verisimilitude> In the C language world, there are only two compilers; Common Lisp is nominally different.
<phoe> ah yes, borland c++ and tcc
<verisimilitude> Be not obtuse.
<phoe> apologies, you must have meant msvc and compcert
<_death> djgpp and codewarrior!
<hayley> Zeta-C and LLVM on Graal?
<aeth> C/C++ have three. Each major desktop OS has its favorite C/C++ compiler. (And, yes, C++ fans hate when people say C/C++, but in this case, they really are paired.)
<verisimilitude> I actually check against ARRAY-DIMENSION-LIMIT and ARRAY-TOTAL-SIZE-LIMIT in my code, aeth.
<phoe> ...because surely it isn't intel C compiler and watcom c++, is it?
<aeth> CL is actually really good for portability in part because you don't have to deal with both cross-OS and cross-implementation issues at the same time, unlike C or C++ (where Windows has MSVC++, Apple has Clang, and Linux usually uses GCC)
<verisimilitude> This, understandably, makes writing Common Lisp not much fun in some ways.
<aeth> I write code that works in 64-bit SBCL. Then I test to see if it's broken in 64-bit ECL and 64-bit CCL. The latter tends to break more things because it requires wrapping DEFCONSTANT in EVAL-WHEN when a macro is using a constant.
<phoe> but yeah, "In the C language world, there are only two compilers" is probably the most silly thing I've read today
<verisimilitude> Trying to use DEFCONSTANT at all is a pain.
<aeth> If someone wants to use another implementation, oh well. I tried, but they're already all broken at that point, before my code even comes into play. Some dependency-of-dependency did it. And I actually work to minimize my dependencies, so most people are probably in a worse situation.
<verisimilitude> Oh, my mistake.
<verisimilitude> In the C language world, there are only two compilers that matter, GCC and Clang; proprietary derivatives of Clang count not.
<aeth> lol
<aeth> There might be a desktop OS with > 90% market share that disagrees with that statement.
<verisimilitude> The context is Free Software development.
<verisimilitude> Anyway, the point is writing code that only works in SBCL is negligent.
<phoe> so is code that only works in gcc
<aeth> Even RMS/GNU made sure their stuff ran on proprietary UNIX systems of the time. That was how they became popular in the first place. Some UNIX vendors decided to sell the C compiler separately, and UNIX without a C compiler is near useless... but people could just get GCC
<phoe> ever tried to build the linux kernel in the early '10s or something?
<aeth> Most free software can run on Windows.
<verisimilitude> All C language code is shit, phoe.
<phoe> at times I think you seriously must be trolling
<phoe> good night
<verisimilitude> The Linux kernel is a monument to convention, and people only respect it because they know not what software should be.
atgreen has joined #commonlisp
<aeth> verisimilitude: I didn't say that I wrote code that only works in SBCL. I said that I write code in 64-bit SBCL and then test to see if it is broken in (64-bit) ECL and CCL, and if so, I fix it. Trying to support 32-bit systems in 2022, or trying to support every implementation (especially CLISP, which seems to be particularly different/difficult), etc., is pointless. Finite time. If someone really
<verisimilitude> It's millions of lines of code, and yet has no serious memory exhaustion strategy.
<aeth> cares, they can submit a patch.
eddof13 has joined #commonlisp
<aeth> I don't care if an (unsigned-byte 32) is a bignum whose arrays are T in a 32-bit or 16-bit CL, or just in a particularly unoptimized CL (since the standard only mandates character, simple-character, and bit... and de facto implementations need to add octets and just those).
<aeth> Because that's a thing of the past, and given enough time, nobody will care.
<aeth> Optimizing under the assumption that (unsigned-byte 32) is a subset of fixnum is perfectly acceptable.
<aeth> And it probably won't break, it will probably just cons like mad. Because that's how CL optimizations tend to work.
<verisimilitude> It's possible to write code that needn't operate under assumptions.
<aeth> Yes. Or I can write code that's faster on the 64-bit machines (ARM, x86-64, RISC-V, whatever) that 99.9% of users use and if you are still on 32-bit, oh well
<aeth> (or 16-bit! The standard makes it perfectly possible to cater to 16-bit CL implementations!)
<verisimilitude> This is why we have MOST-POSITIVE-FIXNUM.
<aeth> If I have an (unsigned-byte 32) I have an (unsigned-byte 32). It is not a positive-fixnum. It is an (unsigned-byte 32)
<verisimilitude> Yes.
atgreen has quit [Read error: Connection reset by peer]
<verisimilitude> I agree that fretting over that particular aspect of an implementation isn't necessarily useful.
<aeth> That (unsigned-byte 32) oriented code will behave the same on every implementation. It's just that the one that 0.1% of users use will treat it as a bignum and cons like mad. Oh well.
<verisimilitude> That has nothing to do with correctness, however.
<aeth> Meanwhile, I get to keep something as an (unsigned-byte 32) simply by (mod x (expt 2 32)) or (logand x (1- (expt 2 32))) or by using MAX/MIN. Will this be optimized? Who knows. Can it be optimized? Yes.
<aeth> I essentially never need to use MOST-POSITIVE-FIXNUM because when I have to deal with an integer of finite size I have something more specific to work with
<verisimilitude> We seem to be agreeing in general on this, so let's end it here, I suppose.
<aeth> in practice, people who use MOST-POSITIVE-FIXNUM are probably actually looking for something else like ARRAY-TOTAL-SIZE-LIMIT and probably want a type like ALEXANDRIA:ARRAY-INDEX instead of ALEXANDRIA:POSITIVE-FIXNUM. They're just erroneously assuming that a positive-fixnum is the same thing as an array-index or at least is close enough.
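[editor's note] A rough sketch of the distinction aeth draws (these are approximations, not the exact ALEXANDRIA expansions):

```lisp
;; Approximate shapes of the two types being contrasted:
(deftype my-array-index ()
  `(integer 0 (,array-dimension-limit)))      ; what AREF can actually accept
(deftype my-positive-fixnum ()
  `(integer 1 ,most-positive-fixnum))         ; an implementation detail

;; An index declaration should name what the value is for:
(defun safe-ref (vec i)
  (declare (type simple-vector vec)
           (type my-array-index i))
  (aref vec i))
```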
<aeth> Fixnum is an implementation detail and is not particularly meaningful at all.
<aeth> At least, if you write correct code that implementations are free to optimize (or not) rather than writing for an implementation.
<aeth> Implementation-specific features are trickier, but those are usually used indirectly via some portability library that runs on more than one.
<verisimilitude> It's better to use the type (INTEGER ...) for the true range, yes.
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<verisimilitude> On a different note, I've never been able to contact the MOCL maintainers for a question. Has anyone?
<aeth> I like (unsigned-byte ...) instead of (integer ...) because that's easier to efficiently keep in range. If the implementation cares.
<verisimilitude> I usually use UNSIGNED-BYTE as well, yes.
<aeth> Besides, programmers usually want you to round to the nearest power of 2
<aeth> If they see 1024, they think you did your homework. If they see 1000, they wonder why you picked an arbitrary limit.
<verisimilitude> Ideally, because that was the real limit.
<aeth> At least for games, I usually reach a limit empirically. "This slows things down too much." Obviously won't really apply in 10-20 years, but by that point you can just update the constant or whatever.
<verisimilitude> To me, Common Lisp occupies an uncomfortable spot between APL and Ada.
<verisimilitude> It's not as pleasant as APL, but I care not about APL's performance, and it's nowhere near as practical as Ada, where performance matters greatly.
gxt has quit [Remote host closed the connection]
<verisimilitude> Every time I write something in all three, the APL is most concise, the Ada most practical, and all the Common Lisp shows me is an unhappy medium between them.
<aeth> I can definitely see the Ada in CL's type system and vice versa
gxt has joined #commonlisp
<verisimilitude> The APL is written for beauty and nothing else, the Ada is still beautiful in its own way, but the Common Lisp ends up with paralysis from portability concerns or other issues.
<aeth> But the problem with Ada is that people can actually make money selling Ada compilers and tools, so the best Ada is probably not an Ada that I can use... whereas people don't really care about commercial CLs so unless you need GUI integration, they don't really present the best experience compared to, say, SBCL
<verisimilitude> Ada, amusingly, has the nicest arrays of the three.
<verisimilitude> The best Common Lisp implementations are obviously the proprietary ones. I don't use them either, but it's clear their implementations don't randomly break at the least, right? I don't even have a working M-. and at least I could bitch about it to a real business.
<aeth> some random Emacs library must've broke my M-., too, but I haven't bothered going through it to fix it yet
<verisimilitude> The main Ada business is Adacore, which maintains a GCC Ada frontend.
<aeth> But GNU Emacs is the thing that's problematic, not the FOSS implementations. SBCL is better than the commercial implementations in most aspects. CCL probably is, too.
<verisimilitude> I'm willing to use Free Software because I care, but I'm not willing to trick myself into believing it to be the best software.
<verisimilitude> The best software is written by businesses or the government.
<aeth> Commercial languages tend to go for feature checklists to appeal to enterprises, rather than for things like raw performance.
<aeth> If you need an item on the checklist (and it's actually implemented well instead of just to check a box), then maybe it's worth it. Otherwise, not really.
<verisimilitude> This is because random volunteers understandably can't be expected to build anything, let alone something that works.
<aeth> And in CL, which is so extensible, it doesn't seem that important at all.
<verisimilitude> Here, I'll link to something about this.
dra_ has joined #commonlisp
<aeth> In practice, one motivated individual can do the work of 10+ just-there-for-a-paycheck programmers. And FOSS also has the advantage that requirements are based on wants/needs closer to the programmer rather than whatever arbitrary requirements and goals management thinks up.
<verisimilitude> Why is most open source shit, then?
<aeth> Because most software is shit, but you can't see inside the closed source
<verisimilitude> That touches on a nice aspect, however.
<aeth> e.g. game publishers would rather force people to install rootkit-level spyware anticheat than actually write proper netcode for game engines even though the right way to write that was solved in the late '90s
<aeth> no FOSS game is going to do that, though
<verisimilitude> The requirements closer to the programmer are irrelevant, as compared to the user.
dra has quit [Ping timeout: 268 seconds]
<aeth> (and FOSS games tend to be rare because of the art... programmers are used to giving things away for free, but artists are used to people demanding free work to try to take advantage of them, so even hobby artists usually demand money)
<verisimilitude> What's the Linux kernel? It's a shitty reimplementation of part of a shitty operating system from the 1970s. What a jewel.
<verisimilitude> Unfortunately, we long ago reached the point at which most software needn't actually work, so long as it be sold.
<verisimilitude> If open source is for programmers, then why are their systems so shitty for any programming work whatsoever?
<aeth> the contrary
<aeth> anything programmer-facing (infrastructure, tools, etc.) is usually high quality and competitive
<verisimilitude> I like the article's explanation: graphomania.
<aeth> anything that's not for programmers (Libre Office, Gimp, etc.) tends to be awful
<aeth> the main exceptions are usually things that started proprietary (Firefox, Blender, etc.)
<verisimilitude> Firefox is a particularly bad piece of software.
<verisimilitude> LibreOffice is another piece of software originally written by a real business.
<verisimilitude> LibreOffice, at the least, does what I expect of it.
molson has joined #commonlisp
<aeth> (a->b) <=/=> (b->a)
<verisimilitude> Yes, yes.
<aeth> LibreOffice is shit, despite its origins. Although it doesn't have an easy time because its task is basically to open every Microsoft Office file ever for every version ever
<aeth> Firefox similarly has a difficult domain. Probably harder. This makes it hard for competitors to pop up.
<verisimilitude> It helps that I rarely need to use LibreOffice.
<verisimilitude> I use Firefox every day and it has glaring flaws.
<aeth> well, that doesn't help LibreOffice improve. The programmer type would prefer to use Markdown or LaTeX
<verisimilitude> Anyway, all computing, including most Lisp, needs to be destroyed for any real improvement to occur.
<aeth> Firefox's flaw is that it arose in a situation where it had to compete with IE6 and it could do that. It offered nice QOL improvements like tabs (even though it wasn't the first).
<aeth> However, it now has to go up against giants that don't play fair, to a much worse extent than IE6
<aeth> e.g. you can't ship proper Firefox on iOS at all, while Google Chrome has Google pushing it on some of the most popular web properties around, as well as Google's giant warchest to pay OEMs to install it with the other Windows crapware. As well as having it have an Android version that ships with Android.
<aeth> Plus, both Apple and Google can pay a lot of money to develop their browsers, while Microsoft's whole thing with IE6 was to let it stagnate on purpose
<aeth> it would probably be easier to replace the web with something else than it would be to make a from-scratch CL web browser
eddof13 has joined #commonlisp
<verisimilitude> I agree.
<aeth> This also suggests why it's folly to try to "destroy computing" as your programming goal... because Microsoft, Apple, and Google are the main driving forces behind today's programming in the first place, so why would they let you come along with a new platform?
<verisimilitude> The WWW is irredeemable garbage, from the beginning.
<verisimilitude> The issue is that it's currently infeasible to leave at all.
<verisimilitude> Just bootstrapping most hardware is an insurmountable challenge.
<aeth> I like graphics because you can largely bypass the OS with OpenGL/Vulkan. It's not really important at all at that point. The same thing can run on any OS, on any platform. There are only so many ways to do input/etc. and the language and a portability library like SDL can handle it. And then Vulkan runs on almost anything.
<aeth> This is similar to why the web became popular, except the web has gotten awful over the past 10 years.
<aeth> Sure, a bunch of domain-specific monopolies can do a bunch of internal-promotion-oriented design instead of actually driving computer forward, but that just means you can pick another domain.
<aeth> s/computer forward/computing forward/
<aeth> Hopefully Meta doesn't wind up owning "the metaverse" or that goes away, though.