jackdaniel changed the topic of #commonlisp to: Common Lisp, the #1=(programmable . #1#) programming language | Wiki: <https://www.cliki.net> | IRC Logs: <https://irclog.tymoon.eu/libera/%23commonlisp> | Cookbook: <https://lispcookbook.github.io/cl-cookbook> | Pastebin: <https://plaster.tymoon.eu/> | News: ELS'22 this Monday (2022-03-21), see https://european-lisp-symposium.org
recordgroovy has joined #commonlisp
karlosz has quit [Ping timeout: 244 seconds]
recordgroovy has quit [Client Quit]
recordgroovy has joined #commonlisp
<recordgroovy> Hi, all. Does anyone have experience extending iolib's streams? I want a stream that buffers data and file descriptors together, and then `finish-output` would sendmsg them off in unison.
taiju has joined #commonlisp
mon_aaraj has quit [Ping timeout: 268 seconds]
mon_aaraj has joined #commonlisp
thuna` has quit [Ping timeout: 272 seconds]
karlosz has joined #commonlisp
karlosz has quit [Ping timeout: 260 seconds]
nij- has joined #commonlisp
Lord_Nightmare has quit [Quit: ZNC - http://znc.in]
Lord_Nightmare has joined #commonlisp
nij- has quit [Remote host closed the connection]
mon_aaraj has quit [Ping timeout: 240 seconds]
mon_aaraj has joined #commonlisp
pillton has joined #commonlisp
ebrasca has quit [Remote host closed the connection]
waleee has quit [Ping timeout: 272 seconds]
pranavats has left #commonlisp [Error from remote client]
karlosz has joined #commonlisp
karlosz has quit [Ping timeout: 268 seconds]
anticomputer has quit [Quit: quit]
mon_aaraj has quit [Ping timeout: 272 seconds]
anticomputer has joined #commonlisp
mon_aaraj has joined #commonlisp
brettgilio has quit [Remote host closed the connection]
brettgilio has joined #commonlisp
gxt has quit [Remote host closed the connection]
gateway2000 has quit [Quit: Leaving]
tyson2 has quit [Remote host closed the connection]
shka has joined #commonlisp
causal has joined #commonlisp
mon_aaraj has quit [Ping timeout: 268 seconds]
mon_aaraj has joined #commonlisp
<beach> seok-: Did you get enough information about how garbage collection works?
mon_aaraj has quit [Ping timeout: 268 seconds]
akoana has quit [Quit: leaving]
kg7ski has quit [Quit: ZNC 1.8.2 - https://znc.in]
kg7ski has joined #commonlisp
hineios8 has joined #commonlisp
hineios has quit [Ping timeout: 268 seconds]
hineios8 is now known as hineios
pranavats has joined #commonlisp
anticomputer has quit [Remote host closed the connection]
anticomputer has joined #commonlisp
taiju has quit [Ping timeout: 240 seconds]
attila_lendvai has joined #commonlisp
son0p has quit [Ping timeout: 268 seconds]
attila_lendvai has quit [Ping timeout: 272 seconds]
mon_aaraj has joined #commonlisp
contrapunctus has joined #commonlisp
aartaka has quit [Ping timeout: 268 seconds]
aartaka has joined #commonlisp
<SR-71> I came across this. Can someone suggest an elegant insertion sort implementation in CL?
<SR-71>
<beach> SR-71: Are you doing this for fun? I mean insertion sort has quadratic time complexity.
azimut has quit [Remote host closed the connection]
anticomputer has quit [Remote host closed the connection]
<beach> SR-71: Then, your SLICE function adds another linear factor it seems.
anticomputer has joined #commonlisp
<SR-71> I'm trying to write cleaner CL code.
<beach> SR-71: Even though it is horribly inefficient?
<SR-71> I know
azimut has joined #commonlisp
<beach> No, I mean, are you trying to write code that is clean yet still inefficient?
<beach> It is important in order to determine what advice to give.
aartaka has quit [Ping timeout: 276 seconds]
<SR-71> Well, I was using insertion sort on a linked list.
<SR-71> As an exercise.
aartaka has joined #commonlisp
<beach> OK, so then let's start with the simple stuff. (+ <mumble> 1) is better expressed as (1+ <mumble>).
<beach> It is an example of a general rule in programming: always use the most specific construct that will do the trick.
<beach> Then, it is a horrible idea to use recursion on a linear structure like a list.
* SR-71 nods.
<beach> And SLICE does what the built-in SUBSEQ does.
<beach> So you can remove SLICE.
<beach> Then, there is no reason to stick a newline after (COND. It just wastes vertical space if the clauses will fit on their respective lines anyway.
<SR-71> right
<beach> (= <mumble> 0) is better expressed as (zerop <mumble>)
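A tiny illustration of the point, with made-up variable names: both forms do the same thing, but the more specific operator states the intent up front.
  (if (zerop remaining) :done :keep-going)   ; most specific construct
  (if (= remaining 0)   :done :keep-going)   ; same meaning, less specific
  (1+ index)                                 ; preferred over (+ index 1)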
<beach> I am not sure what the code is doing, but insertion sort is implemented as a loop over the elements of a list, starting with an empty resulting list. In each iteration, you remove the first element of the original list and insert it in the resulting list in the right place. There is no reason to think about what numeric position an element has.
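A minimal sketch of the loop beach describes, assuming the elements are numbers compared with <; it is illustration only, not beach's code (his cons-reusing version is linked further down).
  (defun insert (item sorted)
    ;; Return a new sorted list with ITEM placed into the sorted list SORTED.
    (let ((head '())
          (tail sorted))
      (loop while (and tail (< (first tail) item))
            do (push (first tail) head)
               (pop tail))
      (revappend head (cons item tail))))

  (defun insertion-sort (list)
    ;; Loop over LIST, inserting each element into RESULT in the right place.
    (let ((result '()))
      (dolist (item list result)
        (setf result (insert item result)))))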
<SR-71> beach: What do you think abut this?
<beach> It looks very inefficient, and I have a hard time abstracting away performance in favor of elegance.
<SR-71> Hmmm
<beach> Who wrote it?
anticomputer has quit [Quit: quit]
<beach> Wait, isn't Rosetta supposed to measure performance at all?
anticomputer has joined #commonlisp
<SR-71> Who knows?
livoreno has quit [Read error: Connection reset by peer]
<SR-71> Then there is this:
azimut has quit [Remote host closed the connection]
azimut has joined #commonlisp
notzmv has quit [Ping timeout: 276 seconds]
<beach> That last one looks like it has the right algorithm, but it violates some style rules.
<SR-71> I think it'll take quite a while before I would be able to translate procedural implementations into cool lisp code.
<beach> I think the problem here is the definition of "cool". Insertion sort is already bad with its quadratic performance. So it is already not a "cool" sorting algorithm. Then it looks like your implementation adds another quadratic factor, making it O(n^4), which is totally not "cool". The second link you showed is probably cubic, because it splits the list and then uses APPEND, and that is also not "cool" in my book.
<SR-71> By cool, I meant efficient and elegant.
<SR-71> For which, I had been translating classic algorithms into lisp.
<beach> Then anything beyond quadratic is not "cool".
<SR-71> lol
<beach> Would you like me to try to write one for you?
<SR-71> Cool
<beach> I am a bit busy today, so I'll work for a few minutes and if I can't do it, it shall have to wait.
<SR-71> Nice
mon_aaraj has quit [Ping timeout: 276 seconds]
<beach> Nah, it shall have to wait. I need to start my chores. Maybe someone else can pitch in.
<beach> The thing is that it's a bit messy to reuse the CONS cells, which is what you want to do in order to avoid consing.
<SR-71> Yes
mon_aaraj has joined #commonlisp
<SR-71> I'm working on my own *horrible* version of insertion sort.
<SR-71> Lets see how it goes.
pranavats has left #commonlisp [Error from remote client]
azimut has quit [Remote host closed the connection]
anticomputer has quit [Read error: Connection reset by peer]
anticomputer has joined #commonlisp
azimut has joined #commonlisp
<beach> SR-71: Try running some benchmarks with lists of length, say 10, 100, and 1000, and see how the running time evolves.
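A rough way to run the benchmarks beach suggests, assuming the INSERTION-SORT sketch above; TIME prints the run time and, on SBCL, processor cycles and bytes consed.
  (dolist (n '(10 100 1000))
    (format t "~&~D elements:~%" n)
    (let ((data (loop repeat n collect (random 1000000))))
      (time (insertion-sort data))))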
pranavats has joined #commonlisp
<SR-71> I'll be comparing my implementations of Heap sort, merge sort, quick sort ... etc.
mon_aaraj has quit [Ping timeout: 268 seconds]
mon_aaraj has joined #commonlisp
<hayley> If you want "clean", writing the functional/list-comprehension style "quick sort" algorithm is usually interesting.
<hayley> I wrote a parallel and very monomorphized merge-sort that blew SBCL's SORT out of the water for vectors of small integers. A vectorised sort algorithm would also be interesting, though performance will be in constant factors and not your O().
<SR-71> Btw, here is my insertion sort implementation.
azimut has quit [Remote host closed the connection]
anticomputer has quit [Remote host closed the connection]
azimut has joined #commonlisp
anticomputer has joined #commonlisp
anticomputer has quit [Remote host closed the connection]
anticomputer has joined #commonlisp
akoana has joined #commonlisp
<SR-71> Here are the performance logs:
<SR-71> 100 elements
<SR-71> Evaluation took:
<SR-71> 0.000 seconds of real time
<SR-71> 0.000096 seconds of total run time (0.000000 user, 0.000096 system)
<SR-71> 100.00% CPU
<SR-71> 214,256 processor cycles
<SR-71> 16,048 bytes consed
<SR-71>
<SR-71>
<SR-71> I think I should use dpaste instead.
<Bike> yeah, please use a paste for multiple lines
<SR-71> Lesson learnt: Polynomial running time algorithms suck.
<SR-71> Also, don't use recursive insertion sort for 100000 elements.
<SR-71> It's still computing......
livoreno has joined #commonlisp
azimut_ has joined #commonlisp
anticomputer has quit [Ping timeout: 268 seconds]
pranavats has left #commonlisp [Error from remote client]
anticomputer has joined #commonlisp
MajorBiscuit has joined #commonlisp
azimut has quit [Ping timeout: 268 seconds]
jmdaemon has joined #commonlisp
MajorBiscuit has quit [Ping timeout: 244 seconds]
taiju has joined #commonlisp
MajorBiscuit has joined #commonlisp
son0p has joined #commonlisp
pve has joined #commonlisp
<pjb> SR-71: I'm partial to http://termbin.com instead.
<aeth> RIP lisp paste
<hayley> I bet that measurement is going to be noisy; it's nice to use e.g. the-cost-of-nothing to measure timing. (Anyone interested in taking more statistics? Minimum/maximum? Recording a whole distribution?)
<jackdaniel> hayley: Shinmera's dissect calculates these things afair
<jackdaniel> or was it trivial-benchmark? rather the latter
<hayley> dissect gives backtraces, I think. Probably the latter.
<Shinmera> trivial-benchmark is in dire need of a rewrite
<Shinmera> but it does have the stats capture and calculation parts somewhat well separated, so would be easy to nick those parts for a better library :)
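A library-free sketch of the repeated-measurement idea (the-cost-of-nothing and trivial-benchmark do this properly); TIME-RUN is a made-up helper that reports the minimum and the mean over several runs.
  (defun time-run (thunk &key (repetitions 30))
    ;; Call THUNK several times and return the minimum and mean wall time in
    ;; seconds; a single measurement is usually too noisy to trust.
    (let ((samples
            (loop repeat repetitions
                  collect (let ((start (get-internal-real-time)))
                            (funcall thunk)
                            (/ (- (get-internal-real-time) start)
                               internal-time-units-per-second)))))
      (values (reduce #'min samples)
              (/ (reduce #'+ samples) (length samples)))))
  ;; Example: (time-run (lambda () (insertion-sort (loop repeat 1000 collect (random 1000)))))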
karlosz has joined #commonlisp
karlosz has quit [Ping timeout: 260 seconds]
dataRonix has joined #commonlisp
dataRonix has left #commonlisp [#commonlisp]
notzmv has joined #commonlisp
Algernon69 has joined #commonlisp
varjag has joined #commonlisp
Inline has joined #commonlisp
Th30n has joined #commonlisp
<beach> SR-71: https://dpaste.com/FRPSWYQCB is quadratic, so that's optimal in some ways. But you also do a quadratic number of allocations of CONS cells which is not so great. And of course, both INSERTION-SORT and INSERT are recursive.
rogersm has joined #commonlisp
<hayley> Henry Baker also has a paper on a "linear" quicksort, which doesn't take linear time, but doesn't cons (compare to "linear logic").
<beach> SR-71: Here is my version. It reuses the CONS cells. http://metamodular.com/insertion-sort.lisp
<beach> SR-71: But, like I said the other day, no constant-time optimization can fix a quadratic algorithm. Mine takes a bit more than 30 seconds (on my not-so-fast computer) for 10000 elements. That's 75 billion processor cycles.
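The linked file is beach's actual code; purely as an illustration of the cons-reuse idea, a destructive version might look like the sketch below (numbers compared with <).
  (defun ninsertion-sort (list)
    ;; Destructively sort LIST by detaching one cell at a time and splicing
    ;; it into SORTED, so no new CONS cells are allocated.
    (let ((sorted '()))
      (loop while list
            do (let ((cell list))
                 (setf list (cdr list))
                 (if (or (null sorted) (< (car cell) (car sorted)))
                     (setf (cdr cell) sorted
                           sorted cell)
                     (loop for tail on sorted
                           when (or (null (cdr tail))
                                    (< (car cell) (cadr tail)))
                             do (setf (cdr cell) (cdr tail)
                                      (cdr tail) cell)
                                (return)))))
      sorted))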
recordgroovy has quit [Ping timeout: 268 seconds]
recordgroovy has joined #commonlisp
<beach> SR-71: An algorithm with O(n log n) complexity should take a few million cycles or so.
<beach> SR-71: SBCL SORT takes around 50ms and 100 million processor cycles.
gxt has joined #commonlisp
jmdaemon has quit [Ping timeout: 240 seconds]
verisimilitude has left #commonlisp [ERC (IRC client for Emacs 27.2)]
<seok-> beach: nope, didn't get information on how GC works
<beach> Ah. OK, it's a matter of determining what objects are live.
<beach> So the registers of the processors, including the stack pointer, are considered the "roots", i.e., the objects that the application can reference immediately.
<beach> Then, the GC follows the objects referenced by the roots, and so on recursively, until it has "traced" every reachable (i.e., "live") object.
<beach> The remaining objects are then garbage.
<beach> That's mainly it for the basic idea.
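A toy model of the tracing beach describes, with made-up NODE structures standing in for heap objects; a real collector works on raw memory and machine-level roots.
  (defstruct node
    marked      ; set by the collector while tracing
    children)   ; the objects this object references

  (defun mark (node)
    ;; Follow references recursively until everything reachable is marked.
    (unless (node-marked node)
      (setf (node-marked node) t)
      (mapc #'mark (node-children node))))

  (defun live-objects (roots heap)
    ;; ROOTS stand in for the registers and stack; whatever in HEAP is still
    ;; unmarked after tracing is garbage.
    (dolist (node heap) (setf (node-marked node) nil))
    (mapc #'mark roots)
    (remove-if-not #'node-marked heap))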
<Inline> immediately dead objects, temporarily/ephemeral dead objects and dead ones for sure...
<beach> Er, what?
<Inline> does it mark things eventually alive ?
<beach> I don't know what that means. It marks every object that the application could possibly reference, but it can't know whether the application will actually do that, because that would be undecidable.
<Inline> ah
<beach> seok-: Does that make sense to you?
<beach> seok-: If not, feel free to ask questions.
Topre35g has joined #commonlisp
<seok-> So do implementations usually have this recursion running on a separate thread every once in a while?
<pjb> seok-: often it's when you try to allocate memory and there's no more in the heap. But there are parallel GCs that run in separate threads,
mon_aaraj has quit [Ping timeout: 272 seconds]
<pjb> seok-: and incremental GCs that run a little bit each time you allocate memory.
<seok-> woah, gc only runs when there's no more memory?
<beach> seok-: Some implementations use a single thread, "stop the world", so that the application pauses when the GC runs.
<pjb> (defun allocate (size) (when (< *available-size* size) (garbage-collect)) (if (< *available-size* size) (error 'out-of-memory) (%allocate size)))
<beach> seok-: More sophisticated GC techniques can work in parallel with the application.
<seok-> What kind would SBCL use for instance?
<beach> pjb: You should have told seok- from the start so that I didn't have to do the work.
<beach> seok-: I don't know, but hayley does.
<pjb> :-)
<pjb> seok-: it's documented in sbcl manual.
<seok-> one liner answer : )
<beach> seok-: The GC SBCL uses is not particularly good, because new techniques have been invented since the SBCL GC was created.
<pjb> seok-: to have fun, fetch some papers about those new parallel GCs and implement one for sbcl or ccl…
<seok-> I'd have imagined SBCL is up to date with things since it is still in development
<beach> seok-: If you are really interested in garbage collection, I recommend "The Garbage Collection Handbook" by Jones.
<Topre35g> does ACL use parallel gc?
<pjb> seok-: you can contribute!
lf00 has joined #commonlisp
<seok-> I could !
<Topre35g> I read a flame war from many years ago where the ACL guys defended why they hadn't implemented parallelism yet
<beach> seok-: Garbage collectors are notoriously difficult to write and test, so it is not something that is changed lightly. Plus, the garbage collector is not an independent pluggable module. The rest of the system must collaborate with it.
<beach> seok-: So if you change the garbage collector, it is likely that large parts of the rest of the system must be modified as well.
<seok-> Ugh
<beach> Indeed.
mon_aaraj has joined #commonlisp
<seok-> Is that one of the reasons why new CL implementations are being made? - might as well start over?
<beach> That is one reason I started SICL, yes. But not the only one.
<seok-> I see
<beach> Also, it is not just about new GC algorithms being invented. It also has to do with processors changing.
Th30n has quit [Ping timeout: 268 seconds]
<beach> It used to be the case that there was very little cache memory, and that a memory operation took roughly the same time as a register operation.
<beach> So copying GC algorithms were invented, because they compact memory nicely.
<beach> But they are now pretty bad when it comes to memory accesses and cache performance.
<seok-> Ah, I've never thought about cache size increases in modern CPU's changing algorithm efficiencies
igemnace has quit [Remote host closed the connection]
<beach> I see. Well, that's a very important aspect. The entire RAM model used in asymptotic complexity assumes that memory operations cost the same as register operations.
<seok-> So it would seem this would apply to not only GC algorithms but various data structure manipulation algorithms too?
<beach> Exactly!
<seok-> If the current implementations are not up to date with these algorithms, then I can see how a new one written again would be of benefit
<seok-> It would make everything faster
<beach> seok-: So to relate to your previous question, I invented a new algorithm for generic dispatch, because the one used by PCL is based on hashing with a table kept in memory. My algorithm uses cheaper register comparisons.
<seok-> PCL? Practical Common Lisp?
random-nick has joined #commonlisp
<beach> Portable Common Loops, a portable CLOS implementation used by many current Common Lisp systems.
<hayley> beach, seok-: I modified SBCL enough to allow for using a non-moving collector.
<beach> Another thing that has become important is that conditional branches are expensive if the branch prediction logic of the processor makes the wrong prediction.
<seok-> 1992, sheesh we've used that for a while
<beach> hayley: Good to know.
<seok-> I've heard of this optimization - it is better to write assembly with bitwise operations than conditionals whenever possible
<beach> seok-: In the past, processor pipelines were short, so taking a branch cost just a cycle or two. Now, if the branch is incorrectly predicted, a deep pipeline may have to be abandoned and recomputed, which might take dozens of cycles.
azimut_ has quit [Ping timeout: 268 seconds]
<beach> seok-: These are just examples of how processor evolution might change what is the optimal technique to use, so existing Common Lisp implementations may become bad even though they don't change.
azimut has joined #commonlisp
<seok-> I see, that was very informative
<seok-> Thank you
<beach> Pleasure.
gateway2000 has joined #commonlisp
<hayley> seok-: Implementing a concurrent GC is hard. And it's even harder if you need to move concurrently.
Algernon69 has quit [Quit: Leaving]
taiju has quit [Ping timeout: 244 seconds]
<hayley> But I've been discussing whether we can do a sort of incremental compaction, where we pick a part of the heap to compact in a stop-the-world pause; the part is small enough that copying it and fixing up pointers (determined by a prior concurrent marking pass) should take no longer than some user-specified target duration.
Topre35g has quit [Remote host closed the connection]
puhi has quit [Quit: (;´Д`)]
pillton has quit [Remote host closed the connection]
<beach> So in fact, just as every good programmer needs to know about compiler design and processor technology, every good programmer needs to know about techniques for memory management. Or else, applications written by that programmer may not be very good in terms of performance.
<hayley> pjb: I am implementing a parallel GC for SBCL (eventually, I just finished implementing an algorithm that's easier to parallelise). How's that?
puhi has joined #commonlisp
notzmv has quit [Ping timeout: 240 seconds]
<hayley> I would tell Topre35g that indeed ACL uses a parallel GC.
lf00 has quit [Ping timeout: 240 seconds]
<pjb> hayley: great!
waleee has joined #commonlisp
mon_aaraj has quit [Ping timeout: 268 seconds]
mon_aaraj has joined #commonlisp
Th30n has joined #commonlisp
lifeisfoo has joined #commonlisp
thuna` has joined #commonlisp
lifeisfoo has quit [Ping timeout: 255 seconds]
lifeisfoo has joined #commonlisp
glaucon has joined #commonlisp
tyson2 has joined #commonlisp
waleee has quit [Ping timeout: 272 seconds]
lifeisfoo has quit [Ping timeout: 240 seconds]
Topre35g has joined #commonlisp
<Topre35g> How does SBCL GC compare to LW / ACL / CCL / ECL?
<jackdaniel> ecl uses boehm garbage collector (it may work in both conservative and precise mode), historically it had a homegrown gc
<hayley> I would assume the ACL GC is the most developed; ECL uses Boehm-Demers-Weiser, and SBCL and CCL use their own moving, single-threaded stop-the-world collectors (though please say if I'm wrong about CCL).
<Topre35g> Would concurrency help avoid stopping the world?
<hayley> It would, but implementing a concurrent GC is hard. (Pedantically, many concurrent GCs do stop the world, but for a shorter period of time.)
<Topre35g> I see
recordgroovy has quit [Ping timeout: 244 seconds]
<Topre35g> What about the venerable JVM - any GC ideas from there that can be used?
<hayley> Many.
<Topre35g> Or is it all closed sourced and hidden from the world at large
<hayley> Much is open source, due to OpenJDK.
<hayley> But I prefer to read the papers rather than the implementations; <https://www.cs.utexas.edu/users/speedway/DaCapo/papers/immix-pldi-2008.pdf> and <https://users.cecs.anu.edu.au/~steveb/pubs/papers/lxr-pldi-2022.pdf> are my major inspirations.
azimut has quit [Ping timeout: 268 seconds]
<Topre35g> AMD Athlon, that brings back memories
recordgroovy has joined #commonlisp
<Topre35g> Seems pretty cool what they are doing
<hayley> Very cool.
glaucon has quit [Quit: Leaving.]
asarch has joined #commonlisp
shka has quit [Quit: Konversation terminated!]
jeosol has quit [Quit: Client closed]
shka has joined #commonlisp
perrierjouet has quit [Quit: WeeChat 3.5]
perrierjouet has joined #commonlisp
akoana has quit [Quit: leaving]
livoreno has quit [Remote host closed the connection]
livoreno has joined #commonlisp
pranavats has joined #commonlisp
OlCe has quit [Remote host closed the connection]
Topre35g has quit [Quit: Leaving...]
<jcowan> the bdw gc has been developed since the late 1980s; I would be very slow to say that it can easily be beaten except in specialized circumstances (e.g. a language that has to run in very limited amounts of code space). In particular, it's nearly portable and it's, contra what beach was saying, quite pluggable even if you want precise collection
<jcowan> also, beach: I think that 1+ and zerop and similar things, which were written in an attempt to optimize common operations in the era of very dumb compilers, are nothing but a personal choice nowadays: any even slightly reasonable compiler should inline them in the same way. In particular, I find (1- x) very confusing and I never use it: it reads too much like 1 - x whereas it means x - 1.
<beach> I totally disagree. It is a way to show the intentions to the person reading the code as soon as possible, so that that person does not need to remember to look for more arguments.
<beach> If I lead someone to think that it had to do with performance, I must have omitted some information.
Th30n has quit [Quit: WeeChat 3.5]
<beach> But I won't enter into this discussion, because of what I have said recently about the effect they have on me.
<jcowan> I recognize the force of both your arguments; my reference was historical only
<hayley> I've only read the papers on BDW, which might not reflect the current version of BDW. But address-order location (and compacting as a backup) are useful for improving mutator locality, and using memory protection for a write barrier isn't ideal (as SBCL runs a bit faster with the write barrier done in software).
<hayley> On the latter, SBCL just uses card marks to store old->new pointers, but I wonder if one of those "refinement" techniques would be useful, to store a set of addresses rather than cards, when old->new pointers are sparse on cards.
<jackdaniel> in that case perhaps FIRST, REST and LIST* would better reflect intention of operating on the list than CAR, CDR and CONS
<jcowan> jackdaniel: Exactly
<jcowan> And yet.
lf00 has joined #commonlisp
notzmv has joined #commonlisp
taiju has joined #commonlisp
<Nilby> One could then make the logical argument that 0- 3- 4- should exist and cddadr is very clear, but it fails average human psychological reality.
<jcowan> I'm glad someone else pointed that out
<jcowan> my first Lisp implementation, when handed an undefined identifier in operator position, ran a little interpreter on the name which succeeded if it was of the form c[ad]+r
<jcowan> so you could write not merely cddadr but caddadddaaadaaaaaddddaaaaaaaddddddddddddddr
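A sketch of the trick jcowan describes, with made-up helper names: interpret a name of the form c[ad]+r as the corresponding chain of CAR/CDR calls.
  (defun cxr-name-p (name)
    ;; True when NAME looks like C[AD]+R.
    (and (> (length name) 2)
         (char-equal (char name 0) #\C)
         (char-equal (char name (1- (length name))) #\R)
         (every (lambda (c) (or (char-equal c #\A) (char-equal c #\D)))
                (subseq name 1 (1- (length name))))))

  (defun apply-cxr (name list)
    ;; Apply the A/D letters right to left, as the nested calls would.
    (assert (cxr-name-p name))
    (loop for i from (- (length name) 2) downto 1
          do (setf list (if (char-equal (char name i) #\A)
                            (car list)
                            (cdr list))))
    list)
  ;; (apply-cxr "CADDR" '(1 2 3 4)) => 3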
<Nilby> jcowan: cool. now one could make macros that might let you choose any two unicode characters to use as a cons locator
<pjb> jcowan: I doubt 1+ zerop etc were written to optimize anything. They were written to be used with HOF, to avoid having to copy-and-paste (lambda (x) (+ x 1)) etc, everywhere.
<jcowan> to generalize that we could have a rule that !x is read as (! x) where ! is any emoji
<jcowan> pjb: you may be right at that
asarch has quit [Quit: Leaving]
<jcowan> though if you are ever copying and pasting in any Lisp you are failing to abstract
<pjb> exactly, hence 1+ and zerop.
<pjb> (let ((list '(3 5 4))) (loop until (some (function zerop) list) do (setf list (mapcar (function 1-) list)) collect list)) #| --> ((2 4 3) (1 3 2) (0 2 1)) |#
<jcowan> that however is not an argument for using them in operator position
<pjb> jcowan: indeed, and this is why I'm not so insistent, personally, on using 1+ 1- or even zerop, etc in those cases. (= x 0) is as good, (- x 1) is better.
<pjb> Notably when you have a series of subtractions: (list (- a 0) (- b 1) (- c 2)) vs. (list a (1- b) (- c 2)) ; yuck
<pjb> All compilers will generate the same code for those two expressions anyways.
glaucon has joined #commonlisp
rogersm has quit [Quit: Leaving...]
<hayley> If I was a Lisp compiler I would """simply""" generate a vectorised subtraction of a b c - 0 1 2 and then CDR-code the list produced. So I assume SBCL has the same optimisations?
<hayley> (:
cage has joined #commonlisp
<jcowan> I think Interlisp and maybe Genera are the only surviving cdr-coding systems.
<hayley> There's a paper by Henry Baker from 1991 which argues that we should start CDR-coding on stock hardware, to save on memory traffic. Any day now.
<jcowan> hmm
<hayley> "Although CDR-coding is currently out of fashion due to not being required by Tak and 8Queens, the inexorable pressure to reduce off-chip bandwidth will force its revival."
<jcowan> Kernel has both mutable and immutable lists, and I pointed out that a vector can be understood as a list whose cars are mutable and whose cdrs are immutable, which means that you don't have to expose specialized vector or sequence operations
<_death> how does (cons x (cdr y)) look w/ a vector
<jcowan> the result isn't a vector any more, that's all
<jcowan> "vector" being simply a name for "list with guarnateed O(1) get/set behavior"
<White_Flame> also I wonder what software cdr code tagging would look like, as it would have to be there in addition to the type tags
<seok-> vector is a list?
<jcowan> you need 2 more bits for the 4 cases of: cdr in the next word, pointer to cdr in the next word, cdr is nil, this is a pointer-to-cdr (to keep the gc happy)
<White_Flame> right, and I think that would generally blow out the current scheme of using allocation granularity in the address to hide the tags
<jcowan> there are some spare bits in the middle of the word if you are both nanboxing and low type tagging
<jcowan> you only need 48 bits for an address and you have a 52-bit mantissa to play with
<White_Flame> but that's only 256TB of addressable memory!
<jcowan> which is all you get at present anyhow
<White_Flame> I thought hardware went past 48 bit in the last cpu generation or two
<jcowan> you could use the top 2 bits of the middle tag and then you could handle up to 1 EB
<beach> seok-: "CDR coding" is a technique where a list is encoded in essentially the same way as a vector: https://en.wikipedia.org/wiki/CDR_coding
<hayley> Well. I was kidding about the CDR coding part. But the way SBCL allocates lists could allow for vectorised setting of elements (if we had a scatter instruction, which might only be an AVX-512 feature).
<jcowan> I think that even the latest x86-64 and AArch64 systems support only 48 bits of virtual memory, though they can go up to 52 bits of physical memory. Extending virtual memory would require either yet another level of paging tables or increasing the page size
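A toy of the word layout jcowan sketches above, assuming the 48-bit address sits in the low bits and the 2-bit CDR code just above it; the exact field positions here are made up.
  (defun pack-word (address cdr-code)
    ;; CDR-CODE is 0-3: 0 = cdr in the next word, 1 = pointer to the cdr in
    ;; the next word, 2 = cdr is NIL, 3 = this is a pointer-to-cdr.
    (dpb cdr-code (byte 2 48) (ldb (byte 48 0) address)))

  (defun unpack-word (word)
    (values (ldb (byte 48 0) word)    ; the 48-bit address
            (ldb (byte 2 48) word)))  ; the 2-bit CDR code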
<hayley> The allocation code allocates space for the entire list at once, which could allow for avoiding some pointer chasing (and I had to fix it to make sure it writes all the bits in my allocation bitmap).
aartaka has quit [Ping timeout: 272 seconds]
<varjag> is clocc basically abandoned?
<varjag> or does it live on under some different name(s)
dlowe has quit [Remote host closed the connection]
mon_aaraj has quit [Ping timeout: 244 seconds]
<pjb> varjag: IIRC, there has been a period when some of it has been scavenged into some other library. Perhaps alexandria? But in case of doubt, just check clocc and see if there's anything useful in it.
jeosol has joined #commonlisp
dlowe has joined #commonlisp
<varjag> pjb: yeah, am wondering if i should repackage the union-find bit
<varjag> or if it has already gone someplace
<varjag> can't seem to find it elsewhere
mon_aaraj has joined #commonlisp
morganw has joined #commonlisp
anticomputer has quit [Remote host closed the connection]
anticomputer has joined #commonlisp
mon_aaraj has quit [Ping timeout: 268 seconds]
mon_aaraj has joined #commonlisp
glaucon has quit [Quit: Leaving.]
rogersm has joined #commonlisp
rogersm has quit [Quit: Leaving...]
glaucon has joined #commonlisp
<jcowan> I was about to say sumpn stoopid until I realized I had conflated CLOCC with CLICC
Furor has joined #commonlisp
<drakonis> wait what
Colere has quit [Ping timeout: 268 seconds]
thuna` has quit [Remote host closed the connection]
lf00 has quit [Quit: Leaving]
MajorBiscuit has quit [Ping timeout: 264 seconds]
MajorBiscuit has joined #commonlisp
anticomputer_ has joined #commonlisp
anticomputer has quit [Ping timeout: 268 seconds]
mon_aaraj has quit [Read error: Connection reset by peer]
mon_aaraj has joined #commonlisp
avocadoist has quit [Quit: Leaving]
taiju has quit [Ping timeout: 264 seconds]
anticomputer_ has quit [Ping timeout: 268 seconds]
thuna` has joined #commonlisp
anticomputer has joined #commonlisp
karlosz has joined #commonlisp
MajorBiscuit has quit [Ping timeout: 255 seconds]
anticomputer_ has joined #commonlisp
anticomputer has quit [Ping timeout: 268 seconds]
karlosz has quit [Ping timeout: 268 seconds]
tane has joined #commonlisp
tane has joined #commonlisp
tane has quit [Changing host]
anticomputer has joined #commonlisp
anticomputer_ has quit [Ping timeout: 268 seconds]
anticomputer has quit [Remote host closed the connection]
anticomputer has joined #commonlisp
waleee has joined #commonlisp
xix has joined #commonlisp
mon_aaraj has quit [Ping timeout: 272 seconds]
glaucon has quit [Quit: Leaving.]
Dynom_ has joined #commonlisp
Dynom_ is now known as Guest5031
xix has quit [Quit: Connection closed]
azimut has joined #commonlisp
aartaka has joined #commonlisp
recordgroovy has quit [Ping timeout: 276 seconds]
recordgroovy has joined #commonlisp
tyson2 has quit [Remote host closed the connection]
cage has quit [Quit: rcirc on GNU Emacs 27.1]
glaucon has joined #commonlisp
karlosz has joined #commonlisp
nij- has joined #commonlisp
nij- has quit [Remote host closed the connection]
recordgroovy has quit [Ping timeout: 244 seconds]
karlosz has quit [Ping timeout: 255 seconds]
recordgroovy has joined #commonlisp
azimut has quit [Ping timeout: 268 seconds]
azimut has joined #commonlisp
orestarod has joined #commonlisp
karlosz has joined #commonlisp
karlosz has quit [Ping timeout: 268 seconds]
mrcom has quit [Remote host closed the connection]
mrcom has joined #commonlisp
mrcom has quit [Remote host closed the connection]
mrcom has joined #commonlisp
mrcom has quit [Remote host closed the connection]
mrcom_ has joined #commonlisp
mrcom_ has quit [Remote host closed the connection]
MajorBiscuit has joined #commonlisp
karlosz has joined #commonlisp
MajorBiscuit has quit [Ping timeout: 268 seconds]
rendar has quit [Quit: Leaving]
MajorBiscuit has joined #commonlisp
karlosz has quit [Ping timeout: 240 seconds]
shka has quit [Ping timeout: 272 seconds]
rendar has joined #commonlisp
rendar has quit [Changing host]
rendar has joined #commonlisp
MajorBiscuit has quit [Ping timeout: 240 seconds]
Guest5031 has quit [Quit: WeeChat 3.5]
rogersm has joined #commonlisp
tyson2 has joined #commonlisp
jmdaemon has joined #commonlisp
glaucon has quit [Read error: Connection reset by peer]
pve has quit [Quit: leaving]
karlosz has joined #commonlisp
sinisa has joined #commonlisp
sinisa has quit [Quit: ERC 5.4.1 (IRC client for GNU Emacs 29.0.50)]
rogersm has quit [Quit: Leaving...]
sinisa has joined #commonlisp
akoana has joined #commonlisp
sinisa has left #commonlisp [#commonlisp]
attila_lendvai has joined #commonlisp
morganw has quit [Remote host closed the connection]
notzmv has quit [Ping timeout: 276 seconds]
notzmv has joined #commonlisp
random-nick has quit [Ping timeout: 268 seconds]
jack_rabbit has quit [Quit: ZNC 1.8.2 - https://znc.in]
aartaka has quit [Ping timeout: 244 seconds]
knusbaum has joined #commonlisp
waleee has quit [Quit: WeeChat 3.6]
attila_lendvai has quit [Ping timeout: 272 seconds]
alvaro121 has quit [Quit: Bye]
knusbaum has quit [Quit: ZNC 1.8.2 - https://znc.in]
azimut has quit [Quit: ZNC - https://znc.in]
waleee has joined #commonlisp
azimut has joined #commonlisp
alvaro121 has joined #commonlisp
knusbaum has joined #commonlisp
knusbaum has quit [Quit: ZNC 1.8.2 - https://znc.in]
knusbaum has joined #commonlisp
tane has quit [Quit: Leaving]
knusbaum has quit [Client Quit]
knusbaum has joined #commonlisp
dra has joined #commonlisp
grawlinson has quit [Quit: SIGTERM]
knusbaum has quit [Client Quit]
knusbaum has joined #commonlisp
knusbaum has quit [Quit: ZNC 1.8.2 - https://znc.in]
knusbaum has joined #commonlisp
grawlinson has joined #commonlisp
knusbaum has quit [Quit: ZNC 1.8.2 - https://znc.in]
aartaka has joined #commonlisp
knusbaum has joined #commonlisp
knusbaum has quit [Remote host closed the connection]
bilegeek has joined #commonlisp
contrapunctus has left #commonlisp [#commonlisp]
dra_ has joined #commonlisp
Oddity has joined #commonlisp
dra has quit [Ping timeout: 268 seconds]
Lord_of_Life_ has joined #commonlisp
Lord_of_Life has quit [Ping timeout: 272 seconds]
Lord_of_Life_ is now known as Lord_of_Life
dra_ has quit [Ping timeout: 264 seconds]
Furor is now known as Colere