<lisp123>
Sometimes it feels like a shame for humanity that CL didn't catch on earlier
<lisp123>
It's probably billions of dollars of lost productivity because of that
<Shinmera>
there's probably thousands if not millions of dollars of productivity lost from people constantly bemoaning and arguing this, too.
<beach>
Heh!
<hayley>
"The other billion dollar mistake"
<Alfr>
As for a measure of the total loss, would you add or subtract those thousands/millions?
<hayley>
Somewhat late to the joke, but if Picolisp ran on 32-bit ARM CPUs, we could have corruption due to type errors on embedded devices too, I think.
<hayley>
Would call it the C of the Lisp family, but very clever people want to invent something even closer to C still, so I can't joke about that.
<lisp123>
Alfr: the scale is much lower (number of complainers vs. number of programmers not using CL)
<Alfr>
lisp123, that doesn't answer my question. ;)
<lisp123>
Alfr: Yes, let's subtract those then :) My cost of complaining is $20/hour
<lisp123>
;)
<lisp123>
On another topic, if anybody is interested in implementing the TeX program in CL, I'm willing to sponsor its development to a certain level
<lisp123>
Send me a private message if interested. There is C source code, which is probably easier than reading the book (sorry, D. Knuth)
<beach>
Didier Verna wrote a paper about that.
<lisp123>
Oh yes, I forgot about that. At the last European Lisp Symposium
<beach>
Oh? No, it's a lot older.
<beach>
Submitted to the TeX user group I think.
<lisp123>
Oh, the older version... that one was more theoretical
<lisp123>
If it's the same one I am thinking of. He did a demo at this year's event
<hayley>
gilberth compiled Pascal to CL and then got TeX running.
<beach>
Why am I not surprised. :)
<lisp123>
beach: 5.3.1 (bottom of page 1008) :( It was more of a thought experiment
<lisp123>
hayley: Do you have a link?
<beach>
OK.
<hayley>
I don't. Best ask him in #lispcafe (or ask him to fix his IRC client, so that he can join other rooms).
<beach>
Is that why he is not here?
<pjb>
…
<hayley>
I don't think so, but it would still be nice if he fixed it.
<hayley>
It would be nicer to have our discussions on automata in #one-more-re-nightmare rather than scattered between #lispcafe and private messages, too.
<hayley>
(Now who the hell cloned my repository 96 times per day for the past week? If that's what happens when I submit it to Ultralisp...)
<hayley>
(That would appear to be the case, as Ultralisp does not detect any of the libraries in the "Telekons" organisation, so I had to input the URL manually, and the site warns "project will be updated only by cron" if one inputs the URL. Darnit)
<phantomics>
lisp123: As much disgust as I have for many mainstream technologies and the industry surrounding them, some of their dysfunction may have helped us avoid a vastly worse situation than we have now
<phantomics>
For example, Microsoft's efforts to dominate the computer world were predicated on ubiquitous open-spec hardware. If the PC had failed, all consumer-purchasable computing devices might have ended up being locked-down iOS-like ecosystems
<phantomics>
And the dysfunction of the Unix-model OSes helped to propel the FOSS movement. If Symbolics had won the desktop computing race and released a near-perfect but proprietary Lisp machine, they might have been bought by IBM with their LispM used as the basis of a centrally-controlled walled garden with software of sufficient reliability that no one would be able to justify attempting to compete with it, leading to perpetual IBM control
<lisp123>
phantomics: Hard to say where the world would end up...but hopefully in the future it ends up in a better place (although all the lock-downs of technology might make that more difficult)
<pjb>
phantomics: perhaps. I'd move to make more simulation before time travelling to perform the change.
<contrapunctus>
lisp123: I'd rather have a word processor where 1. (like LaTeX) users work with semantically structured documents 2. most users don't need to fiddle with the layout, it's handled for them 3. programmatic creation of new data types is easy, but 4. (unlike LaTeX, but like CL) there's no edit-compile-refresh cycle.
<pjb>
5- has understandable error messages.
<contrapunctus>
lol yes
<lisp123>
contrapunctus: I'm working on something similar
<lisp123>
with the exclusion of 3
<contrapunctus>
lisp123: nice, what do you have so far?
<lisp123>
Still a while away, but it's mostly in-browser with a custom text editor (since I was not happy with the offerings like ProseMirror / AceEditor / etc.)
<contrapunctus>
lisp123: hm...is it WYSIWYG?
<lisp123>
yes
<lisp123>
I will open source parts of it later, it should have a lot of the features of Emacs (naturally, given I use Emacs so much)
<contrapunctus>
Okay. Wanted to clarify, since you say "text editor"...
<lisp123>
yeah sure
<lisp123>
You may want to look at ProseMirror if you are into these things - it has probably the best implementation in-browser and you can do a lot of what you wrote above
<contrapunctus>
I see, thanks
<lisp123>
A lot of these editors are tree-based, similar to HTML
<lisp123>
LispWorks Editor (and maybe Emacs, but I'm not sure) does it in an interesting way - the content is a flat stream of characters, and then there is a concept of 'property regions' (not sure if correct terminology)
<lisp123>
where properties are marked against certain points in the stream (e.g. from position 4 to 10, add property bold)
<lisp123>
Naturally you then have to modify the property regions every time you insert/delete text
<lisp123>
Org-mode IIRC is tree-based
<lisp123>
Programmatically, I prefer the LW approach (although I actually do something different, for my particular needs), it's easier to manipulate the buffer under that approach vs. having to go in and out of nodes
<contrapunctus>
mmhm, flat stream + property regions is indeed how Emacs does it
<contrapunctus>
org-element offers a parse tree API but it's a PITA to use
<lisp123>
So if you have <b>sometext<i> and this</i></b> -> you can see how property regions make life easier in modifying the properties
<lisp123>
contrapunctus: the flip side is that tree-based approaches allow for more semantic meaning - you can traverse down the tree for example
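The flat-stream-plus-property-regions model described above can be sketched in a few lines of CL. This is a hypothetical illustration only — none of the names below come from LispWorks or Emacs, and a real editor would also decide which "side" of an insertion a region boundary sticks to:

```lisp
;; Hypothetical sketch of the "flat stream + property regions" model.
;; None of these names are from LispWorks or Emacs.
(defstruct ed-buffer
  (text "")
  (regions '()))                  ; list of (start end property)

(defun add-region (buffer start end property)
  (push (list start end property) (ed-buffer-regions buffer)))

(defun insert-text (buffer pos string)
  "Insert STRING at POS and shift region bounds at or after POS.
Real editors distinguish whether a boundary exactly at POS should
move or absorb the new text; here every boundary >= POS moves."
  (setf (ed-buffer-text buffer)
        (concatenate 'string
                     (subseq (ed-buffer-text buffer) 0 pos)
                     string
                     (subseq (ed-buffer-text buffer) pos)))
  (loop with n = (length string)
        for region in (ed-buffer-regions buffer)
        do (destructuring-bind (start end property) region
             (declare (ignore property))
             (when (>= start pos) (incf (first region) n))
             (when (>= end pos) (incf (second region) n))))
  buffer)
```

For example, with text "hello" and a region (0 5 :bold), inserting "say " at position 0 leaves the region as (4 9 :bold) — the bold span still covers "hello", with no tree nodes to split or merge.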
<hayley>
Currently reading Mark Stuart Johnstone's thesis (supervised by Paul Wilson, for those playing along at home), in which he estimates "We think it is likely that the widespread use of poor allocators incurs a loss of main and cache memory (and CPU cycles) of over a billion and a half US dollars worldwide per year" in 1997. lisp123's prior estimate of productivity lost to not using Common Lisp seems quite small in comparison.
<beach>
Heh, that kind of calculation sounds familiar. I heard it a lot when I spent the year with them.
<beach>
Paul Wilson also created a system for compressed memory where paging was done in two levels. The first level was compression and the second was to disk. He did a similar estimate for how much RAM his system would save.
<Nilby>
hayley: I'm sure that's likely. I think the losses due to many other software practices are even more staggering. But sadly, in practice sbcl hogs more unused memory on my system than even a browser.
<hayley>
Guess I have to go through my university for access. "Your library or institution may give you access to the complete full text for this document in ProQuest." Yes, that's why I went on ProQuest, thanks.
<hayley>
Nilby: I don't think I would be able to reproduce that, without configuring SBCL to collect quite infrequently. But some have wanted SBCL to collect more frequently, to reduce the amount of floating garbage.
<Nilby>
Unfortunately, I have to set dynamic space to physical memory to prevent hard crashes when running out.
<beach>
I think SBCL's memory manager is not that great by today's standards.
<Nilby>
It tends to eat about 6% overhead
<hayley>
Working on it, but very slowly.
<beach>
Nilby: "overhead"?
<Nilby>
beach: resident memory in unix
<beach>
OK, but "overhead" over what?
<Nilby>
likely over what it's actually using for active objects
<beach>
How did you measure that?
<Nilby>
(room t) and htop
<beach>
As I recall, any system of automatic memory management needs quite a lot of additional memory, i.e., way more than 6%, so that the collector won't be triggered too often.
<hayley>
(ROOM T) counts dead objects too.
<beach>
Nilby: As I recall (maybe hayley can correct me) around 100% overhead is needed.
<Nilby>
well then (progn (gc) (room t))
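Nilby's one-liner can be spelled out as a short, SBCL-specific sketch. SB-EXT:GC and ROOM are real APIs, but ROOM's output format is implementation-defined, and even after a full collection the numbers are only approximate:

```lisp
;; SBCL-specific sketch: force a full collection before measuring,
;; so ROOM reports (mostly) live objects rather than floating garbage.
#+sbcl
(progn
  (sb-ext:gc :full t)   ; collect all generations
  (room nil))           ; terse heap summary; (room t) adds per-type detail
```

Comparing that figure against the process's resident set (e.g. from htop) gives a rough sense of the overhead being discussed below.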
<Nilby>
beach: yes, technically i have something like 1000s of percent overhead, but the active pages are the only thing that causes trouble
<hayley>
The time taken in garbage collection is inversely proportional to the space overhead allowed for floating garbage, so in a way no particular value is needed. But 100% to 200% space overhead is common.
<Nilby>
unfortunately, it has little to do with real program memory usage, it's just to prevent fatal crashes before "really" running out of memory
<hayley>
If your maximum heap size is much larger than the memory used, even after including space overhead, it is quite likely most of the heap has no physical memory mapped; last I checked, SBCL does unmap unused pages when possible.
<Nilby>
hayley: right, i'm only really concerned with mapped and recently used memory, which the os considers "resident"
<hayley>
Then your resident memory usage should be somewhere between the size of all live objects in your program, and the maximum heap size.
<hayley>
Hm, maybe beach is referring to how much memory is needed to do a copying collection in the worst case (that nearly all objects survive and need to be copied). The worst case would require 100% space overhead, but the worst case doesn't happen too often. And generational collection tends to reduce the amount of memory that must be copied at a time, too.
<Nilby>
yes, it is, but the live objects are 120Mb and the resident size is 1020Mb
<beach>
What is the point of unmapping pages. Is it to avoid that they migrate to secondary memory?
<hayley>
I can't reproduce that. After loading McCLIM, I see 110MB (nitpick: b for bits, B for bytes) of resident memory, and 1212MB of virtual memory.
<hayley>
beach: To return unused physical memory to the operating system, I think.
<Nilby>
unmapping probably doesn't make that much of a practical difference, it just makes the o/s's job a little easier
<beach>
hayley: But, as I said, the virtual memory system would do that too.
<hayley>
In a way, unmapping a page informs the kernel that the page is garbage.
<Nilby>
when you multiply the practical overhead of about 6-10% of physical memory by 50-100 processes, it's pretty bad, when it's mostly unused.
<hayley>
By paging? Sure, that would work, but unmapping avoids the paging.
<beach>
Yes, so it doesn't have to migrate it to secondary memory. Is there any other reason?
<Nilby>
many systems may not even have secondary memory now. it's common to run without swap
<hayley>
beach: I think not having to migrate garbage pages to secondary memory is enough of a reason. At least, there is a similar problem when using bump-allocation with cache memory; even though everything after the allocation pointer is garbage, attempting to allocate will require garbage to be pulled into cache for seemingly no reason.
<beach>
I see.
<hayley>
Cliff Click claimed that avoiding the latter phenomenon, by using special instructions in the Azul hardware, reduced the memory bandwidth of Java programs by 30% or so.
<hayley>
Nilby: In the same conversation with David Moon and Dan Weinreb, Cliff also stated that "swapping is death for GC."
<Nilby>
yes, old lisps, and lisps on older hardware (like azul) i think were more careful with that
<Nilby>
i think it would just be lovely if someone made memory with room for tag bits. you would think current ecc memory could do it, but of course the architecture would have to be modded
<beach>
What would you use those tag bits for?
<beach>
Oh, to have full-word integers?
<Nilby>
yes, among other optimizations
<beach>
Like what?
<Nilby>
well, masking out tag bits is not without cost, but also, knowing you always have those bits, you can omit a number of things that lisp has to do and that, say, C doesn't
<pjb>
ie. a memory with 36-bit or 72-bit words.
<Nilby>
pjb: yes
<pjb>
What's wrong with 29-bit or 56-bit words?
<beach>
Nilby: It is rare that you actually have to mask out the tag bits in current systems.
<Nilby>
hmmm. it doesn't seem that rare when i look at my disassembled code
<pjb>
perhaps you're not using the right processor?
<pjb>
Sparc processors have instructions to deal with 29-bit values and tags.
<beach>
Nilby: If the tag 0 is used for fixnums, then addition still works. And in many architectures, it is possible to include the tag as a small constant offset in memory operations.
<hayley>
I believe it is quite rare. On many processors a constant offset can be added to an address when performing loads and stores to memory. Suppose we have a CONS tag of 7 (as in SBCL), then the instruction for CAR needs to look like Ra <- load (Rb - 7). Similarly CDR adds 1 (8 bytes offset - 7 byte tag).
<hayley>
Though I have seen SBCL being less clever than it could be, and repeatedly unboxing array indices sometimes.
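The "no masking needed" claim is easy to check from the REPL. DISASSEMBLE is standard CL (its output is implementation-defined); with SBCL's fixnum tag of 0 in the low bits, the addition works directly on the tagged words:

```lisp
;; With fixnums tagged 0 in the low bits (as in SBCL), machine addition
;; operates on the tagged representation directly, so no untagging or
;; masking instructions should appear in the body.
(disassemble
 (lambda (a b)
   (declare (type fixnum a b)
            (optimize (speed 3) (safety 0)))
   (the fixnum (+ a b))))
```

On SBCL/x86-64 the body should be essentially a single ADD on the tagged operands, which is the point beach makes about tag 0 for fixnums.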
<pjb>
you still have to check that the tag is that of a cons cell first.
<beach>
pjb: That would have to be done in the kind of architecture Nilby wants as well.
<Nilby>
well, right now i only have one compiler that's fast enough, and y'all know which one it is, and you can check for yourself, and especially try comparing to the output of gcc/llvm
<pjb>
Definitely.
<beach>
pjb: The thing was about masking out the tag bits.
<pjb>
The only complaint here is 32-bit vs. 29- or 30-bit.
<beach>
Nilby: It would be better to improve the compiler than to wait for tagged memory.
<Nilby>
I wish it was still feasible to run on sparc. I held out as long as I could.
<pjb>
Perhaps it was still valid in 32-bit, but it hardly matters in 64-bit vs. 62-bit.
<Nilby>
I was still running lisp in production on sparc in 2008
<hayley>
gilberth and I came up with the idea to put parallel type-checking (and auto-increment) hardware on an old microprocessor, to see if we could make a low-budget "Lisp machine" in some useful sense of the term.
<hayley>
Depending on the sort of tag check, though, it is likely that most do not slow down the program much. A branch predictor can easily predict that type checks will not fail, and a superscalar processor can then run the (probably unnecessary) check in parallel with other code.
<Nilby>
hayley: I think a small set of mods to current archs could be worthwhile. one doesn't have to go "whole hog" like the lisp machines. just making the typical lisp function call take fewer ops, and maybe mild type/tag things, would be great.
<Nilby>
current toolchain compilers do so many freakish optimizations, it's weird to see what they generate
<hayley>
My wishlist for hardware features is quite similar to what Azul did: a read barrier in hardware, and hardware transactional memory. Everything else can be Sufficiently Smart Compiler-ed.
<Nilby>
hayley: i agree. memory is the biggest speed issue now.
<hayley>
(It is also worth mentioning that, in my domain, some unboxing ops are inconsequential compared to having to "interpret" the matching automaton. So I still win despite the unboxing overhead.)
<Nilby>
yes, even the increasing use of llvm as a library can't really be as nice as cl:compile
<Nilby>
hayley: nice paper. that makes me think that if someone just added a memory/object compiler to a lisp, we'd gain 20% in speed
<hayley>
What is "a memory/object compiler"?
<hayley>
The architecture described in that paper would be implemented in hardware. Similarly though, in a general sense, CDR coding is a limited sort of compression, and Henry Baker said CDR coding should return due to memory bandwidth limits.
<Nilby>
something that would analyze objects and their use, and make them less filled with zero bits
<hayley>
So, a compression scheme in software?
<hayley>
For what it's worth I think Cliff Click did in-memory compression for tabular data, that could "decompress into registers". This compression would also reduce memory bandwidth substantially.
<Nilby>
yes, but with compiler knowledge, e.g. when it can be reasoned that some bits will never be used
<Nilby>
of course even generic things like the zippads in the paper could help
<Nilby>
because i'm weird and like to scroll around memory, it's very obvious how much is wasted, with the exception of compressed media
<pjb>
hayley: macOS does in-memory compression: before swapping pages out to disk, it swaps them out to compressed memory.
<dbotton>
Is there a lisp function to return the file name without the path and extension from a path or string?
<random-nick>
maybe PATHNAME-NAME does what you want?
<random-nick>
might not do what you want in all cases, not sure
vassenn has quit [Quit: Good bye!]
<gjvc>
you want the "stem" of the "basename" of a path, right
<dbotton>
I want /abc/t.txt the "t"
<gjvc>
yes
<gjvc>
(pathname-name "/abc/t.txt")
<gjvc>
"t"
<gjvc>
as random-nick said
<dbotton>
thanks all!
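For the record, the standard pathname accessors cover the other components of dbotton's example too:

```lisp
;; Standard CL pathname accessors split a namestring into components:
(pathname-name "/abc/t.txt")      ; => "t"
(pathname-type "/abc/t.txt")      ; => "txt"
(pathname-directory "/abc/t.txt") ; => (:ABSOLUTE "abc")
```

Note that for a dotless file like "/abc/t", PATHNAME-TYPE returns NIL, so PATHNAME-NAME alone already answers the original question.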