beneroth changed the topic of #picolisp to: PicoLisp language | The scalpel of software development | Channel Log: https://libera.irclog.whitequark.org/picolisp | Check www.picolisp.com for more information
<aw-> pablo_escoberg: maybe ls /dev/fd/
<pablo_escoberg> Just tried it, with and without a pipe open in the shell and getting the same result both times, so that doesn't appear to be it. I will check other places in fs frontends. There really should be something in /dev or /proc
<aw-> well if you have the pid, you can check in /proc/<pid>/fd/
<pablo_escoberg> Just tried that. Again, with an open pipe and without one, and there is no difference. I get 0,1,2 as expected, plus 255 which I don't know what that is, but there is no diff regardless of whether there is a pipe running, which is weird.
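The /proc check being discussed here can be scripted. A minimal sketch, assuming Linux (/proc is not portable), with Python standing in for the shell commands; `open_fds` is a name invented for this example. (The mysterious fd 255, incidentally, is typically bash's private copy of the controlling tty in interactive shells.)

```python
import os

def open_fds(pid="self"):
    """Return the numeric file descriptors currently open in a process.

    Linux-only: reads the /proc/<pid>/fd directory, where each entry is
    a symlink named after an open descriptor (0 = stdin, 1 = stdout,
    2 = stderr, plus any pipes or files the process has open).
    """
    return sorted(int(fd) for fd in os.listdir(f"/proc/{pid}/fd"))

if __name__ == "__main__":
    before = open_fds()
    r, w = os.pipe()                 # open an anonymous pipe: two new descriptors
    after = open_fds()
    print(len(after) - len(before))  # the pipe added exactly two fds
    os.close(r)
    os.close(w)
```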
<aw-> i think you need to learn a bit more about pipes
<pablo_escoberg> apparently. I'll read up.
<aw-> stdin is 0
<aw-> stdout is 1
<aw-> stderr is 3
<aw-> err sorry, stderr is 2
<pablo_escoberg> that I know
<pablo_escoberg> no idea what 255 is
<aw-> so you need to read from one of those, or write to one
<pablo_escoberg> I need to know whether or not I have a pipe open. If that's not a thing, then I need to figure out something else.
<aw-> you do, if you see the fd/0 and fd/1 and fd/2 then the pipe is open
<aw-> then it's up to you to read or write to them
<pablo_escoberg> hmmmm... Well, I get the same result after closing it.
<pablo_escoberg> I was actually expecting an extra 2 fd's for the pipe
<pablo_escoberg> but they don't seem to be there.
<aw-> different pid probably
<pablo_escoberg> I was hoping that (like everything else) there's something in Pil that lets me monitor what fd's are open by the process itself.
<aw-> but it's not a pil thing it's a linux thing
<aw-> this is why we said it's easier to use named pipes
<pablo_escoberg> I am using named pipes
<aw-> they're decoupled from processes
<aw-> any process that has permissions can read/write to the named pipe
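To illustrate that decoupling concretely, here is a small Python sketch; os.mkfifo is the same syscall behind `mkfifo` and pil's (call 'mkfifo ...). The writer runs in a thread only because opening a FIFO blocks until both ends are attached.

```python
import os
import tempfile
import threading

# A FIFO (named pipe) lives in the filesystem, decoupled from any one
# process: whoever has permissions can open it for reading or writing.
path = os.path.join(tempfile.mkdtemp(), "in")
os.mkfifo(path)

def writer():
    # open() for writing blocks until some reader opens the other end
    with open(path, "w") as f:
        f.write(".read query.sql\n")

t = threading.Thread(target=writer)
t.start()
with open(path) as f:          # blocks until the writer connects
    line = f.readline()
t.join()
os.unlink(path)                # the pil code does this via (call 'rm ...)
print(line.strip())
```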
<pablo_escoberg> I have an in and an out pipe
<pablo_escoberg> using the pipe function in pil to open sqlite between the two of them like so (pipe (exec "echo" ".read " "|cat" "./in " "|sqlite3" "Fn" ">out" "&"))
<aw-> ok so, something like (call 'mkfifo "in") and (call 'mkfifo "out") ?
<aw-> ahhh i see
<pablo_escoberg> Actually the full lines: (dm T (Fn)
<pablo_escoberg> (call 'mkfifo "in" "out") # this is from the point of view of the app being automated, not from the pil perspective
<pablo_escoberg> (push '*Bye '(call 'rm "in" "out"))
<pablo_escoberg> (task (: Fd) (prinl (in :Fd))))
<pablo_escoberg> (=: Fd (pipe (exec "echo" ".read " "|cat" "./in " "|sqlite3" "Fn" ">out" "&")))
<aw-> yeah that's.. strange
<pablo_escoberg> so now when I call another method, (: Fd) gives me the right number, but I still get an error
<pablo_escoberg> Which is to say, the variable is still set, but it seems the pipe's file descriptor has closed. So I'm trying to figure out if that's actually the case or if it's something else.
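The question being asked (is a descriptor this process holds still open?) can also be answered directly, without /proc. A hedged Python sketch; `fd_is_open` is a name invented here, not an existing API:

```python
import os

def fd_is_open(fd):
    """Check whether this process still has `fd` open.

    fstat on a closed descriptor raises OSError (EBADF); that is the
    in-process equivalent of eyeballing /proc/self/fd.
    """
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False

r, w = os.pipe()
assert fd_is_open(r)
os.close(r)
assert not fd_is_open(r)   # the number may be reused later, but right now it's closed
os.close(w)
```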
<aw-> you're doing "echo .read | cat ./in | sqlite3 Fn >out &" ?
<pablo_escoberg> yeah, I was doing that at the command line to test the various /dev and /proc hypotheses, just to be sure.
<aw-> right
<pablo_escoberg> ok, lots more playing to do. I was getting spoiled by abu[7] solving all my problems for me :). Time to solve one for myself, perhaps tomorrow :D
<aw-> btw i dont think "|cat" will work since that's a bash thing
<aw-> not sure
<aw-> but you should know that pil has (out) so you dont need (exec "echo")
<pablo_escoberg> "|cat " does work per the sqlite site, and I've verified it.
<aw-> so something like (out "in" (prin ".read")) should work, but you won't get output from that
<aw-> i meant "|cat" wont work in Picolisp i think.. i dont know, i've never managed to use | in any picolisp programs, always end up using (in) and (out)
<pablo_escoberg> well, the point here is full process automation as an alternative to building a wrapper for the C API.
<pablo_escoberg> Hmmm... Well, it doesn't complain and it does return a valid fd
<aw-> ok but right now you're just simulating bash, why dont you just write your program in Bash and stop giving yourself a headache?
<pablo_escoberg> I'm trying to avoid a fork for every query.
<aw-> or use the concepts built into PicoLisp the way they are meant to be used
<aw-> sure, but you do realize that (exec) == fork ?
<pablo_escoberg> I need access to a sqlite DB, and I'd like to use pil for it. I could just go back to Ruby, but this seems like more fun :D
<pablo_escoberg> I figured the two were equivalent under the hood, yes.
<aw-> and i published a super tiny sqlite library for you to use with pil, has been in production for 6 years, 0 problems and yes it's "forking" but who cares, you're not running Facebook
<pablo_escoberg> No, but I may well get a few requests per second for short bursts
<pablo_escoberg> and I don't want to spend a ton on hw
<aw-> pft, even a 16MB RAM linux 486 can handle that
<pablo_escoberg> if it forks for every single query?
<aw-> few requests per seconds is nothing
<pablo_escoberg> depends on the number of querys
<aw-> are you doing 100 queries in every process?
<pablo_escoberg> not 100%, but possibly. and I really don't want to worry about this kind of thing. I'd rather just do it efficiently to begin with.
<pablo_escoberg> It's really not premature optimization, I promise :D
<aw-> but you're just playing around in dirt when you could be out enjoying the sunshine
<pablo_escoberg> I love the dirt!
<pablo_escoberg> not a systems guy, really, but it's nice to get my hands dirty once in a while.
<aw-> yeah that's fine, it's a good way to learn
<pablo_escoberg> ty
<aw-> but we've been trying to tell you the same thing for days and you are stuck on an idea that makes no sense
<aw-> if your program is making 100 queries for each forked process, then your program needs to be re-evaluated
<pablo_escoberg> well, you've told me that. abu[7] seems to like the idea
<aw-> and if you're REALLY trying to optimize early, then you need to write a wrapper for the sqlite3 C library
<aw-> that would be extremely useful for many people
<pablo_escoberg> and it makes a great deal of sense to me. I've been bitten by the "it won't need to scale that much" thing once too many to go through it again.
<pablo_escoberg> And, per abu[7]'s suggestion, I'm working on a C API wrapper in parallel.
<pablo_escoberg> I'll see which one works better/is more performant and elegant.
<aw-> yes the C API would be so much better
<aw-> i can guarantee it will be much more performant
<aw-> no need for benchmarks
<pablo_escoberg> Great! I'm doing both :D
<pablo_escoberg> the pipes one will not need nearly as much maintenance
<aw-> every single picolisp library i've written that wraps a C program was extreeeeeeeeemely faster than any other picolisp implementation
<pablo_escoberg> cool. but there's really no reason to believe there is any significant overhead from an open pipe and a running sqlite3 process that is not incurred by opening a C library.
<pablo_escoberg> At least none that I can think of.
<aw-> you'll run into concurrency issues with the named pipes: can't have more than 1 concurrent writer
<pablo_escoberg> That's true of sqlite anyway. Writes lock the entire db
<pablo_escoberg> Plus, for my application, that won't be an issue because I'll be doing very few (possibly no) writes.
<aw-> and since it's a FIFO, if 2 readers are listening on the pipe then only 1 will receive the data, which one?
<pablo_escoberg> ok, that's an excellent point.
<pablo_escoberg> I would need a set of pipes for each process, but there's no reason I can't do that. Sqlite can have several running processes accessing the same DB
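The point about two readers can be made deterministic in a sketch: pipe data is consumed, not broadcast, so once one reader takes the bytes they are gone for everyone else. A Python illustration using an anonymous pipe (the FIFO semantics are the same):

```python
import os

# Once "reader A" drains the pipe, "reader B" finds nothing: data in a
# pipe or FIFO goes to exactly one reader, which is why multiple readers
# on one named pipe is a problem.
r, w = os.pipe()
os.write(w, b"only one of us gets this")
first = os.read(r, 4096)       # reader A drains the pipe
os.set_blocking(r, False)      # so reader B's empty read fails fast instead of blocking
try:
    second = os.read(r, 4096)  # reader B: nothing left
except BlockingIOError:
    second = b""
os.close(r)
os.close(w)
print(first.decode(), "/", repr(second))
```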
<aw-> well if you're not making any writes, why not load your SQLite data ONCE into a picolisp in-memory DB, and then no forks no pipes just good ol' fast picolisp for everything
<pablo_escoberg> Because other processes are writing to it all the time.
<aw-> you can't have more than 1 process write to an SQLite db
<aw-> not concurrently
<pablo_escoberg> right
<pablo_escoberg> so what I'm doing is accessing a db that is already in production (once the thing is ready).
<aw-> so now you're back to forking a process for each read of the pipe?
<pablo_escoberg> I'll be reading lots from it, and only occasionally writing back some aggregates.
<pablo_escoberg> if the db is locked when I'm trying to write, no big deal. it can wait.
<aw-> it can wait?
<aw-> you'll implement this waiting scheme?
<pablo_escoberg> I'm pretty sure sqlite implements it
<aw-> hah
<aw-> nope
<pablo_escoberg> if a process writes to a locked db, it waits.
<pablo_escoberg> really? what does it do if you try to write to locked db???
<aw-> your process exits with an error "sqlite busy"
<pablo_escoberg> oh, ok, then ((while not 'sql_busy) (write_to_db))
<pablo_escoberg> or something like that
<pablo_escoberg> just try to write until it does not return that error.
<aw-> one sec
<aw-> sorry, the message is "database is locked"
<pablo_escoberg> ok, so that msg, to my app, means "try again"
<pablo_escoberg> there's no way the thing will be locked for more than a few ms
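The "try again on that message" idea can be sketched with Python's built-in sqlite3 module, which surfaces the same "database is locked" error. A bounded retry (rather than retrying forever) under assumptions of my own: `write_with_retry` and its parameters are invented for this example.

```python
import os
import sqlite3
import tempfile
import time

def write_with_retry(conn, sql, params=(), retries=50, delay=0.01):
    """Retry a write while another connection holds the write lock.

    sqlite raises OperationalError("database is locked") when busy;
    we back off and retry a bounded number of times.
    """
    for _ in range(retries):
        try:
            with conn:                    # commits on success, rolls back on error
                conn.execute(sql, params)
            return True
        except sqlite3.OperationalError as e:
            if "locked" not in str(e):
                raise
            time.sleep(delay)
    return False

db = os.path.join(tempfile.mkdtemp(), "test.db")
a = sqlite3.connect(db, timeout=0)        # timeout=0: fail fast instead of waiting
b = sqlite3.connect(db, timeout=0)
a.execute("CREATE TABLE t (x)")
a.commit()
a.execute("BEGIN IMMEDIATE")              # a now holds the write lock
a.execute("INSERT INTO t VALUES (1)")
assert not write_with_retry(b, "INSERT INTO t VALUES (2)", retries=3)  # locked out
a.commit()                                # lock released
assert write_with_retry(b, "INSERT INTO t VALUES (2)")                 # now succeeds
```

Note that sqlite itself already offers this via sqlite3_busy_timeout (the timeout= parameter above), so an explicit loop is mostly useful when you want your own backoff policy.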
<aw-> but what if 1 write is happening every second? you will retry forever
<pablo_escoberg> highly unlikely.
<aw-> ok
<aw-> well anyways
<aw-> i'm speaking from experience
<aw-> i even wrote a blog post about this very issue
<pablo_escoberg> yeah, really appreciate all the help. I will also build the C wrapper
<aw-> in January 2017
<pablo_escoberg> ooh, link?
<pablo_escoberg> I've done pipe process automation in Ruby before, but I had much more cooperation from the controlled program; no need to trick the thing into reading from stdin
<aw-> TL;DR: we passed all our SQLite DB writes to an MQTT queue in order to serialize them, rather than "trying" all the wacky ways to write to SQLite directly
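The MQTT approach boils down to funneling all writes through a single consumer that owns the only write connection. A toy sketch of that shape, with Python's stdlib queue standing in for MQTT (names and schema are invented for illustration):

```python
import os
import queue
import sqlite3
import tempfile
import threading

# All producers push SQL parameters onto a queue; one dedicated thread
# owns the only write connection, so writes are serialized and our own
# processes never race each other for the write lock.
db = os.path.join(tempfile.mkdtemp(), "q.db")
sqlite3.connect(db).execute("CREATE TABLE log (msg)").connection.commit()

writes = queue.Queue()

def writer_loop():
    conn = sqlite3.connect(db)
    while True:
        item = writes.get()
        if item is None:          # sentinel: shut down
            break
        with conn:                # one committed transaction per item
            conn.execute("INSERT INTO log VALUES (?)", (item,))
    conn.close()

t = threading.Thread(target=writer_loop)
t.start()
for i in range(5):                # many producers could do this concurrently
    writes.put(f"event-{i}")
writes.put(None)
t.join()
print(sqlite3.connect(db).execute("SELECT count(*) FROM log").fetchone()[0])
```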
<pablo_escoberg> I'll have a read. Perhaps it will save me the trouble of doing double work. Initially this really struck me as a much easier solution than a whole C wrapper.
<pablo_escoberg> But now I'm having second thoughts. You've certainly been instrumental in swaying me to the other solution. I think I'll work on that for a while and see how it goes. Thanks much.
<aw-> forking a process and running SQLite queries is not very demanding on a Linux server, even if you're forking 1 new process per second, even if you're doing 20 queries in each fork; unless you're processing MASSIVE amounts of data on each request, your server can likely handle it
<aw-> i would suggest a SQLite C library in picolisp, that would be super fast, no forking, etc
<pablo_escoberg> I'm also piggybacking on a running server, so trying not to tax it too much. But again, I'll work on the C API wrapper in parallel, and see how that works out. I may well abandon one or the other approach.
<aw-> but you'll still run into concurrency issues for writes so you need to ensure only one process ever tries to "write" to it at any given moment, so either implement your own locking system, or serialize the writes somehow
<pablo_escoberg> Yeah, I may be able to route the write through the running system. In fact, I can probably do that. But still, it's something I'll probably want in the future, so I'll keep building it. I may want it for postgres as well, which argues at least a bit for not abandoning the pipes approach altogether.
<aw-> the pipes idea would work if you only have 1 process reading/writing from them
<aw-> but you'll run into problems if you try to have multiple readers or writers because of the way FIFO works
<pablo_escoberg> hmmm... that's interesting in and of itself. I'll build the pipes interface so it's easy to write drivers for different db's (one or two lines, really), and the C API so I have a really solid, flexible implementation for sqlite.
<aw-> btw the C API for sqlite is extremely complex lol
<aw-> i tried writing a library for it and quickly gave up
<pablo_escoberg> Damn. I looked at the docs, and it didn't look terrible. There are 255 API calls, but you only use about half a dozen or so AFAICT.
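The "half a dozen calls" estimate matches the canonical core of the C API (sqlite3_open_v2, sqlite3_prepare_v2, sqlite3_bind_*, sqlite3_step, sqlite3_column_*, sqlite3_finalize, sqlite3_close). Python's built-in sqlite3 module wraps roughly that sequence, so a minimal round trip there shows the shape a pil (native) wrapper would need to cover:

```python
import sqlite3

# Rough mapping to the C call sequence a wrapper must drive:
conn = sqlite3.connect(":memory:")               # sqlite3_open_v2
conn.execute("CREATE TABLE kv (k TEXT, v TEXT)")
conn.execute("INSERT INTO kv VALUES (?, ?)",     # sqlite3_prepare_v2 +
             ("lang", "pil"))                    #   sqlite3_bind_text + sqlite3_step
rows = conn.execute("SELECT v FROM kv WHERE k = ?",
                    ("lang",)).fetchall()        # step loop + sqlite3_column_text
conn.close()                                     # sqlite3_finalize + sqlite3_close
print(rows)  # → [('pil',)]
```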
<aw-> it has pointers to pointers to pointers and each structure has like 50 members
<pablo_escoberg> ok, back to pipes :D
<aw-> yeah but in picolisp you need to understand the structure and definition of the function in order to pull data from it
<aw-> it could be a great learning experience for (native)
<pablo_escoberg> of course. If the structures are that bad, I'll def give up, too. We'll see.
<aw-> i encourage you to try it, maybe lots of head scratching, but abu[m] can help
<pablo_escoberg> awesome. No hurry here, so I can take a lot of wrong turns.
<aw-> good luck
<pablo_escoberg> thanks! And thanks for the help.
<abu[7]> Good morning! Intensive interesting discussion.
<abu[7]> Concerning the closing of a pipe: The normal way is that the reader checks for NIL, and if so closes the pipe
<abu[7]> (task Fd (in @ (if (read) (doSomething @) (task (close Fd)))))
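abu[7]'s idiom, keep reading until (read) returns NIL and only then close the fd, is the usual end-of-file loop. A rough Python analog, with os.pipe standing in for the pil pipe descriptor:

```python
import os

# The reader drains the pipe until read() signals end-of-file (pil's
# (read) returning NIL), and only then closes its end -- mirroring
# (task Fd (in @ (if (read) (doSomething @) (task (close Fd)))))
r, w = os.pipe()
os.write(w, b"line1\nline2\n")
os.close(w)                      # writer done: the reader will now see EOF

received = []
while True:
    chunk = os.read(r, 4096)
    if not chunk:                # empty read == EOF == pil's NIL
        os.close(r)              # close the fd and drop "the task"
        break
    received.append(chunk)
print(b"".join(received).decode())
```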
<pablo_escoberg> About to crash, but I'll look into that tomorrow. Thanks again. One more thought: What I'm doing here applies to a lot more than just sqlite. What I'm really doing here is building a generic application automation platform on top of pil, on top of which I will build a sql layer, on top of which I will build a (very very small) sqlite layer. This should be usable at any of those layers, even if it turns out the performance is unacceptable for this specific use case. That's one of the reasons I'm building the wrapper for the (indeed quite abstruse) C API in parallel.
<abu[7]> I understand. Such research is useful by itself, and surely fun
<abu[7]> brb
<pablo_escoberg> OK, so turns out my problem from yesterday was due to the fact that I don't quite get how objects work in pil. I've been over and over the docs and it keeps doing what I don't expect. Within a T method, I do e.g. (=: x 1). I would then expect (with [obj] (: x)) would yield 1. Instead, it yields NIL. Obviously I'm missing something in the docs. I'll keep playing around, but if somebody has a quick answer, please pop it in here.
<abu[7]> What you do looks correct
<abu[7]> Perhaps some mistype?
<pablo_escoberg> no, I've done it several times. It's really strange. The thing just seems to disappear...
<abu[7]> hmm
<pablo_escoberg> here's the code, including the test variable:
<pablo_escoberg> (setq x (new +Sql "db"))
<pablo_escoberg> (setq *Sqlite_postfix "\n.read in")
<pablo_escoberg> (dm T (Fn)
<pablo_escoberg> (symbols 'sql 'pico)
<pablo_escoberg> (class +Sql)
<pablo_escoberg> (call 'mkfifo "in" "out") # this is from the point of view of the app being automated, not from the pil perspective
<pablo_escoberg> (push '*Bye '(call 'rm "in" "out"))
<pablo_escoberg> (=: 'fff 33)
<pablo_escoberg> (=: Fd (pipe (exec "echo" ".read " "|cat" "./in " "|sqlite3" "Fn" ">out" "&")))
<abu[7]> (with (new '(+Cls)) (: x))
<pablo_escoberg> (task (: Fd) (in @ () if (read) (prinl @) ()))
<pablo_escoberg> )
<pablo_escoberg> (dm q> (QueryStr)
<pablo_escoberg> (out (: Fd) QueryStr)
<pablo_escoberg> )
<abu[7]> (=: 'fff 33)
<abu[7]> The quote is wrong
<pablo_escoberg> ah. tried it with and without. Just put it back in to make sure.
<pablo_escoberg> Also, at the repl:
<pablo_escoberg> : (with x (: fff))
<pablo_escoberg> -> NIL
<pablo_escoberg> : (with x (: 'fff))
<pablo_escoberg> -> NIL
<abu[m]> '=:' evaluates only the very last argument
<pablo_escoberg> gotcha. But I just removed the quote character and the same thing is happening
<abu[m]> btw, instead of 'x' it is recommended to use 'X'
<abu[m]> not the point here though
<pablo_escoberg> sure. This is all play code. I will need to completely restructure it once I figure out how everything works.
<pablo_escoberg> and, well x is easier to type than X :D
<abu[m]> Yeah
<abu[m]> indeed quoting vs. non-quoting is sometimes easy to mess up
<pablo_escoberg> BTW, version 23.6.6 if it helps.
<abu[m]> perfect
<pablo_escoberg> And yes, I'll lose what little I have left over the quoting thing. Lost most of it doing guile 20 yrs ago :D
<abu[m]> No changes in this regard during the last decades ;)
<pablo_escoberg> yeah, but now you guys don't understand why this is happening and you built the thing!
<pablo_escoberg> :)
<abu[m]> I think the problem was only the quote
<pablo_escoberg> but I took out the quote and get the same result.
<pablo_escoberg> Also, that was just a test variable. The one I'm interested in is Fd, which was never quoted. IOW, I did it the right way the first time, but then changed stuff around for testing. I did this because I couldn't see Fd.
<abu[m]> where does it differ in your case?
<pablo_escoberg> Good question. AFAICT, and I'm not sure about this, your last line is still in the context of the class definition, so This is probably still +Cls. However, I can't access the Fd variable from other methods, so that's likely not it. should I paste my code into pastebin?
<pablo_escoberg> it's all above in the chat, but quite messy...
<abu[m]> Must switch client
<abu[7]> (setq x (new +Sql "db"))
<abu[7]> must be (new '(+Sql) ..
<pablo_escoberg> Aha!!!!
<pablo_escoberg> Thank you so much! Never thought to look there.
<abu[7]> It is hard to decipher the source fragments
<pablo_escoberg> yeah, it's all a huge mess right now. Once I have a working pipe in place and can send and receive data, I'll start over and structure it properly.
<abu[7]> You could use (show Obj) first after creating an object
<pablo_escoberg> OK, so close I can taste it. Code here: https://pastebin.com/Mmb46jKm . The only remaining problem is the "in" returns the file descriptor rather than the content of the pipe. I imagine this is trivial for you guys, but it's driving me nuts. Everything else appears to be solved.
<pablo_escoberg> BTW, I tried (in (: Fd) (prinl (read))) and several other things, all of which return the FD
<pablo_escoberg> nvm, I think I figured out why this is happening. Now to figure out how to make it work...
<pablo_escoberg> OK, so what's happening is that the task isn't firing. It shows up in *Run but never goes off. It also occurs to me that I may be creating a circular task here. If I read from within the task from the same fd, won't it fire off the task again? I guess my more general question is how the *Run thing gets triggered and if there's any way I can track it.