companion_cube changed the topic of #ocaml to: Discussion about the OCaml programming language | http://www.ocaml.org | OCaml 5.2.0 released: https://ocaml.org/releases/5.2.0 | Try OCaml in your browser: https://try.ocamlpro.com | Public channel logs at https://libera.irclog.whitequark.org/ocaml/
<companion_cube> Btw you still doing C++? :)
<discocaml> <joris0588> the hell no πŸ˜„
<discocaml> <joris0588> i'd rather be SRE than doing cpp
<discocaml> <joris0588> cpp is like, worse than go
<companion_cube> Ha ! Ok
<discocaml> <joris0588> it's like you are trying to be smart, but complexity reaches 9000
<discocaml> <joris0588> if you are doing cpp in my book you are just crazy
<discocaml> <joris0588> like you can be a good person i like you but you are just crazy
<companion_cube> πŸ˜‚ Depends for what I guess
<companion_cube> The future is rust with a side of zig/odin anyway
<discocaml> <joris0588> πŸ˜„
<discocaml> <joris0588> nah the future for me is ocaml
<discocaml> <joris0588> i feel like there is a really strong "balance" in ocaml
<discocaml> <joris0588> it's just extremely average
<discocaml> <ada2k> future for me is ocaml, rust and ruby (which is also finally getting good concurrency)
<discocaml> <joris0588> it is go. but better
<discocaml> <joris0588> @ada2k ruby ? wow
<discocaml> <ada2k> what does wow mean here
<discocaml> <ada2k> it's still going! node didn't completely win thankfully
<discocaml> <ada2k> i wouldn't write a large application in it but i use it everywhere for scripts and small websites
<discocaml> <joris0588> why though. why ruby ?
<discocaml> <joris0588> @companion_cube you are still at imandra i guess ?
<discocaml> <ada2k> i like ruby :p
<discocaml> <joris0588> why ? i am curious
<discocaml> <joris0588> i HATE puppet
<discocaml> <yawaramin> someone convince my manager to let me deploy OCaml. lol
<discocaml> <joris0588> @yawaramin πŸ’ͺ
<discocaml> <ada2k> - solid smalltalk-y object implementation
<discocaml> <ada2k> - very readable syntax if you don't invent your own DSL
<discocaml> <ada2k> - good enough stdlib to write shell scripts in
<discocaml> <ada2k> - good ecosystem for prototyping
<discocaml> <ada2k> it's a nice language if you ignore rails
<discocaml> <joris0588> well. maybe. but right now i have a thorn in my side
<companion_cube> Yep still there
<discocaml> <joris0588> it is called puppet, which is kind of ruby
<discocaml> <joris0588> and the problem is simple
<discocaml> <joris0588> not statically typed
<discocaml> <ada2k> isn't puppet an ansible-type thing as a dsl?
<discocaml> <joris0588> this is a big problem
<discocaml> <joris0588> yes
<companion_cube> I'm not sure ocaml is the right balance
<discocaml> <ada2k> this is why i never use ruby for big things
<companion_cube> It's certainly a better language than a lot of other existing things
<discocaml> <ada2k> there is gradual typing now, but i like organising code in ocaml better
<companion_cube> But ideally I'd rather have something with refcounting, dot notation, value types, sum types, expression-based
<discocaml> <joris0588> hm
<companion_cube> Really kind of a rust with refcounting and a bit more lenient
<discocaml> <joris0588> hm
<companion_cube> I mean it's basically swift. But without apple. And ideally with fast compilation
<discocaml> <joris0588> ok yeah swift is another thing in the balance
<discocaml> <yawaramin> so Nim
<companion_cube> Yikes, no
<discocaml> <joris0588> refcounting is a tricky question. i am not convinced
<companion_cube> Nim has toxic leadership, bad syntax, and a lot of complex stuff
<discocaml> <joris0588> rust is definitely good
<companion_cube> I think refcounting wins because it's easier to have value types and C interop
<discocaml> <ada2k> companion_cube: and a type system that isn't so broken it has time outs
<companion_cube> Yeeeeeep
<companion_cube> See: fast compilation
<discocaml> <joris0588> yes, c interop, yes. but performance is complicated
<discocaml> <joris0588> ok, we have those geniuses at microsoft doing wonders
<discocaml> <joris0588> we have mimalloc
<discocaml> <joris0588> but refcounting is also either invasive in your programming patterns
<discocaml> <joris0588> or just has the same performance cliff as GC
<companion_cube> Not if you have value types at the same time imho
<discocaml> <joris0588> hm
<companion_cube> You reduce allocations, and what remains is rc
<companion_cube> Kind of like go in a way, with its crappy gc
<discocaml> <joris0588> but it's not clear to me how this scales better
<companion_cube> Even better is regions but I'm less sure of how it'd work out
<discocaml> <joris0588> like, you have a 256 cores server, you have numa
<companion_cube> Yeah sure don't share too much
<companion_cube> But also, concurrent gc is *hard*
<companion_cube> Really hard
<companion_cube> So maybe not better if you don't use the jvm or equivalent
<discocaml> <joris0588> yes concurrent gc is really hard
<discocaml> <yawaramin> maybe Koka then. but at this point we are getting more and more niche and it gets harder and harder to justify for production use cases
<discocaml> <joris0588> but you know, concurrent "malloc" is already super hard
<companion_cube> But not as much, is it?
<companion_cube> I mean there are solutions you can reuse, whereas for gc it's just immix at best
<discocaml> <yawaramin> oh, Scala Native is also coming along nicely
<discocaml> <joris0588> i don't know i am not a scientist πŸ˜„
<discocaml> <joris0588> all i know is that things tend to break and be a bottleneck. that is why i am SRE
<discocaml> <joris0588> in all honesty, i would have a really hard time picking RC vs GC
<discocaml> <joris0588> i am not sure
<companion_cube> Imho rc is simpler
<discocaml> <joris0588> the thing is, at the end of the day what happens is
<discocaml> <joris0588> you run perf, or go to your pyroscope instance
<discocaml> <joris0588> you see that you bottleneck on smp_cond_any
<discocaml> <joris0588> which is invalidating page cache
<companion_cube> :D
<discocaml> <joris0588> and you are like "hm"
<discocaml> <joris0588> hm remains the only true answer
<companion_cube> Yeah but again... So would a compacting gc :/
<discocaml> <joris0588> yeah
<discocaml> <joris0588> the big drawback of gc i think is that, you can't fork
<discocaml> <joris0588> and, you can't scale memory
<discocaml> <joris0588> because, you need to mark
<discocaml> <joris0588> and however clever you are, like you can be java with G1
<discocaml> <joris0588> which is i guess the gold standard of GC
<discocaml> <joris0588> you NEED to mark
<discocaml> <joris0588> and when you mark, it means you need to fault/access memory/TLB, whatever
<discocaml> <joris0588> and this can become a bottleneck
<discocaml> <joris0588> but otoh, when you RC you need the same
<discocaml> <joris0588> it is just that it happens as a single cascade of events
<discocaml> <joris0588> so the load pattern is different
<discocaml> <joris0588> it is hard to say which one is best imo
<discocaml> <joris0588> anyway i need to sleep πŸ™‚
<discocaml> <joris0588> good talk
<companion_cube> Good night :)
<discocaml> <joris0588> see you πŸ™‚
<companion_cube> I think you can do deferred RC (on another thread) but it does complicate stuff
<companion_cube> Cheers!
<discocaml> <joris0588> yeah, that sounds like a very interesting topic. there was talk on this at icfp
tomku has joined #ocaml
Tuplanolla has quit [Quit: Leaving.]
terrorjack has quit [Quit: The Lounge - https://thelounge.chat]
terrorjack has joined #ocaml
kurfen_ has joined #ocaml
kurfen has quit [Ping timeout: 265 seconds]
germ has quit [Read error: Connection reset by peer]
germ- has joined #ocaml
bartholin has joined #ocaml
tomku has quit [Ping timeout: 276 seconds]
tomku has joined #ocaml
pi3ce has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
pi3ce has joined #ocaml
waleee has quit [Ping timeout: 260 seconds]
YuGiOhJCJ has joined #ocaml
myrkraverk__ has joined #ocaml
myrkraverk_ has quit [Read error: Connection reset by peer]
pi3ce has quit [Quit: No Ping reply in 180 seconds.]
pi3ce has joined #ocaml
<discocaml> <s0ln.> hello ! I'm trying to create a generic type `type 'a ident = string * 'a ` where `'a` can be changed . for example I want both `("foo", [1,2,3])` and `("foo", (Start 1, End 3))` to be idents
<discocaml> <s0ln.> is it possible to do this in ocaml ?
myrkraverk__ has quit [Quit: Leaving]
myrkraverk has joined #ocaml
<discocaml> <s0ln.> okay it was possible, i was just omitting the `'` in `'t` in my code
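For reference, a minimal self-contained version of the type in question (the `Start`/`End` constructors are assumed from the example; note that OCaml separates list elements with `;`, so `[1,2,3]` is a one-element list containing a 3-tuple):
```ocaml
(* A minimal sketch of the type s0ln. describes; the Start/End constructors
   are assumed from the example. *)
type 'a ident = string * 'a

type endpoint = Start of int | End of int

let a : int list ident = ("foo", [1; 2; 3])   (* `;` separates list elements *)
let b : (endpoint * endpoint) ident = ("foo", (Start 1, End 3))
```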
mbuf has joined #ocaml
YuGiOhJCJ has quit [Remote host closed the connection]
YuGiOhJCJ has joined #ocaml
<discocaml> <ada2k> any way in dune to see the names of individual tests as they run?
<discocaml> <ada2k> writing a patch for an unfamiliar codebase and one test seems to be deadlocking, but since it's not an actual error and simply a freeze i can't see which one
<discocaml> <ada2k> nvm, found it with ctrl+f
chiselfuse has quit [Remote host closed the connection]
chiselfuse has joined #ocaml
Serpent7776 has joined #ocaml
pi3ce has quit [Quit: No Ping reply in 180 seconds.]
pi3ce has joined #ocaml
Tuplanolla has joined #ocaml
mbuf has quit [Ping timeout: 260 seconds]
dreadedfrog has joined #ocaml
mbuf has joined #ocaml
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<discocaml> <ada2k> ```
<discocaml> <ada2k> utop # Sys.mkdir "abc" 0x777;;
<discocaml> <ada2k> - : unit = ()
<discocaml> <ada2k> ---
<discocaml> <ada2k> echo a > abc/a.txt -> warning: An error occurred while redirecting file 'abc/a.txt'
<discocaml> <ada2k> ```
<discocaml> <ada2k> really weird bug, can anyone recreate?
lain` has quit [Remote host closed the connection]
lain` has joined #ocaml
<discocaml> <dinosaure> It should be `0o777` (octal value), no?
<discocaml> <ada2k> tried that, same result
<discocaml> <ada2k> perms are fine, i just can’t actually use it for some reason
<discocaml> <ada2k> one single person on stack overflow seems to have the same issue, for now i am just calling Sys.command mkdir
<discocaml> <ada2k> oh, huh
<discocaml> <ada2k> turns out where i was using 0o i was passing 0o660, which is invalid for a dir
<discocaml> <ada2k> feel dumb now
<discocaml> <joris0588> 0x777 is 0o3567 in octal, so it is probably setting some weird stuff like sticky bit or something
<discocaml> <joris0588> like 3 is SGID + sticky i think
<discocaml> <ada2k> yeah, my fault
<discocaml> <ada2k> been staring at manpages too much today, my braincells are dying off
<discocaml> <joris0588> and the thing is, sticky prevents file deletion. And i think > is deleting and reopening the file in a way hence it fails
<discocaml> <joris0588> nowadays, i find it very useful to dump this kind of thing to chatgpt and see what it says when in this situation
<discocaml> <ada2k> at least i know now that a directory needs to be executable. which makes sense when i think about it
<discocaml> <ada2k> when creating it at least
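For the record, a quick check of the 0x777 / 0o777 mix-up discussed above, matching joris0588's arithmetic:
```ocaml
(* 0x777 (hex) is not 0o777 (octal): it is 0o3567, which sets the setgid and
   sticky bits instead of plain rwxrwxrwx. *)
let () =
  Printf.printf "0x777 = %d = 0o%o\n" 0x777 0x777;  (* 1911 = 0o3567 *)
  Printf.printf "0o777 = %d\n" 0o777                (* 511 *)
```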
waleee has joined #ocaml
lain` has quit [Remote host closed the connection]
lain` has joined #ocaml
lain` has quit [Remote host closed the connection]
lain` has joined #ocaml
<discocaml> <yawaramin> one technique i like to use is 'name and shame' magic numbers. eg `let rwxrwxrwx = 0x777` should hopefully catch some more attention
lain` has quit [Ping timeout: 260 seconds]
szkl has joined #ocaml
tomku has quit [Ping timeout: 255 seconds]
tomku has joined #ocaml
tomku has quit [Ping timeout: 260 seconds]
tomku has joined #ocaml
pi3ce has quit [Quit: No Ping reply in 180 seconds.]
pi3ce has joined #ocaml
lain` has joined #ocaml
lain` has quit [Ping timeout: 252 seconds]
lain` has joined #ocaml
<discocaml> <joris0588> Yes, octal is for robots, not humans. The good interface is probably a list of variants like we have in open flags
<discocaml> <joris0588> Like Mode.create ~others:[] ~user:[Read; Write] ~group:[Read]
mbuf has quit [Quit: Leaving]
<discocaml> <ada2k> yeah, i'm gonna wrap it in that eventually anyway
<discocaml> <ada2k> unix interfaces in general are to be abstracted away. too easy to mess up
<discocaml> <yawaramin> or maybe `Mode.make { user = rwx; group = rwx; other = rwx }`
<discocaml> <ada2k> oo that's nice
<discocaml> <yawaramin> or maybe `Mode.to_octal { user = rwx; group = rwx; other = rwx }`
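A minimal sketch of the `Mode` idea floated above: a record of per-class permissions rendered to the int that `Sys.mkdir`/`Unix.mkdir` expect. The module and field names are the ones suggested in the chat, not an existing library:
```ocaml
(* Illustrative only: a typed permission record and its conversion to the
   integer permission argument used by Sys.mkdir / Unix.mkdir. *)
module Mode = struct
  type perm = { r : bool; w : bool; x : bool }

  let rwx = { r = true; w = true; x = true }

  type t = { user : perm; group : perm; other : perm }

  let to_int { user; group; other } =
    let bits { r; w; x } =
      (if r then 4 else 0) lor (if w then 2 else 0) lor (if x then 1 else 0)
    in
    (bits user lsl 6) lor (bits group lsl 3) lor bits other
end

(* Usage: builds 0o755 (user rwx, group and other r-x) without any magic number. *)
let () =
  let rx = { Mode.r = true; w = false; x = true } in
  Sys.mkdir "abc" (Mode.to_int { Mode.user = Mode.rwx; group = rx; other = rx })
```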
<discocaml> <joris0588> Unix interfaces are ... Old yes. And designed for c
tomku has quit [Ping timeout: 252 seconds]
tomku has joined #ocaml
<discocaml> <ada2k> my goal with what im working on atm is to build a nice, fully typed oop i/o layer for miou that hopefully doesn't make the user engage with unix interfaces at all except for optimisation purposes
<discocaml> <ada2k> and those optimisations are mostly gonna be calling into uring which is already a less weird interface than non blocking file descriptors
<discocaml> <lecondorduplateau> it's a nice idea
<discocaml> <lecondorduplateau> OCaml lacks abstractions for i/o
<discocaml> <ada2k> i like miou but i'm rather jealous of eio's flow system
<companion_cube> What makes an IO layer fully typed?
<discocaml> <yawaramin> the Flow abstraction is kinda inflexible when you're trying to work with lower-level byte-based I/O
<companion_cube> Do what rust and go do
<companion_cube> That's it
<discocaml> <ada2k> companion_cube: unix apis return a file_descr for everything. what i'm doing has separate types for sockets, ```[ `R | `W | `RW ]``` tagging, etc
<discocaml> <ada2k> i guess not fully typed, but it's an improvement
<companion_cube> (buffered, unbuffered) (reader, writer)
<companion_cube> Yeah please don't do that
<companion_cube> I mean do what you want :). But I don't like this
<discocaml> <Kali> btw you can do inline backticks by surrounding the inline code with two ` instead of one
<discocaml> <Kali> ``[`R | `W | `RW]``
<discocaml> <ada2k> companion_cube: what would you advise otherwise?
<discocaml> <ada2k> i don't want to fully abandon the fd model for readers and writers. i still think it has merit
<companion_cube> What I said above
<companion_cube> Raw fds are best left untyped, and seldom used directly
<companion_cube> And phantom types are rarely worth it, I regret most of my uses of them
<companion_cube> I should say, permission oriented phantom types
<companion_cube> They're just clunky and verbose for little actual gain
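For context, a sketch of the permission-oriented phantom typing under discussion (the `R/`W/`RW tags from earlier); illustrative only, not any existing library's API:
```ocaml
(* Sketch of permission-oriented phantom types on a file descriptor: the
   polymorphic-variant parameter exists only at the type level and restricts
   which operations compile. Illustrative, not an existing library. *)
module Fd : sig
  type 'mode t
  val of_unix_ro : Unix.file_descr -> [ `R ] t
  val of_unix_rw : Unix.file_descr -> [ `R | `W ] t
  val read  : [> `R ] t -> bytes -> int -> int -> int
  val write : [> `W ] t -> bytes -> int -> int -> int
end = struct
  type 'mode t = Unix.file_descr
  let of_unix_ro fd = fd
  let of_unix_rw fd = fd
  let read = Unix.read
  let write = Unix.write
end

(* Fd.write (Fd.of_unix_ro Unix.stdin) buf 0 n is rejected by the type checker. *)
```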
<discocaml> <ada2k> phantom types are just one idea i have
<companion_cube> The tricky parts of IO are more like cancelation and buffer (pool?) management, at least in my experience
<discocaml> <ada2k> i am leaning more towards separate read and write mixins since i'm using objects and can just not expose write on a fd that doesn't support it
<discocaml> <ada2k> i have cancellation completely down
<companion_cube> Objects are good I think
<companion_cube> Make sure to make buffering explicit imho
<companion_cube> (... Just copy rust)
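A rough sketch of the object-based reader/writer split being discussed, showing the reader side only, with the Rust/Go-style layering of a buffered reader over an unbuffered one; the names and buffer size are made up:
```ocaml
(* Illustrative only: a minimal reader interface plus a buffered wrapper. *)
class type reader = object
  (* read buf off len: read up to [len] bytes into [buf] at [off];
     returns the number of bytes read, 0 meaning end of input *)
  method read : bytes -> int -> int -> int
end

class buf_reader (inner : reader) = object
  val buf = Bytes.create 4096
  val mutable pos = 0
  val mutable len = 0
  method read dst off want =
    if pos >= len then begin
      (* refill from the underlying unbuffered reader *)
      len <- inner#read buf 0 (Bytes.length buf);
      pos <- 0
    end;
    if len = 0 then 0
    else begin
      let n = min want (len - pos) in
      Bytes.blit buf pos dst off n;
      pos <- pos + n;
      n
    end
end
```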
<discocaml> <ada2k> i've got that all in a lower level Miou_uring lib (switched from picos since i could either have a single-domain impl with structured concurrency or a multi-domain impl with unstructured concurrency that crashes the only scheduler i'm actually targeting)
<discocaml> <ada2k> i'm leaving buffering completely up to the caller, at least for now
<discocaml> <ada2k> i want most abstraction to be opt-in, so that most things just work but users who want it can still explicitly use features of the underlying i/o stack
<companion_cube> Oh but then it's only for io uring?
<discocaml> <joris0588> This is not the same abstraction level, but in my opinion io_uring is genius-level api design
<discocaml> <ada2k> i really like it
<discocaml> <ada2k> i actually fixed most of my concurrency/locking issues by just moving more data into the ring :p
<companion_cube> Joris: why? :)
<companion_cube> To me it seems like it's making it hard to do cancelation but I probably am missing something
<discocaml> <joris0588> ok, cancellation is usually an issue, but how often do you need to cancel a syscall ?
<companion_cube> All the time? :)
<discocaml> <joris0588> solving the cancellation in user space is already hard, asking the kernel to track it is even harder
<companion_cube> Imagine if you want the kind of primitive like `select` in rust
<companion_cube> You race a read with a timeout
<companion_cube> If the timeout fires you want to cancel the read
<companion_cube> That's most reads in an http server that resists Slowloris
<companion_cube> So readiness apis make this easier
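A sketch of that race with Lwt (which comes up again below): `Lwt.pick` cancels the losing promise, so if the timeout wins, the pending read is cancelled. The 5-second value and the function name are only for illustration:
```ocaml
(* Race a read against a timeout: whichever promise settles first wins, and
   Lwt.pick cancels the other. Lwt_unix.timeout fails with Lwt_unix.Timeout. *)
let read_with_timeout fd buf =
  Lwt.pick
    [ Lwt_unix.read fd buf 0 (Bytes.length buf);
      Lwt_unix.timeout 5.0 ]
```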
<discocaml> <joris0588> yeah, ok that is a good point actually, epoll in a way lets you do that
<discocaml> <joris0588> indeed. But otoh, and i haven't looked at io_uring for that in a while, i guess theoretically you could introduce cancellation to it
<discocaml> <joris0588> just a special message in the ring ?
<companion_cube> But you risk racing
<companion_cube> Get the timeout, start canceling, and then the kernel performs the read
<discocaml> <joris0588> that is fine i guess ? you just drop it
<companion_cube> What if it's a write?
<discocaml> <joris0588> raaah πŸ˜„ you are annoying
<companion_cube> What if you want to read sth else? You just lost data potentially
<discocaml> <ada2k> companion_cube: i'm still not clear on how i'm gonna expose it in an api. currently i just have an object type that contains certain functions which will execute as normal read/write operations on unix, or fancy fixed buf/eventually multishot nonsense on uring
<discocaml> <joris0588> i guess people invented transactions
<discocaml> <joris0588> for this
<companion_cube> But does io uring have transactions?
<discocaml> <joris0588> but still though, io_uring brings a lot of throughput
<companion_cube> I think it's an optimization
<discocaml> <joris0588> no it does not, but it could. that is why i think it is genius level api design, it could be added
<discocaml> <joris0588> the api was invented with a purpose and it remains extensible and could be used for more things
<discocaml> <ada2k> uring has cancellation. you submit a job to the ring containing the id of [target], and that job either returns ok and [target] is gone from the ring and will never complete, or error and [target] is too far along and needs to be awaited normally
<discocaml> <ada2k> i'm explaining it terribly, but i got it to work in practice without deadlocks
<discocaml> <joris0588> nice. Yeah i never had to think about it but glad it is already implemented
<discocaml> <ada2k> i've got a funky setup where i submit a record containing a syscall and it all just works. axboe is magic
<discocaml> <joris0588> and the thing is, io_uring is more than just an optimization compared to readiness based apis. It brings the model of working with io devices and high parallelism that exists in the kernel to user space
<discocaml> <joris0588> you can call it an optimisation, i would call it a revolution
<discocaml> <ada2k> joris have you seen zerohttpd
<discocaml> <joris0588> that is the api that allows you to saturate a jbod of 20 nvmes
<discocaml> <ada2k> super cool uring teaching sample that hosts a static page without a single syscall
Anarchos has joined #ocaml
<discocaml> <joris0588> you can't do that with any other existing api (especially since readiness apis don't work with block devices)
<discocaml> <joris0588> @ada2k i had a look at it yes, sounds cool
<discocaml> <ada2k> none of that stuff is in ocaml-uring yet. i've been thinking of writing up a PR for at least multishot ops since i think the kernel support on them is widespread nowadays and it's a very natural pattern for servers
<discocaml> <joris0588> i mostly do ocaml at work though, and at work we don't really need high performance http in ocaml funnily enough. So i just glanced at it
<discocaml> <joris0588> i actually don't have much experience with uring for network, besides doing very basic stuff with eio; most of my experience is with disk io
<discocaml> <ada2k> liburing has a nice doc page about the state of network, but it's new enough that kernel support is actually a concern
<discocaml> <joris0588> the thing is, in my mind for network, uring is good, but you had epoll already, so this is just better
<discocaml> <joris0588> for disk io, uring changes everything
<discocaml> <ada2k> i haven't used epoll at all. if i get round to it im gonna learn kqueue, but epoll just looked yuck when uring is on most kernels
<discocaml> <joris0588> and one of the strengths of uring is that it helps you batch things more. so for high throughput network, it helps with that
<discocaml> <joris0588> i mean, the core win with uring vs epoll for network is that
<discocaml> <joris0588> you can do many operations with one syscall
<discocaml> <joris0588> and syscalls, at a certain scale, are costly
<discocaml> <joris0588> but for disk io, you get that. And on top of that, you unlock the concurrency and get access to actual async io with a programming model that matches things like the io subsystem in linux and the nvme interface
<discocaml> <joris0588> and by doing that, you get zero-copy io, without cache access on the cpu, from the nvme device to userspace
<discocaml> <joris0588> and you get that with very high concurrency and batching. this is huge
<discocaml> <ada2k> i really like the fixed buffers for this
<discocaml> <joris0588> this means you can do 100Gbps of random reads
<discocaml> <joris0588> yes. this is big
<discocaml> <ada2k> i wonder how they compare to splice
<discocaml> <ada2k> perf wise
<discocaml> <joris0588> well hm that is a good question, i am not sure. Compared for what task ?
<discocaml> <joris0588> for traffic forwarding between network and disk, i would guess this is roughly similar, but idk. But it should be, in theory no cpu access to the data
<discocaml> <joris0588> but splice is only for forwarding in the context of a server
<discocaml> <joris0588> fixed buffers also work when you need to get the data in userspace and process it
<discocaml> <ada2k> i need to write proper benchmarks for all of this sometime
<discocaml> <joris0588> data is always key πŸ™‚
<discocaml> <ada2k> my current build does not even have all operations implemented
<discocaml> <joris0588> last year i did some experiment, bypassing every abstraction, a bit similar to zerohttpd in a way
<discocaml> <joris0588> we had some data that was served with some rocksdb thing, and it was not scaling. Mostly the thing was in C
<discocaml> <joris0588> so i tried to write ocaml like you were writing C, the pierre chambart way in a way. Minimize alloc. Just bypass abstraction, and go directly to the io_uring api, reuse buffers
<discocaml> <joris0588> the general idea is, Key value api that reads a key from disk, and dump the value to network. Http is kind of "hard", so it was just tcp. Accept connection, parse key, read index to get offset, fetch data and forward it to tcp without reading the data
<discocaml> <joris0588> value size around ~400kb
<discocaml> <joris0588> the thing reached 40k req/s single core (ocaml 4.14)
<discocaml> <joris0588> the code is really really awful though, like wow you don't want to read this code
<discocaml> <joris0588> that's why it never reached prod
<discocaml> <joris0588> i'm talking cstruct, hand crafted state machine instead of continuation/fiber and reusing alloc as much as possible
<companion_cube> Reusing buffers is important for sure :p
<companion_cube> I think it's indeed more exciting for disk IOs
<companion_cube> Which you're not going to cancel anyway
<discocaml> <joris0588> yeah that is what i meant. In blocking eio you won't cancel anyway. And if it is canceled, usually you are unhappy about it (EINTR)
<companion_cube> Heh
<companion_cube> So I say it's an optim because it's not the thing you should use by default
<discocaml> <joris0588> hm
<companion_cube> Not portable, not flexible enough for composable cancelation, etc
<discocaml> <joris0588> epoll is not portable either
TCZ has joined #ocaml
<companion_cube> But for tasks where you need it (saturating a nvme) you can opt in to it
<companion_cube> And go brrr
<discocaml> <ada2k> the portable option is kqueue and linux decided it was too good for it
<companion_cube> No but there are a lot of layers that cover epoll/kqueue/...
<discocaml> <ada2k> (not that kqueue approaches io_uring)
<discocaml> <joris0588> kqueue is just epoll, slightly better
<discocaml> <ada2k> libuv covers uring too now
<companion_cube> So you can have libuv for the general case and io uring for specific needs
<discocaml> <joris0588> (look i have been a bsd fan too, companion_cube can be a witness)
<discocaml> <ada2k> uring is kind of opt in
<companion_cube> Ahah yes
<companion_cube> Of hipsterbsd
<companion_cube> (I guess the actual name is dragonfly but let's be real)
<discocaml> <ada2k> obv you need to replace your actual event loop code, but beyond that you are submitting the same calls you would under epoll/kqueue/select unless you specifically want to use a uring specific optimisation
<discocaml> <ada2k> and the event loop itself is very well designed, even for cancellation
<discocaml> <joris0588> another angle is that, sometimes, throughput matters and most of the time it does not. cancellation is another important topic
<discocaml> <joris0588> but the last pillar of the trilogy is fairness, latency and starvation
Serpent7776 has quit [Ping timeout: 248 seconds]
<discocaml> <joris0588> and this is a problem we are having a lot at work, with a very big monolith, with like 20 devs writing code into it
<discocaml> <joris0588> and at some point, you end up with starvation. Right now it is still lwt, and just a big queue. It does not work without constant care
<discocaml> <joris0588> and for this reason, i think the focus of the devs who worked on effects, or now picos, on pluggable scheduling
<discocaml> <joris0588> this makes a lot of sense to me. This is really important and a good idea i suspect
<companion_cube> I mean lwt has a lot of bottlenecks inside
<companion_cube> And you pay for a lot of futures and binds
<discocaml> <joris0588> bottleneck is throughput
<discocaml> <joris0588> i'm talking latency
<discocaml> <joris0588> i'm talking murphy's law actually
<discocaml> <joris0588> the thing is, you can have a program (or a distributed system) or anything run well. It runs stable, it works
<discocaml> <joris0588> until it breaks, because it went over some threshold
<discocaml> <joris0588> and when you reach this stage, what matters is that it bounces back into stability and recovers
<discocaml> <joris0588> vs, which is usually the default, entering a self-sustaining loop of starvation
<discocaml> <joris0588> and when you reach that point, this is where scheduling is critical
<discocaml> <joris0588> because the only way to make a system recover when it has gone over capacity is to accept that you need to let some things fail, and still process some
<discocaml> <joris0588> and the important question in this case, is to choose well
<discocaml> <joris0588> and if you just have round robin, or some queue or whatever
<discocaml> <joris0588> it will never choose well
<discocaml> <joris0588> i mean, you can write an http server that processes 50k req/s, this is really good. And it has a lot of components, like it forwards requests, or reads some disks
<discocaml> <joris0588> what if it get 51k/req/s ?
<discocaml> <joris0588> if you just have lwt, what will actually happen is that things will start to fail randomly, being interdependent
<discocaml> <joris0588> and it will drop to being able to process only like 30k req/s
<discocaml> <joris0588> and then it will collapse itself, things will get delayed and delayed even more, wait in queue, get canceled
<discocaml> <joris0588> while in fact you can prioritize groups of tasks, and cancel some of them, groups that are related to a single request, and let it recover.
<discocaml> <joris0588> this is really important
<discocaml> <joris0588> i mean unless you manage to make your thing never be overloaded, and in that case you are really lucky to be able to do that πŸ˜„
<companion_cube> 50 k req/s is not that much
<companion_cube> I think my threaded server can do that on a good machine
<companion_cube> (with a thread pool and explicit accept limit it'll not degrade too much I think, just drop the excess requests)
<discocaml> <joris0588> i just used it as an example, that is not the point. Also it's per core; i think it is kind of ok, depends what you are doing
<companion_cube> Ah yeah sure
<companion_cube> There's a cool recent post about queues
<discocaml> <joris0588> i'm listening
<discocaml> <joris0588> oh yeah i saw it. Pretty cool blog post !
<companion_cube> There's a more recent one, wait
<discocaml> <joris0588> pretty nice indeed
<discocaml> <joris0588> those things though, they matter at the network level. They also matter at the fiber/lwt-thread "group" level
<discocaml> <joris0588> i am actually thinking, it would be nice to have a notion of task.
<companion_cube> ha!
<discocaml> <joris0588> like a group of fiber
<companion_cube> with structured concurrency that'd just be the parent fiber
<discocaml> <joris0588> oh
<companion_cube> (ie the toplevel fiber that's handling a query)
<discocaml> <joris0588> wow
<discocaml> <joris0588> yeah
<companion_cube> (and cancelling it cancels all the related children)
<companion_cube> yeah…
<discocaml> <joris0588> yeah
<companion_cube> hmm, maybe structured concurrency is just, regions for fibers
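In Eio terms that parent scope is the switch (a sketch; the fiber bodies are placeholders): children are forked under it, and cancelling the switch cancels them all:
```ocaml
(* The "task" is the parent scope: both children are attached to [sw], and if
   the switch is cancelled (or either fiber raises), the other child is
   cancelled too before Switch.run returns. *)
let () =
  Eio_main.run @@ fun _env ->
  Eio.Switch.run @@ fun sw ->
  Eio.Fiber.fork ~sw (fun () -> Eio.traceln "handling: read request body");
  Eio.Fiber.fork ~sw (fun () -> Eio.traceln "handling: query backend")
```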
<discocaml> <joris0588> man, did i miss talking with you
<companion_cube> β™₯
Tuplanolla has quit [Quit: Leaving.]
tomku has quit [Ping timeout: 245 seconds]
tomku has joined #ocaml
bartholin has quit [Quit: Leaving]
<discocaml> <ada2k> @joris0588 what do you think about openbsd?
<discocaml> <joris0588> They care.
<discocaml> <ada2k> funnily i’ve had trouble with ocaml 5 on freebsd, and had everything work ootb on openbsd
<discocaml> <ada2k> (this was not very thorough testing however)
<discocaml> <joris0588> yes, because they care.
<discocaml> <ada2k> really nice, unsurprising system
<discocaml> <joris0588> they care about simplicity security robustness
<discocaml> <joris0588> they care hard
<discocaml> <joris0588> of course it means that with limited resources other things will suffer
<discocaml> <joris0588> but i mean. openbsd is also openssh which is already like openssh ?
<discocaml> <joris0588> hard to say πŸ˜„
<discocaml> <ada2k> it’s a small thing but doas really impresses me
<discocaml> <ada2k> a simple tool made for one purpose with direct collaboration from the kernel
<discocaml> <ada2k> refreshing to have a design that feels holistic esp compared to linux
Anarchos has quit [Quit: Vision[]: i've been blurred!]
bibi_ has quit [Quit: Konversation terminated!]
TCZ has quit []
bibi_ has joined #ocaml
torretto has quit [Remote host closed the connection]
torretto has joined #ocaml
lain` has quit [Remote host closed the connection]
lain` has joined #ocaml
lain` has quit [Remote host closed the connection]
lain` has joined #ocaml