companion_cube changed the topic of #ocaml to: Discussion about the OCaml programming language | http://www.ocaml.org | OCaml 4.14.0 released: https://ocaml.org/releases/4.14.0.html | Try OCaml in your browser: https://try.ocamlpro.com | Public channel logs at https://libera.irclog.whitequark.org/ocaml/
<dons> oh i managed to get a .merlin file building flow, so maybe it's ok
<rgrinberg> .merlin files are history
<greenbagels> > opam update 22.59s user 14.12s system 7% cpu 8:24.46 total
<greenbagels> hmm...
<dons> right, so the merlin-based lsif indexer is a dead end I guess?
<dons> just some older projects still have .merlin files floating around
Tuplanolla has quit [Quit: Leaving.]
<greenbagels> an 8.5 minute opam update isn't normal... right?
<greenbagels> let me try checking my disk and internet
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<d_bot> <Ambika E.> opam update by default doesn't do a lot of logging/reporting as to what step it's currently on, right?
<d_bot> <Ambika E.> maybe see if you can get it to be less silent
<d_bot> <orbitz> I think it'd be really useful if there was some sort of file system based common interface for this in dune. It really makes it hard to use the Merlin information for other use cases
<greenbagels> Ambika, yeah i enabled the debug logs; it seems related to the diff issue described on https://github.com/ocaml/opam/issues/3730
<greenbagels> but now the execution is only taking 1 minute or so; i'll keep the debug flags on for future updates to see what's up
gentauro has quit [Ping timeout: 276 seconds]
gentauro has joined #ocaml
leah2 has joined #ocaml
tiferrei has quit [Remote host closed the connection]
tiferrei has joined #ocaml
hyphen has quit [Ping timeout: 258 seconds]
hyphen has joined #ocaml
hyphen has quit [Ping timeout: 244 seconds]
hyphen has joined #ocaml
Haudegen has joined #ocaml
spip has joined #ocaml
bobo has quit [Ping timeout: 255 seconds]
<d_bot> <mbacarella> I think you're supposed to get this info from ocaml-lsp-server now
Serpent7776 has joined #ocaml
mro has joined #ocaml
zebrag has quit [Quit: Konversation terminated!]
gravicappa has joined #ocaml
jsoo_ has joined #ocaml
bacam_ has joined #ocaml
xgqtd has joined #ocaml
spip has quit [*.net *.split]
gopiandcode has quit [*.net *.split]
xgqt has quit [*.net *.split]
jsoo has quit [*.net *.split]
rak has quit [*.net *.split]
nerdypepper has quit [*.net *.split]
bacam has quit [*.net *.split]
gopiandcode has joined #ocaml
spip has joined #ocaml
rak has joined #ocaml
nerdypepper has joined #ocaml
mbuf has joined #ocaml
azimut has quit [Remote host closed the connection]
dwt_ has quit [Ping timeout: 256 seconds]
olle has joined #ocaml
azimut has joined #ocaml
dwt_ has joined #ocaml
olle has quit [Ping timeout: 255 seconds]
bartholin has joined #ocaml
jpds1 has joined #ocaml
jpds has quit [Ping timeout: 240 seconds]
olle has joined #ocaml
perrierjouet has quit [Quit: WeeChat 3.5]
xgqtd has quit [Ping timeout: 240 seconds]
xgqtd has joined #ocaml
bacam_ is now known as bacam
adanwan has quit [Remote host closed the connection]
adanwan has joined #ocaml
xgqtd has quit [Quit: WeeChat 3.4.1]
xgqt has joined #ocaml
wingsorc has quit [Remote host closed the connection]
adanwan has quit [Remote host closed the connection]
mro has quit [Remote host closed the connection]
azimut has quit [Remote host closed the connection]
azimut has joined #ocaml
adanwan has joined #ocaml
<gopiandcode> Hey, anyone know how to change the z3 timeout parameter at runtime multiple times?
<gopiandcode> I know certain goals will be more or less difficult, so I'd like to configure Z3 to spend more or less time proving goals
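A hedged sketch of one way to do this with the Z3 OCaml bindings (the `solver`, `goal` and millisecond values below are illustrative, not from the discussion): rebuild a `Params.params` with a new solver-level "timeout" and reapply it before each `check`.

```ocaml
(* Sketch: per-goal timeout adjustment via Z3's "timeout" parameter (ms). *)
let set_timeout ctx solver ms =
  let params = Z3.Params.mk_params ctx in
  Z3.Params.add_int params (Z3.Symbol.mk_string ctx "timeout") ms;
  Z3.Solver.set_parameters solver params

let check_goal ctx solver ~hard goal =
  (* give hard goals more time than easy ones *)
  set_timeout ctx solver (if hard then 60_000 else 2_000);
  Z3.Solver.check solver [ goal ]
```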
<d_bot> <Le condor du plateau> what is the equivalent `Stdlib.really_input_string` in Lwt ?
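There is no exact drop-in, but a hedged approximation is `Lwt_io.read ~count`, which reads at most `count` bytes and returns fewer only at end of input, so a length check recovers the `really_input_string` contract:

```ocaml
(* Sketch: an Lwt analogue of Stdlib.really_input_string. *)
let really_input_string ic n =
  let open Lwt.Syntax in
  let* s = Lwt_io.read ~count:n ic in
  if String.length s = n then Lwt.return s else Lwt.fail End_of_file
```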
mro has joined #ocaml
Haudegen has quit [Quit: Bin weg.]
<d_bot> <Drup> @octachron if I wanted to do some (reproducible) work on multicore, is there a switch other than 5.0+trunk that I could use ?
vicfred has joined #ocaml
namkeleser has joined #ocaml
rgrinberg has joined #ocaml
azimut has quit [Remote host closed the connection]
azimut has joined #ocaml
mro has quit [Remote host closed the connection]
<sadiq> there may be an alpha on the way very soon
<sadiq> but no, at the moment 5.0+trunk is it; the 4.12+domains/effects switches aren't getting updated anymore
bartholin has quit [Quit: Leaving]
Sankalp has quit [Read error: Connection reset by peer]
Sankalp has joined #ocaml
Sankalp has joined #ocaml
gravicappa has quit [Ping timeout: 276 seconds]
<d_bot> <Drup> ok!
<d_bot> <Drup> sadiq: since you are here, another question: What's the right way to launch a computation with a timeout in multicore ocaml ?
Haudegen has joined #ocaml
<sadiq> that's a good question
<sadiq> I'm not aware of a pattern we're using for doing that at the moment. I wonder whether you could abuse signals and exceptions to do it though.
<d_bot> <Drup> hmm, signals are probably a good idea
<d_bot> <Drup> because I don't want to have to turn my code cooperative
azimut has quit [Remote host closed the connection]
azimut has joined #ocaml
<companion_cube> you have effects, you can just call one regularly to check the timeout?
<companion_cube> or just poll an atomic bool, anyway
<d_bot> <Drup> that would pretty much be cooperative
<companion_cube> you don't have to thread a whole monad through your code, count your blessings
<d_bot> <Drup> true, but still
<companion_cube> if you had fibers you could use Eio cancellation I suppose; otherwise this is the only way
<d_bot> <Drup> ideally, I would really like `with_timeout : float -> (unit -> 'a) -> 'a`
<d_bot> <Drup> (or its equivalent with domains/chans/bla)
<companion_cube> wouldn't we all :p
<d_bot> <Drup> sadiq: what's the behavior of signals and domains ? Are signal handlers per-domains ?
mro has joined #ocaml
mro has quit [Remote host closed the connection]
<sadiq> I was just thinking through that.
<sadiq> Signal handlers are global.
<sadiq> so you'd have to resort to something like SIGALRM or set up an itimer, and then in the OCaml signal handler check the domain you're on and raise an exception if the timeout has passed.
mro has joined #ocaml
<sadiq> you can block SIGALRM on domains you're not interested in having timeouts on.
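A minimal sketch of that itimer/SIGALRM approach for a single domain with a single outstanding timeout (the `with_sigalrm_timeout` name is made up; the caveats below about mixing signals with threads still apply):

```ocaml
exception Timed_out

(* Sketch: arm ITIMER_REAL, raise from the SIGALRM handler, disarm afterwards. *)
let with_sigalrm_timeout seconds f =
  let old =
    Sys.signal Sys.sigalrm (Sys.Signal_handle (fun _ -> raise Timed_out))
  in
  let disarm () =
    ignore (Unix.setitimer Unix.ITIMER_REAL { Unix.it_interval = 0.; it_value = 0. });
    Sys.set_signal Sys.sigalrm old
  in
  ignore (Unix.setitimer Unix.ITIMER_REAL { Unix.it_interval = 0.; it_value = seconds });
  match f () with
  | v -> disarm (); Ok v
  | exception Timed_out -> disarm (); Error `Timeout
```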
<companion_cube> don't mix signals and threads
<companion_cube> it's awful
<companion_cube> (in fact, don't use signals at all, they deeply suck)
<companion_cube> I have to sprinkle mine with stuff like `ignore (Thread.sigmask Unix.SIG_BLOCK [ Sys.sigint; Sys.sigpipe ] : _ list);`
<companion_cube> because it's that shitty
<d_bot> <Drup> mmh, ok, that seems a bit brittle, and incompatible with having multiple timeout-able tasks in //
<d_bot> <Drup> Ok, I'll reconsider the cooperative-ish solution
<d_bot> <Drup> wait, I can use Eio for it, can't I ?
<companion_cube> if you go with fibers, sure, I guess
<d_bot> <Drup> it does mean I have to yield in my tight loop, right ?
<companion_cube> I think so, otherwise you won't poll for cancellation
<companion_cube> (imho the atomic is still more lightweight, it's how some solvers do it)
<d_bot> <Drup> right
<d_bot> <Drup> I barely even need effects for this, it's just slightly more general with it.
<companion_cube> you could use eio in another domain to use the timer and trigger the atomic
<companion_cube> yeah
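A rough sketch of the atomic-flag variant being discussed, using a plain timer thread rather than eio (the `Cancelled` exception and `~check` plumbing are illustrative, and the worker is assumed to call `check` often enough in its tight loop):

```ocaml
exception Cancelled

(* Sketch: a helper thread sleeps for the timeout and flips an Atomic.t
   that the worker polls; no scheduler or cooperative yielding needed. *)
let with_timeout_flag seconds f =
  let stop = Atomic.make false in
  let _timer : Thread.t =
    (* left to expire on its own if f finishes early *)
    Thread.create (fun () -> Thread.delay seconds; Atomic.set stop true) ()
  in
  let check () = if Atomic.get stop then raise Cancelled in
  match f ~check () with
  | v -> Ok v
  | exception Cancelled -> Error `Timeout

(* usage: the tight loop polls the flag on each iteration (or every n iterations) *)
let _ =
  with_timeout_flag 5.0 (fun ~check () ->
      while true do check () (* ... do a chunk of work ... *) done)
```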
gravicappa has joined #ocaml
<sadiq> it's a little sad you end up needing to do that manually when we've essentially already done a lot of work with safepoints on having a really good poll points implementation.
<sadiq> it would be nice to have a mechanism to reuse that.
<companion_cube> like a builtin `with_cancel : bool atomic -> (unit -> 'a) -> 'a` ?
<companion_cube> and the runtime would check the atomic?
<d_bot> <Drup> that would be really nice
<companion_cube> indeed
<d_bot> <Drup> because currently, the answer to "I want to cancel some computation" seems to be 🤷‍♂️
SquidDev has quit [Remote host closed the connection]
<companion_cube> always has been
bartholin has joined #ocaml
<d_bot> <Bluddy> it could be difficult to integrate this kind of functionality into the runtime
<d_bot> <Bluddy> also you may not want to use an atomic. the value is true until any writer sets it to false, and it doesn't matter if the active thread misses the write by a little bit and only gets it later
mbuf has quit [Quit: Leaving]
<sadiq> companion_cube, something like that. Yea. (with_cancel)
<sadiq> actually that would be difficult to implement.
<sadiq> or at least implement efficiently
<companion_cube> you absolutely want an atomic, @Bluddy
SquidDev has joined #ocaml
<companion_cube> I don't think writing a normal reference from another thread is safe?
<companion_cube> sadiq: because the atomic isn't pinned?
<sadiq> no because you'd have to check the atomic at each poll point and that would be bad
<companion_cube> hmm ok
namkeleser has quit [Quit: Client closed]
<companion_cube> I thought poll points were doing a lot more than that already
<companion_cube> although, indeed, in case of nesting it gets bad
<sadiq> you want a cancellation function that when called will cancel the computation
<sadiq> I don't really know what the api would look like for that
<sadiq> (because then when you call that function you can change the allocation pointer to ensure an exception is thrown at the next poll point)
<d_bot> <Bluddy> companion_cube if you're writing to one location in memory, and you have a reader + multiple writers all writing the same value (false), it should be safe
<d_bot> <Bluddy> the worst that could happen is you'll get a delay in detecting the write (the reader will read from that location incorrectly for a short period of time), but that's not a big deal I think in this case
<d_bot> <Bluddy> and you save the cost of atomics
<d_bot> <Drup> personally, I really do not care if the canceling is not super precise, timewise.
<companion_cube> @Bluddy I don't know exactly the memory model for OCaml 5
<companion_cube> I do think that what you suggest is bad in C++
<companion_cube> you can't assume anything about non-atomic accesses done racily
<d_bot> <Bluddy> the only danger is if the write is removed as an optimization. you may need to declare it volatile for this reason.
<companion_cube> I think that's really dangerous
<d_bot> <Bluddy> but other than that, it should be completely fine
<d_bot> <Bluddy> not really
<d_bot> <Bluddy> atomics are only necessary if there is real contention. if some of those writers were writing true and some were writing false, you'd have to have atomics.
<companion_cube> it does seem that what you say is UB
<companion_cube> (with the C++ memory model)
<d_bot> <Drup> I think the choice of whether to use atomics or not is pretty minor compared to the question of the API :p
<d_bot> <Bluddy> companion_cube: in theory that's true, but in reality, you have a bunch of writers where the *only option* is to write false
<d_bot> <Bluddy> so it's a special case. there is no other possible value that could exist there (other than the true that was there initially)
<companion_cube> so what, you still can't assume the write will go well
<companion_cube> there's a reason atomic exists
<d_bot> <Bluddy> sure you can. it will eventually.
<companion_cube> no, it might write garbage
<d_bot> <Bluddy> how?
<companion_cube> anyway, OCaml's memory model might be a bit stronger
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<d_bot> <octachron> You cannot use references for communication ever: no memory model guarantees that you will ever see a write.
<d_bot> <Bluddy> this is used in many real time systems to avoid the extra synchronization costs of atomics
<d_bot> <Bluddy> not references as in OCaml, but addresses in memory
<companion_cube> maybe you can do that with assembly if you know your hardware
<companion_cube> but don't do it in C++ :D
<d_bot> <Bluddy> it's done in C++ as well
<companion_cube> then it's incorrect :)
<companion_cube> not that it's a big surprise, of course, C++ makes it easy to be incorrect
<haesbaert> if you're not properly using membars, it's very likely incorrect
<d_bot> <octachron> For instance,
<d_bot> <octachron> ```ocaml
<d_bot> <octachron> cell = ref true
<d_bot> <octachron> (* Domain A *) | (* Domain B *)
<d_bot> <octachron> cell := false | while true do f !cell done
<d_bot> <octachron> ```
<d_bot> <octachron> can be correctly optimized to
<d_bot> <octachron> ```ocaml
<d_bot> <octachron> cell = ref true
<d_bot> <octachron> (* Domain A *) | (* Domain B *)
<d_bot> <octachron> cell := false | while true do f true done
<d_bot> <octachron> ```
<companion_cube> so… UB?
<haesbaert> ^, no, since the writes are "atomic" with respect to the write itself
<haesbaert> assuming it's aligned
<d_bot> <octachron> And the first answer tells you that the standard behavior is that it triggers a "may the world burn" UB.
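For contrast, the data-race-free version of octachron's example routes the flag through `Atomic.t`, whose writes the OCaml 5 memory model does guarantee the other domain will eventually observe (sketch):

```ocaml
let cell = Atomic.make true

(* Domain A *)
let stop () = Atomic.set cell false

(* Domain B: the loop is guaranteed to eventually see the write and exit *)
let run f = while Atomic.get cell do f () done
```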
kakadu_ has joined #ocaml
kakadu has quit [Ping timeout: 256 seconds]
<haesbaert> reader is free to never see any of the writes, x86 would guarantee the writes are not seen in the incorrect order though
<companion_cube> but ARM is less forgiving I think
<haesbaert> on x86 basically the only re-ordering is a load of A preceding a store of B
<haesbaert> so it can speculate loads of different addresses "in front of stores"
<haesbaert> everything else is strictly ordered
<haesbaert> arm allows load vs load, store vs store and store vs load
<haesbaert> and alpha and powerpc are a clusterfuck :O
<d_bot> <Bluddy> looks like the proper way to do this in modern C++ is to use an atomic with memory_order_relaxed, which doesn't actually activate the atomic
rgrinberg has joined #ocaml
<haesbaert> relaxed basically doesn't do anything that a cast to volatile wouldn't do
<companion_cube> it at least avoids UB
<companion_cube> tells the compiler not to fuck up your code
<companion_cube> I think that's useful
<haesbaert> "the nice way to do" is the reader to use consume and write to to use store
<haesbaert> aye, but it's semantically equivalento to a volatile cast, like linux does with WRITE_ONCE
<d_bot> <octachron> @Drup, but yes, there should be a 5.0 branch and an alpha0 release soonish (around end of May/beginning of June).
<companion_cube> it's not like linux is written in compliant C
<companion_cube> they can't
<d_bot> <Bluddy> haesbaert: this is correct. but it's future proof against a platform that could mess up
<haesbaert> why not ? C doesn't specify architecture memory ordering
<companion_cube> ah, for other reasons I mean.
<companion_cube> linus seems to rant a lot on the C standard
<haesbaert> well yeah they abuse stuff, but frankly, everyone does; named structure members are already not valid C89, but people have done it since forever
<d_bot> <Bluddy> linux of course likes to disable a lot of compiler optimizations
<d_bot> <Bluddy> they throw out whatever they don't like
<companion_cube> @Bluddy probably because it'd break the code
<haesbaert> well most compiler optimizations are broken on some level
<d_bot> <Bluddy> they're definitely playing with fire very often
<haesbaert> O3 produces unreliable code on gcc on anything other than amd64
<d_bot> <Bluddy> I hate the whole aliasing thing they added to C. it's awful
<d_bot> <Bluddy> I'm happy Linus disabled that one
<companion_cube> so for OCaml, anyway, idk what the memory model guarantees, but the runtime should use atomics whenever there's shared data
<haesbaert> "basically" memory ordering is only a concern when you're doing fancy stuff (aka lockless datastructures and so on)
<haesbaert> the idea in ocaml is to center it around Atomic.t
<companion_cube> and it always use a strong ordering, doesn't it?
<haesbaert> well, on x86 an atomic.store has an actual LOCK which is a strong membar
<d_bot> <Bluddy> with *some* notion of strong ordering, based on the famous paper I skimmed through
<companion_cube> but x86 is not the only architecture
<d_bot> <Bluddy> (famous ocaml paper)
<companion_cube> if I write Ocaml I don't particularly want to assume a given architecture
<companion_cube> I'd rather have my code be correct everywhere
<haesbaert> but the load doesn't imply anything according to kc, but I remember it imposed an lfence (which is virtually nothing)
<d_bot> <Bluddy> right supposedly it has a small cost on arm?
<haesbaert> companion_cube: yeah that's the idea, you shouldn't if you are, there is abug
<haesbaert> 99% of the people will use locks, where they never have to care about ordering, be it in C or ocaml
Haudegen has quit [Quit: Bin weg.]
<companion_cube> I'm definitely going to use a ton of atomics, sorry :p
<haesbaert> oh well me too, WE ARE THE ONE PERCENT
<haesbaert> rejoice
<d_bot> <octachron> Even the stdlib runtime tends to use lock most of the time.
<companion_cube> I guess that now, the performance of Mutex.t is going to be a hot, hot topic
<haesbaert> but the plan is to never expose things like membars, and to have a stronger (even if slower) memory model, that was my understanding, so take it with a grain of salt
<companion_cube> not sure if it's just an atomic on the happy path (futex)?
<d_bot> <Bluddy> I think it should be
<haesbaert> it's probably a pthread_mutex_lock underneath (really just guessing here)
mro has quit [Remote host closed the connection]
<haesbaert> which (I think) are a futex on linux ? dunno much about linux code
kakadu has joined #ocaml
<companion_cube> not sure either
<haesbaert> on obsd it was an atomic_cmpset + sched_yield() (embarrassing AF)
<companion_cube> last time I checked, Mutex was quite fast with no contention
<companion_cube> like 15ns or sth like that on my machine for a do nothing critical section
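A rough sketch of the microbenchmark behind that kind of figure (uncontended, empty critical section; it ignores warm-up and the usual benchmarking caveats, and the numbers are machine-dependent):

```ocaml
(* Sketch: time n uncontended lock/unlock pairs (needs the threads library). *)
let bench_mutex n =
  let m = Mutex.create () in
  let t0 = Unix.gettimeofday () in
  for _ = 1 to n do
    Mutex.lock m;
    Mutex.unlock m
  done;
  let t1 = Unix.gettimeofday () in
  Printf.printf "%.1f ns per lock/unlock\n" ((t1 -. t0) *. 1e9 /. float_of_int n)

let () = bench_mutex 10_000_000
```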
<haesbaert> I think you also have to consider the scale of things being done in ocaml
Tuplanolla has joined #ocaml
<haesbaert> when your workload is in the 100k+ cycles, who cares about an atomic lock that takes ~100
<companion_cube> 15ns per iteration on my machine
<haesbaert> o The cost for an uncontested operation is never higher than the
<haesbaert> maximum cost of a single compare-and-set operation, average cost on a
<haesbaert> cached situation is less than 50cycles per pair of lock/unlock on my
<haesbaert> haswell i5 2.4ghz,
<companion_cube> haesbaert: I don't really agree, perf of primitives is critical since it's what you build everything else on top of
kakadu_ has quit [Ping timeout: 276 seconds]
<companion_cube> if you need fine grained concurrency then this kind of thing matters a lot
<haesbaert> companion_cube: only if your workload is very very tiny
<haesbaert> if it takes 100 or 500 cycles and you do 100k cycles between them, who cares ?
<haesbaert> not to say you shouldn't make it as fast as possible, but scale must be considered
<d_bot> <octachron> Yes, looking at the code, it is a pthread_mutex_lock.
<haesbaert> we're talking about locks that block here, that's a ton of cycles
<companion_cube> haesbaert: say I need to lock every time I insert into a hashtbl?
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<companion_cube> (in a piece of code that does that a lot)
<companion_cube> I don't think inserting is 100k cycles in the easy case
<haesbaert> yes yes, totally agree
<companion_cube> if there's contention or hashtable redimension or whatever, sure
<companion_cube> resize*
<haesbaert> the 50 cycles thing I wrote before was on a spinning mutex I wrote for bitrig, I mean 50cycles is really nothing
<haesbaert> but if you're doing an atomic outside of your L1, where MESI is fighting for cachelines across memory, it can get real slow
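The hashtable case is the canonical fine-grained example; a minimal sketch of such a guarded insert (one global lock, nothing clever), where every insert pays an uncontended lock/unlock:

```ocaml
(* Sketch: a Hashtbl shared between domains, guarded by a single Mutex. *)
type ('k, 'v) guarded = { lock : Mutex.t; tbl : ('k, 'v) Hashtbl.t }

let create () = { lock = Mutex.create (); tbl = Hashtbl.create 64 }

let insert t k v =
  Mutex.lock t.lock;
  Fun.protect
    ~finally:(fun () -> Mutex.unlock t.lock)
    (fun () -> Hashtbl.replace t.tbl k v)
```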
rgrinberg has joined #ocaml
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mro has joined #ocaml
zebrag has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
spip has quit [Ping timeout: 246 seconds]
bobo has joined #ocaml
mro has quit [Remote host closed the connection]
<d_bot> <Drup> Sooo, I did a thing
<d_bot> <Drup> The API looks like this:
<d_bot> <Drup> ```
<d_bot> <Drup> val check : unit -> unit
<d_bot> <Drup> val with_timeout : float -> (unit -> 'a) -> ('a, unit) result
<d_bot> <Drup> ```
<d_bot> <Drup>
<d_bot> <Drup> It's not even using multicore ocaml yet (still using `Atomic`).
<d_bot> <Drup> It's very much cobbled together with a hammer by someone who doesn't know about non-cooperative multitasking 😄
<companion_cube> how is the state passed to `check`? :)
<d_bot> <Drup> https://bpa.st/3POQ
<d_bot> <Drup> there is only one state. Nesting is not allowed, it's sequential only now 🙂
<d_bot> <Drup> (a proper //-ready version would be very welcome)
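The paste has since expired; a guess at what a sequential, single-state implementation matching that signature could look like (one global deadline, no nesting):

```ocaml
(* Hypothetical reconstruction, not the actual paste. *)
exception Timeout

let deadline : float option Atomic.t = Atomic.make None

let check () =
  match Atomic.get deadline with
  | Some d when Unix.gettimeofday () > d -> raise Timeout
  | _ -> ()

let with_timeout t f =
  Atomic.set deadline (Some (Unix.gettimeofday () +. t));
  Fun.protect
    ~finally:(fun () -> Atomic.set deadline None)
    (fun () -> match f () with v -> Ok v | exception Timeout -> Error ())
```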
<companion_cube> an issue with that, often, is the lack of thread-local storage
<companion_cube> (one could keep a stack of states per thread)
<companion_cube> I'd also recommend mtime :)
olle has quit [Ping timeout: 258 seconds]
<d_bot> <Drup> well, there is domain-local storage
<d_bot> <Drup> I would probably do it like this.
<companion_cube> ohhhhh nice
<companion_cube> Domain.DLS? perfect
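A sketch of what the //-ready version could look like with `Domain.DLS` (OCaml 5 only; the deadline lives in domain-local storage, so each domain gets independent timeout state):

```ocaml
exception Timeout

let deadline_key : float option Domain.DLS.key =
  Domain.DLS.new_key (fun () -> None)

let check () =
  match Domain.DLS.get deadline_key with
  | Some d when Unix.gettimeofday () > d -> raise Timeout
  | _ -> ()

let with_timeout t f =
  let previous = Domain.DLS.get deadline_key in
  Domain.DLS.set deadline_key (Some (Unix.gettimeofday () +. t));
  Fun.protect
    ~finally:(fun () -> Domain.DLS.set deadline_key previous)
    (fun () -> match f () with v -> Ok v | exception Timeout -> Error ())
```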
<companion_cube> ahahahhahhah and we're getting In_channel.input_all
<companion_cube> what a crazy world we live in
Haudegen has joined #ocaml
mro has joined #ocaml
olle has joined #ocaml
<d_bot> <orbitz> I'd recommend that instead of timeout taking a time, it take something else that becomes realized later on. That way you can time out on lots of things
rgrinberg has joined #ocaml
vicfred has quit [Quit: Leaving]
aspe has joined #ocaml
aspe has quit [Quit: aspe]
aspe has joined #ocaml
aspe has quit [Client Quit]
aspe has joined #ocaml
aspe has quit [Client Quit]
aspe has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
mro has quit [Read error: Connection reset by peer]
mro has joined #ocaml
<d_bot> <mbacarella> i just saw a PR involving this code:
<d_bot> <mbacarella> `let%map_open.Command tag_flag = Db.tag_flag in fun () ->`
<d_bot> <mbacarella> the part that's wild is the module name right after the binding? i've never seen that form before?
<d_bot> <mbacarella> what does that mean?
<d_bot> <Anurag> ppx_let supports qualifying the `%bind`/ `%map` etc with a module name. It will look for `<Module_name>.Let_syntax` and use that for expanding the ppx
<d_bot> <mbacarella> sorry, i meant, is there a name for that?
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<d_bot> <mbacarella> i thought it was a more general compiler feature, i guess it's all in ppx_let?
<d_bot> <Anurag> this particular feature is a ppx_let specific thing
aspe has quit [Quit: aspe]
mro has quit [Quit: Leaving...]
spip has joined #ocaml
bobo has quit [Ping timeout: 244 seconds]
<d_bot> <leviroth> In ppx_let it will open a module called `Open_on_rhs` from the given module, and it will be open in the rhs but not the body of the binding @pilothole
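For reference, the form in question looks roughly like this with Core's Command (a hedged sketch; the flag name and summary are made up): `%map_open.Command` expands via `Command.Let_syntax` and opens `Command.Let_syntax.Open_on_rhs` (where `flag`, `no_arg`, `anon`, ... live) on the right-hand sides only.

```ocaml
open Core

(* Sketch: a basic command whose parameters are built with let%map_open.Command.
   Run it with Command_unix.run (or Core.Command.run in older Core releases). *)
let command =
  Command.basic ~summary:"demo of let%map_open.Command"
    (let%map_open.Command verbose =
       flag "-verbose" no_arg ~doc:" print more output"
     in
     fun () -> if verbose then print_endline "hello")
```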
gravicappa has quit [Ping timeout: 244 seconds]
jpds1 has quit [Ping timeout: 240 seconds]
jpds1 has joined #ocaml
pieguy128_ has quit [Quit: ZNC 1.8.2 - https://znc.in]
pieguy128 has joined #ocaml
aspe has joined #ocaml
aspe has quit [Client Quit]
williewillus has quit [Quit: The Lounge - https://thelounge.chat]
xgqt has quit [Quit: WeeChat 3.4.1]
xgqt has joined #ocaml
williewillus0 has joined #ocaml
williewillus0 is now known as williewillus
rgrinberg has joined #ocaml
Serpent7776 has quit [Ping timeout: 255 seconds]
<d_bot> <geoff>
<d_bot> <geoff> After reading your GADT comments yesterday and replying, I had a bit of a lightbulb go off and realized with what I knew now about GADTs (more than when I initially wrote this thing that uses them) I could actually achieve the restrictions that I wanted without messing up the ergonomics of the API (proliferating type specific functions, making it more different from OpenSCAD).
<d_bot> <geoff>
<d_bot> <geoff> Now I am using the parameters of the GADT `t` to restrict transforms on 2d objects to 2d, and rotations to only around the z-axis (see `translate`, `rotate`, and `rotate_about_pt`). Before, even though I prevented mixing of 2d and 3d in boolean operations, shapes in 2d would be transformed by functions that would still be taking 3d vectors, so it would be up to the user to not do things that didn't make sense (like it is in the dyn
<d_bot> <geoff>
<d_bot> <geoff> Anyway, just wanted to share since I was pretty satisfied with what I came up with, and it was seeing you glow about GADTs that made me think about this problem again.
<d_bot> <mbacarella> nice! 😀 this is an aspect I haven't even explored yet myself
<d_bot> <mbacarella> i will definitely be referencing this in the future
<d_bot> <geoff> Yea, besides this I had only used them to change the return type, or have restrictions on list mixing like I had before. The realization that I could have different ***input*** types is something that I hadn't had until yesterday
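An illustrative sketch of the trick (not geoff's actual code): a phantom dimension parameter on the GADT lets a single `translate` accept only dimension-appropriate vectors, while 2d shapes only admit z-axis rotation.

```ocaml
type two_d
type three_d

(* dimension-indexed vectors, so one [translate] serves both 2d and 3d *)
type _ vec =
  | V2 : float * float -> two_d vec
  | V3 : float * float * float -> three_d vec

type _ t =
  | Square : float -> two_d t
  | Cube : float -> three_d t
  | Union : 'd t list -> 'd t                  (* mixing 2d and 3d is rejected *)
  | Translate : 'd vec * 'd t -> 'd t
  | RotateZ : float * 'd t -> 'd t             (* the only rotation valid in 2d *)
  | Rotate3 : (float * float * float) * three_d t -> three_d t

let translate v s = Translate (v, s)

(* let bad = translate (V3 (1., 0., 0.)) (Square 2.)   (* type error *) *)
```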
bartholin has quit [Quit: Leaving]
Tuplanolla has quit [Quit: Leaving.]
bastienleonard has joined #ocaml
tiferrei has quit [Ping timeout: 240 seconds]
tiferrei has joined #ocaml
<bastienleonard> hi, I am confused about why this code doesn't work: https://try.ocamlpro.com/#code/let'x'='3!print_endline'$(Test$(;!!(*'This'works:'*)!(*'let'x'='3'*)!(*'let'_'='print_endline'$(Test$('*)
<bastienleonard> I'm tempted to make it work with `;;`, but I read here that `;;` should never be used in source code: https://baturin.org/docs/ocaml-faq/#the-double-semicolon
<bastienleonard> surely this is a very basic question, but I couldn't find the answer
<d_bot> <NULL> `let x = e` is a top-level definition. In a source file, you must chain definitions, unless you put `;;`, which allows you to have top-level expressions
bgs has quit [Read error: Connection reset by peer]
<bastienleonard> thanks, I didn't encounter this information yet
bgs has joined #ocaml
<bastienleonard> is there a particular reason for this? E.g. does it make parsing easier?
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
olle has quit [Ping timeout: 258 seconds]
<d_bot> <NULL> For not allowing top-level expressions? Since definitions don't have a closing token, it would be impossible to tell where one ends if not for the start of the following definition
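Concretely, for a file like the linked snippet:

```ocaml
(* Rejected: after [let x = 3] the parser keeps consuming, so the following
   expression is read as part of the definition. *)
(* let x = 3
   print_endline "Test" *)

(* Idiomatic fix: make the expression a definition too. *)
let x = 3
let () = print_endline "Test"

(* Works but discouraged in source files: terminate with ;; so a bare
   top-level expression is allowed. *)
(* let x = 3;;
   print_endline "Test";; *)
```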
perrierjouet has joined #ocaml
rgrinberg has joined #ocaml
gopiandcode has quit [Ping timeout: 246 seconds]
gopiandcode has joined #ocaml
jpds1 has quit [Ping timeout: 240 seconds]
wingsorc has joined #ocaml
jpds1 has joined #ocaml
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]