<discocaml>
<cemerick> I don't see any functions in the "cone" of its definition at all.
<companion_cube>
wait, is this with JST stuff? Float.Map might contain a first-class module if so
<discocaml>
<cemerick> OHHHH
<discocaml>
<cemerick> yeah, that's probably it
<companion_cube>
(ie a record of functions)
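A hedged illustration of the failure mode being diagnosed here: Marshal refuses any value containing a closure, which is what a Core-style `Float.Map.t` carries via its first-class comparator module.

```ocaml
(* Marshal fails on closures unless the Closures flag is passed; a Core map's
   comparator is (morally) a record of functions, hence the same failure. *)
let () =
  let f x = x + 1 in
  match Marshal.to_string f [] with
  | _ -> assert false
  | exception Invalid_argument msg -> print_endline msg
  (* prints something like: output_value: functional value *)
```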
<companion_cube>
marshal is a marsh mistress
<discocaml>
<cemerick> That's absolutely it
<discocaml>
<cemerick> Thanks companion_cube
<companion_cube>
👍
<discocaml>
<cemerick> count another win for functors 😬
<discocaml>
<cemerick> ech, this is going to be a pita to deal with
ced2 is now known as cedb
<discocaml>
<cemerick> (I'm intrigued by the parameters of "a marsh mistress")
<discocaml>
<cemerick> So do folks that use jane street stuff just not use Marshal at all?
<discocaml>
<cemerick> just pervasive sexprs everywhere I suppose
<companion_cube>
avoiding marshal is probably a good idea
<companion_cube>
it's dangerous
<companion_cube>
at JST they have binprot and an army of ppx
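A hedged sketch of that style: derive the serializers (bin_prot, sexp) instead of relying on Marshal. This assumes the `core` library and `ppx_jane`; the `point` type is only an example.

```ocaml
open Core

type point = { x : float; y : float } [@@deriving bin_io, sexp]

(* sexp_of_point and the bin_prot writers/readers are generated by the ppx *)
let () = print_s (sexp_of_point { x = 1.; y = 2. })
```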
<discocaml>
<octachron> I imagine that Marshal's security model (any marshalled file has the same rights as your executable) is also not a good fit for them.
<companion_cube>
right, segfault is if you're lucky
<discocaml>
<cemerick> yeah, those constraints make sense, tho it's unfortunate when JS constraints end up tainting a general-purpose lib like this 😕
<companion_cube>
it's just that they don't care about marshal at all
<discocaml>
<cemerick> sure
<companion_cube>
and yeah, in a way marshal is only a degree above Obj.magic in terms of yolo
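A hedged illustration of the "one degree above Obj.magic" point: `Marshal.from_string` is untyped, so reading data back at the wrong type typechecks and then misbehaves or crashes at runtime.

```ocaml
let s = Marshal.to_string (1, 2) []

(* compiles fine; any use of [bad] is undefined behaviour *)
let bad : string list = Marshal.from_string s 0
```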
<discocaml>
<cemerick> I've had a TODO to eliminate my use of marshal for caching certain structures, but it's been open for ~3 years so 🤷
<discocaml>
<cemerick> I guess I'll just have to keep the tdigest stuff out of cache, tho that'll defeat at least half of its purpose
<companion_cube>
ahah yeah I know
<companion_cube>
when I use marshal I never use output_value though
<companion_cube>
(I write into a buffer, so I can know the length and do some sort of prefix-length framing)
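A hedged sketch of the prefix-length framing described above: marshal into memory first so the length is known, then emit the length followed by the payload. The fixed 8-byte decimal header is an assumption for the sketch; any framing convention works.

```ocaml
let write_framed (oc : out_channel) v =
  let payload = Marshal.to_string v [] in
  Printf.fprintf oc "%08d%s" (String.length payload) payload

let read_framed (ic : in_channel) =
  let len = int_of_string (really_input_string ic 8) in
  Marshal.from_string (really_input_string ic len) 0
```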
<discocaml>
<cemerick> it all grounds out in caml_output_value so that part is a wash
<discocaml>
<cemerick> I should also use a buffer, for the same and also other reasons, but the data being cached is so small `Marshal.to_string` is _fine_
yoyofreeman has joined #ocaml
<companion_cube>
:)
cr1901__ is now known as cr1901
rgrinberg has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<discocaml>
<cemerick> It sure would be nice to have rust-esque `.0`, `.1`, etc accessors for tuples
sim642 has quit [K-Lined]
sim642 has joined #ocaml
dstein64- has joined #ocaml
<companion_cube>
sure would…
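For context, a hedged aside on what OCaml offers today instead of Rust's `.0`/`.1`: `fst`/`snd` for pairs, and pattern matching in general.

```ocaml
let pair = (3.14, "pi")

(* like pair.0 and pair.1 *)
let () = Printf.printf "%g %s\n" (fst pair) (snd pair)

(* pattern matching is the general form for wider tuples *)
let () = match pair with v, name -> Printf.printf "%g %s\n" v name
```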
dstein64 has quit [Ping timeout: 245 seconds]
dstein64- is now known as dstein64
dhil has joined #ocaml
deadmarshal_ has quit [Quit: IRCNow and Forever!]
dhil has quit [Ping timeout: 246 seconds]
yoyofreeman has quit [Remote host closed the connection]
deadmarshal_ has joined #ocaml
<discocaml>
<leostera> hi folks, i'm writing a dune `dialect` and i'm running into a small issue – when i call `dune build @fmt --auto-promote` I was expecting as usual to just format my OCaml code, but this overrides the source files of the dialect as well. I'm sure I'm missing something here, but I can't find any flags I should be using to prevent this?
waleee has quit [Ping timeout: 246 seconds]
bartholin has joined #ocaml
bartholin has quit [Quit: Leaving]
azimut has quit [Ping timeout: 246 seconds]
Hammdist has left #ocaml [#ocaml]
<discocaml>
<Et7f3 (@me on reply)> When we see log4j, PHAR attacks, plus the format evolving in case OCaml changes, I see it as a good thing
ursa-major has joined #ocaml
<discocaml>
<leostera> where can i read more about the transition from camlp4 to ppx? 🤔
<companion_cube>
😅 sounds like investigative journalism, where you'd have to track the various actors and interview them
<companion_cube>
"ah it's in the past, but the pain remains"
<discocaml>
<leostera> honestly i'm just wondering under what circumstances the decision to go with limited/directed extension points was taken
<discocaml>
<leostera> but yeah maybe it is a bit journalistic haha
<greenbagels>
when looking online for arbitrary precision integers in OCaml, some search results are old documentation from OCaml 4.01; but I've noticed since then there is no standard big int module; is this because Zarith has supplanted it?
<companion_cube>
indeed, zarith subsumed it
<companion_cube>
@leostera: camlp4 is at odds with tooling such as merlin/LSP
<companion_cube>
ppx/extension points were a solution to that (also to perf issues I think)
<greenbagels>
companion_cube: thanks!
<discocaml>
<leostera> @companion_cube do you remember any specific extensions that merlin/lsp would just not support?
<discocaml>
<leostera> i'm trying to find the overlap between ppx, camlp4, and say rust proc macros
<companion_cube>
I mean, merlin didn't support _any_ camlp4 extension, afair
<companion_cube>
it would have needed an extensible parser
<companion_cube>
ppx exists because everybody agrees on the same AST (just like with rust macros)
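A hedged illustration of the shared-AST point: extension points and attributes are part of OCaml's own grammar, so the stock parser (and hence merlin) accepts them before any rewriter runs. `ppx_deriving`'s `show` is just an example rewriter here.

```ocaml
type point = { x : int; y : int } [@@deriving show]

(* show_point is generated by the deriver; [%show: ty] is an extension node
   the plain parser already understands *)
let () = print_endline (show_point { x = 1; y = 2 })
let () = print_endline ([%show: int list] [1; 2; 3])
```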
<discocaml>
<cemerick> proc macros are just unhygienic; camlp4 is more akin to having control over a lisp's read table
<discocaml>
<rgrinberg> Merlin and dune can work with generic preprocessors like camlp4
<discocaml>
<rgrinberg> Not 100%, but good enough
<companion_cube>
sure, you just lose all error recovery?
<companion_cube>
you do mean the same way that dialects are handled right?
<discocaml>
<rgrinberg> You can get some back if you write a merlin extension
<discocaml>
<rgrinberg> We support reason “ok”
<discocaml>
<rgrinberg> So it’s not entirely out of reach
<discocaml>
<cemerick> so it'd be a merlin extension per camlp4 extension
<companion_cube>
if you have an entirely different parser, yeah, it works
<companion_cube>
anyway the other thing was that camlp4 doesn't compose iirc
<companion_cube>
2 extensions that don't know about each other will be unhappy
<discocaml>
<rgrinberg> Only for the recovery
<discocaml>
<rgrinberg> Ppx composes poorly as well without additional machinery like ppxlib
<companion_cube>
but it does compose
<companion_cube>
you have one AST for everybody
<companion_cube>
with camlp4 you'd just get a parse error when an extension meets syntax meant for the other extension
<companion_cube>
(if handling preprocessors in merlin+dune was easy, wouldn't we have had cppo support for years?!)
<discocaml>
<rgrinberg> Don’t we already?
<discocaml>
<rgrinberg> I use cppo without much problems
<discocaml>
<rgrinberg> We just don’t support cppo and ppx simultaneously
<companion_cube>
ha! TIL
<companion_cube>
you just use `(preprocess (… cppo))`?
<discocaml>
<rgrinberg> Yeah
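The stanza abbreviated as `(preprocess (… cppo))` above is left as written; as a hedged, generic sketch (not necessarily what rgrinberg uses), one common shape for driving cppo from dune's preprocess field looks roughly like this:

```
; illustrative library name and flags; the exact invocation is project-specific
(library
 (name mylib)
 (preprocess
  (action (run %{bin:cppo} -V OCAML:%{ocaml_version} %{input-file}))))
```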
<discocaml>
<cemerick> how? Anytime I browse into a file that uses cppo, nothing works (no types, no go to definition)
<discocaml>
<cemerick> I figured that's just the way it goes
<discocaml>
<rgrinberg> Show me an example project
<discocaml>
<cemerick> yojson
<companion_cube>
@rgrinberg but all that only works because now merlin and dune communicate
<companion_cube>
in 2013 you just had .merlin files and good luck with making merlin understand/preprocess camlp4 extensions
<discocaml>
<rgrinberg> Yojson uses code generation rather than the preprocess field
<discocaml>
<rgrinberg> Merlin ended up getting -pp support sometime after 2013
<companion_cube>
true, true
<companion_cube>
but it applied to a whole directory, as well, didn't it?
<discocaml>
<rgrinberg> That was fixed a few years ago
<companion_cube>
:D
<companion_cube>
yes, 2013 is 10 years ago though
<discocaml>
<cemerick> ok, so it uses cppo, just not via `preprocess`
<companion_cube>
in the meantime ppx rose and camlp4 more or less died
<discocaml>
<leostera> one big big difference here is that proc macros in rust get token trees, so whatever's a valid rust token goes
<companion_cube>
to the point that Chet is trying to resurrect camlp4 as a ppxlib alternative, if I understand correctly
<discocaml>
<cemerick> I'm sure I've had the same outcome in other cases; anytime I see cppo stuff, I just (accurately) presume vscode/merlin isn't going to work 🤷
<discocaml>
<leostera> so you don't get fed an ast, and instead can cook up any syntax
<discocaml>
<leostera> and this doesn't get in the way of LSPs or whatnot 🤷🏼‍♂️
<discocaml>
<cemerick> companion_cube: I think any demand for camlp5 would go poof if ocaml macros were a thing
<companion_cube>
@cemerick I now have my little cppo-like thingie, which uses `[@@@ifge 4.08]` stuff
<companion_cube>
this way LSP mostly works
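A hedged sketch in the spirit of that approach. `[@@@ifge 4.08]` comes from the conversation above; the closing `[@@@endif]` is a hypothetical companion, not necessarily the tool's real syntax. The point is that attributes are ordinary OCaml syntax, so the unpreprocessed file still parses and merlin/LSP mostly keep working.

```ocaml
[@@@ifge 4.08]
let getenv_opt = Sys.getenv_opt
[@@@endif]
```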
<companion_cube>
although, with what @rgrinberg says, I could probably use invalid syntax and LSP would pick up the preprocessed file? hum
<discocaml>
<rgrinberg> As long as your preprocessor produces something it should work
<companion_cube>
yeah that's really nice.
ns12 has quit [Quit: bye]
<companion_cube>
guess I could make it more readable then 😅
<companion_cube>
(it's in use in containers and moonpool, so far)
ns12 has joined #ocaml
myrkraverk has quit [Read error: Connection reset by peer]
myrkraverk has joined #ocaml
dnh has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
rgrinberg has joined #ocaml
dhil has joined #ocaml
rgrinberg has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
rgrinberg has joined #ocaml
waleee has joined #ocaml
dhil has quit [Ping timeout: 260 seconds]
rgrinberg has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<companion_cube>
@rgrinberg: I think y'all fixed so many issues with merlin, in dune and ocaml-lsp-server, that you don't remember what it was before :)
<companion_cube>
a testament to the progress made, really
<companion_cube>
(btw dune expect tests are still amazing)
rgrinberg has joined #ocaml
<discocaml>
<rgrinberg> you mean ppx_expect?
<companion_cube>
nah, just regular foo.ml + foo.expected
<companion_cube>
(who needs a ppx for that?)
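A hedged sketch of the plain `foo.ml` + `foo.expected` pattern being described, using standard dune actions; file names are illustrative and a `(test)` or `(executable)` stanza for `foo` is assumed elsewhere.

```
(rule
 (with-stdout-to foo.output
  (run ./foo.exe)))

(rule
 (alias runtest)
 (action (diff foo.expected foo.output)))
```

With this, `dune build @runtest --auto-promote` refreshes `foo.expected` from the actual output.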
<discocaml>
<rgrinberg> it's a little more convenient to interleave the tests and the results 😛
bartholin has joined #ocaml
<rgrinberg>
companion_cube perhaps you could introduce a "seekable" type and use it to encode file descriptors?
<rgrinberg>
or at least encode the seekable kinds of file descriptors
<companion_cube>
you mean with a phantom type or something?
<companion_cube>
given your other remarks, it feels like you want at least one type parameter to encode stuff like "works with bigarrays", "seekable", etc. am I right?
<rgrinberg>
not necessarily, I would imagine type seekable = { pos : unit -> int ; seek : int -> unit }
<rgrinberg>
and something that is both would be type in_seekable = seekable * In.t
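A hedged sketch of the record-based design being floated. The `In` module below is a stand-in for iostream's input type, not its real interface, and the field names are illustrative.

```ocaml
module In = struct
  type t = { read : bytes -> int -> int -> int }  (* hypothetical minimal reader *)
end

type seekable = {
  pos : unit -> int;   (* current offset *)
  seek : int -> unit;  (* absolute seek *)
}

(* something that is both, as suggested above *)
type in_seekable = seekable * In.t
```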
<companion_cube>
oh, more types, ok
<rgrinberg>
for bigarrays vs strings, I would use a functor. rarely one wants to write polymorphic code here
<companion_cube>
I guess I'm still thinking in terms of "what if this could replace {in,out}_channel" but you're right
<companion_cube>
so you'd have `Iostream.In.Bigarray.t`?
<rgrinberg>
I guess so
<rgrinberg>
Seeking also doesn't work for all fds, so it's important not to allow it by default, I feel
<rgrinberg>
Another thing that would be quite handy is a buffer input stream that allows for
<rgrinberg>
val unread : In.t -> string -> pos:int -> len:int -> unit
<companion_cube>
right, but how does that ever work?
<companion_cube>
the goal of `In_buf.t` is that you should be able to peek enough that you don't have to unread
<rgrinberg>
It works by maintaining an internal buffer of what is unread
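A hedged sketch of that pushback idea: keep unread bytes in a small internal buffer and serve them before the underlying reader. The names and record layout are illustrative, not iostream's API.

```ocaml
type pushback_in = {
  read_raw : bytes -> int -> int -> int;  (* underlying reader *)
  mutable pending : string;               (* bytes that have been unread *)
}

let unread t s ~pos ~len =
  t.pending <- String.sub s pos len ^ t.pending

let read t buf pos len =
  if t.pending = "" then t.read_raw buf pos len
  else begin
    let n = min len (String.length t.pending) in
    Bytes.blit_string t.pending 0 buf pos n;
    t.pending <- String.sub t.pending n (String.length t.pending - n);
    n
  end
```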
<companion_cube>
oh, like `Stream`?
<rgrinberg>
How do you peek enough so that you don't have to unread, companion_cube? Your input_line needs a buffer anyway, for example
Anarchos has joined #ocaml
<companion_cube>
because you don't consume beyond the '\n'
<companion_cube>
the buffer is just to accumulate the line itself (which might be longer than the buffer's size!)
<companion_cube>
the point is that you can write `input_line` at all; with the stdlib you can't (not without consuming too much data)
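A hedged sketch of that point: with a peek-style buffered input, where `fill_buf` exposes the currently buffered bytes and `consume` drops n of them, `input_line` can stop exactly at `'\n'` and never read past it. Both signatures are assumptions for the sketch, not iostream's actual interface.

```ocaml
let input_line ~(fill_buf : unit -> bytes * int * int) ~(consume : int -> unit) =
  let line = Buffer.create 64 in
  let rec loop () =
    let buf, off, len = fill_buf () in
    if len = 0 then ()  (* EOF *)
    else
      match Bytes.index_from_opt buf off '\n' with
      | Some i when i < off + len ->
        Buffer.add_subbytes line buf off (i - off);
        consume (i - off + 1)  (* eat the '\n' and nothing beyond it *)
      | _ ->
        Buffer.add_subbytes line buf off len;
        consume len;
        loop ()
  in
  loop ();
  Buffer.contents line
```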
mima has joined #ocaml
<companion_cube>
now I'm worrying about using objects, again (the combinatorial explosion of possibilities, like `seekable` or `unreadable` makes it quite tempting)
<rgrinberg>
Yeah, so it's quite a common use case and I think your API could accommodate it more
<rgrinberg>
It's quite common to want to have n bytes in the buffer before doing anything
<companion_cube>
`fill_buf` will ensure it's not empty
<companion_cube>
but to have n bytes, yeah, that's doable too I think
<companion_cube>
(assuming n <= size of buffer, ofc)
<companion_cube>
that part is doable without unread
<rgrinberg>
So instead of passing the optional buffer, you could have val buffered : In.t -> Buffer.t -> In.Buffered.t
<companion_cube>
(with an actual Buffer.t? :/)
<rgrinberg>
Is that a complaint against stdlib's buffer or having a buffer in general?
<rgrinberg>
unread is quite useful btw. I've often wished I had it when dealing with Lexing.lexbuf
<companion_cube>
a complaint against Buffer.t specifically :)
<companion_cube>
and I could reinvent mine here, but I really tried not to :/
<companion_cube>
in older commits you can see a `Buf.t` actually
<rgrinberg>
seeking is quite niche and you could easily do without it for v1 I think
<rgrinberg>
Buffering you will find hard to live without if you're reading a stream into something like bencode, csexp, msgpack
rgrinberg has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
rgrinberg has joined #ocaml
masterbuilder has quit [Remote host closed the connection]
<companion_cube>
oh for sure. I mean.
<companion_cube>
I just don't use msgpack/cbor/… without framing, it's too annoying
<companion_cube>
the question is: is it reasonable for `In_buf.t` to have an unbounded buffer
<companion_cube>
or is that better left to a lexbuf-like structure on top
<rgrinberg>
it would be more reasonable if it was separate
<rgrinberg>
but really, i imagine it to be quite a common use case
<rgrinberg>
it would also be helpful to have val to_inbuf : In_buf.t -> In.t so that one can use existing APIs that don't rely on buffering
<companion_cube>
there is `In_buf.into_in` :)
<companion_cube>
with objects, `In_buf.t` would just be a subtype of `In.t` though…
<companion_cube>
pity objects are so divisive
<rgrinberg>
resist the temptation
<companion_cube>
it's just sad we don't have a good mechanism for that
<companion_cube>
and because of monomorphization there's no runtime overhead :/
<rgrinberg>
tbh, IO is going to be dominant overhead here
<rgrinberg>
object perf is fine here, but people would dismiss the library anyway
<companion_cube>
yeah that's my issue too.
<companion_cube>
objects are probably the right abstraction, but they make people uncomfortable (or don't even work in forks of the compiler, etc.)
<discocaml>
<anurag_soni> Eh, if they suit your problem well (and in this case they most likely do), I'd try using them. With an mli hiding the details, in some cases the error messages (and merlin/LSP completion) won't be too bad either.
<companion_cube>
agreed (LSP is terrible with objects), but see what happened to Eio
<discocaml>
<anurag_soni> People will most likely find other reasons to dismiss efforts/libraries. If something solves your problem it's good enough to exist 🙂
<discocaml>
<anurag_soni> > @companion_cube : agreed (LSP is terrible with objects), but see what happened to Eio
<discocaml>
<anurag_soni> To be fair, Eio is opinionated in more ways than simply using objects. That was bound to bring up some discussion/debate, since it also seems to be aiming to become the "default" option for IO with effects.
<discocaml>
<geoff> Are objects not working in some compiler forks the reason JS wanted objects out of Eio?