<bgs>
coming from imperative languages, I often wonder: how does OCaml manage to be so fast despite lots of things in memory being copied so often (instead of being mutated)?
<d_bot>
<NULL> Few things are actually copied. List and tree operations, for example, don't need copying; they just need a few nodes reorganised (the rest of the structure is shared)
<d_bot>
<NULL> (Except for list appending, that really copies the list on the left)
<d_bot>
<NULL> ~~copies~~ duplicates
vicfred has quit [Quit: Leaving]
<d_bot>
<NULL> And also, memory allocation is made really fast by how the runtime manages (part of) its heap like a stack, so allocating is just bumping a pointer
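A minimal sketch of what NULL is describing (the names here are just illustrative): consing shares the existing cells, `@` rebuilds only the left operand's spine, and each new cell is one small allocation on the minor heap.

```ocaml
let xs = [2; 3; 4]

(* Consing allocates exactly one new cell; the tail [2; 3; 4] is shared, not copied. *)
let ys = 1 :: xs

(* Appending rebuilds every cell of the left list; the right list is shared as-is. *)
let zs = xs @ [5; 6]

let () =
  (* Physical equality: the tail of ys really is the same block as xs. *)
  assert (List.tl ys == xs);
  Printf.printf "%d %d\n" (List.length ys) (List.length zs)
```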
hackinghorn has quit [Quit: Leaving]
<companion_cube>
You can also still use mutation in ocaml
<companion_cube>
The compiler does it a lot, for example
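For readers coming from imperative languages, a quick sketch of the mutable features companion_cube is referring to (generic illustration, not anything from the compiler's sources):

```ocaml
(* A ref is a single mutable cell. *)
let counter = ref 0
let () = incr counter

(* Record fields can be declared mutable and updated in place. *)
type buffer = { mutable len : int; data : bytes }

let push_byte b c =
  Bytes.set b.data b.len c;
  b.len <- b.len + 1

(* Arrays (and Bytes) are mutable in place as well. *)
let a = Array.make 10 0
let () = a.(3) <- 42
```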
<d_bot>
<NULL> Unrelated: how are message edits conveyed to IRC?
manjaro-user has joined #ocaml
<d_bot>
<NULL> Apparently they're not
manjaro-user has quit [Client Quit]
manjaro-user has joined #ocaml
manjaro-user has quit [Client Quit]
<bgs>
NULL: thanks
waleee has quit [Quit: WeeChat 3.3]
waleee has joined #ocaml
<bgs>
companion_cube: I am aware of that. Sadly I have an assignment to solve, and the glue code I have to use is written in a fairly inefficient way: it hardly uses mutability at all and just copies everything
<bgs>
which would be fine, if the assignment weren't a competition in who writes the fastest solution
<companion_cube>
ah well, then look at Map/Set
<bgs>
I don't think I can use them to my advantage in this task, but thanks for the suggestion
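For context, the Map/Set companion_cube suggests are the Stdlib's persistent balanced trees: an update returns a new map that shares almost all nodes with the old one, so there is no wholesale copying. A tiny sketch (the IntMap name is just illustrative):

```ocaml
module IntMap = Map.Make (Int)

let m = IntMap.empty |> IntMap.add 1 "one" |> IntMap.add 2 "two"

(* m' shares nearly all of its nodes with m; only the path to the new key is rebuilt. *)
let m' = IntMap.add 3 "three" m

let () = Printf.printf "%d %d\n" (IntMap.cardinal m) (IntMap.cardinal m')
```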
<Corbin>
bgs: Something that wasn't quite said yet (although this is because I think NULL and companion_cube have internalized it) is that a large portion of computational time is simply spent scrolling through space without mutating anything.
<companion_cube>
scrolling through space?
<Corbin>
Like, navigating through address space. Chasing pointers, adding relative offsets, counting struct components, hashing structs, etc.
<companion_cube>
ah well, that depends on the language and program, doesn't it?
<companion_cube>
one reason C++ and the like are so fast is that they let you minimize that
<companion_cube>
(adding offsets is free, btw. pointers are indeed the expensive part)
<bgs>
well, chasing pointers and adding relative offsets is definitely a significant portion of every program
<Corbin>
Inasmuch as "space" and "time" are tied to Turing machines, yeah. But that also does describe our current CPUs, and it shows up in descriptive complexity a little, too.
<bgs>
relative offsets are important enough to have a dedicated CPU instruction
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<companion_cube>
afaik the load instructions can bake an offset directly in them
<companion_cube>
all the cost is cache misses on pointer dereference
<companion_cube>
(as in, 100x or worse, compared to an addition)
<bgs>
yeah, absolutely
<bgs>
copying lots of stuff also makes this worse
<Corbin>
...And the typical cost of any opcode. Even if it's a NOP, it costs a quantum of time. "It adds up, Jerry!" etc.
<bgs>
more memory -> more cache misses
<companion_cube>
Corbin: negligible quantities
<d_bot>
<NULL> ~0.4ns, assuming >2.5 GHz
<companion_cube>
one indirection or one test could cost you the equivalent of hundreds of additions
<Corbin>
companion_cube: At least in traditional complexity theory, on TMs or RAM machines, the time adds up. I grok and agree with your point that modern CPUs are very cache-dependent and a RAM-machine analysis isn't appropriate.
<companion_cube>
not just cache
<companion_cube>
branch prediction, pipeline, etc.
<companion_cube>
it's also why basic hashing (FNV and the like, say) is basically free
<companion_cube>
but binary search might not be
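As a rough illustration of the kind of hashing companion_cube calls basically free, here is a sketch of 32-bit FNV-1a over a string (the constants are the standard FNV parameters; this assumes a 64-bit `int` so the multiply doesn't overflow before masking). It is one xor and one multiply per byte, with no pointer chasing beyond reading the input:

```ocaml
(* FNV-1a, 32-bit variant: a tight loop with no data-dependent branches. *)
let fnv1a_32 (s : string) : int =
  let offset_basis = 0x811c9dc5 in
  let prime = 0x01000193 in
  let h = ref offset_basis in
  String.iter
    (fun c -> h := ((!h lxor Char.code c) * prime) land 0xffffffff)
    s;
  !h

let () = Printf.printf "0x%x\n" (fnv1a_32 "hello")
```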
Tuplanolla has quit [Quit: Leaving.]
<d_bot>
<Et7f3> CPUs have a read cache, but do some CPUs have a write cache?
<companion_cube>
the cache goes both ways afaik, and there's invalidation from other cores
<d_bot>
<EduardoRFS> yeah both ways
<d_bot>
<EduardoRFS> "there's invalidation fro other cores", not really, unless you flush it
<d_bot>
<Et7f3> And a pattern I like is the reverse loop (the count variable itself can be used as the index instead of needing two variables, which can be useful in cases where you'd otherwise write i < something.size(), even if size() is O(1))
<d_bot>
<Et7f3> I should benchmark. But can this pattern cause more cache invalidation?
<companion_cube>
@EduardoRFS what if there's an atomic in your cache that's modified by another core?
rgrinberg has joined #ocaml
<d_bot>
<EduardoRFS> @Et7f3 not really, cache lines are 64-byte aligned, so it doesn't matter which direction you access them in
<bgs>
reverse loop is usually slower, but not because of cache
<d_bot>
<EduardoRFS> TL;DR, from what I understand/remember: on x86 there is cache coherence even across cores, but that's not true on ARM, even under ARM64, IIRC
<companion_cube>
so what happens when two cores are trying to access the same atomic?
<companion_cube>
(with the stronger memory orderings)
<d_bot>
<EduardoRFS> on x86 they will always have the same data, on ARM one core can read a piece of data and another one can read a different piece of data
<d_bot>
<Et7f3> bgs: Why is the reverse loop slower? It also frees one register
<d_bot>
<EduardoRFS> BTW this is probably a big win for multiprocessor ARM
<bgs>
Et7f3 no idea about the underlying mechanisms, but the simple answer is "because it is optimized for the most common access pattern, which is looping forward over the memory"
<bgs>
one register more/less has much less impact than memory access latency
<d_bot>
<EduardoRFS> probably prefetching
<bgs>
and nowadays reasoning about register consumption is pretty much guesswork anyway
<d_bot>
<EduardoRFS> well someone did gather the data for us, so you can actually predict register renaming on the CPU
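For concreteness, the two loop shapes being compared, as a sketch in OCaml (whether the difference is measurable is exactly what Et7f3 would need to benchmark; the usual explanation is the one bgs and EduardoRFS give, that prefetchers are tuned for ascending accesses):

```ocaml
let sum_forward a =
  let s = ref 0 in
  for i = 0 to Array.length a - 1 do
    s := !s + a.(i)   (* ascending addresses: the prefetcher-friendly pattern *)
  done;
  !s

let sum_backward a =
  let s = ref 0 in
  for i = Array.length a - 1 downto 0 do
    s := !s + a.(i)   (* descending addresses: the "reverse loop" pattern *)
  done;
  !s

let () =
  let a = Array.init 1_000_000 (fun i -> i) in
  assert (sum_forward a = sum_backward a)
```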
<hackinghorn>
hi
hackinghorn has joined #ocaml
hackinghorn has quit [Changing host]
zebrag has quit [Quit: Konversation terminated!]
hackhorn has quit [Quit: Leaving]
<companion_cube>
@EduardoRFS that can't be the case, not with atomics
<companion_cube>
there are some strong guarantees
<companion_cube>
(if you do a compare and swap, there must be some form of cache invalidation, otherwise your atomics are broken)
<d_bot>
<EduardoRFS> oh yeah, there are memory barriers on ARM, mb, I thought you were talking about general instructions like mov
<companion_cube>
oh, no
<d_bot>
<EduardoRFS> on x86, not only is mov atomic if the memory is aligned, but the hardware always has cache coherence
<companion_cube>
x86 is too strong indeed
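A small sketch of the compare-and-swap in question, using the Stdlib's Atomic module (present since OCaml 4.12); the retry loop only works because the hardware keeps every core's view of the cell coherent, which is companion_cube's point:

```ocaml
let counter = Atomic.make 0

(* Classic lock-free increment: retry until the compare-and-set wins. *)
let rec incr_cas c =
  let old = Atomic.get c in
  if not (Atomic.compare_and_set c old (old + 1)) then incr_cas c

let () =
  incr_cas counter;
  assert (Atomic.get counter = 1)
```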
Haudegen has quit [Ping timeout: 240 seconds]
kaph_ has quit [Ping timeout: 256 seconds]
kaph has joined #ocaml
mbuf has joined #ocaml
zebrag has joined #ocaml
<d_bot>
<minimario> in vscode is there an ocaml extension where you can hover on an object and jump to its type definition
<d_bot>
<minimario> like my extension does type inference and all
<d_bot>
<NULL> Well-configured OCaml Platform
<d_bot>
<minimario> but i want to see what the type actually is
<d_bot>
<minimario> how do you configure ocaml platform to do that lol
<d_bot>
<NULL> Assuming you use dune and the switch you use is the global one, simply building the project (and possibly lightly editing the file) should make all useful tooltips appear
<d_bot>
<minimario> can you easily go from a .ml to a .mli
<d_bot>
<minimario> and vice versa
<d_bot>
<NULL> Depends what you mean. If you just mean switching between existing files, there's an icon at the top-right corner
<d_bot>
<minimario> oh wow i didn't see this icon
<d_bot>
<minimario> this is useful
<d_bot>
<minimario> learning so much about my ide today
waleee has quit [Ping timeout: 268 seconds]
<d_bot>
<darrenldl> bgs: glue code is in ocaml?
<d_bot>
<minimario> is it possible to set up some hotkey to show the type of a value
<d_bot>
<minimario> rather than have to hover over it
<d_bot>
<NULL> A hotkey to have the tooltip appear? I don't know any, but I imagine even a global extension might add this
<d_bot>
<minimario> yeah i kind of want a vim like environment
<d_bot>
<minimario> in the comfort of vscode
<d_bot>
<minimario> hehe
<d_bot>
<minimario> i downloaded the vim for vscode extension but
<d_bot>
<minimario> it would be nice to be able to fully use keyboard
<d_bot>
<NULL> I found `editor.action.showHover` which is by default bound to `Ctrl+K Ctrl+I` apparently
<d_bot>
<minimario> where's this?
<d_bot>
<minimario> i don't have this in my defaultSettings.json
<d_bot>
<NULL> It's a keybinding, look there
<d_bot>
<minimario> ah i see
<d_bot>
<minimario> maybe my vim extension does weird things to keybindings
<d_bot>
<NULL> It shouldn't remove the action though
<d_bot>
<NULL> You should always be able to remap it
<remexre>
is there an equivalent of Java ArrayList that I'm not seeing in the OCaml standard library? (a resizable, mutable collection of arbitrary type)
<d_bot>
<NULL> No, you can look for Vector in "augmented standard libraries"
<remexre>
hm, okay
<d_bot>
<NULL> If you are used to the Stdlib, CCVector should be easy to pick up
<d_bot>
<NULL> Oh wait, "of arbitrary type" ?
<d_bot>
<NULL> Like not knowing anything about the type of the elements ?
<remexre>
no, like "not just for byte vectors"
<remexre>
CCVector looks like the shape of thing I want
<d_bot>
<NULL> Okay, so that's how I read it before, so CCVector should be good
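For reference, a sketch of CCVector from the containers library (which has to be installed separately); the calls below are the basic grow-and-index operations one would expect from an ArrayList-style structure:

```ocaml
(* Requires the containers library, e.g. `opam install containers`. *)
let v = CCVector.create ()

let () =
  for i = 1 to 5 do
    CCVector.push v (i * i)   (* amortized O(1) append, like ArrayList.add *)
  done;
  Printf.printf "len=%d last=%d\n"
    (CCVector.length v)
    (CCVector.get v (CCVector.length v - 1))
```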
zebrag has quit [Quit: Konversation terminated!]
shawnw has quit [Ping timeout: 268 seconds]
<d_bot>
<minimario> hmm my 4.13.1 opam switch has ocaml 4.12 installed
<d_bot>
<minimario> this is so weird
<d_bot>
<minimario> can i upgrade ocaml without like completely removing the opam switch lol
gravicappa has joined #ocaml
<companion_cube>
yeah you should be able to
<companion_cube>
something about `--unlock-base`
<d_bot>
<minimario> can you get the type of a highlighted expression with ocaml platform
<companion_cube>
a full expression, not sure — LSP doesn't have that
<d_bot>
<minimario> ah ok
<d_bot>
<minimario> sad
<companion_cube>
yeah it's a pity
<d_bot>
<minimario> can you go from a .mli declaration straight to the definition somehow
<rgrinberg>
no, that feature is not implemented yet
<d_bot>
<minimario> 😦
<d_bot>
<minimario> are there plug-ins where i can 😛
jlrnick has joined #ocaml
hackhorn has joined #ocaml
hornhack has joined #ocaml
hackinghorn has quit [Ping timeout: 240 seconds]
hackhorn has quit [Ping timeout: 256 seconds]
<d_bot>
<travv0> by default the hover keybinding is `gh` with the vim extension
hackhorn has joined #ocaml
hornhack has quit [Ping timeout: 268 seconds]
jlrnick has quit [Ping timeout: 260 seconds]
hornhack has joined #ocaml
hackhorn has quit [Ping timeout: 240 seconds]
bobo_ has joined #ocaml
spip has quit [Ping timeout: 256 seconds]
hackhorn has joined #ocaml
hornhack has quit [Ping timeout: 250 seconds]
shawnw has joined #ocaml
hornhack has joined #ocaml
hackhorn has quit [Ping timeout: 256 seconds]
Haudegen has joined #ocaml
epony has joined #ocaml
kaph has quit [Read error: Connection reset by peer]
kaph has joined #ocaml
jlrnick has joined #ocaml
bartholin has joined #ocaml
epony has quit [Quit: QUIT]
epony has joined #ocaml
hackhorn has joined #ocaml
hornhack has quit [Ping timeout: 245 seconds]
kolexar has quit [Remote host closed the connection]
xd1le has joined #ocaml
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<hackhorn>
hi
<dmbaturin>
Hi hackhorn!
<hackhorn>
can I use String.sub if I use Base?
<hackhorn>
argg, Base has a different String.sub
<dmbaturin>
You mean use the standard String.sub when it's shadowed by its Base version? You can call it as Stdlib.String.sub I think.
<hackhorn>
ahh thankss
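A sketch of the shadowing dmbaturin describes: after `open Base`, `String.sub` is Base's labelled version, while the positional Stdlib one remains reachable under `Stdlib`:

```ocaml
open Base

let s = "hello world"

(* Base's String.sub takes labelled ~pos and ~len arguments. *)
let a = String.sub s ~pos:0 ~len:5

(* The original positional version is still there, just qualified. *)
let b = Stdlib.String.sub s 6 5

let () = Stdlib.Printf.printf "%s / %s\n" a b
```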
hornhack has joined #ocaml
hackhorn has quit [Ping timeout: 260 seconds]
hackhorn has joined #ocaml
hackhorn has quit [Client Quit]
hornhack has quit [Ping timeout: 240 seconds]
Tuplanolla has joined #ocaml
Tuplanolla has quit [Ping timeout: 260 seconds]
Tuplanolla has joined #ocaml
jlrnick has quit [Ping timeout: 256 seconds]
waleee has joined #ocaml
hackinghorn has joined #ocaml
shawnw has quit [Ping timeout: 256 seconds]
waleee has quit [Ping timeout: 240 seconds]
xd1le has quit [Quit: xd1le]
hackinghorn has quit [Quit: Leaving]
vsiles has quit [Ping timeout: 245 seconds]
zebrag has joined #ocaml
gdd has joined #ocaml
zebrag has quit [Remote host closed the connection]
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
rgrinberg has joined #ocaml
waleee has quit [Ping timeout: 252 seconds]
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
waleee has joined #ocaml
gravicappa has quit [Ping timeout: 240 seconds]
bartholin has quit [Quit: Leaving]
mro has joined #ocaml
reynir has quit [Ping timeout: 256 seconds]
dalek-caan has quit [Quit: dalek-caan]
rgrinberg has joined #ocaml
hackinghorn has joined #ocaml
rgrinberg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<hackinghorn>
how do I filter a list with 2 functions
<d_bot>
<NULL> What do you mean, "with 2 functions"?
<hackinghorn>
I want to write this: `let filtered = List.filter (List.filter l ~f:f1) ~f:f2` but that looks clunky
<hackinghorn>
or is it alright?
<d_bot>
<NULL> You can filter on the conjunction
<d_bot>
<NULL> manually expanding the function
<hackinghorn>
arg, any other way?
<d_bot>
<NULL> The way you do it also works
rgrinberg has joined #ocaml
<d_bot>
<NULL> You can write it as `l |> List.filter ~f:f1 |> List.filter ~f:f2` if it looks nicer
rgrinberg has quit [Client Quit]
<hackinghorn>
oh thats nice, thanks
mro has quit [Quit: Leaving...]
waleee has quit [Ping timeout: 268 seconds]
waleee has joined #ocaml
<d_bot>
<let Butanium = raise Not_found;;> Why not filter with f1 && f2?
<d_bot>
<NULL> `(&&)` isn't defined on functions, so you have to expand and it takes more space
<d_bot>
<Et7f3> hackinghorn: Why another way? If you create a named predicate `let filter_this_and_this elt = f1 elt && f2 elt` and then use it in List.filter, it has a name (so it's more readable) and you do a single pass (so it's also faster).
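Putting the thread together, a sketch of the two-pass pipeline next to the single-pass conjunction Et7f3 recommends (the labelled `~f` assumes a Base/Core-style List, matching the snippets above; drop the labels for the plain Stdlib):

```ocaml
open Base

(* Two passes over the list: *)
let two_pass l ~f1 ~f2 = l |> List.filter ~f:f1 |> List.filter ~f:f2

(* One pass, filtering on the conjunction of both predicates: *)
let one_pass l ~f1 ~f2 = List.filter l ~f:(fun x -> f1 x && f2 x)

let () =
  let l = [ 1; 2; 3; 4; 5; 6; 7; 8; 9; 10 ] in
  let is_even x = x % 2 = 0 in
  let gt4 x = x > 4 in
  Stdlib.Printf.printf "%d %d\n"
    (List.length (two_pass l ~f1:is_even ~f2:gt4))
    (List.length (one_pass l ~f1:is_even ~f2:gt4))
```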