Haudegen has quit [Quit: No Ping reply in 180 seconds.]
Haudegen has joined #ocaml
jtck has quit [Remote host closed the connection]
Haudegen has quit [Ping timeout: 272 seconds]
brettgilio has quit [Ping timeout: 252 seconds]
jtck has joined #ocaml
jtck has quit [Remote host closed the connection]
Melantha has quit [Quit: WeeChat 3.2]
Guest37 has joined #ocaml
<Guest37>
Is it possible to write a decently performant IRC server in OCaml using things like Lwt (until multicore gets stable, anyway)?
<Guest37>
Many are written in C, C++, and Go. I wonder how OCaml would do in this arena.
Guest37 has quit [Quit: Connection closed]
<Corbin>
Sure. I've seen it done in Python, even. Network daemons can be written in many different backend languages; what matters is support for concurrency and scheduling, not low-level compiler output.
<Corbin>
...Curses, too slow.
jtck has joined #ocaml
<companion_cube>
An IRC server, certainly anyway
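As a rough illustration of the concurrency story being discussed, here is a minimal sketch of what the Lwt side of such a server could look like: one lightweight thread per connection, driven by the Lwt scheduler. The port and the echo loop are placeholders standing in for real IRC command handling, not an actual implementation.

    (* Minimal sketch, assuming the lwt and lwt.unix libraries. *)
    open Lwt.Infix

    let handle_client (ic, oc) =
      let rec loop () =
        Lwt_io.read_line_opt ic >>= function
        | None -> Lwt.return_unit                        (* client disconnected *)
        | Some line -> Lwt_io.write_line oc line >>= loop (* echo; real code would parse IRC commands *)
      in
      loop ()

    let main () =
      let addr = Unix.(ADDR_INET (inet_addr_loopback, 6667)) in
      Lwt_io.establish_server_with_client_address addr
        (fun _peer channels -> handle_client channels)
      >>= fun _server ->
      fst (Lwt.wait ())  (* never resolves: keep the server running *)

    let () = Lwt_main.run (main ())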
Guest7941 has joined #ocaml
<Guest7941>
Corbin: I read your message from the logs, no worries. So you think OCaml has decent enough concurrency to be competitive in this regard? Also, what do you mean by "low-level compiler output"?
<Guest7941>
Also, what about things like shared memory? If you scale an IRC network over multiple servers, wouldn't it need some form of shared communication? Or I guess that would depend on how it is arranged.
<Guest7941>
Message passing could work yeah
<dh`>
erm
<dh`>
IRC servers are performance-limited in surprising ways
<dh`>
and they're inherently a distributed system so I'm not sure what you mean by message passing in this context
<dh`>
though the question is not "can you write an irc server", it's "how many simultaneous users can it support before it tanks"
<Guest7941>
What do you think?
<dh`>
I have no idea.
<Guest7941>
I'm also curious about the implementation side. Like, how would two distributed OCaml processes communicate with each other? A query API?
<dh`>
uh
<dh`>
irc servers communicate with each other using sockets and the irc server protocol (which is not quite the same as the irc client protocol)
<dh`>
if you wanted to try to write a distributed implementation that looks like a single irc server to the irc network, I suppose that could be done, but it's not clear why you'd bother or what the point would be
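For completeness, a sketch of the simplest kind of OCaml-to-OCaml "message passing" the earlier question might have meant: a typed message serialized over a socket connection with Marshal. A real IRC network would instead speak the server-to-server protocol described above; the message type and helper names here are made up for illustration.

    (* Hypothetical message type for a relay link between two processes. *)
    type msg = Relay of { channel : string; nick : string; text : string }

    (* Send/receive over ordinary channels, e.g. from Unix.open_connection.
       Marshal does no type checking, so both ends must agree on [msg]. *)
    let send_msg (oc : out_channel) (m : msg) =
      Marshal.to_channel oc m [];
      flush oc

    let recv_msg (ic : in_channel) : msg =
      (Marshal.from_channel ic : msg)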
<average>
d_bot: <monk> regarding what you said about Matrix and the Matrix <-> Slack bridge. So... I like Matrix a lot, and yes, it has a good funding model. What I'm concerned about is Slack, mostly because it's the same kind of closed-source product company that Facebook is. Remember when FB had Jabber and then dropped it because people noticed they could use FB without the junk that is their web UI? Yeah. Slack also did this, they shut down their IRC and
<Corbin>
TIL that double-quoted strings can have indentations after a line continuation and they'll be ignored. I don't think I've seen a lexer that does this before.
<d_bot>
<ggole> No comment or anything, how mysterious!
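In case the lexer meant is OCaml's, a small example of the behaviour: a backslash at the end of a line inside a string literal makes the lexer skip the newline and any indentation that follows it.

    let s = "hello, \
             world"
    (* s is "hello, world": the backslash-newline and the leading
       spaces before "world" are dropped by the lexer *)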
mro has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
<d_bot>
<Competitive Complications> Hi! I need some help with bindlib
<d_bot>
<Competitive Complications> Basically, I'm just trying to understand: let's say you want to do some symbolic reductions under a lambda. I assume you'll need to `Bindlib.unbind` it, and then bind it back. But I don't know how to perform this latter step efficiently. (Right now, I would crawl through the term for all the occurrences of the variables, and box them.)
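For reference, a sketch of the usual Bindlib setup, written from memory of its API (the exact names and signatures may not match the current version): a term type, smart constructors, and a lift function that re-boxes a term by walking it, which is precisely the traversal described above.

    open Bindlib

    type term =
      | Var of term var
      | Abs of (term, term) binder
      | App of term * term

    (* Smart constructors producing boxed terms. *)
    let var x = box_var x
    let abs x t = box_apply (fun b -> Abs b) (bind_var x t)
    let app t u = box_apply2 (fun t u -> App (t, u)) t u

    (* Re-boxing ("lifting") a plain term: one traversal that boxes every
       variable occurrence, so the cost is linear in the size of the term. *)
    let rec lift : term -> term box = function
      | Var x -> var x
      | Abs b -> let (x, body) = unbind b in abs x (lift body)
      | App (t, u) -> app (lift t) (lift u)

    (* Binding a variable back over a term obtained from unbind. *)
    let rebind x body = unbox (abs x (lift body))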
Haudegen has joined #ocaml
beshr has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
mro has quit [Ping timeout: 276 seconds]
dmbaturin has joined #ocaml
mbuf has quit [Read error: Connection reset by peer]
ski has quit [Ping timeout: 256 seconds]
mbuf has joined #ocaml
mro has joined #ocaml
kevinsjoberg has quit [Ping timeout: 272 seconds]
conjunctive has joined #ocaml
mro_ has joined #ocaml
mro has quit [Ping timeout: 258 seconds]
spip has quit [Read error: Connection reset by peer]
Guest9609 has quit [Ping timeout: 252 seconds]
Nahra has quit [Ping timeout: 256 seconds]
kevinsjoberg has joined #ocaml
Leonidas has joined #ocaml
mro has joined #ocaml
mro_ has quit [Ping timeout: 245 seconds]
Leonidas is now known as Guest7430
ski has joined #ocaml
johnel has joined #ocaml
gahr has joined #ocaml
daimrod has joined #ocaml
engil1 has joined #ocaml
Armael has joined #ocaml
cbarrett has joined #ocaml
mro has quit [Ping timeout: 252 seconds]
energizer has joined #ocaml
mro has joined #ocaml
olle has joined #ocaml
mro has quit [Read error: Connection reset by peer]
mro_ has joined #ocaml
mro_ has quit [Ping timeout: 250 seconds]
olle has quit [Ping timeout: 245 seconds]
beshr has quit [Read error: Connection reset by peer]
olle has joined #ocaml
<d_bot>
<Kakadu> Do you know a way to suppress compiler alerts like `Alert deprecated: Base.__FILE__`?
<d_bot>
<Kakadu> Ah it is warning 3
<d_bot>
<Kakadu> fixed
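For the record, the two file-level attributes involved are roughly as below (assuming a reasonably recent compiler; dune's (flags ...) field can achieve the same from the build system):

    (* Silence the legacy "deprecated" warning (number 3) for this file. *)
    [@@@warning "-3"]

    (* On OCaml >= 4.08, deprecation is also reported through the alert
       mechanism, which is silenced separately. *)
    [@@@alert "-deprecated"]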
waleee has quit [Ping timeout: 245 seconds]
waleee has joined #ocaml
olle has quit [Ping timeout: 250 seconds]
berberman has joined #ocaml
berberman_ has quit [Ping timeout: 272 seconds]
favonia has joined #ocaml
mro has joined #ocaml
kvik has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
dh` has joined #ocaml
dh` has quit [Changing host]
<companion_cube>
@Competitive: you probably need to carry around a substitution
<companion_cube>
to reduce the whole expression, not do reductions one by one
elf_fortrez has joined #ocaml
<d_bot>
<Competitive Complications> companion_cube: I'm not sure what you mean by that
mbuf has quit [Quit: Leaving]
<companion_cube>
Unless you have mutable variables or something like that, you need to traverse to substitute
<companion_cube>
The trick is to do only one traversal for all the reductions
<companion_cube>
If you want a normalized term, that is
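A sketch of that single-traversal idea with a made-up term type: carry an environment of pending substitutions and apply them all in one pass (capture-avoidance is glossed over here).

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    module Env = Map.Make (String)

    (* Apply every substitution in [env] during a single traversal of [t].
       Note: no alpha-renaming, so this assumes bound names are fresh. *)
    let rec subst env t =
      match t with
      | Var x -> (match Env.find_opt x env with Some u -> u | None -> t)
      | Lam (x, body) -> Lam (x, subst (Env.remove x env) body)
      | App (f, a) -> App (subst env f, subst env a)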
elf_fortrez has quit [Ping timeout: 246 seconds]
mro has joined #ocaml
olle has joined #ocaml
mro has quit [Read error: Connection reset by peer]
mro has joined #ocaml
<d_bot>
<Competitive Complications> companion_cube: I believe bindlib does something smart so that it indeed does not substitute
mro_ has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
<companion_cube>
It might delay
<companion_cube>
But it can't delay indefinitely
<olle>
Would you say FP is generally better at separating concerns than OOP? Why or why not?
<Corbin>
Both terms are meaningless and the question seems intended to start a flamewar.
<olle>
Corbin: Which terms? FP or OOP? Or concern and separation?
Anarchos has joined #ocaml
<Anarchos>
How do I force opam to install a package and ignore its various version requirements?
<Corbin>
olle: "FP" and "OOP". Separation of concerns is part of why many languages have modules, but there's no universal way to do modules. (Not even categorical composition is universal!)
<d_bot>
<Bluddy> OOP has all sorts of patterns to do separation of concerns, but almost all of them tend to be massively over-engineered and hard to use
<d_bot>
<Bluddy> In both paradigms you're better off going for the simplest approach that works
<d_bot>
<Bluddy> IMO
<d_bot>
<Bluddy> Not to mention that strong separation of concerns means heavy mutable state in OOP, and that's one of its biggest criticisms from the FP perspective.
<companion_cube>
OCaml has immutable OO btw :)
<Corbin>
And mutable FP. This is part of why the terms are meaningless; many people have a "but not Scheme" or "but not Python" or etc. attitude towards the entire framing, while ignoring "multiparadigm" languages that simply don't fit into the imagined taxonomy.
mro_ has quit [Ping timeout: 245 seconds]
<olle>
Corbin: Well, you can say, is OCaml better at sep of concerns compared to Java? Using idiomatic OCaml code.
<olle>
Bluddy: why would strong separation of concerns mean mutable state in OOP?
<Corbin>
Pick a metric and then we can compare. I usually use lines of code, because I hate typing; OCaml usually beats Java for me. (Indeed, I'm here because I want terse mainstream languages.)
spip has joined #ocaml
<olle>
LoC is indeed a fault predictor, at least if you put everything in one file.
<olle>
But "separation" can be understood as coupling, which can be measured.
<olle>
Or if a single file is touched for every change request, it means separation is low.
<olle>
LCOM4 etc is only for cohesion, not coupling. You also want to measure which classes call which other classes and put a number on that.
mro_ has joined #ocaml
<olle>
And how many access points are available/published to the outside.
<companion_cube>
I think people don't care too much about these metrics
<olle>
I care
<olle>
And my team will be forced to care.
* olle
muahahaaa
<olle>
I think they are used in code grading systems too, like SonarQube
<olle>
But I assume it's weighted together with LoC, cyclomatic complexity, etc
mro has quit [Ping timeout: 245 seconds]
<olle>
A former colleague can't even merge his branch unless the code grades A or B at his new place.
<companion_cube>
That's horrible
<olle>
Welcome to E N T E R P R I S E
<olle>
companion_cube: In effect, it only means don't put everything in one file/class/function.
<companion_cube>
🙄
mro_ has quit [Ping timeout: 245 seconds]
<companion_cube>
What if you only have 50 lines of code? That should probably be in only one file
<olle>
Plus, I guess, a bunch of best practice related to that programming language etc.
<olle>
companion_cube: 50 loc won't have high complexity, usually.
<olle>
B+ xD
<companion_cube>
Honestly it'd be a red flag for me if I was applying somewhere and they were doing this kind of BS
<olle>
companion_cube: Where are you working now?
<companion_cube>
In a tiny startup :p
<olle>
Good company culture?
<olle>
Maybe too small to have one still? Hm.
mro has joined #ocaml
average has quit [Quit: Connection closed for inactivity]
<olle>
Anyway, I think how you configure your CI is very much related to the company culture you have.
mro has quit [Ping timeout: 245 seconds]
<d_bot>
<darrenldl> > LCOM4
<d_bot>
<darrenldl> thanks, i hate it
<olle>
:D
<olle>
Why??
<olle>
There's LCOM5 too, you know. They tried multiple versions.
Serpent7776 has joined #ocaml
<d_bot>
<darrenldl> well okay, maybe the more precise description of my gripe is: the insistence that there is always a meaningful separation of mutable state via classes, and then the further insistence that there are specific good practices quantified by metrics with seemingly arbitrary thresholds
<d_bot>
<darrenldl> first part is my gripe with oop, so not overly related to LCOM
<olle>
Code metrics must be integrated into your company culture.
<olle>
They are not arbitrary if you use them as guides.
<Armael>
"if you believe in the numbers that makes them real"?
<d_bot>
<darrenldl> i feel like they direct effort in the wrong direction, compared to, say, code coverage and other testing metrics whose numbers have a more concrete meaning
<olle>
Well
<d_bot>
<darrenldl> > They are not arbitrary if you use them as guides.
<d_bot>
<darrenldl>
<d_bot>
<darrenldl> are said guiding numbers backed by statistics then? or just "common sense" by whoever wrote the text
<olle>
If it's proven that big files are a fault predictor, then making sure files don't grow too big will lower your fault rate over time.
<olle>
darren, not sure
<Corbin>
Goodhart's Law arises whenever metrics are used like this, regardless of what the metric actually describes. That's why it's important to focus on the effects which the metrics are proxying, and always be ready to switch metrics if the effect isn't powerful enough.
<d_bot>
<darrenldl> does the guide define "big" then?
<olle>
darren, you'd order files by size and focus refactoring effort on the biggest ones.
<d_bot>
<darrenldl> Corbin: hmmm....they might already have a metric to measure the effects of the metrics
* olle
I have to check the original LCOM papers to see if they did measure a decrease in bugs or such
<olle>
"Fault prediction and the discriminative powers of connectivity-based object-oriented class cohesion metrics"
<olle>
Something like that, I guess.
<olle>
"We propose a new class cohesion metric that has higher discriminative power than any of the existing cohesion metrics. In addition, we empirically compare the connectivity and non-connectivity-based cohesion metrics."
<olle>
Etc
Anarchos has quit [Quit: Vision[0.10.3]: i've been blurred!]
<d_bot>
<darrenldl> when it says fault prediction, does it mean like predicting where the logic errors are more likely to occur etc?
<d_bot>
<darrenldl> (im trying to find the full text as i have no clue what discriminative power means, but i might not try very hard)
<d_bot>
<darrenldl> anyway, let's just say i am (unreasonably) biased and fail to see how these play out meaningfully in practice
<olle>
If you have a huge code-base, and a budget to improve it, you need to know where to start.
<d_bot>
<darrenldl> cheers
<olle>
Not sure how they define "fault"
<Armael>
(my own understanding is that software engineering is an actual and serious research area, but that just looks very boring from a PL perspective)
<olle>
You never know how many bugs you really have in a system, so I guess you can only measure them by the reports you get in.
<olle>
Armael: What's boring about it? :)
<Armael>
I should have said that I find it very boring
<Armael>
so YMMV
<olle>
Yeah, I got it
<d_bot>
<darrenldl> > If you have a huge code-base, and a budget to improve it, you need to know where to start.
<d_bot>
<darrenldl> yeah okay, i can see it being used as a heuristic for navigating a new and unfamiliar code base
<olle>
Periodically, in fact. Unless you can guarantee all code is "good", which is not possible, sadly.
<Corbin>
We can use some statistics to estimate the total number of bugs, given the measured number of bugs. I don't remember exactly how it works, though.
<Corbin>
olle: It's quite possible to have proven-correct code, but employers rarely are willing to pay for it.
<olle>
Corbin: Even when proven correct, it's not certain it has good maintainability.
<d_bot>
<darrenldl> yeah idk, maybe i just find the premise that there is one good universal way to measure code quality a bit silly
<olle>
That's why it's a research topic :) Duh
<Corbin>
There exist metrics which apply to any language, but their results aren't comparable across different languages. The classic example is Kolmogorov complexity: Smaller programs are simpler.
<d_bot>
<darrenldl> i'd say if it is comparable across different projects of same language, i'd already be impressed
<d_bot>
<darrenldl> anyhow, the paper does back the claim with some stats it seems, so maybe it is useful
<d_bot>
<darrenldl> okay right, LCOM is really specific to the idea of classes, isn't it? maybe that's my problem
<olle>
Yes, LCOM is only for classes a la Java
<olle>
Lots of this kind of research is for Java or C++ or C#
<olle>
I actually manually translated a code clone algorithm from Java to PHP to use in my own team...
<Corbin>
There's a paper which I can't find. It broke "class" up into several pieces, called "shape", "forge", "script", etc. Point is that languages without classes can be given one class per function body, their closures can be treated as immutable private fields, and the same class-oriented analyses can be run there.
<d_bot>
<darrenldl> hm....okay, so we'd end up with a function dependency graph essentially
<companion_cube>
olle: the big file thing is stupid imho
<companion_cube>
Sometimes a complex problem is better solved within one file
<d_bot>
<darrenldl> kinda neat for debugging
<companion_cube>
With a simple interface and a lot of internal complexity
<companion_cube>
Beats splitting the thing arbitrarily into files, exposing internals that should have stayed hidden
<olle>
companion_cube: "Prediction" means probability
<olle>
If you can show 60% probability it's better than randomness
<companion_cube>
But then I'm that guy who writes modules with 2kloc.
<companion_cube>
Yeah, and applying probability-based thresholds as if they were hard rules is bad
<olle>
So to improve code health, you can look at the biggest file first, and know it will be better than to pick a file randomly in the same project.
<olle>
companion_cube: False positives can be acceptable, up to each team, I guess
<companion_cube>
My gut feeling is that this might work for CRUD
<companion_cube>
So for one particular kind of programs
<companion_cube>
If you write video games or compilers or DBs it might just be totally inapplicable, for all we know
<companion_cube>
(even the idea of unit tests might not apply for these)
<olle>
Dunno, would have to check closer what kind of programs different papers used.
<companion_cube>
if anyone wrote papers on that kind of program
<companion_cube>
and if it even makes sense
<companion_cube>
(sample sizes might be super low, and there's a lot of parameters, so…)
<olle>
It's pretty easy to check for file size, if you have the bug report database and the commit history.
tjammer has joined #ocaml
<companion_cube>
🙄
<companion_cube>
assuming the bugs are uniformly found
<olle>
Yes, of course. :) You might need to normalize it somehow.
<companion_cube>
I don't think a lot of what I consider good software was written using any of these metrics
<olle>
Linux? :) OCaml compiler?
<olle>
You have to consider the educational level of the team.
<olle>
Small team of 5 people, everyone has a PhD, yeah, it's gonna turn out good.
<olle>
Big team, spread out in age, geographically, different educations... It's a different story.
<d_bot>
<darrenldl> i think if you have a team that's not incentivised to make good software, adding a metric to it is not a solution
<d_bot>
<darrenldl> and if they are incentivised, then a metric may have limited use
<olle>
Depends on company culture. :) Metrics can be used as a tool to discuss what "quality" means.
<d_bot>
<darrenldl> hm...i guess so
<olle>
Also, as mentioned, having a good tool to point out hot-spots is nice. ^^
<olle>
Changed often, high complexity --> refactor, maybe?
<d_bot>
<darrenldl> > Also, as mentioned, having a good tool to point out hot-spots is nice. ^^
<d_bot>
<darrenldl> that is true
<olle>
And that's only technical metrics; organizational metrics seem to have the same predictive power... employee churn rate, etc.
<Corbin>
There are also problems like a missing "definition of done": can't a codebase's maintenance costs be lowered over time as bugs are fixed? Usually the only reason they aren't is that the codebase's owners keep accepting feature requests!
<companion_cube>
olle: or maybe it's because it's the piece of code that actually tackles inherent complexity
<olle>
companion_cube: It can be :) Always exceptions.
<olle>
Corbin: A bug fix might not improve the code health as such.
<olle>
But yeah, holes in the process.
<olle>
companion_cube: Although, if that piece of code tackles complexity, its change rate should be low, I'd argue.
mro has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
mro has joined #ocaml
<Corbin>
Trying to use Dune for the first time. I have a single frame.ml file, a basic `dune` and `dune-project` file based on the Dune quickstart, and I get the error: "I don't know about package frame"
<Corbin>
I gather that this is due to the -p flag. What's required to get this flag to work? I don't have to use it, but I looked at packages like https://github.com/mirage/ocaml-base64 and I'm not sure what they're doing correctly that I'm not doing.
<Armael>
I don't think I usually do any opam stuff?
<Armael>
except for doing "touch foo.opam" if I declared a "foo" dune library
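A guess at the minimal layout that makes `dune build -p frame` work, based on the error message: the package has to be declared (an empty frame.opam next to dune-project is enough, as Armael describes, or a (package ...) stanza), and the library needs a public_name attaching it to that package.

    ; dune-project
    (lang dune 2.9)
    (name frame)

    ; dune, next to frame.ml
    (library
     (name frame)
     (public_name frame))

With a frame.opam file present (even an empty one during development), dune knows about the package "frame", and -p frame then selects it.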
mro_ has joined #ocaml
mattil has joined #ocaml
mro has quit [Ping timeout: 240 seconds]
mro_ has quit [Ping timeout: 245 seconds]
olle has quit [Ping timeout: 245 seconds]
vicfred has joined #ocaml
mro has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
mro has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
tjammer has quit [Quit: terminated!]
tjammer has joined #ocaml
<d_bot>
<Competitive Complications> i think the whole point of HOAS is that substitution can be done in constant time (just use the internal function representation of the host language)
<d_bot>
<Competitive Complications> I'm not sure what you mean
mattil has quit [Quit: Leaving]
<companion_cube>
Not constant time
<companion_cube>
You still have to reallocate the term
<companion_cube>
No magic
gravicappa has quit [Ping timeout: 272 seconds]
brettgilio has joined #ocaml
tjammer has quit [Remote host closed the connection]
<d_bot>
<Competitive Complications> constant wrt the size of the body of the lambda, not the number of occurrences of the param
<companion_cube>
But the body is always at least as big as the number of occurrences of the parameter :)
<d_bot>
<Competitive Complications> Sure, and I only care about the latter
<d_bot>
<Competitive Complications> I don't want to crawl through the whole term; I'm fine iterating over the occurrences, whatever it's called
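To make the disagreement concrete, a tiny HOAS sketch (names made up): beta reduction is just applying the host-language closure, but evaluating that closure re-allocates every constructor in the lambda's body, so the work is proportional to the body it rebuilds rather than constant.

    type term =
      | Const of int
      | App of term * term
      | Lam of (term -> term)

    (* "Substitution" is one function application... *)
    let beta = function
      | App (Lam f, arg) -> f arg
      | t -> t

    (* ...but the closure rebuilds the body: applying [beta] here allocates
       a fresh App/Const spine of the same size as the lambda's body. *)
    let example = App (Lam (fun x -> App (x, App (Const 1, x))), Const 0)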
Nahra has joined #ocaml
PinealGlandOptic has joined #ocaml
glassofethanol has joined #ocaml
brettgilio has quit [Ping timeout: 252 seconds]
glassofethanol has quit [Quit: leaving]
Tuplanolla has quit [Quit: Leaving.]
<companion_cube>
you have to iterate through the whole term, unless you know where the occurrences are somehow
<companion_cube>
(you could have a flag on each term indicating whether it contains variables, for example)
bartholin has quit [Quit: Leaving]
Haudegen has quit [Ping timeout: 252 seconds]
<d_bot>
<Competitive Complications> which we do, because we built the term
<d_bot>
<Competitive Complications> Bindlib does it with a box type using dedicated constructors, and my question was in the context of bindlib
<companion_cube>
ah, well, good for them then
average has joined #ocaml
<d_bot>
<NULL> Maybe I'll have more luck with the IRC users; sorry, Discord, for the double post:
<d_bot>
<NULL> With Menhir, I expected the code below the rules to be interpreted with all productions defined (I want to redefine it), but apparently it isn't.
<d_bot>
<NULL> Is that normal? In any case, how should I go about adding a regular argument to a production rule?
<companion_cube>
hmm I don't know that menhir rules are parametrized
<companion_cube>
unless you go the functor way
<d_bot>
<NULL> The interface seems to fix the signature of the starting rules (legacy version), so it may be impossible. What do you mean by the "functor way"?
<companion_cube>
menhir allows you to parametrize parsers with a module
<companion_cube>
so the generated code is a functor
<companion_cube>
not sure exactly how it works but it's in the manual
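A sketch of the %parameter mechanism being referred to (the grammar and module names are invented): the generated parser becomes a functor over the declared module, so values can be made available to all semantic actions without threading an argument through every rule. How the functor is then instantiated is described in the Menhir manual, as noted above.

    %parameter <Ctx : sig val base : int end>

    %token <int> INT
    %token EOF
    %start <int> main

    %%

    main:
      i = INT EOF { i + Ctx.base }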
<d_bot>
<NULL> Didn't know about that, let me check then
<d_bot>
<NULL> This will work, I hope it won't slow everything down too much. Thanks