brettgilio has quit [Remote host closed the connection]
[itchyjunk] has joined #ocaml
brettgilio has joined #ocaml
vicfred has joined #ocaml
waleee has quit [Ping timeout: 252 seconds]
[itchyjunk] has quit [Remote host closed the connection]
zebrag has quit [Remote host closed the connection]
mbuf has joined #ocaml
shawnw has joined #ocaml
gravicappa has joined #ocaml
Serpent7776 has joined #ocaml
Haudegen has joined #ocaml
hackinghorn has joined #ocaml
<hackinghorn>
hi
<hackinghorn>
is everything (functions, calls, arguments, nodes) an llvm::Value?
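In LLVM, functions, arguments, basic blocks, and instructions all derive from llvm::Value, while types and modules do not. A minimal sketch with the OCaml `llvm` bindings (assuming they are installed), where all of these surface as the single type `Llvm.llvalue`:

```OCaml
(* Functions, their parameters, and instructions all come back as
   the one type Llvm.llvalue, mirroring the llvm::Value base class.
   (Basic blocks get a separate llbasicblock type in the bindings,
   even though llvm::BasicBlock is also a Value in C++.) *)
let () =
  let ctx = Llvm.global_context () in
  let m = Llvm.create_module ctx "demo" in
  let i32 = Llvm.i32_type ctx in
  let fn_ty = Llvm.function_type i32 [| i32 |] in
  let f : Llvm.llvalue = Llvm.define_function "id" fn_ty m in
  let arg : Llvm.llvalue = (Llvm.params f).(0) in
  let b = Llvm.builder_at_end ctx (Llvm.entry_block f) in
  let ret : Llvm.llvalue = Llvm.build_ret arg b in
  ignore ret;
  print_string (Llvm.string_of_llmodule m)
```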
mro has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
orbifx has joined #ocaml
glassofethanol has joined #ocaml
hendursa1 has joined #ocaml
hendursaga has quit [Ping timeout: 276 seconds]
olle has joined #ocaml
orbifx has quit [Ping timeout: 252 seconds]
orbifx has joined #ocaml
kakadu has joined #ocaml
mro_ has joined #ocaml
mro has quit [Ping timeout: 252 seconds]
bartholin has joined #ocaml
hackinghorn has quit [Ping timeout: 265 seconds]
terrorjack has quit [Read error: Connection reset by peer]
mbuf has quit [Quit: Leaving]
mro_ has quit [Quit: Leaving...]
terrorjack has joined #ocaml
Haudegen has quit [Quit: Bin weg.]
waleee has joined #ocaml
[itchyjunk] has joined #ocaml
mro has joined #ocaml
mro has quit [Remote host closed the connection]
mro has joined #ocaml
Haudegen has joined #ocaml
mro has quit [Ping timeout: 252 seconds]
orbifx has quit [Ping timeout: 264 seconds]
mro has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
mro has joined #ocaml
mro has quit [Ping timeout: 245 seconds]
xiongxin has joined #ocaml
<olle>
Is it possible to statically check if a function is referentially transparent? E.g. a caching fib function
<companion_cube>
not in OCaml
<olle>
companion_cube: In any language?
<companion_cube>
well if your language is pure with explicit effects (like koka, say), then yes
<companion_cube>
pure is when there's no effect :p
<olle>
no no no
<olle>
Or wait
<olle>
Caching is not an effect in Koka?
<olle>
Or memoization
<companion_cube>
idk if you can make it pure
<companion_cube>
I think in general the answer is "no" anyway; purity is probably a hard property to assess ("forall inputs A and B, forall env, A=B => f(A,env)=f(B,env)" or whatever)
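A concrete version of the cached fib under discussion, as a sketch: the function is observably pure (same input, same output every time), yet it mutates a hidden table, which is exactly what makes purity hard to establish statically.

```OCaml
(* Externally this behaves like a pure function of n, but the
   hidden Hashtbl mutation means a type-and-effect system would
   flag it as effectful unless it can prove the cache is
   unobservable from the outside. *)
let fib : int -> int =
  let cache = Hashtbl.create 16 in
  let rec go n =
    match Hashtbl.find_opt cache n with
    | Some v -> v
    | None ->
      let v = if n < 2 then n else go (n - 1) + go (n - 2) in
      Hashtbl.add cache n v;
      v
  in
  go

let () = Printf.printf "fib 40 = %d\n" (fib 40)
```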
[itchyjunk] has quit [Read error: Connection reset by peer]
mro has quit [Ping timeout: 240 seconds]
mro has joined #ocaml
Tuplanolla has joined #ocaml
mro has quit [Remote host closed the connection]
bartholin has quit [Quit: Leaving]
mro has joined #ocaml
mro has quit [Ping timeout: 240 seconds]
Haudegen has joined #ocaml
vicfred_ has joined #ocaml
bobo has joined #ocaml
serpent has joined #ocaml
Serpent7776 has quit [*.net *.split]
vicfred has quit [*.net *.split]
spip has quit [*.net *.split]
dstein64 has quit [Excess Flood]
ocabot has quit [Ping timeout: 265 seconds]
dstein64 has joined #ocaml
ocabot has joined #ocaml
hendursa1 has quit [Ping timeout: 276 seconds]
mro has joined #ocaml
mro has quit [Client Quit]
hendursaga has joined #ocaml
quernd2 has joined #ocaml
vicfred_ has quit [Quit: Leaving]
Exagone313 has joined #ocaml
klu_ has joined #ocaml
klu_ has quit [Changing host]
klu_ has joined #ocaml
mal``` has joined #ocaml
orbifx1 has joined #ocaml
infinity0_ has joined #ocaml
infinity0_ is now known as infinity0
infinity0 has quit [Killed (lead.libera.chat (Nickname regained by services))]
Exa has quit [Ping timeout: 265 seconds]
quernd has quit [Ping timeout: 265 seconds]
klu has quit [Ping timeout: 265 seconds]
fds has quit [Ping timeout: 265 seconds]
mal`` has quit [Ping timeout: 265 seconds]
quernd2 is now known as quernd
orbifx has quit [Ping timeout: 265 seconds]
orbifx1 is now known as orbifx
orbifx has quit [Client Quit]
fds has joined #ocaml
Exagone313 is now known as Exa
<d_bot>
<crackcomm> has anyone ever experienced an OCaml program freezing the entire Linux system? I don't know how to approach debugging this problem
<d_bot>
<mk-fg> Pretty sure it should be the same approach as with any other app freezing the system
<d_bot>
<mk-fg> I'd suspect some loop quickly eating all memory or a forkbomb issue, which tend to have such an effect, especially on the desktop
olle has joined #ocaml
<d_bot>
<mk-fg> If it's an easily reproducible freeze, I'd probably try replicating it in a VM to see what exactly is going on there (and debug it there easily without rebooting), but otherwise maybe just limiting ram/cpu/etc resource usage for a long-running app (via systemd-run or in .service on a modern linux) can detect/prevent this whenever it might happen again
kurfen has joined #ocaml
<d_bot>
<crackcomm> it happens over and over again; the memory is constant, does not grow at all; it happens even if I don't start any other process. how do I debug it in a VM if it's freezing the entire system?
kurfen has quit [Client Quit]
kurfen has joined #ocaml
<d_bot>
<mk-fg> Those two things I mentioned above tend to happen too quickly to detect anywhere, basically milliseconds, like you try to open some 100M image and boom, your system is frozen :)
<d_bot>
<mk-fg> Running the same app in a VM (as in virtual machine) should not have any effect on your main system when you reproduce the same issue
<d_bot>
<mk-fg> VM can freeze, but you then have much easier time restarting it or looking into what happened there in general
<d_bot>
<crackcomm> so I have nothing except logs?
<d_bot>
<mk-fg> Don't think I understand the question
<d_bot>
<crackcomm> > VM can freeze, but you then have much easier time restarting it or looking into what happened there in general
<d_bot>
<crackcomm> is there any way I can look deeper into what happened except looking at the logs?
<d_bot>
<mk-fg> Yeah, first thing to check for would probably be a crash dump in console (or netconsole to main machine)
<d_bot>
<mk-fg> Kernel crash dump that is, if somehow the app manages to break it, which should be quite unlikely
<d_bot>
<mk-fg> But then I'd also try basic rlimit and such to see if that makes the app crash more cleanly
<d_bot>
<crackcomm> I was looking at `journalctl` logs before crash
<d_bot>
<crackcomm> seems like it does not produce a kernel crash dump
<d_bot>
<mk-fg> Also, in a bare-bones linux system the freeze might not even happen
<d_bot>
<mk-fg> And worst-case you can just save the whole VM memory and look at what's going on in there, but there're probably 100 easier ways to understand what's happening before that
<d_bot>
<crackcomm> but what you said made me look through my code; there is:
<d_bot>
<crackcomm> ```OCaml
<d_bot>
<crackcomm> EGraph.run_until_saturation
<d_bot>
<crackcomm>   ~node_limit:`Unbounded
<d_bot>
<crackcomm>   ~fuel:`Unbounded
<d_bot>
<crackcomm> ```
<d_bot>
<crackcomm> maybe `Unbounded` is the reason
<d_bot>
<mk-fg> Kernel panic won't make it into systemd journal, as at that point your OS is not running anything
<d_bot>
<mk-fg> Yeah, hence my initial suspicion about memory and suggestion to just put a limit to it, see what happens
<d_bot>
<mk-fg> (which again is probably much easier to test in a vm, but whatever works)
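If the e-graph library pairs those `Unbounded variants with a `Bounded one (an assumption here, worth checking against its docs; `graph` and `rules` below also just stand in for the real program's values), capping both knobs makes a rewrite system that never saturates stop cleanly instead of growing without bound:

```OCaml
(* Hypothetical bounded version of the call above: stop after at
   most 10_000 e-nodes or 30 rewrite iterations, whichever comes
   first, rather than running until memory is exhausted. *)
let _saturated =
  EGraph.run_until_saturation
    ~node_limit:(`Bounded 10_000)
    ~fuel:(`Bounded 30)
    graph rules
```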
<d_bot>
<crackcomm> it's already running much longer than previously, maybe that was the reason
<companion_cube>
look at what coredumpctl tells you maybe?
<d_bot>
<crackcomm> I'm wondering how I'd be able to explore the whole VM memory
<d_bot>
<mk-fg> With kvm it's as easy as connecting to monitor socket (telnet) and running "migrate" to a file there
<d_bot>
<crackcomm> it is indeed the program that's crashing
<d_bot>
<mk-fg> Pretty sure you can also stop cpu and step through what it's doing with VMs, but that's probably hardcore overkill in this case too :)
<companion_cube>
(the topic hints at using a paste website, for those of us who use IRC)
<companion_cube>
call ulimit before running your thing, too
<d_bot>
<crackcomm> sorry didn't notice that
<d_bot>
<crackcomm> yeah, seems like overkill indeed, and I'm not sure I'd be able to know exactly what's happening even then
<d_bot>
<crackcomm> it hasn't crashed yet, but previously it crashed only after a longer time
<d_bot>
<mk-fg> I wouldn't worry about it, when some simple ulimit or systemd-run can do the trick :)
<d_bot>
<crackcomm> which limit exactly do you mean?
<d_bot>
<mk-fg> Iirc rlimit, for heap memory basically
<d_bot>
<mk-fg> Oh, and if you suspect a forkbomb, there's also a limit on the number of pids or something there iirc
<d_bot>
<crackcomm> I'm pretty sure it's not a forkbomb; there are forks, but they are only spawned at the beginning of the program, basically `rpc_parallel` workers
<d_bot>
<crackcomm> just 2 forks
<d_bot>
<crackcomm> and later it just connects to these workers
<d_bot>
<mk-fg> You can limit memory for both cases via systemd-run, where you set a limit for the whole cgroup which would have all pids in it, but yeah, forkbombs are kinda rare outside of development oops :)
<d_bot>
<mk-fg> Also a fun thing to do with cgroups wrt memory is to soft-limit it, so that instead of crashing upon hitting the limit the app will start swapping, which can be easier to notice and even debug
<d_bot>
<crackcomm> interesting
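Besides the external ulimit/systemd-run route discussed above, here is a rough in-process variant, as a sketch assuming the freeze really is runaway heap growth: hook the GC and bail out past a threshold, so the app dies cleanly instead of taking the desktop down with it.

```OCaml
(* Gc.create_alarm runs the callback at the end of every major
   GC cycle; past the word limit we abort with a clear message
   instead of letting the heap grow until the machine freezes. *)
let install_heap_guard ~limit_words =
  ignore (Gc.create_alarm (fun () ->
    let st = Gc.quick_stat () in
    if st.Gc.heap_words > limit_words then begin
      prerr_endline "heap guard: major heap over limit, aborting";
      exit 2
    end))

(* Roughly 512 MB of major heap on 64-bit (8-byte words). *)
let () = install_heap_guard ~limit_words:(512 * 1024 * 1024 / 8)
```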
hendursaga has quit [Ping timeout: 276 seconds]
hendursaga has joined #ocaml
<d_bot>
<crackcomm> this time my system crashed but entirely differently; I could still move my mouse and hear sound: `BUG: kernel NULL pointer dereference, address: 0000000000000001`, Call Trace: `? _nv037596rm+0xc3/0x350 [nvidia]` 😂
Skyfire has quit [Ping timeout: 260 seconds]
Skyfire has joined #ocaml
<d_bot>
<mk-fg> Ah yeah, you can maybe also try switching to built-in linux terminal (as in ctrl+alt+F5 or something)
<d_bot>
<mk-fg> Or login over ssh and do whatever, if video output is completely busted :)
<d_bot>
<mk-fg> Is it nvidia's proprietary driver btw?
Haudegen has quit [Quit: No Ping reply in 180 seconds.]
Haudegen has joined #ocaml
zebrag has joined #ocaml
Stumpfenstiel has joined #ocaml
zozozo has quit [Ping timeout: 252 seconds]
zozozo has joined #ocaml
olle has quit [Ping timeout: 250 seconds]
kurfen has quit [Quit: WeeChat 2.3]
serpent has quit [Read error: Connection reset by peer]
gravicappa has quit [Ping timeout: 252 seconds]
<d_bot>
<crackcomm> unfortunately it crashed again https://gist.github.com/crackcomm/065eb29886043eb856a71fbae0fbf4cf seemingly reporting `kernel: mce: [Hardware Error]: Machine check events logged` which I'm not sure I can trust since this only happens with this program running
cedric has joined #ocaml
<d_bot>
<mk-fg> I'd probably try rolling back that nvidia driver
<d_bot>
<crackcomm> do you really think it's related?
<d_bot>
<mk-fg> Yeah, I'd even question whether it's app-related - these things crash all the time :)
<d_bot>
<mk-fg> Not sure if the AMD one is better, though it seems to only crash when I'm playing games
<d_bot>
<crackcomm> ok, I'll try
<d_bot>
<mk-fg> Hm, actually not really, iirc on 4.x kernels I had to reboot my machine something like once/2w due to gpu hangs with it
<d_bot>
<mk-fg> On recent 5.13 it does restart the gpu after hangs, so they still happen, but at least it recovers after a couple of seconds of hang
<d_bot>
<mk-fg> Also you can easily bisect whether it's related to this app via my earlier VM suggestion
<d_bot>
<mk-fg> I.e. if it crashes the VM, you've found a kernel bug, congrats; if the whole host crashes, it's probably unrelated, unless it's a VM escape, in which case run to the newspapers
<d_bot>
<crackcomm> system crashed again, this time with the nvidia drivers purged; there is a coredump of my process in `coredumpctl`. I'm gonna set up a VM to see if it crashes there
cedric has quit [Quit: Konversation terminated!]
<d_bot>
<crackcomm> before I do that, is there any way I could make the core dump any more useful? I didn't use `dune exec --release` and yet it doesn't seem to include any symbols: `#0 0x0000573b171710b0 in ?? ()`
<d_bot>
<crackcomm> it shows a `SIGSEGV, Segmentation fault.` so maybe after all the problem is somewhere in the source code
<d_bot>
<crackcomm> it does use some C libraries like Owl for RNG
<d_bot>
<mk-fg> Still surprising that a segfault in some app would kill the kernel, when it's the kernel that kills the broken app :)
<d_bot>
<crackcomm> yeah, that's true
<d_bot>
<mk-fg> Wonder if maybe systemd's coredump-catcher might be doing something unexpected
noddy has quit [Quit: WeeChat 3.2]
noddy has joined #ocaml
Tuplanolla has quit [Quit: Leaving.]
<d_bot>
<crackcomm> I looked at the `bt` of ~5 different core dumps and all of them are different: some in list iter, some in the gc
<d_bot>
<crackcomm> interesting, I didn't use rpc_parallel this time, used only one process, and the process returned `Segmentation fault (core dumped)`
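One pattern worth trying with those backtraces (a sketch, not a diagnosis): crashes at ever-changing spots in list iteration and the GC are the classic signature of a C binding corrupting the OCaml heap, and forcing compactions around a suspect call, e.g. into Owl's RNG, tends to make such corruption fail earlier and more reproducibly.

```OCaml
(* Wrap a suspect call in full compactions: if f's C stubs have
   corrupted the heap, the second Gc.compact usually crashes
   right away, pointing the finger at f rather than at whatever
   allocation happens to run next. *)
let stress f x =
  Gc.compact ();
  let y = f x in
  Gc.compact ();
  y
```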