ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
art3mis has joined #mlpack
art3mis has quit [Connection closed]
rcurtin has joined #mlpack
< 077AABTT9> (test message to IRC)
< rcurtin> (test message from IRC)
whitequark has joined #mlpack
<@rcurtin> whitequark: thanks for the quick response, does this work to give explicit authorization from a channel op for logging? :)
_whitelogger has joined #mlpack
< 077AABTT9> filed a bug with matrix-appservice-irc about a display issue I noticed from IRC: https://github.com/matrix-org/matrix-appservice-irc/issues/1435
<@rcurtin> oh awesome, thank you so much!
<@rcurtin> and it updates in realtime too... very cool :)
< whitequark> please note that Libera guidelines require a prominent notice of the channel being logged, usually in the topic
<@rcurtin> yep, let me update that now
rcurtin changed the topic of #mlpack to: mlpack: a scalable machine learning library -- https://www.mlpack.org/ -- this channel is logged: https://libera.irclog.whitequark.org/mlpack
< whitequark> have fun!
whitequark has left #mlpack []
< 077AABTT9> huh, now their messages came through the bridge from IRC to Matrix just fine... I wonder why my test messages did not?
rcurtin is now known as rcurtin_irc
< zoq[m]> Wait that means I can use IRC again?
< rcurtin_irc> maybe? the primary issue is just sending messages back from IRC to Matrix
< shrit[m]> heisenbuug (Gopi M Tatiraju): Are you here?
< shrit[m]> I have joined the video chat, let me know if you can join
< heisenbuugGopiMT> Oh, I was in the wrong room...
< heisenbuugGopiMT> Coming...
< 077AABTT9> zoq: maybe, but it seems like sometimes messages do not come back from IRC to Matrix; I am not sure why
< 077AABTT9> my messages aren't going through, but not sure why yet
< zoq[m]> Hm, I can also stick with the matrix client.
< zoq[m]> But great that the logging is back, that was really fast.
< 077AABTT9> yeah---well, I'll keep digging and get it worked out... the IRC support is necessary for Jenkins :)
< jjb[m]> Any way we could backdate the IRC logs with whitequark?
< 077AABTT9> doubtful, but I can ask
rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- note: messages sent here might not be seen by bridged users on matrix, gitter, or slack
rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
< 077AABTT9> jjb: actually we can do that, I just need to do a little homework on my end to get the scripts into the right format :)
< jjb[m]> 😃
< jjb[m]> ryan> re: the rf segfault. I think the trained model isn’t yielding the built-in predictions. But I’m not able to cause a segfault using `iris` as a test example. In the data they sent over to you, do the labels start at `0` or `1`? If you want to experiment with debugging tools, I recommend trying out Winston’s set of docker containers: <https://github.com/wch/r-debug>
< 077AABTT9> the labels start with `1`; actually, let me forward you the data, hang on
< 077AABTT9> sent---I haven't had a chance to dig at all yet; maybe later today if I'm lucky
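For context on the `0`- vs `1`-based question above: mlpack's classifiers expect labels in the range [0, numClasses), so 1-based labels (as R factors typically encode them) have to be shifted down before training. A minimal sketch using Armadillo's `Row<size_t>` label type; this is illustrative only, not the actual R bindings code.

```cpp
// Minimal sketch: shift 1-based labels into the 0-based range that
// mlpack expects. Out-of-range labels can index past the end of
// internal arrays, which is one plausible way to get a segfault.
#include <armadillo>

int main()
{
  arma::Row<size_t> labels = {1, 2, 3, 1, 2};  // 1-based class labels
  labels -= 1;                                 // now 0, 1, 2, 0, 1
  labels.print("0-based labels:");
  return 0;
}
```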
077AABTT9 is now known as rcurtin[m]
< rcurtin_irc> test message from IRC
test_irc has joined #mlpack
< test_irc> test with different username
test_irc has quit [Client Quit]
< jjb[m]> Looks like IRC messages are not being sent through the bridge.
< jjb[m]> Whoops, the channel topic already states that: NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
< rcurtin[m]> yeah, I am waiting on the irc bridge maintainers to get back to me with what should be done there
< RishabhGarg108Ri> @ryan:ratml.org reminder for the meeting in 5 minutes :)
< rcurtin[m]> yes, I will be there
< RishabhGarg108Ri> @ryan:ratml.org I have a follow-up question from our discussion.
< RishabhGarg108Ri> For `XGBSplit`, since the maximum possible value is infinity, in cases where the gradients are large we will face the issue of exploding gradients.
< rcurtin[m]> Do you mean that the gradients could be so large that the squared sum of gradients could be larger than `DBL_MAX`?
< RishabhGarg108Ri> Okay, never mind. It will overflow and become negative
< rcurtin[m]> If you think this issue could be a problem, you can just adapt the computation to logspace 👍️
< RishabhGarg108Ri> Taking a logarithm will not affect other things?
< rcurtin[m]> Since we're just maximizing, that should be fine, but let me do a little napkin math ...
< RishabhGarg108Ri> For a moment I thought we might unintentionally get `DBL_MAX` in some cases, but I think its probability will be very low because of floating point errors.
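A minimal sketch of the logspace idea above, assuming an XGBoost-style gain of the form G² / (H + λ) with gradient sum G and Hessian sum H (hypothetical function names, not mlpack's actual `XGBSplit` code). Because log is monotonic, comparing log-gains selects the same best split as comparing gains directly:

```cpp
#include <cmath>
#include <iostream>

// Direct gain: overflows once |g| exceeds sqrt(DBL_MAX), about 1.34e154.
double Gain(const double g, const double h, const double lambda)
{
  return (g * g) / (h + lambda);
}

// Logspace gain: log(g^2 / (h + lambda)) = 2 log|g| - log(h + lambda).
// Finite for any representable g != 0, and preserves the ordering of gains.
double LogGain(const double g, const double h, const double lambda)
{
  return 2.0 * std::log(std::abs(g)) - std::log(h + lambda);
}

int main()
{
  const double g = 1e200, h = 1.0, lambda = 1.0;
  std::cout << "direct:   " << Gain(g, h, lambda) << std::endl;     // inf
  std::cout << "logspace: " << LogGain(g, h, lambda) << std::endl;  // ~920.3
}
```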
< rcurtin[m]> well, so we only have the problem of overflow if the sum of gradients is somewhat larger than ~1e150
< rcurtin[m]> I would think that this situation should not happen often 😃
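A quick check of that threshold, assuming IEEE 754 doubles: `DBL_MAX` is about 1.8e308, so the gradient sum must exceed sqrt(DBL_MAX), roughly 1.34e154, before squaring it overflows.

```cpp
#include <cfloat>
#include <cmath>
#include <iostream>

int main()
{
  std::cout << "DBL_MAX       = " << DBL_MAX << std::endl;            // ~1.80e308
  std::cout << "sqrt(DBL_MAX) = " << std::sqrt(DBL_MAX) << std::endl; // ~1.34e154
  const double g = 2e154;                      // just past the threshold
  std::cout << "(2e154)^2     = " << g * g << std::endl;              // inf
}
```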
< RishabhGarg108Ri> It also strikes me that this issue can be tackled by the user themselves by normalizing the data.
< rcurtin[m]> Yeah, that too, so maybe best to just not worry about it unless someone opens a bug? I doubt that logspace will be a faster computation in this case
< RishabhGarg108Ri> I will put the logspace thing on my checklist and later we can compare the runtimes and convergence.
< rcurtin[m]> Personally I wouldn't bother---I'm pretty sure that logspace is not going to be faster, just judging by the definition of the computation
< rcurtin[m]> logspace computations only tend to be faster when lots of `e^x` type subexpressions are involved
< rcurtin[m]> but just computing a log itself is somewhat expensive
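An illustration of the point about `e^x` subexpressions: the classic log-sum-exp trick, where direct evaluation would overflow but the logspace form stays finite. A generic sketch, unrelated to any specific mlpack code.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// log(sum_i e^{x_i}) computed without ever forming e^{x_i} directly.
double LogSumExp(const std::vector<double>& x)
{
  const double m = *std::max_element(x.begin(), x.end());
  double sum = 0.0;
  for (const double v : x)
    sum += std::exp(v - m);  // every term is <= 1, so no overflow
  return m + std::log(sum);
}

int main()
{
  // Direct evaluation would need e^1000, which is inf in double precision.
  std::cout << LogSumExp({1000.0, 999.0, 998.0}) << std::endl;  // ~1000.41
}
```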
< RishabhGarg108Ri> Yeah, I think we are just fine with it, since you mentioned it will be fine up to 10^150. I think that will cover 99% of the use cases.
rcurtin_irc has quit [Ping timeout: 245 seconds]
rcurtin_irc has joined #mlpack
< rcurtin[m]> jjb: how much RAM does your system have? I can provide a shell on a machine if SSH works
< rcurtin[m]> (referring to #3021)
< jjb[m]> ryan: so, I set the container to have about 8 GB of RAM. Looks like it needs about 12 GB or so?
< rcurtin[m]> If you can build with only one core, compilation should cost no more than 5 GB
< rcurtin[m]> anyway, I can provide a machine with 32GB of RAM that you can run docker containers in and access via SSH, if that's sufficient
< rcurtin[m]> but, I am not sure of the exact environment you are using or whether that will be sufficient
< jjb[m]> I can drop down to 1 core, but that’s going to make the compile time go to about an hour or so?
< rcurtin[m]> yeah, probably, maybe more like 30 minutes? it can be slow... :)
< jjb[m]> Alrighty, so it’s compiled. Tagged the container build and I’m now experimenting with the script + data given.
< rcurtin[m]> 👍️