ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/
togo has quit [Ping timeout: 272 seconds]
gio4 has joined #mlpack
gio4 has quit [Remote host closed the connection]
< jenkins-mlpack2> Project docker mlpack nightly build build #642: STILL FAILING in 2 hr 52 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/642/
wayase47 has joined #mlpack
ImQ009 has joined #mlpack
wayase47 has quit [Remote host closed the connection]
favre49 has joined #mlpack
favre49 has quit [Remote host closed the connection]
witness has joined #mlpack
< PrinceGuptaGitte> This documentation https://www.mlpack.org/doc/mlpack-3.1.0/doxygen/anntutorial.html#layer_api_anntut still uses r value references.
< rcurtin> PrinceGuptaGitte: take a look at the URL, it's for mlpack 3.1.0, which does use rvalue references
< PrinceGuptaGitte> Oh right, it hasn't been released yet.
< rcurtin> :)
< PrinceGuptaGitte> wow, took you no time to notice.
< jenkins-mlpack2> Project docker mlpack weekly build build #96: ABORTED in 22 days: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/96/
< jenkins-mlpack2> * Ryan Curtin: Rename files and mark +x.
< jenkins-mlpack2> * Ryan Curtin: Add first attempt at ensmallen memory check script.
< rcurtin> favre49: I was trying to figure out the documentation issue you were talking about a few days ago:
< rcurtin> "It seems that the function definition is not showing at https://www.mlpack.org/doc/mlpack-3.2.2/doxygen/classmlpack_1_1ann_1_1TanhFunction.html#details"
< rcurtin> but I can't seem to figure out what you were referring to; to me I can see all the definitions of f(x), f'(x), and f^-1(x)
< rcurtin> maybe the page has changed over the weekend or something when it got rebuilt? anyway, let me know if there is still an issue
< vigsterkr[m]> zoq: rcurtin around?
< jenkins-mlpack2> Project docker mlpack monthly build build #16: ABORTED in 22 days: http://ci.mlpack.org/job/docker%20mlpack%20monthly%20build/16/
< jenkins-mlpack2> * Marcus Edel: Convert HRF to JUNIT format.
< jenkins-mlpack2> * Marcus Edel: Raise exception if the format is unknown.
< jenkins-mlpack2> * birm: jenkins server name changed
< jenkins-mlpack2> * noreply: Update REGISTRY_CONFIG_README.md
< jenkins-mlpack2> * birm: documentation specific for this registry
< jenkins-mlpack2> * Marcus Edel: Use the actual error message instead of a default one.
< jenkins-mlpack2> * Ryan Curtin: Rename files and mark +x.
< jenkins-mlpack2> * Ryan Curtin: Add first attempt at ensmallen memory check script.
< rcurtin> vigsterkr[m]: yep, I'm here, just digging out from the weekend :)
< rcurtin> robertohueso: just scrolled back and saw your message, the situation seems tough in Spain, stay safe! (and sane, if you're stuck inside all day :-O)
< vigsterkr[m]> rcurtin: what's the status of azure pipelines?
< vigsterkr[m]> i think we are dead since last friday :(
zoq[m] has joined #mlpack
< zoq[m]> https://status.dev.azure.com
< vigsterkr[m]> zoq: so basically yaml based things are dead since friday :D
< vigsterkr[m]> amazing
< zoq[m]> There is a quickfix.
< vigsterkr[m]> ah i see
< vigsterkr[m]> seriously :D
< vigsterkr[m]> :>>>
< rcurtin> yeah, I'm surprised, it feels like everything is breaking on Github lately
< rcurtin> what's the quickfix?
< rcurtin> set the region to India or Australia where issues aren't being reported? :)
< vigsterkr[m]> basically explicitly whitelist everything
< vigsterkr[m]> :))))
< rcurtin> ohh, it says right there. I did not read closely enough
< rcurtin> thank you :)
< vigsterkr[m]> which literally contradicts their docs
< vigsterkr[m]> of course
< vigsterkr[m]> ;)
< vigsterkr[m]> as that should be the default behaviour :P
< vigsterkr[m]> zoq: thnx for the link!
< Saksham[m]> Hi Ryan Curtin, I was interested in the CMA-ES project. I have gone through the original paper and tutorial and through the codebase. Can you provide some more details I could include in my proposal? Also, does the project have to be about CMA-based strategies? I read about natural evolution strategies (PEPG) and it seemed interesting to me.
< rcurtin> zoq[m]: do you think we should open a PR with the workaround, and then ask everyone who is having trouble to merge master into their branch once the workaround is committed?
< rcurtin> Saksham[m]: I don't know much about the CMA-ES project, unfortunately, so I can't give much input about it that would be more helpful than anything you might have already read :(
< Saksham[m]> Thanks a lot! I see zoq is listed as a potential mentor, he might be able to give me a better idea
< Saksham[m]> Hi zoq, can you go through my previous message about CMA-ES based optimizers and help me a little?
favre49 has joined #mlpack
favre49 has quit [Remote host closed the connection]
favre49 has joined #mlpack
favre49 has quit [Remote host closed the connection]
favre49 has joined #mlpack
< favre49> rcurtin You're right, I figured out what was up but it's weird
< favre49> when I open it on a half-width window, it doesn't show. But it does on a full width window
< favre49> And if I switch from full width to half width, it disappears
< rcurtin> oh, interesting, let's see if I can reproduce it
< rcurtin> huh, can't seem to reproduce... stretched my window width from ~400px (small enough for the mobile version) to 6400px (I have a big resolution :))
< rcurtin> the formulas seemed to display just fine the full time
toluschr has joined #mlpack
< toluschr> I'm having issues compiling mlpack in a chroot environment. Here is a log of the build https://hastebin.com/raw/nufuxoyohi
< toluschr> As you can see, the file libarmadillo.so is present. https://hastebin.com/raw/akoduyulib
< PrinceGuptaGitte> https://www.mlpack.org/doc/mlpack-3.1.0/doxygen/anntutorial.html#layer_api_anntut in this page's Layer API section, to make our own layer we need to write a getter for the gradient matrix: `OutputDataType& Gradient() { return gradient; }`. But in the FFN class I didn't notice its use. Is it integral for FFN to work?
< PrinceGuptaGitte> Same for : `OutputDataType& Parameters() { return weights; }`
< PrinceGuptaGitte> `InputDataType& InputParameter() { return inputParameter; }`,
< PrinceGuptaGitte> I didn't see their usage anywhere
< chopper_inbound[> <toluschr "As you can see, the file libarma"> You can try removing armadillo and rebuilding the latest version; it worked for me for a similar error
< LakshyaOjhaGitte> what's going on these days... azure, github... corona
< LakshyaOjhaGitte> all being affected
< LakshyaOjhaGitte> @prince776 I am not sure but I think weights are used for differentiated use between layers.
< rcurtin> toluschr: can you tell me more about the chroot environment you're using?
< rcurtin> assuming the chroot is correctly in masterdir/, the make error doesn't seem to make sense
< rcurtin> maybe the symbolic link fails when inside the chroot or something?
< favre49> rcurtin Hmm I guess we'll never know. I assume you use firefox too
< kartikdutt18Gitt> Hey @prince, the Parameters are used to set weights. You can refer to the tests of LSTM; I think I saw various params in the tests for Transposed Convolutions.
< toluschr> rcurtin It's the void-linux build system. I installed `libarmadillo-devel` and `boost-devel` as dependencies. I don't think that the symlink is a problem, as it is the official package.
< toluschr> I can send you a log of all commands executed if you want me to
< rcurtin> favre49: yeah, I also tested in chromium
< rcurtin> toluschr: hm, interesting, but I guess I'm not sure what happens when something is symlinked outside the chroot; what happens in the chroot if you run `ls -lh /usr/lib/libarmadillo.so`?
favre49 has quit [Remote host closed the connection]
< jeffin143[m]> rcurtin: you there?
< rcurtin> jeffin143[m]: yeah, I'm still here, I have not gotten up in the past 30 seconds :-D
< jeffin143[m]> :-p , you left a comment to implement column major in string encoding
< rcurtin> jeffin143[m]: I haven't gotten back to that PR yet, I'll probably handle it later today if I can
< rcurtin> toluschr: ok, I guess, I was just worried that the symlink was to a destination *outside* the chroot
< jeffin143[m]> But I was doubtful about how we will handle the no-padding case
< rcurtin> but it looks like that is not the case
< rcurtin> toluschr: I'm a bit at a loss for this one, my thoughts for directions to investigate are:
< rcurtin> - is this a cross-compilation situation or anything? maybe libarmadillo.so is compiled for the wrong architecture and the compiler can't use it?
< rcurtin> - is the chroot of make actually the exact same as what you're thinking? like, is it possible that libarmadillo.so does actually not exist in whatever environment 'make' is running in?
< toluschr> I'm not cross compiling, but I can try adding libarmadillo as a host dependency
< rcurtin> yeah, maybe worth a shot? honestly I don't have too many clear ideas here
< toluschr> Still doesn't work. I'm not sure what's going on here. Probably a stupid mistake I made.
< toluschr> I'm thinking of just removing the check for libarmadillo.so
< rcurtin> toluschr: removing from the Makefile directly? I guess that's one strategy that could work :)
< toluschr> Well now that is really strange. g++ can't find it while linking either.
< toluschr> rcurtin It has nothing to do with the chroot environment. It even fails on my host system.
< toluschr> The voidlinux package is broken.
< toluschr> The symlink points to nothing, because armadillo is not a dependency of armadillo-devel
< rcurtin> toluschr: well, that could be the reason :-D
< toluschr> Works now
< toluschr> I spent an hour on nothing...
< rcurtin> hehe
< rcurtin> I know the feeling :)
< jeffin143[m]> Hehe that was bad
< zoq> rcurtin: Will open a PR with a quickfix for the ci later.
< zoq> https://github.com/mlpack/mlpack/pull/2306 just needs two approvals :)
< PrinceGuptaGitte> Thanks @kartikdutt18 .
< PrinceGuptaGitte> Just a side question: in your LeNet model, why did you use Sequential<> instead of FFN<>?
< rcurtin> zoq: one of two :)
favre49 has joined #mlpack
< favre49> zoq I went ahead and merged it. That's a pretty big problem fixed :)
< rcurtin> just merged it into #2305, seems to work just fine
toluschr has quit [Quit: toluschr]
< favre49> Ah, so people will need to merge into the new master for it to work
< rcurtin> yeah, but I don't feel the need to do a big blast of comments on all the PRs :) I was planning on only mentioning it on ones that I was reviewing
< favre49> Wouldn't it be best if you could have mlpack-bot do announcements across PRs?
< favre49> This kind of situation isn't normal (I don't remember something like this happening last year), but in case it does, it saves you a headache :)
< kartikdutt18Gitt> @prince776, Sequential allows me to think of it as a layer, so we can add something before the model too, in case the model is a subpart of some other model. I don't think we can do model1.Add(model2) if both are FFN, so using Sequential gives the user some flexibility.
favre49 has quit [Remote host closed the connection]
tip has joined #mlpack
tip is now known as Guest7503
Guest7503 has quit [Remote host closed the connection]
zoso_floyd has joined #mlpack
zoso_floyd has quit [Client Quit]
karyam has joined #mlpack
erdem has joined #mlpack
< erdem> Hello, I wonder how to submit a proposal to mlpack?
< himanshu_pathak[> kartikdutt18 (Gitter): Currently in my PR of Neural Turing Machine I am trying to change FFN so that it can be used as a layer. Right now I am stuck due to some bug; once I find a workaround it will be a nice thing for us to use
erdem has quit [Remote host closed the connection]
< kartikdutt18Gitt> Yes, I think that would be great. :)
< PrinceGuptaGitte> I see. It's a nice idea.
< himanshu_pathak[> Yes, but my only problem with it is that every time I find a workaround, a new error comes my way.
< PrinceGuptaGitte> If you get stuck at something for too long, you can always ask here, we are happy to help :)
karyam has quit [Remote host closed the connection]
< himanshu_pathak[> Yes, I am stuck with a problem; it is as follows:
< himanshu_pathak[> In memory_head_impl.hpp I am trying to do gy += prevDW, where gy is of type ErrorType. Before the rvalue refactor it was working, but now I am getting an error: note: ‘arma::vec {aka arma::Col<double>}’ is not derived from ‘const arma::Op<T1, op_type>’
< himanshu_pathak[> prevDW is of type arma::vec; I don't know why it's not working after the rvalue refactor
< PrinceGuptaGitte> I don't know how exactly rvalue references work, I'll have to do some reading I guess.
< himanshu_pathak[> Maybe Ryan has a better answer for this.
< PrinceGuptaGitte> yes
< rcurtin> himanshu_pathak[: I'm in a meeting right now so my answer will be kind of short but I hope it helps...
< rcurtin> my guess is that gy was previously inferred as arma::vec, but now there is some type that's being inferred incorrectly
< rcurtin> so, e.g., if you are doing Forward(a, b + c), where that "b + c" is an Armadillo expression, the type of the second parameter gets inferred as some arma::Op<...> intermediate type
< rcurtin> the way to fix it might be to form a temporary vector or something for each of the parameters to Forward()
< rcurtin> or something of this general idea---I hope the input helps
< himanshu_pathak[> rcurtin: Thanks for the help, I will try this out. I will try to add temporary vectors; maybe this will work.
< jenkins-mlpack2> Project docker mlpack weekly build build #97: STILL FAILING in 6 hr 42 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/97/
< jenkins-mlpack2> Ryan Curtin: Copy data before test run.
< rcurtin> himanshu_pathak[: yeah, in any case, the issue is likely rooted in incorrect types being inferred for template parameters
< himanshu_pathak[> Yes, and in my code I am trying to do an operation with incorrect types. Thanks again for the help 🙂
M_slack_mlpack13 has joined #mlpack
M_slack_mlpack13 is now known as TrinhNgo[m]
< TrinhNgo[m]> I realized that each project idea has some mentors attached, but I cannot find them on the slack channel :((
ImQ009 has quit [Quit: Leaving]
< jenkins-mlpack2> Project docker mlpack nightly build build #643: NOW UNSTABLE in 9 hr 10 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/643/
< zoq> TrinhNgo[m]: If you have any question you can ask here, it's generally not a great idea to contact mentors directly (private message).
< zoq> TrinhNgo[m]: To answer your question, we don't require contributions to be considered for GSoC. But any contribution is great, because it shows you can work with a larger project.
< zoq> TrinhNgo[m]: (2) Yes, we have an application guide that should be helpful: https://github.com/mlpack/mlpack/wiki/Google-Summer-of-Code-Application-Guide
< zoq> TrinhNgo[m]: (3) That could be a project, yes; some people proposed to implement an object detection model or object segmentation model.
witness has quit [Quit: Connection closed for inactivity]