ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
favre49 has joined #mlpack
< favre49>
I think it got buried in the chat earlier, so I'll ask again - If I'm not wrong, VAE will be in the models repository, while LSTM and MNIST will be in examples?
favre49 has quit [Ping timeout: 264 seconds]
favre49 has joined #mlpack
favre49 has quit [Ping timeout: 264 seconds]
favre49 has joined #mlpack
< kartikdutt18Gitt>
I think I might have missed that message.
< kartikdutt18Gitt>
Ohh, found it. So I think we can have the LeNet and AlexNet models that I made in the models repo, plus a data loader which supports popular datasets like MNIST (currently) and later Pascal VOC etc. We can also add both VAE models there and remove them from the examples repo. What do you think?
< jeffin143[m]>
> i usually just copy the code between the BOOST_AUTO_TEST_CASE(whateveralgorithm){ /code here/ }, change the "Log::Debug" to "std::cout", and paste it into the int main{ }, it works
< jeffin143[m]>
Yes, that's one way of learning, along with too many print statements
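(A rough sketch of that workflow, for anyone reading the log later; the test name, dataset file, printed message, and compile line below are placeholders, not from the log:)

  // Standalone main() built from the body of
  // BOOST_AUTO_TEST_CASE(WhateverAlgorithmTest) { ... }, with Log::Debug
  // replaced by std::cout so the output is printed without --verbose.
  // Build against mlpack, e.g. g++ main.cpp -o main -lmlpack -larmadillo
  // (exact flags vary by setup).
  #include <mlpack/core.hpp>
  #include <iostream>

  int main()
  {
    arma::mat data;
    mlpack::data::Load("dataset.csv", data);  // placeholder dataset

    // ... paste the rest of the copied test body here ...
    std::cout << "loaded " << data.n_cols << " points" << std::endl;

    return 0;
  }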
< favre49>
kartikdutt18Gitt: Yeah that's what I was thinking as well. Good to be on the same page. I guess I'll make a PR that simplifies the examples repo down to DigitRecognizer and LSTM at some point
< favre49>
kartikdutt18Gitt: Ah you're right, forgot about those. I'll wait till those PRs are merged before making any additional PRs
< bisakh[m]1>
How do I run the test cases locally for validation? The build succeeds with the make command, though I'm seeing some errors.
< kartikdutt18Gitt>
Hi bisakh, if you mean the tests, then you can run ./bin/mlpack_test -t TESTNAME, e.g. ./bin/mlpack_test -t ANNLayerTest
ImQ009 has joined #mlpack
< bisakh[m]1>
Got it, Thanks.
metahost has quit [Quit: (getting back up soon)]
metahost has joined #mlpack
metahost has quit [Client Quit]
metahost has joined #mlpack
mtnshh has joined #mlpack
< hemal[m]>
Anyone else who knows armadillo and sparse matrices, please help me out here.
mtnshh has quit [Ping timeout: 240 seconds]
< LakshyaOjhaGitte>
hey any mentor online?
jenkins-mlpack2 has quit [Ping timeout: 258 seconds]
rcurtin has quit [Ping timeout: 265 seconds]
rcurtin has joined #mlpack
< kartikdutt18Gitt>
Hey zoq, can I get your opinion on this:
< kartikdutt18Gitt>
> I think it got buried in the chat earlier, so I'll ask again - If I'm not wrong, VAE will be in the models repository, while LSTM and MNIST will be in examples?
< zoq>
LakshyaOjhaGitte: I have an implementation for the paper using mlpack; I can push it somewhere if you think that would be helpful.
< Param-29Gitter[m>
< Param-29Gitter[m>
> `zoq on Freenode` There is no easy answer, it's a challenging problem, since you have to be familiar with the method and the implementation. And often you start with something and realize this isn't going in the right direction, so it's time-intensive as well.
< Param-29Gitter[m>
Exactly 😅. I had a workaround for this: I thought of considering some algorithms that are less dependent on OpenBLAS (KNN and decision trees), and along with these we could have a set of algorithms whose performance I can try to improve (for GSoC 2020).
< zoq>
kartikdutt18Gitt: I guess it could work in both, but it would have to be modified for sure to make it fit; if you think it should go into the models repo, that's fine with me.
< zoq>
Param-29Gitter[m: Agreed, so I guess ideally you like to come up with a list of potential candidates?
< Param-29Gitter[m>
Yes something like that.
< zoq>
Param-29Gitter[m: Okay, will think about it and get back to you later.
< kartikdutt18Gitt>
Agreed, I mentioned the same in issue mlpack/examples#66 in case anyone has something else in mind, and yes, it would have to be changed to fit in either repo. Thanks a lot.
< AnjishnuGitter[m>
They are conceptually different with respect to when each one is to be applied. So I wanted to add multi-label soft margin loss in a separate PR. Should I go ahead with that?
jenkins-mlpack2 has quit [Ping timeout: 264 seconds]
< zoq>
AnjishnuGitter[m: Yeah, I think that is a good idea.
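(For reference, that loss is per-label binary cross entropy on sigmoid outputs, averaged over elements; a minimal Armadillo sketch of the forward computation, where the function name, argument layout, and mean reduction are assumptions for illustration, not mlpack's final API:)

  #include <armadillo>

  // Multi-label soft margin loss on raw scores `input` and {0,1} targets
  // `target` (same shape); mean reduction over all elements.
  double MultiLabelSoftMarginLoss(const arma::mat& input,
                                  const arma::mat& target)
  {
    const arma::mat sig = 1.0 / (1.0 + arma::exp(-input));
    const arma::mat loss = -(target % arma::log(sig)
        + (1.0 - target) % arma::log(1.0 - sig));
    return arma::accu(loss) / input.n_elem;
  }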
rcurtin has quit [Ping timeout: 240 seconds]
rcurtin has joined #mlpack
< AnjishnuGitter[m>
I was trying to build my local fork of mlpack, but I ran into an error when building, and then bash exits with code 2. As for the error itself, it is occurring in some files like output_width_visitor_impl and other such files which I haven't modified. The same error message can also be seen
< zoq>
Ahh I see, looks like we have some connection issues.
< bisakh[m]1>
yeah it seems so.
< bisakh[m]1>
Hey zoq, I was thinking: since WGAN and WGAN-GP are already implemented and PacGAN is going to be implemented, what about implementing image-to-image translation (pix2pix) in mlpack, along with DeepLab-v3, with tests and documentation this summer under the "Application of ANN" idea?
< zoq>
I think this is a temporary issue, so let's wait some hours.
< zoq>
bisakh[m]1: Sounds like a great idea to me.
< bisakh[m]1>
So can I make a proposal on this?
< zoq>
bisakh[m]1: Sure, feel free.
< bisakh[m]1>
If time permits, we can also cover 1-2 small topics in this.
< zoq>
bisakh[m]1: Hm, I think this is already a big enough project, but yeah, if there is time left it's always nice to have something that could be added on top.
< bisakh[m]1>
zoq: Okay! It would be great if you could point out what should be covered in the proposal.
< Saksham[m]>
Hey zoq, SimpleAdaDeltaTestFunctionFMat is failing on my system. I've opened up an issue.
< Saksham[m]>
Can you have a look?
< bisakh[m]1>
zoq thanks
< zoq>
Saksham[m]: Just commented on the issue.
john91 has joined #mlpack
john91 has quit [Remote host closed the connection]
< favre49>
zoq: rcurtin: Is setting up azure pipelines something only you guys can do? If not, I wanted to try moving our style checks and stuff over to Azure so we can ditch Travis entirely
< favre49>
Oh right, contributors currently don't have access to approvals and such on models and examples; not sure if that's on purpose
< zoq>
favre49: We don't use travis for style checks on the mlpack repo.
< zoq>
favre49: That's a jenkins job.
< favre49>
Oops yeah, you're right
< favre49>
Wouldn't it be simpler for everything to be on azure pipelines though? I assume we have to pay for AWS
< zoq>
favre49: I don't think azure provides a nice interface to show style issues.
< zoq>
favre49: No, it's free for open source projects.
< favre49>
Oh okay then I suppose it doesn't matter
< SriramSKGitter[m>
What functionality does AppVeyor offer that we're still using it for?
< zoq>
SriramSKGitter[m: It builds the MSI installer package; that could be done in Azure as well, but I haven't had time to debug this one.
< SriramSKGitter[m>
Ah, I see. I only asked because far too many of my builds have failed after exceeding the build time limit on AppVeyor :)
< zoq>
SriramSKGitter[m: yeah, you can ignore those issues.
< PrinceGuptaGitte>
Hi, I'm working on InceptionV1 and it uses padded pooling layers, which are being implemented in #2318. So should I wait for it to be merged, or implement it locally (but then it'll cause problems when I open my PR)? Any ideas what I should do?
< PrinceGuptaGitte>
(edited) ... in #2318. So ... => ... in #2127. So ...
< zoq>
PrinceGuptaGitte: You can use git cherry-pick.
< PrinceGuptaGitte>
Thanks :)
< AnjishnuGitter[m>
So, since I was kind of new to git a few weeks back, I made probably the most basic mistake possible: editing directly on the master branch of my fork when creating #2307. Coming to the present, I am working on a different feature locally, but it just struck me that if I create a branch from my master, then the changes from #2307 will be reflected in that branch, which shouldn't be the case. As far as I can
< AnjishnuGitter[m>
guess, I have 2 options. Either delete my fork and re-open a different PR with the same changes as #2307, plus a separate PR for my new feature; or keep #2307 as it is until it is merged and, for the new feature, create a branch from my master and remove the commits corresponding to #2307 from that branch. However, this second option doesn't really feel ideal. What should I be doing in this case? 🙈 @zoq
< zoq>
AnjishnuGitter[m: I guess I would go with option one.
< PranavReddyP16Gi>
I hope you read this before nuking everything 😅
< AnjishnuGitter[m>
Thanks for that. I will have a look through the link. Should help to avoid future mistakes.
favre49 has quit [Quit: Lost terminal]
< LakshyaOjhaGitte>
Hi @iamshnoo, I think you can use git revert; it will give you all the commits you have ever made, and you can go back to that commit from wherever you are now.
< LakshyaOjhaGitte>
It will help if you remember which commit is the one you need right now. It's like being able to go back in time on the branch you want this for.
AbdullahKhilji[m has joined #mlpack
< AbdullahKhilji[m>
Hi, I am Abdullah Khilji. I am looking forward to joining mlpack for GSoC this summer.
< AbdullahKhilji[m>
I am interested in the reinforcement learning project; am I too late to join?
< zoq>
AbdullahKhilji[m: Hello, no you are not too late.
toluschr has joined #mlpack
< Param-29Gitter[m>
Hey @zoq, do you think I should close #2286 (since I am not getting any speed-up when compiled with OpenMP)? Also, I would love to have your views on #2315.
< AbdullahKhilji[m>
For the introduction: I am a 3rd-year CSE student at NIT Silchar and an undergrad researcher at the AI Lab at my institute. I have written 4-5 research papers (under review), and I am passionate about research, especially the reinforcement learning domain. Many of my research projects are also based in the natural language domain, but I have a strong willingness to explore new avenues in RL. I have read the mlpack wiki; could anyone let
< AbdullahKhilji[m>
me know what my initial steps should be in order to proceed further?
< Param-29Gitter[m>
Hey @zoq, should I close #2286 (since I am not getting any speed-up when compiled with OpenBLAS)? Also, I would love to hear your views on #2315.
< zoq>
Param-29Gitter[m: Hm, I guess you are right; I would have to take a closer look and run some tests as well, but I don't think I can do that in the next few days.
< Param-29Gitter[m>
I tried running many different tests for #2286, but it works best when compiled with OpenBLAS without OpenMP. :)
< zoq>
Param-29Gitter[m: In this case, let's close the PR.
< Param-29Gitter[m>
I'll try profiling some more algorithms. I guess that would help me decide which algorithms to consider for parallelization.
ImQ009 has quit [Quit: Leaving]
M_slack_mlpack16 has joined #mlpack
M_slack_mlpack16 is now known as JatoJoseph[m]
< JatoJoseph[m]>
Hello, I wish to work on the project "Application of ANN Algorithms Implemented in mlpack".
< JatoJoseph[m]>
<JatoJoseph[m] "Hello, I wish to work on the pro"> Where do i start ls
< zoq>
JatoJoseph[m]: Hello, mlpack.org/gsoc.html should be helpful.
< rcurtin>
zoq: I think I figured out the convolutional network issue... I spent quite a while checking the math and everything, then realized basically all we have to do is use the `input` parameter provided by the Gradient() method instead of making an alias to the one we got in `Forward()` :)
< rcurtin>
(I'm referring to the issue that's solved by the PR where I made a copy, but didn't merge)
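(In case the shape of that bug isn't obvious from the description, here is a toy version of the pattern, purely illustrative and not mlpack's Convolution code: the gradient has to be computed from the `input` argument that Gradient() receives, rather than from an alias to the matrix that was passed to Forward(), whose memory may have been reused by then.)

  #include <armadillo>

  // Toy layer: output = weight * input (scaling by a single scalar weight).
  class ToyScaleLayer
  {
   public:
    ToyScaleLayer() : weight(1.5) { }

    void Forward(const arma::mat& input, arma::mat& output)
    {
      output = weight * input;
    }

    // Gradient of the loss with respect to `weight`.  The point of the fix:
    // compute it from the `input` parameter given to Gradient(), not from
    // anything cached in Forward().
    void Gradient(const arma::mat& input, const arma::mat& error,
                  arma::mat& gradient)
    {
      gradient.set_size(1, 1);
      gradient(0, 0) = arma::accu(error % input);
    }

   private:
    double weight;
  };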
< zoq>
rcurtin: time-intensive debug session -> simple fix; I guess we can be happy about the outcome.
< rcurtin>
yeah, agreed!
< rcurtin>
but I have been playing with -fsanitize=address and working my way through mlpack_test
< rcurtin>
I debugged all the ANN layers, so all those tests pass now
< rcurtin>
now at KDETest... so I am making progress through the alphabet :)
< zoq>
wow, nice
< zoq>
:)
< rcurtin>
I really hope to finish today or tomorrow, it's basically most of what I'm doing today
< rcurtin>
slow debugging cycle though---compiling with -fsanitize=address seems to take significantly longer
< zoq>
even with multiple cores?
< rcurtin>
yeah, it uses way more RAM too so I have to be conservative and only compile with 2 or 3 cores
< rcurtin>
(otherwise the system swaps and my music stops :-D)
< zoq>
:D
< rcurtin>
there is one problem I'm not solving that I know about though---the copy constructors for all of the layers are the default copy constructors
< rcurtin>
but this will copy the members like `inputParameter`, `outputParameter`, etc. which aren't meant to be copied
< rcurtin>
this is what results in the problem reported in #2314
< zoq>
right, I could handle that one, or we open an issue
< rcurtin>
so my PR won't address that. it might be easy to put together a quick workaround, but to really solve it right, all the layers would need a copy constructor
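(A rough sketch of what such a copy constructor might look like, assuming a hypothetical layer with one weight matrix plus the usual cached matrices; this is not the actual mlpack layer code:)

  #include <armadillo>
  #include <cstddef>

  class ExampleLayer
  {
   public:
    ExampleLayer(const std::size_t inSize, const std::size_t outSize) :
        weights(arma::randu<arma::mat>(outSize, inSize)) { }

    // Explicit copy constructor: the trainable weights are copied, but the
    // per-pass cached matrices (inputParameter, outputParameter, delta,
    // gradient) are left empty in the copy instead of being copied over.
    ExampleLayer(const ExampleLayer& other) : weights(other.weights) { }

   private:
    arma::mat weights;
    arma::mat inputParameter;
    arma::mat outputParameter;
    arma::mat delta;
    arma::mat gradient;
  };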
< rcurtin>
I think opening an issue would be fine---this is the type of thing where it's actually a pretty good task for people looking to get involved :)
< rcurtin>
I only wish we had noticed and opened the issue in February :)
< zoq>
Pretty sure someone will pick this up in a couple of hours.
< zoq>
But yeah, it's a nice entrance task.
< rcurtin>
the only thing is, @ilya-pchelintsev already said on #2314 that he'd like to fix it, but hasn't responded yet
< rcurtin>
in any case, maybe he'll respond in the next day or two and we'll see what the best way forward is
< zoq>
yeah, maybe he is busy with some other stuff
< rcurtin>
made it all the way to RecurrentNetworkTest, most of the way through the alphabet :)
< rcurtin>
I'm scared of SerializationTest and TreeTest though...
< rcurtin>
oh wait, I think it's not 100% alphabetical... it hasn't done any of the main tests yet
< rcurtin>
I used to think it was okay if those leaked a little memory because it would be reclaimed when the program exited, but that only applies for command-line bindings; a memory leak would actually be a problem in a Python or Julia session
< rcurtin>
so I guess they all have to be debugged :)
< zoq>
a little memory leak :)
< zoq>
I have to pay more attention to the memory check job.