ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< rcurtin> what? I accidentally made the mlpack/models repository as private
< rcurtin> I have no idea how that happened
< rcurtin> oops
< rcurtin> I didn't even know I could do that for an organization that doesn't have a paying account ...
< jenkins-mlpack2> Project docker mlpack weekly build build #99: STILL UNSTABLE in 6 hr 9 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/99/
witness has quit [Quit: Connection closed for inactivity]
favre49 has joined #mlpack
< favre49> Question, does VAE still belong in the examples repo? I think not
< favre49> Also, perhaps it would make sense to put this change on the mailing list? Though that could wait till the release of 3.3
< favre49> Either way I'll put a message on relevant PRs and issues
favre49 has quit [Quit: leaving]
favre49 has joined #mlpack
< LakshyaOjhaGitte> hey @favre49, can you give me some insight into how convolution works in the layer?
< favre49> LakshyaOjhaGitte: Sorry, I'm not sure I know the code well enough to help you
< LakshyaOjhaGitte> Here are some [animations](https://github.com/vdumoulin/conv_arithmetic) that someone put up on GitHub
< LakshyaOjhaGitte> No problem, I just want to understand how that works with regard to the animation
< favre49> What's your doubt though?
< LakshyaOjhaGitte> It's just that padding is applied to the input, and then something like 4 blocks are used to generate a single output block in the animation
< LakshyaOjhaGitte> I want to understand how that takes place
< favre49> Wait which animation are you looking at? I'm not sure I understand what you're saying
< LakshyaOjhaGitte> convolution animation in the link, say the fourth one
< LakshyaOjhaGitte> I thought the input and output were the same size
< LakshyaOjhaGitte> here it's different, and it is using something like 6 blocks of padded input (blue blocks) to generate 1 block of output (the one above)
< favre49> In convolutions, the size of the output depends on the kernel size, the input size, and the padding applied
< favre49> Full padding in general is meant to increase the size of the output
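For reference, the standard convolution arithmetic (textbook material matching the conv_arithmetic animations linked above, not anything mlpack-specific): for input size i, kernel size k, padding p, and stride s, the output size is

```latex
o = \left\lfloor \frac{i + 2p - k}{s} \right\rfloor + 1
```

With full padding (p = k - 1, s = 1) this gives o = i + k - 1, which is why the output is larger than the input in that animation.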
< LakshyaOjhaGitte> is that called upscaling? heard the term somewhere
< favre49> I've only heard that in the context of video and picture resolutions
< LakshyaOjhaGitte> okay thanks for the help :)
< favre49> Welcome, glad I was able to resolve it
< LakshyaOjhaGitte> I also wanted to point out that the documentation of convolution is not good, I think
< LakshyaOjhaGitte> The detailed description section should be improved, so that if someone reads it, it gives easily understandable info
< Saksham[m]> From what I’ve read, we have transposed convolution which can be thought of as upscaling an image with convolution
< LakshyaOjhaGitte> Yup
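For reference, the transposed case from the same conv_arithmetic source: a transposed convolution with stride s, kernel size k, and padding p maps an input of size i to an output of size

```latex
o = s\,(i - 1) + k - 2p
```

so the output is generally larger than the input, which is the "upscaling" effect mentioned above.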
< favre49> The documentation you're looking at isn't really meant to be a tutorial
< Saksham[m]> I don’t think the mlpack documentation needs to be detailed enough for someone to learn what convolution means
< Saksham[m]> There are plenty of resources online
< favre49> But we could have a tutorial for it. Unfortunately I wouldn't have the time, but feel free to add it
< favre49> Saksham[m]: You're right, but from a marketing perspective it would be great if people could come to our website and learn about CNNs, and proceed to use mlpack for it :)
< LakshyaOjhaGitte> Yeah, there are plenty of resources out there, but we should not pass up any chance to improve the documentation
< Saksham[m]> I’m working on the tutorial for CNN on MNIST
< LakshyaOjhaGitte> Exactly
< Saksham[m]> I could add details about convolution there
< LakshyaOjhaGitte> yeah sure no problem with me
< LakshyaOjhaGitte> better than opening a PR for this
< Saksham[m]> Since the new examples directory works as a tutorial directory, it makes sense to add it there
< Saksham[m]> Sure I would explain convolution for a beginner somewhere in the tutorial
< LakshyaOjhaGitte> yeah, for beginners it can be done; also, can you refer to [this](https://arxiv.org/pdf/1603.07285.pdf) to explain the working of convolution?
< LakshyaOjhaGitte> what do you say
< Saksham[m]> Should this be done for every machine learning algorithm?
< jeffin143[m]> That would be a lot of work Saksham :)
< jeffin143[m]> But if you have the fuel, you can definitely
< LakshyaOjhaGitte> I was thinking of this too; it is good, but kind of complex and a lot of work
< Saksham[m]> I can open up an issue, and we can let people take it up
< jeffin143[m]> I believe you should start a channel and teach mlpack :)
< jeffin143[m]> I will definitely subscribe to it
< jeffin143[m]> > I can open up an issue, and we can let people take it up
< jeffin143[m]> Saksham: yeah, maybe
< LakshyaOjhaGitte> maybe someone who wants a good first PR can take this?
< Saksham[m]> Nice idea
< LakshyaOjhaGitte> :)
< Saksham[m]> But should this be done in the mlpack repo?
< Saksham[m]> I feel we should keep the function definition a little separate from the tutorials
< LakshyaOjhaGitte> I don't know how the documentation gets changed; a mentor can provide better insight.
< Saksham[m]> The documentation is automatically generated from the source code using Doxygen, if I’m right.
< Saksham[m]> Not sure tho
< Saksham[m]> The comments and annotations in the source code
< birm[m]1> Mostly. There's also ./doc which has some guides not associated with a particular place in source
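For reference, a minimal sketch of the Doxygen comment style used in mlpack's headers; the class and parameters below are illustrative, not copied from the actual source:

```c++
/**
 * Implementation of a Convolution layer. The output dimensions are
 * determined by the input size, kernel size, stride, and padding.
 *
 * @tparam InputDataType Type of the input data (illustrative parameter).
 */
template<typename InputDataType>
class Convolution
{
 public:
  /**
   * Create the Convolution object.
   *
   * @param inSize Number of input feature maps.
   * @param outSize Number of output feature maps.
   */
  Convolution(const size_t inSize, const size_t outSize);
};
```

Doxygen turns the @tparam/@param annotations into the HTML reference pages on the website.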
< favre49> There is merit to the idea of revamping the "tutorials". I've wanted to do it but haven't gotten around to properly thinking about it
< favre49> I really liked how comprehensive Theano's tutorial section was
< chopper_inbound4> I recently opened an issue https://github.com/mlpack/mlpack/issues/2309 regarding documentation.
< kartikdutt18Gitt> Hey Saksham, I have left a comment about documentation, what I think it should look like. Would love to get everyone's opinion on it. Here is the [link](https://github.com/mlpack/examples/issues/65#issuecomment-601542942). Thanks.
< favre49> chopper_inbound4: Yeah I saw that, what I was thinking was a complete revamp, including the website. It would be a gigantic project
< favre49> Like I said, I haven't put enough thought into how I would like it structured. Maybe I'll do it now with this extra time :)
< kartikdutt18Gitt> > `favre49 on Freenode` Saksham: You're right, but from a marketing perspective it would be great if people could come to our website and learn about CNNs, and proceed to use mlpack for it :)
< kartikdutt18Gitt> I think a brief description of the layer and some reference links to blogs or papers should be fine. What do you think?
< SriramSKGitter[m> Just a thought, how would the examples repo relate to the tutorials on mlpack.org ? Are they supposed to be complementary or is one going to replace the other?
< chopper_inbound4> favre49 : agree. We need some web developers.
< favre49> kartikdutt18Gitt: Actually, I wanted something more like http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html
< favre49> I think this is also where LakshyaOjhaGitte's animations are from :)
< favre49> It's a lot of work though. May be the kind of project that you would give for Google Summer of Documentation or whatever it's called
< kartikdutt18Gitt> @sriramsk1999, I think they are supposed to be complementary. The tutorials on the website are simple code, whereas the examples repo will have really cool projects that a user could look at, run, understand, change, and so on.
< jeffin143[m]> favre49: that's a whole gsoc project :)
< favre49> Yeah, ideally the tutorials would also link to the examples repo
< favre49> jeffin143[m]: I had thought of it as a gsoc project, but it's too documentation intensive. Doesn't really fit imo
< favre49> Another gsoc project I had thought of was NLP related but I never got the time to flesh out the idea. I'll probably just co-mentor :)
< kartikdutt18Gitt> >favre49 on Freenode kartikdutt18 (Gitter): Actually, I wanted something more like http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html
< kartikdutt18Gitt> I think this is really cool, but this would require a whole website revamp as you mentioned.
< chopper_inbound[> Maybe some can take it up outside of gsoc.
< favre49> Yeah, if I ever flesh out the idea properly and decide on a good structure I'll create a big help-wanted issue
< SriramSKGitter[m> @kartikdutt18 : Yeah that sounds good. Perhaps the tutorials could feature a more in-depth explanation of the API and the examples focus on cool applications :)
< jeffin143[m]> favre49: NLP is on my to-do list
< chopper_inbound[> No hurry?
< jeffin143[m]> I have lots of plan
< jeffin143[m]> I was thinking of putting up a proposal 😂
< jeffin143[m]> But it was too late
< jeffin143[m]> favre49: also implementing new callbacks for ensmallen
< kartikdutt18Gitt> >@kartikdutt18 : Yeah that sounds good. Perhaps the tutorials could feature a more in-depth explanation of the API and the examples focus on cool applications :)
< kartikdutt18Gitt> Agreed. It would be nice to see some great projects in it.
< jeffin143[m]> Like saving models after every epoch, or writing to files
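A minimal sketch of such a callback, assuming ensmallen's documented callback interface (the EndEpoch() hook); the filename scheme is hypothetical:

```c++
#include <ensmallen.hpp>
#include <iostream>
#include <string>

// Saves the current parameters to disk at the end of every epoch.
class SaveEveryEpoch
{
 public:
  template<typename OptimizerType, typename FunctionType, typename MatType>
  bool EndEpoch(OptimizerType& /* optimizer */,
                FunctionType& /* function */,
                const MatType& coordinates,
                const size_t epoch,
                const double objective)
  {
    // Hypothetical filename scheme: one snapshot per epoch.
    coordinates.save("model-epoch-" + std::to_string(epoch) + ".bin");
    std::cout << "Epoch " << epoch << ": objective " << objective << "."
              << std::endl;
    return false;  // false means: do not terminate the optimization.
  }
};
```

The callback would then be passed as a trailing argument to Optimize(), e.g. optimizer.Optimize(f, coordinates, SaveEveryEpoch()).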
< favre49> chopper_inbound[: Nah I'm not in a hurry, especially since I have so many competing interests it's hard to choose one
< jeffin143[m]> Does anybody know a website where you can do an online code challenge with a friend?
< jeffin143[m]> Like 1 v 1 code fight
< chopper_inbound4> favre49 : haha, that happened to me as well.
< sreenik[m]> jeffin143[m]: Yeah I used to do it in school, there was a site by the name codefights apparently. I hope it still exists
< LakshyaOjhaGitte> Hi @sreenik can you give me some insight on attention layer?
< LakshyaOjhaGitte> is recurrent attention and attention the same thing?
johnsoncarl[m] has quit [Ping timeout: 260 seconds]
SakshamRastogiGi has quit [Ping timeout: 260 seconds]
tejasvi[m] has quit [Ping timeout: 260 seconds]
hemal[m] has quit [Ping timeout: 260 seconds]
TanayMehtaGitter has quit [Ping timeout: 260 seconds]
geek-2002Gitter[ has quit [Ping timeout: 260 seconds]
EL-SHREIFGitter[ has quit [Ping timeout: 260 seconds]
Shikhar-SGitter[ has quit [Ping timeout: 260 seconds]
bkb181[m] has quit [Ping timeout: 260 seconds]
GitterIntegratio has quit [Ping timeout: 260 seconds]
< jenkins-mlpack2> Project docker mlpack nightly build build #647: STILL FAILING in 3 hr 11 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/647/
johnsoncarl[m] has joined #mlpack
SakshamRastogiGi has joined #mlpack
hemal[m] has joined #mlpack
tejasvi[m] has joined #mlpack
TanayMehtaGitter has joined #mlpack
bkb181[m] has joined #mlpack
Shikhar-SGitter[ has joined #mlpack
EL-SHREIFGitter[ has joined #mlpack
geek-2002Gitter[ has joined #mlpack
< sreenik[m]> freenode_gitter_ojhalakshya[m]: Hey I am not quite familiar with the attention layer. I hope someone else helps you out here :)
< chopper_inbound4> Lakshya Ojha (Gitter) : No, recurrent attention and attention are not the same. I think you are talking about recurrent_attention layer in mlpack.
< chopper_inbound4> You can refer to https://github.com/mlpack/mlpack/issues/2296
< chopper_inbound4> The blog linked there can help you understand attention.
< PrinceGuptaGitte> I think there is a fault in the implementation of the `Padding` layer. I have opened issue #2318 for it.
< LakshyaOjhaGitte> oh okay Thanks chopper_inbound
witness has joined #mlpack
< chopper_inbound4> :)
< naruarjun[m]> Hey
< naruarjun[m]> I have written a test and added it to tests.
< naruarjun[m]> I wanted to ask how I can build only that test file, and not the entire mlpack_test?
< jeffin143[m]> ./bin/mlpack_test -t testname : naruarjun
< jeffin143[m]> The test name will be at the start of the test file
< naruarjun[m]> Thanks got it.
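For reference, the full workflow jeffin143[m] is describing, assuming a CMake build directory (the suite name here is just an example):

```sh
# Build only the test binary instead of all of mlpack:
make mlpack_test
# Run a single Boost.Test suite by name:
./bin/mlpack_test -t ConvolutionalNetworkTest
```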
ImQ009 has joined #mlpack
favre49 has quit [Ping timeout: 264 seconds]
favre49 has joined #mlpack
< bisakh[m]> Hi zoq, I have a query: on the GSoC ideas page, under 'ESSENTIAL DEEP LEARNING MODULE', I find the proposed work for `WGAN-GP` is already implemented in mlpack under src/methods/ann/gan. Is mlpack focusing on a different implementation?
< himanshu_pathak[> bisakh: I think it is already there
< himanshu_pathak[> That was a list of ideas, but you can propose whatever module you like
< bisakh[m]> Himanshu Pathak: yeah, sure. So if we talk about the above-mentioned idea, we have to implement the whole thing, i.e. all submodules, over the summer, isn't it?
< bisakh[m]> That's why I like that idea; I have a fondness for any kind of adversarial model.
< himanshu_pathak[> bisakh: Yes, you have to implement the full GAN submodule if you are trying to implement a new one, and you have to take care of the tests as well.
< bisakh[m]> Thanks Himanshu. What about the WGAN with gradient penalty?
< AbishaiEbenezerG> Hi mlpack! Regarding the GSoC proposal, since the mentors may not be reviewing our proposals and giving us feedback in the next week, I would like a few pointers on what I should (or should not) include, and what mlpack will be looking for specifically...
< himanshu_pathak[> bisakh: Oh, I see, you were talking about both the WGAN-GP and PacGAN ideas. Firstly, WGAN-GP is already completed; I don't think there is anything left in that. The PacGAN idea is still open; if you are familiar with the codebase and understand the paper, it will be a great one to implement
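For context, the gradient penalty that distinguishes WGAN-GP from the original WGAN (from Gulrajani et al., "Improved Training of Wasserstein GANs"); the critic minimizes

```latex
L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_{g}}\left[D(\tilde{x})\right]
  - \mathbb{E}_{x \sim \mathbb{P}_{r}}\left[D(x)\right]
  + \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}
    \left[ \left( \lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_{2} - 1 \right)^{2} \right]
```

where \hat{x} is sampled along straight lines between real and generated samples; the penalty term replaces the weight clipping of the original WGAN.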
< zoq> AbishaiEbenezerG: Have you seen the application guide: https://github.com/mlpack/mlpack/wiki/Google-Summer-of-Code-Application-Guide
< AbishaiEbenezerG> oh hi @zoq. I thought you were busy, as I recently read that you would be occupied with something else...
< AbishaiEbenezerG> Yes, I have read that a few times... is that all I need to know?
favre49 has quit [Remote host closed the connection]
< zoq> I'm super busy right now, if everything works out I do have some more time tomorrow.
< zoq> AbishaiEbenezerG: But the wiki page is a good starting point.
< rcurtin> AbishaiEbenezerG: remember, mentors can always reach out and ask for clarifications after the proposal deadline has passed
< AbishaiEbenezerG> oh alright
< himanshu_pathak[> zoq: I was discussing with rcurtin that if I want to use FFN as a layer, I have to define a new Backward() that takes const InputType input as an argument; but if someone uses FFN as a model and passes const InputType input, it will cause an error in that case
< himanshu_pathak[> Should I add a warning for this, or do you have a different suggestion?
< AbishaiEbenezerG> @joeljosephjin why is #2275 closed?
< zoq> himanshu_pathak[: Maybe using the sequential layer is another option, but a warning sounds fine as well.
< himanshu_pathak[> zoq: Yes, using the sequential layer would be a good thing, but I also like the idea of using FFN as a layer, because I may do the same thing with RBF to implement DBN, though that case might be quite different. I just want to experiment with this.
< zoq> himanshu_pathak[: I see, we can modify the FFN class as well, if we have to.
< himanshu_pathak[> zoq: OK, so I will go with the warning approach
< zoq> Sounds good
< himanshu_pathak[> Thanks for the suggestion
< Manav-KumarGitte> himanshu_pathak: Hey, are you working on your proposal, or on some issue/PR which is already open?
< himanshu_pathak[> Manav-Kumar (Gitter): That question was related to my PR
< Manav-KumarGitte> Ok.
< himanshu_pathak[> Neural Turing Machine Implementation
travis-ci has joined #mlpack
< travis-ci> shrit/models#14 (digit - 3f184ca : Omar Shrit): The build is still failing.
travis-ci has left #mlpack []
< AbishaiEbenezerG> where can I find documentation on the tests?
drock has joined #mlpack
drock has quit [Remote host closed the connection]
< AnjishnuGitter[m> I was looking through the loss functions in mlpack and couldn’t find Smooth L1 Loss. I intend to start working on it probably by tomorrow. Let me know if I am missing something and it is actually implemented somewhere. Thanks.
< Saksham[m]> Smooth L1 is implemented if I remember correctly
< AnjishnuGitter[m> I see... thanks for that. I didn’t actually know the name Huber Loss, hence the confusion
< Saksham[m]> hahah, happened to me too; I actually opened a PR for this after implementing it, only to find this out later. xD
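For reference, the two names refer to the same function: Smooth L1 is the Huber loss with delta = 1,

```latex
L_{\delta}(a) =
\begin{cases}
  \frac{1}{2} a^{2} & \text{if } |a| \le \delta, \\
  \delta \left( |a| - \frac{1}{2} \delta \right) & \text{otherwise,}
\end{cases}
```

quadratic near zero and linear in the tails, which makes it less sensitive to outliers than squared error.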
< AnjishnuGitter[m> XD. Also, could you actually help me out with one more thing. I was making a list of some stuff I want to implement, but which I couldn’t find in mlpack. Is any of the following already implemented?
< Saksham[m]> This might be helpful
< AnjishnuGitter[m> Okay. Thanks. I will have a look through that.
< Saksham[m]> or just go through the loss functions folder inside mlpack
< AnjishnuGitter[m> 😅 yeah, that’s what I was doing initially. But then stuff like Huber came up, which I hadn’t heard of before.
< Saksham[m]> sometimes I do a search on PRs before implementing. It is helpful sometimes…
< Saksham[m]> Anyway happy to help if you have doubts
< AnjishnuGitter[m> One more thing. I notice that Gaurav Singh mentioned on #2200 that he wanted to work on Multi Label Margin Loss back on Feb 11. I don’t see a PR from him referencing this issue yet. So, should I assume that he is still working on it, or do I assume that it is not implemented? This is one specific example, but I have noticed situations like this with some other PRs also. What do you do in such cases?
< Saksham[m]> Just tag him in the issue (#2200) and ask if he’s still working on it.
< Saksham[m]> add a comment regarding it and tag him in it
< AnjishnuGitter[m> I see. Okay👌
< AnjishnuGitter[m> Thanks so much for your help!
travis-ci has joined #mlpack
< travis-ci> shrit/models#15 (digit - 45a72c1 : Omar Shrit): The build is still failing.
travis-ci has left #mlpack []
< Saksham[m]> 🙂
travis-ci has joined #mlpack
< travis-ci> shrit/models#16 (digit - 6332beb : Omar Shrit): The build has errored.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> shrit/models#17 (digit - 7e1120d : Omar Shrit): The build has errored.
travis-ci has left #mlpack []
< metahost> rcurtin, zoq: I have shared a draft proposal via the application portal, will you have time to look at it? :)
< JoelJosephGitter> @abinezer: I closed the issue because it was not an issue. I only trained it for 100 episodes and it did not change its test average, but when I ran it for a thousand episodes, it did work: https://ibb.co/tXNrVJj
RishabhGoel[m] has joined #mlpack
< RishabhGoel[m]> 💃 Just arrived! Trying to familiarize myself before applying for GSoC.
< SriramSKGitter[m> @nishantkr18 : No, they are just suggestions :)
< JoelJosephGitter> @abinezer: I don't think there is detailed documentation for the tests yet, but see if the gist here can help: https://medium.com/@joeljosephjin/asynchronous-deep-reinforcement-learning-with-mlpack-140ee573a235 :) What is the difficulty that you said you faced with the code for DQN on the mountain car environment?
eadwu[m] has left #mlpack []
< JoelJosephGitter> I usually just copy the code between BOOST_AUTO_TEST_CASE(whateveralgorithm) { /* code here */ }, change "Log::Debug" to "std::cout", and paste it into int main() { }; it works
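A minimal sketch of that pattern; the body below is a placeholder rather than a real mlpack test:

```c++
#include <mlpack/core.hpp>
#include <iostream>

int main()
{
  // Body copied out of a BOOST_AUTO_TEST_CASE(...) block, with
  // Log::Debug swapped for std::cout so the output is always printed.
  arma::mat data = arma::randu<arma::mat>(10, 100);
  std::cout << "Generated " << data.n_cols << " random points." << std::endl;
  return 0;
}
```

Compile and link it against mlpack and Armadillo as usual.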
NishantKumarGitt has joined #mlpack
ImQ009 has quit [Read error: Connection reset by peer]
witness has quit [Quit: Connection closed for inactivity]
mlozhnikov[m]1 has joined #mlpack
mlozhnikov[m] has quit [Ping timeout: 246 seconds]
bisakh[m]1 has joined #mlpack
bisakh[m] has quit [Ping timeout: 246 seconds]