ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
jeffin has joined #mlpack
< jeffin> rcurtin: what's up with the naming of the libraries, Armadillo and Bandicoot? Any particular reason?
Supriya has joined #mlpack
Supriya has quit [Client Quit]
chandramouli_r has joined #mlpack
pd09041999 has joined #mlpack
< rcurtin> jeffin: Conrad is from Australia :)
< jeffin> No doubt :), maybe I should try to train and predict what your next library name would be :-p
chandramouli_r has quit [Ping timeout: 256 seconds]
< jenkins-mlpack2> Project docker mlpack weekly build build #41: UNSTABLE in 6 hr 30 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/41/
rf_sust2018 has joined #mlpack
mulx10 has joined #mlpack
< mulx10> akfluffy: Did you figure out the problem?
< mulx10> If not, I tried it; you can find the code here: https://pastebin.com/6bwfERHv
mulx10 has quit [Client Quit]
chandramouli_r has joined #mlpack
< chandramouli_r> does mlpack work with python3 ?
< chandramouli_r> Can the documentation be more specific about different Python versions? It commonly refers to Python 2.x, but Python 2.x will be deprecated in 2020, so I would like to contribute some changes to the documentation. Any opinions?
johnsoncarl[m] has quit [Read error: Connection reset by peer]
harias[m] has quit [Remote host closed the connection]
ani1238[m] has quit [Remote host closed the connection]
shashank-b[m] has quit [Read error: Connection reset by peer]
kanishq244[m] has quit [Remote host closed the connection]
skrpl[m] has quit [Remote host closed the connection]
mrohit[m] has quit [Remote host closed the connection]
Sergobot has quit [Remote host closed the connection]
rf_sust2018 has quit [Ping timeout: 246 seconds]
jenkins-mlpack2 has quit [Ping timeout: 246 seconds]
jenkins-mlpack2 has joined #mlpack
rf_sust2018 has joined #mlpack
chandramouli_r has quit [Ping timeout: 256 seconds]
ani1238[m] has joined #mlpack
shashank-b[m] has joined #mlpack
harias[m] has joined #mlpack
Sergobot has joined #mlpack
mrohit[m] has joined #mlpack
kanishq244[m] has joined #mlpack
skrpl[m] has joined #mlpack
johnsoncarl[m] has joined #mlpack
rf_sust2018 has quit [Ping timeout: 255 seconds]
pd09041999 has quit [Ping timeout: 246 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Ping timeout: 246 seconds]
rf_sust2018 has joined #mlpack
rf_sust2018 has quit [Ping timeout: 246 seconds]
rf_sust2018 has joined #mlpack
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
pd09041999 has quit [Ping timeout: 255 seconds]
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 264 seconds]
pd09041999 has joined #mlpack
i8hantanu has joined #mlpack
seewishnew has joined #mlpack
pd09041999 has quit [Ping timeout: 245 seconds]
seewishnew has quit [Ping timeout: 240 seconds]
rf_sust2018 has quit [Quit: Leaving.]
pd09041999 has joined #mlpack
shashank-b[m] has quit [Remote host closed the connection]
kanishq244[m] has quit [Remote host closed the connection]
harias[m] has quit [Remote host closed the connection]
ani1238[m] has quit [Remote host closed the connection]
rf_sust2018 has joined #mlpack
mrohit[m] has quit [Read error: Connection reset by peer]
skrpl[m] has quit [Remote host closed the connection]
Sergobot has quit [Remote host closed the connection]
johnsoncarl[m] has quit [Remote host closed the connection]
ani1238[m] has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 268 seconds]
rf_sust20181 has joined #mlpack
rf_sust20181 has quit [Client Quit]
Sergobot has joined #mlpack
harias[m] has joined #mlpack
mrohit[m] has joined #mlpack
kanishq244[m] has joined #mlpack
shashank-b[m] has joined #mlpack
skrpl[m] has joined #mlpack
johnsoncarl[m] has joined #mlpack
rf_sust2018 has quit [Ping timeout: 268 seconds]
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
pd09041999 has quit [Ping timeout: 250 seconds]
seewishnew has joined #mlpack
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
akfluffy has joined #mlpack
< akfluffy> mulx10: omg thank you so much! how did you get it to work?
< akfluffy> it looks like the training did it?
< akfluffy> I just assumed I could evaluate a model with the random parameters without training...
< akfluffy> I think that could be a potential improvement to the library
akfluffy has left #mlpack []
seewishnew has joined #mlpack
pd09041999 has quit [Ping timeout: 258 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Excess Flood]
pd09041999 has joined #mlpack
pd09041999 has quit [Excess Flood]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
seewishnew has quit [Ping timeout: 268 seconds]
pd09041999 has joined #mlpack
rf_sust2018 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
shoaib98libra_ has joined #mlpack
shoaib98libra_ has quit [Client Quit]
pd09041999 has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
i8hantanu has quit [Quit: Connection closed for inactivity]
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
rf_sust2018 has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
seewishnew has joined #mlpack
saksham189 has joined #mlpack
pd09041999 has quit [Ping timeout: 250 seconds]
KimSangYeon-DGU has joined #mlpack
Mann has joined #mlpack
pd09041999 has joined #mlpack
KimSangYeon-DGU has quit [Quit: Page closed]
< Mann> Hey, I know it's late of me to join the community now, but I am really willing to get into mlpack. Can anyone tell me how many vacant seats are left to get selected?
< ShikharJ> Mann: There's no such limit on the vacancy of seats to be a part of mlpack :)
< Mann> Hey Shikhar, are you Participating as a student or a mentor
< ShikharJ> Mann: I will be mentoring hopefully this year.
< Mann> Okay, which project will you be mentoring for, sir.
< ShikharJ> That depends on which proposal I feel excited to mentor and learn more from :) It could practically be anything of interest.
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
< Mann> so these are the projects right? we have to select one or more of these for proposals
< ShikharJ> Basically, these are pointers to good ideas. Whether or not you write a proposal on them is your choice. But you're free to be creative and propose an idea of your own.
< Mann> alright
< Mann> I really find reinforcement learning project really interesting
< Mann> We can also go with proposal ideas from old projects, right?
seewishnew has quit [Ping timeout: 250 seconds]
< ShikharJ> In case of old ideas, it's better I think to consult with individual mentors for that idea.
< Mann> Algorithm optimization
< Mann> okay sir
< Mann> via email right?
< Mann> okay thank you
< ShikharJ> Either IRC or the mailing list please.
< Mann> Okay Sir
rf_sust2018 has joined #mlpack
seewishnew has joined #mlpack
KimSangYeon-DGU has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
vivekp has joined #mlpack
chandramouli_r has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
sreenik has joined #mlpack
Mann has quit [Ping timeout: 256 seconds]
govg has joined #mlpack
rf_sust2018 has quit [Quit: Leaving.]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 264 seconds]
heisenbug_ has joined #mlpack
< heisenbug_> hey, i want to work on mlpack in GSoC.
< heisenbug_> I am thinking of applying for the ANN algorithms project.
< heisenbug_> I tried to subscribe to the mailing list but I don't know why I was not added to it...
< heisenbug_> So this was the only other place I could find to communicate with the mlpack community.
< heisenbug_> anyone up?
< jeffin> ShikharJ: in ann/layer/layer_types.hpp, under custom layers, when we declare something it says it's not declared. Suppose relu<arma::mat,arma::mat>* is declared; after that I declare mrelu<arma::mat,arma::mat>* the same way, but it throws an error that mrelu is not declared.
< jeffin> Though I have included #include <ann/layer/mrelu.hpp>.
< zoq> heisenbug_: Hello, https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas does have some ann related ideas but you are welcome to propose anything else.
< heisenbug_> I am thinking of implementing LSTM.
< jeffin> Zoq : could you help me in debugging the above issue*
< zoq> heisenbug_: We already have a LSTM implementation.
< jeffin> Also, mrelu.hpp does have a class mrelu { ... } with its definition.
Yashwants19 has joined #mlpack
< heisenbug_> there was a suggestion of connecting LSTM to VAE, can you explain it?
< zoq> jeffin: Is it part of the cmake file as well?
< jeffin> Yes it's compiled
< jeffin> I added both mrelu.hpp and mrelu_impl to cmake
< Yashwants19> Hi rcurtin : I submitted my draft for GSoC 2019. It would be really great if I get your feedback about it. Can you please have a look at it, if you have a chance?
< Yashwants19> Thank You
Yashwants19 has quit [Client Quit]
< zoq> heisenbug_: That idea goes into using VAE in a sequential setting: one example is https://github.com/google/vae-seq
< heisenbug_> ohk, are there any other ideas regarding this project?
< zoq> jeffin: I guess it's included in the layer_types.hpp file as well
< heisenbug_> is anyone else also working on it?
akfluffy has joined #mlpack
< jeffin> Ok , i will look through it, Thanks
< zoq> jeffin: Make sure it's part of src/mlpack/methods/ann/layer/CMakeLists.txt and included in src/mlpack/methods/ann/layer/layer_types.hpp.
vivekp has quit [Read error: Connection reset by peer]
< akfluffy> if I'm using RandomInitialization, why are the output cubes always the same after training?
< zoq> heisenbug_: We received a couple of applications for the project.
< jeffin> Zoq : Yes i have done both of these
< zoq> jeffin: What is the exact error you get?
< zoq> akfluffy: output for the RNN model?
< akfluffy> zoq: yeah
< jeffin> For leakyrelu<arma::mat,arma::mat>*, under custom layers, how did we create an object for the leaky relu class?
< heisenbug_> Actually, I checked here https://github.com/mlpack/models and thought that LSTM was not implemented or needed reimplementation. I also created a GitHub page regarding LSTM: https://heisenbuug.github.io/Long-Short-Term-Memory/
rf_sust2018 has joined #mlpack
< zoq> akfluffy: Not sure I get what you mean; the RandomInitialization will initialize the model parameters/weights.
< akfluffy> heisenbug_: it's in ann/LSTM
< heisenbug_> yea, i saw that now.
< zoq> right, the models repo shows a couple of examples
vivekp has joined #mlpack
< zoq> Hopefully we will see more examples in the future.
< jeffin> zoq: the error is ../methods/ann/layer/layer_types.hpp:179:5: error: 'mrelu' was not declared in this scope: mrelu<arma::mat,arma::mat>*
< akfluffy> jeffin: is mrelu your custom layer?
< jeffin> Yes
< zoq> jeffin: I can just guess, maybe it's not in the same namespace.
< heisenbug_> Ohk, so could applying LSTM to model time-dependent data be a project?
< heisenbug_> Like, I will apply LSTM to different datasets in the same way VAE was done.
< jeffin> Ohh may be u are right
< chandramouli_r> Can someone review my proposal https://drive.google.com/open?id=1LEoO1xy7SZnpsYIbc8rHeivhWLksoc8u and give your valuable suggestions for improvements.
< heisenbug_> So, we will have more models...
< jeffin> No, I have just copied leaky_relu and made some tweaks.
< jeffin> So it should be under the same namespace.
< jeffin> Still, I will go with it and check the namespaces.
< akfluffy> jeffin: did you use "make" again?
< chandramouli_r> Is Issue #1840 open ? Can I work on that ?
< zoq> heisenbug_: We have to make sure this is enough work, but this could be a great project.
< zoq> chandramouli_r: Yes, please feel free to work on the issue.
< chandramouli_r> Thanks will start working on that
< heisenbug_> So, can you suggest how to proceed with my proposal? I have been coding in C++, Python, and Java for the past 2 years and I have implemented many ML models (https://github.com/heisenbuug).
< zoq> jeffin: Another guess is that the header guard is wrong, since you copied the relu layer?
< heisenbug_> My interest is in stock market prediction, which is time series data, so LSTMs and RNNs come into play.
< zoq> heisenbug_: The application guide should be helpful: https://github.com/mlpack/mlpack/wiki/Google-Summer-of-Code-Application-Guide
< zoq> heisenbug_: I see I guess in this case it's a good fit.
< zoq> heisenbug_: http://mlpack.org/gsoc.html should be helpful as well
< akfluffy> zoq: by the way, my RNN problem was fixed by mulx10. All they did was train it. Apparently I can't evaluate a model with the initial random weights without training it first?
< jeffin> zoq: what does a header guard mean? And sorry for bothering you.
< heisenbug_> What if I take 2-3 datasets, each of different fields and apply LSTM on them and we can put that in here https://github.com/mlpack/models
< heisenbug_> what more stuff are you expecting to add here https://github.com/mlpack/models?
< zoq> akfluffy: great, I saw the message but haven't had time yet to look into the solution; it should be possible to evaluate a model without training it. If you like, you can open an issue.
< jeffin> Zoq : i got it , you are correct
< jeffin> There was an issue with the #ifdef; since I copied it, I should change it.
< zoq> jeffin: Okay, great, I think that will solve the issue.
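The copied-include-guard bug jeffin hit can be illustrated in miniature. The guard names below are stand-ins, not mlpack's actual macros: when a copied header keeps the original guard, the preprocessor silently skips its whole body, so the new class is never declared.

```cpp
#include <cassert>

// Simulate leaky_relu.hpp with its include guard:
#ifndef MLPACK_ANN_LAYER_LEAKY_RELU_HPP
#define MLPACK_ANN_LAYER_LEAKY_RELU_HPP
struct LeakyReLU { int id = 1; };
#endif

// A copy of the header that forgot to rename the guard: the macro is
// already defined, so this body is skipped and MReLUBad never exists.
#ifndef MLPACK_ANN_LAYER_LEAKY_RELU_HPP
struct MReLUBad { int id = 2; };  // never compiled
#endif

// The fix: give the new file its own guard, and the declaration goes through.
#ifndef MLPACK_ANN_LAYER_MRELU_HPP
#define MLPACK_ANN_LAYER_MRELU_HPP
struct MReLU { int id = 2; };
#endif
```

This is exactly why the compiler reported `'mrelu' was not declared in this scope` even though the header was included and listed in CMake.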
< jeffin> Thanks for the help, I will let you know after building whether it really was the issue or not.
< zoq> heisenbug_: Perhaps you can think of more methods that can be used that are already implemented; or just have to be extended.
< zoq> jeffin: sounds good
< heisenbug_> there are some variations in LSTM like Gated Recurrent Unit and peephole connections, are these implemented already?
< zoq> GRU and peephole connections are implemented
< heisenbug_> Ohk, what about taking some of the most used datasets and applying each ANN algorithm in mlpack to them, like simple FNN, RNN, VAE, LSTM, BRNN, CNN?
< zoq> heisenbug_: Yeah, that could definitely be part of the project, and since every method implements the same interface, we just have to adjust the model.
< akfluffy> zoq: I've figured it out. If you don't train the model, model.Parameters() will be [0x0]. So you can't evaluate a model that hasn't been initialized yet.
< zoq> akfluffy: I see, I guess, we could check if parameters are empty and run the init routine before we evaluate the model.
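The guard zoq suggests is a standard lazy-initialization pattern. The `Model` and `ResetParameters` below are illustrative stand-ins, not mlpack's actual RNN class: if the parameter matrix is still empty at evaluation time, run the init routine first.

```cpp
#include <cassert>
#include <random>
#include <vector>

// Sketch only: a toy "model" whose weights start empty, like an untrained
// network whose Parameters() is [0x0].
struct Model {
  std::vector<double> parameters;

  // Stand-in for ResetParameters(): fill weights from a random initializer.
  void ResetParameters() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> dist(-0.5, 0.5);
    parameters.assign(16, 0.0);
    for (double& w : parameters) w = dist(gen);
  }

  double Evaluate(double input) {
    if (parameters.empty())   // lazy init: the model was never trained
      ResetParameters();
    double out = 0.0;
    for (double w : parameters) out += w * input;
    return out;
  }
};
```

With this guard, calling `Evaluate` on a never-trained model initializes the weights instead of operating on an empty matrix.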
< heisenbug_> If this can only be a part of the project, I am yet to figure out what more I should do. Is implementing a new algorithm the ultimate goal of this project, or is it to add more ready-to-use models?
< akfluffy> zoq: I will try to make a PR to fix this. Thanks for the help
< zoq> heisenbug_: In this case it would be more about ready-to-use models, but as I said, we might be able to build something out of mlpack's layer implementations; the inception network would be one example.
chandramouli_r has quit [Ping timeout: 256 seconds]
< heisenbug_> So, in my proposal shall I mention both, i.e. that I will be implementing an algorithm and will create some ready-to-use models, or should it concentrate on only one thing?
< akfluffy> Wait a minute, what does RandomInitialization even do if the model parameters aren't initialized to anything? When does it Randomly Initialize?
< zoq> heisenbug_: Depends on the timeline, but I would probably go with both
< zoq> akfluffy: maybe nothing
< ShikharJ> jeffin: Sorry for the late response. I'm glad zoq could help you out. Let me know if there are some other issues.
< heisenbug_> So, would doing enough research regarding the Inception network be enough to give my proposal a plus point?
< ShikharJ> heisenbug_: Ideally a good proposal provides assurance to the reviewer that the student actually is well aware of the stuff he's talking about. So an in-depth look/description would never be a negative aspect.
< zoq> heisenbug_: It should be helpful to put the application together.
< akfluffy> zoq: also, would it be helpful to make RandomInitialization randomize the armadillo seed? Or else it would always be the same model lol
< zoq> akfluffy: You can use mlpack::math::RandomSeed(time(NULL)); if you like to use another seed for each run.
< akfluffy> Ah. So that shouldn't be behavior in the actual RandomInitialization?
< heisenbug_> ShikharJ: Ohk, so would it be possible for me to draft a proposal and send it here, and you can let me know in what aspects it requires modification?
< zoq> akfluffy: No, in case you like to debug the code, it would be helpful to have the same seed for each run.
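The seeding trade-off in this exchange can be sketched without mlpack at all (the C library RNG stands in for mlpack's): a fixed seed repeats the same "random" initial weights every run, which is what makes debugging reproducible, while seeding from the clock, the effect of `mlpack::math::RandomSeed(time(NULL))`, gives a different model each run.

```cpp
#include <cassert>
#include <cstdlib>
#include <ctime>

// First value drawn after seeding: deterministic for a fixed seed.
int FirstDraw(unsigned seed) {
  std::srand(seed);
  return std::rand();
}
```

`FirstDraw(1337) == FirstDraw(1337)` always holds; only `std::srand(std::time(nullptr))` before drawing makes runs differ.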
< zoq> heisenbug_: Please upload draft via the GSoC dashboard.
< zoq> Makes it a lot easier for us to provide feedback.
< heisenbug_> ohk.
akfluffy has quit [Ping timeout: 264 seconds]
< heisenbug_> Any more suggestions regarding ANN ALGO Implementation Project?
govg has quit [Ping timeout: 250 seconds]
akfluffy has joined #mlpack
< akfluffy> zoq: where is the init routine normally located? In Train()?
pd09041999 has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
sreenik has quit [Quit: Page closed]
riaash04 has joined #mlpack
< zoq> heisenbug_: Not right now, maybe you can think of anything interesting?
< riaash04> zoq: NEAT's implementation would require implementations of activation functions. These are already implemented in mlpack, but since ensmallen is a header-only framework, I don't think we would be able to use those implementations directly (since we don't want to include mlpack). So, would it be required to implement those separately, or is there any way the code in mlpack can be reused? Also, ffn code in mlpack could be reused to run
saksham189 has quit [Ping timeout: 256 seconds]
< zoq> riaash04: The sigmoid function should be enough for the start; I think the FFN class might be too static for NEAT. Since we are constantly adding/removing nodes/connections, we need a good structure that reflects that.
pd09041999 has quit [Quit: Leaving]
< zoq> riaash04: If you think we could make that work, maybe we should put NEAT into the mlpack repo.
< riaash04> zoq: Yes, I was thinking mlpack repo would be more appropriate for NEAT.
< riaash04> zoq: we could use FFN for the forward run; all the add/remove node/connection operations would be handled by the genome structure, and then after generating the phenotype from the genome we could feed it to the FFN.
< riaash04> zoq: also, I am thinking of looking into adding some parallel processing to this part, since evaluation of the neural networks is one of the most time-consuming parts of NEAT.
riaash04 has quit [Quit: Page closed]
< zoq> riaash04: That might be a solution, yes. We would have to test how fast we can modify a model.
< zoq> riaash04: OpenMP might be a good solution.
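The OpenMP idea for NEAT's fitness evaluation can be sketched like this. `Fitness` and the integer "genomes" are placeholders, not NEAT's real structures; the pragma is a no-op when OpenMP is disabled, so the loop is correct either way.

```cpp
#include <cassert>
#include <vector>

// Stand-in fitness evaluation; in NEAT this would run the phenotype network.
double Fitness(int genome) { return 2.0 * genome; }

std::vector<double> EvaluatePopulation(const std::vector<int>& population) {
  std::vector<double> fitness(population.size());
  // Each genome is evaluated independently and each iteration writes to a
  // distinct index, so the loop parallelizes cleanly.
  #pragma omp parallel for
  for (int i = 0; i < static_cast<int>(population.size()); ++i)
    fitness[i] = Fitness(population[i]);
  return fitness;
}
```

Because the per-genome evaluations share no state, this is an embarrassingly parallel loop, which is what makes OpenMP a natural fit here.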
< akfluffy> zoq: problem. Evaluate() already checks if the parameters are empty and calls the function you sent
< akfluffy> so possibly something else is going wrong
< akfluffy> rnn_impl.hpp:182
< akfluffy> going to trace it out
rf_sust2018 has quit [Ping timeout: 255 seconds]
< akfluffy> yeah I have no idea. Looks like it calls ResetParameters() but that doesn't update the actual Parameters for some reason.
akfluffy has left #mlpack []
akfluffy has joined #mlpack
< jenkins-mlpack2> Project mlpack - git commit test build #144: UNSTABLE in 43 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/144/
< jenkins-mlpack2> noreply: Merge pull request #1832 from abhinavsagar/abhinavdocs1
Lapras_ has joined #mlpack
Lapras_ has quit [Client Quit]
akfluffy` has joined #mlpack
akfluffy has quit [Ping timeout: 244 seconds]
akfluffy has joined #mlpack
akfluffy has quit [Remote host closed the connection]
akfluffy` has quit [Remote host closed the connection]
akashmahalik has joined #mlpack
< akashmahalik> hi
< akashmahalik> Hello everyone, I am a final year student doing my bachelor's in mathematics and computing. I was just going through https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas#profiling-for-parallelization
< akashmahalik> I wanted to know more about this idea. I have experience in OpenMPI, Pthreads, and OpenMP, and recently took a course on parallel computing: http://github.com/akashmahalik
akashmahalik has quit [Client Quit]
favre49 has joined #mlpack
< favre49> zoq: Are you open to the idea of putting NEAT in mlpack? Since the wiki had said it should be in the ensmallen library, I had centred my proposal around that. If you are interested in implementing it in mlpack, I could perhaps think of some ideas for that as well.
favre49 has quit [Client Quit]
< heisenbug_> There are 4 versions of the Inception model, and I am thinking of applying them...
< heisenbug_> I liked the concept of using 1x1, 3x3, and 5x5 filters all at once and then stacking them up, so we don't have to decide what filter size to use...
< heisenbug_> What I am thinking is, I will implement one algorithm and create models and good documentation for the others...
< heisenbug_> In which the code will be explained, so users can get a more proper view of mlpack's implementation.
< heisenbug_> First we can create a model with a normal CNN, and then after creating an Inception model we can show their comparison so that users can understand it better...
< heisenbug_> In this way we will obtain both: a new algorithm implementation and also new models.
< heisenbug_> What about creating something like https://github.com/Prodicode/ann-visualizer for mlpack?
< heisenbug_> zoq?
lozhnikov has joined #mlpack
< heisenbug_> anyone up to discuss about implementation of Inception Networks?
jenkins-mlpack2 has quit [Ping timeout: 255 seconds]
lozhnikov_ has quit [Ping timeout: 255 seconds]
jenkins-mlpack2 has joined #mlpack
akfluffy has joined #mlpack
< akfluffy> is there any built-in structure to connect neural networks? as in, take in two models and bridge the input->output?
heisenbug_ has quit [Ping timeout: 256 seconds]
< zoq> favre49: I'm open for it, it might work either way, so no need to adjust the proposal.
< zoq> heisenbug_: If you are interested in the idea, I think that could be a good start; if I remember right there is an open/closed PR for an inception layer.
< zoq> akfluffy: The Merge and/or Concat layer does that.
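mlpack's Merge/Concat layers handle this inside a single model; the general idea of bridging two networks, feeding the first network's output straight into the second, can be sketched with plain function composition (illustrative types, not mlpack API).

```cpp
#include <cassert>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
// A "network" here is just any vector-to-vector mapping.
using Net = std::function<Vec(const Vec&)>;

// Bridge two networks: the first one's output becomes the second one's input.
Net Bridge(Net first, Net second) {
  return [first, second](const Vec& x) { return second(first(x)); };
}
```

The shapes have to agree at the seam: the first network's output dimension must match the second network's expected input dimension, which is the same constraint the real layers impose.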