ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
jeffin143 has joined #mlpack
jeffin has quit [Remote host closed the connection]
< jeffin143> rcurtin, zoq: is there any module in mlpack which can split strings? I mean,
< jeffin143> for example, "hi hello how are you" could be split into the vector [hi, hello, how, are, you]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 252 seconds]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 264 seconds]
< rcurtin> jeffin143: I seem to remember boost having some nice tokenizers in it
< rcurtin> maybe there are some in the STL now, though
< jeffin143> Thanks rcurtin
seewishnew has joined #mlpack
< rcurtin> sure, hope it helps :)
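For reference, a minimal sketch of splitting a whitespace-separated string into a vector using only the STL (no Boost required); the variable names are illustrative only:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    int main()
    {
      const std::string text = "hi hello how are you";

      // std::istringstream splits on whitespace when streamed into a std::string.
      std::istringstream stream(text);
      std::vector<std::string> tokens;
      std::string token;
      while (stream >> token)
        tokens.push_back(token);

      for (const std::string& t : tokens)
        std::cout << t << std::endl;  // prints hi, hello, how, are, you
    }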
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
< jeffin143> http://www.mlpack.org/docs/mlpack-git/doxygen/bindings.html -> does this link still exist, or has it been moved?
gauravcr7rm has joined #mlpack
< gauravcr7rm> zoq: yes, I am talking about the paper you mentioned.
sreenik has joined #mlpack
< sreenik> Does Armadillo have anything similar to a Python dict?
gauravcr7rm has quit [Ping timeout: 256 seconds]
Toshal has joined #mlpack
harias has joined #mlpack
Viserion has joined #mlpack
harias has quit [Quit: Page closed]
< Viserion> Can anyone point out error in: /home/viserion/MLPACK/src/mlpack/methods/cf/cf_main.cpp: In function ‘void AssembleFactorizerType(const string&, const string&, arma::mat&, size_t)’: /home/viserion/MLPACK/src/mlpack/methods/cf/cf_main.cpp:399:95: error: no matching function for call to ‘PerformAction(arma::mat&, const size_t&, const size_t&, const double&)’
< Viserion> PerformAction<NMFPolicy, CombinedNormalization>(dataset, rank, maxIterations, minResidue);
harias[m] has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
jeffin has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
< Viserion> z
Toshal_ has joined #mlpack
Toshal has quit [Ping timeout: 245 seconds]
Toshal_ is now known as Toshal
kanishq24b has joined #mlpack
Viserion has quit [Ping timeout: 256 seconds]
Toshal_ has joined #mlpack
Toshal has quit [Ping timeout: 245 seconds]
Toshal_ is now known as Toshal
seewishnew has joined #mlpack
seewishn_ has joined #mlpack
mulx10 has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
< mulx10> sreenik: You can use an STL map; I am not sure if Armadillo handles dicts.
Toshal has quit [Ping timeout: 245 seconds]
seewishn_ has quit [Ping timeout: 246 seconds]
< sreenik> mulx10: Yup, thanks :)
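As a rough illustration of the suggestion above, a dict-like lookup with std::map (std::unordered_map works the same way); the keys and values here are made up:

    #include <iostream>
    #include <map>
    #include <string>

    int main()
    {
      // A string-to-int mapping, similar to a Python dict.
      std::map<std::string, int> counts;
      counts["hello"] = 1;
      counts["world"] = 2;

      // Look up a key; map::count() tells us whether it exists.
      if (counts.count("hello") > 0)
        std::cout << "hello -> " << counts["hello"] << std::endl;
    }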
seewishnew has joined #mlpack
Toshal has joined #mlpack
kanishq24b has quit [Ping timeout: 268 seconds]
kanishq24b has joined #mlpack
prateek0001 has joined #mlpack
kanishq24b has quit [Ping timeout: 246 seconds]
seewishnew has quit [Remote host closed the connection]
prateek0001 has quit [Remote host closed the connection]
prateek0001 has joined #mlpack
seewishnew has joined #mlpack
prateek0001 has quit [Ping timeout: 258 seconds]
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
Toshal has quit [Ping timeout: 246 seconds]
mulx10 has quit [Ping timeout: 256 seconds]
seewishnew has joined #mlpack
johnsoncarl[m] has joined #mlpack
Toshal has joined #mlpack
prateek0001 has joined #mlpack
prateek0001 has quit [Read error: Connection reset by peer]
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
< zoq> gauravcr7rm: Okay, I glanced over it and it sounds interesting, the performance improvement is quite nice.
seewishnew has quit [Remote host closed the connection]
mulx10 has joined #mlpack
seewishnew has joined #mlpack
kanishq24b has joined #mlpack
seewishnew has quit [Ping timeout: 268 seconds]
gauravcr7rm has joined #mlpack
< gauravcr7rm> zoq: yes
Toshal has quit [Ping timeout: 255 seconds]
seewishnew has joined #mlpack
< jeffin> zoq, ShikharJ, rcurtin: could you check the Travis logs?
< jeffin> The gradient transposed convolution layer test is throwing an error.
< jeffin> Also, I tried to sync my forked repo,
< jeffin> and then building it locally throws an error.
Toshal has joined #mlpack
< jeffin> Also, for PR 1780, I didn't run anything, but Travis CI failed after syncing.
mulx10 has quit [Ping timeout: 256 seconds]
< jeffin> And running mlpack_test throws an error:
< jeffin> symbol lookup error: bin/mlpack_test: undefined symbol: _ZN6mlpack9BacktraceC1Ei
seewishnew has quit [Ping timeout: 250 seconds]
gauravcr7rm has quit [Ping timeout: 256 seconds]
Viserion has joined #mlpack
< Viserion> Can anyone check out this error: /home/viserion/MLPACK/src/mlpack/methods/cf/cf_main.cpp: In function ‘void AssembleFactorizerType(const string&, const string&, arma::mat&, size_t)’: /home/viserion/MLPACK/src/mlpack/methods/cf/cf_main.cpp:399:95: error: no matching function for call to ‘PerformAction(arma::mat&, const size_t&, const size_t&, const double&)’
< Viserion> in the call PerformAction<NMFPolicy, CombinedNormalization>(dataset, rank, maxIterations, minResidue);
vikashChouhan has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
< vikashChouhan> Hello, I am interested in working on 'Application of ANN Algorithms Implemented in mlpack'. I am a bit confused about the minimum number of models we need to implement.
< johnsoncarl[m]> hey:)
< johnsoncarl[m]> is anyone working on quantum gmm
< johnsoncarl[m]> ?
< sumedhghaisas> vikashChouhan: Hi. There is no minimum number of models required for that project. It depends on what models you plan to implement and how much time they are expected to take.
< johnsoncarl[m]> sumedhghaisas:
< johnsoncarl[m]> is anyone working on Quantum GMMs
seewishnew has joined #mlpack
< sumedhghaisas> johnsoncarl[m]: Hi. I am not sure what you mean by 'anyone working on it'? For now it's just my personal project :)
< johnsoncarl[m]> hey sumedhghaisas
< johnsoncarl[m]> i saw it in the lists of GSOC ideas.
< johnsoncarl[m]> so i thought if it is assigned to someone, or is someone already working on it.
< sumedhghaisas> ahh no... it's not assigned; it's a potential project.
< johnsoncarl[m]> so you are working on it, right? So it's not supposed to be for someone else. 😅 I'm not sure
< vikashChouhan> sumedhghaisas: Thanks. So you mean the number of models is our choice, based on the time they take? One more question: do we need to implement these models using the mlpack ANN module from scratch?
< sumedhghaisas> johnsoncarl[m]: haha... no I am working on it out of interest. :) The project is open for GSOC applicants. I will be mentoring the project
< johnsoncarl[m]> oh great, 🙂
< johnsoncarl[m]> lets see what has it got for me.
< johnsoncarl[m]> @sume
Viserion has quit [Ping timeout: 256 seconds]
< johnsoncarl[m]> sumedhghaisas:
< johnsoncarl[m]> is there some way I can catch up with your progress? sumedhghaisas
< sumedhghaisas> vikashChouhan: Yes, the application will require implementation using mlpack code.
< vikashChouhan> Thanks @sumedhghaisas
< sumedhghaisas> johnsoncarl[m]: Surely. umm... although most of it is rough work on the back of my notebook. :) One thing to note is that I highly question their derivation
seewishnew has quit [Remote host closed the connection]
< johnsoncarl[m]> oh. I didn't read it yet.
< sumedhghaisas> I would start there, although it's a little bit hairy, I should warn you.
< johnsoncarl[m]> I just ran my eye over it.
< johnsoncarl[m]> sumedhghaisas:
< johnsoncarl[m]> Oh,, 😅
< johnsoncarl[m]> maybe it's beneficial if I start without knowing much about it.
< sumedhghaisas> The idea looks good, but we need to find a way to optimize the likelihood somehow; if EM, then some way to get around the constraints ... mostly I am stuck there
< johnsoncarl[m]> won't be scary for me..
< johnsoncarl[m]> hmm.. sumedhghaisas ,, let me see what it has for me.
< johnsoncarl[m]> :)
< johnsoncarl[m]> and, what about the translator?
< johnsoncarl[m]> and yeah, one more thing, sumedhghaisas:
< johnsoncarl[m]> I need to prepare a report, right?
< sreenik> Hey, if you don't mind, can you explain quantum GMMs in one or two lines? I hadn't heard of them before, but it sounds intriguing :)
seewishnew has joined #mlpack
< sumedhghaisas> johnsoncarl[m]: translator? and yes, for the GSoC application you will need to write your proposal.
< johnsoncarl[m]> hey sreenik: it's just an idea where interference patterns are used to predict the results. High school physics though 😅, and this interference is made through the Gaussian models.
< sumedhghaisas> sreenik: Hi. It's an idea where we model a Gaussian distribution using a wave function; the square of the wave gives you the distribution
< sumedhghaisas> and yes, there is interference between the waves, so the model may show more representational power than a classical GMM
< sumedhghaisas> but that's yet to be proven
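Roughly, the idea sketched above can be written as follows (an illustrative formula based only on the description in this conversation, not the derivation from the paper): the density is the squared magnitude of a superposition of component wave functions, so cross terms between components appear that a classical GMM does not have.

    % Density from a superposition of component wave functions \psi_k;
    % the k \neq l cross terms are the interference contributions.
    p(x) = \Big| \sum_k c_k \, \psi_k(x) \Big|^2
         = \sum_k |c_k|^2 \, |\psi_k(x)|^2
           + \sum_{k \neq l} c_k \, \overline{c_l} \, \psi_k(x) \, \overline{\psi_l(x)},
    \qquad \text{vs.} \qquad
    p_{\mathrm{GMM}}(x) = \sum_k w_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k).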
< sreenik> Oh that's really cool. Best of luck to you people :)
< johnsoncarl[m]> sumedhghaisas: So, the proposal I'll be preparing should describe how I'll turn that derivation into code to build a model?
< johnsoncarl[m]> and the tests, with the detailed timeline I'll be following?
< johnsoncarl[m]> and a few other side things like the tests, documentation, and maybe a video explaining their usage....
seewishnew has quit [Remote host closed the connection]
< sumedhghaisas> I think we will need to think through that derivation again; I am not sure it's correct, so... :)
< sumedhghaisas> Also I will be away from keyboard for about 1 hour for lunch :)
< johnsoncarl[m]> oh.. lunch .. that's important.. 😄.. sure
< johnsoncarl[m]> okay, I'll look at the derivation part
seewishnew has joined #mlpack
gauravcr7rm has joined #mlpack
< gauravcr7rm> zoq: so shall I go forward with it? For that I would have to make a lot of changes to traditional NEAT.
seewishnew has quit [Ping timeout: 245 seconds]
< gauravcr7rm> zoq: Can I submit two proposals on the same idea, one including the changes and the other the traditional approach?
gauravcr7rm has quit [Ping timeout: 256 seconds]
seewishnew has joined #mlpack
kanishq24b has quit [Ping timeout: 255 seconds]
vikashChouhan has quit [Quit: Page closed]
mulx10 has joined #mlpack
seewishnew has quit [Remote host closed the connection]
< Sergobot> Hey zoq and everyone else!
< Sergobot> A couple weeks ago I came in here asking about NEAT project for GSoC. Since then I've read the paper twice, briefly studied implementations (mlpack-based ones and others too) and made a couple small apps (basically retyped examples with other datasets) to get familiar with mlpack structure and interface.
< Sergobot> Now I would like to take up an issue and work on it to study the internals further. May I take #1741? It's about CI testing with different seeds.
< rcurtin> Sergobot: sure, you can take a look if you like, but mirraaj was already working on it in jenkins-conf/#13
< rcurtin> that's just an issue though, not a PR, so I don't think it's ready yet
Toshal has quit [Remote host closed the connection]
pd09041999 has joined #mlpack
mulx10 has quit [Ping timeout: 256 seconds]
< jeffin> rcurtin: how do I measure the running time of bin/mlpack_test?
< jeffin> I have made some changes in a test file; now, if I want to see whether it's optimised or not, how could I do benchmarking?
seewishnew has joined #mlpack
< rcurtin> jeffin: you can just run 'time bin/mlpack_test'
seewishnew has quit [Remote host closed the connection]
< rcurtin> and I think if you do 'bin/mlpack_test -h', it will give you some options on what kind of output to print, and some of these output options will print reports with times for individual tests
yo_ has joined #mlpack
< jeffin> Ok yeah, I have tried --help
< jeffin> It gave many options; I have tried some,
< jeffin> looking for the one that prints the report
< Sergobot> rcurtin: thanks for the pointer! I'll check it out
kanishq24b has joined #mlpack
kanishq24b has quit [Ping timeout: 244 seconds]
yo_ has quit [Ping timeout: 256 seconds]
kanishq24b has joined #mlpack
pd09041999 has quit [Ping timeout: 245 seconds]
satyam has joined #mlpack
< satyam> Do we just have to write C++ code with algorithms similar to those used in Python for machine learning?
satyam has quit [Client Quit]
pd09041999 has joined #mlpack
whoKnows has joined #mlpack
< whoKnows> Where should I integrate documentation for a new feature I want to add?
< ShikharJ> whoKnows: Depends on the size of the feature. What exactly are we talking about here?
< whoKnows> I want to support more metric classes for classification and regression models
< whoKnows> I opened a PR, but rcurtin said to add test and documentation support
< whoKnows> I have already written tests for those metrics but don't know where to integrate the docs
< ShikharJ> whoKnows: I think Ryan might be better suited to answer this, though I'd reckon the places where the documentation of the previously implemented classes is handled (mostly in the files themselves) might be where you'd want to add your documentation.
pd09041999 has quit [Ping timeout: 252 seconds]
< ShikharJ> rcurtin: The new website is super schway!
pd09041999 has joined #mlpack
< zoq> rcurtin: Sergobot: #1741 already has an initial implementation, but it's waiting for further input before moving on. But I guess mirraaj would be happy to collaborate on this one.
< jeffin> zoq, rcurtin: shall I use OpenMP loop parallelism in the test directory of the code base to speed up the tests?
< zoq> jeffin: You can already use ctest to run tests in parallel.
whoKnows has quit [Quit: Page closed]
Toshal has joined #mlpack
kanishq2 has joined #mlpack
kanishq24b has quit [Ping timeout: 272 seconds]
< jeffin> Ok thanks
Viserion has joined #mlpack
kanishq24 has quit []
< Viserion> I know people here are very busy right now due to GSoC... I would be very obliged if someone could help me point out the error I am making in the following:
Viserion has quit [Quit: Page closed]
Viserion has joined #mlpack
< Viserion> In file included from /home/viserion/MLPACK/build/src/mlpack/bindings/python/generate_pyx_cf.cpp:34:0: /home/viserion/MLPACK/src/mlpack/methods/cf/cf_main.cpp: In function ‘void AssembleFactorizerType(const string&, const string&, arma::mat&, size_t)’: /home/viserion/MLPACK/src/mlpack/methods/cf/cf_main.cpp:399:95: error: no matching function for call to ‘PerformAction(arma::mat&, const size_t&, const size_t&, const double&)’
< Viserion> PerformAction<NMFPolicy, CombinedNormalization>(dataset, rank, maxIterations, minResidue);
kanishq24 has joined #mlpack
Kushkk has joined #mlpack
< Kushkk> Hi all, I want to solve a generalized eigenvalue problem for n eigenvalues.
< Kushkk> Currently I am using eig_pair, but it gives me a lot of eigenvalues. I want 10 (or n) eigenvalues and the corresponding eigenvectors.
< Kushkk> Is there any way to implement this, such as eigs( A,B,"sm", 10) as implemented in MATLAB?
< rcurtin> Kushkk: it sounds like your question is an Armadillo question not an mlpack question
< rcurtin> but if you have a sparse matrix you can use eigs()
< rcurtin> rather, eigs_sym() and eigs_gen(), that is
< rcurtin> if you have a dense matrix, it is possible to modify the Armadillo code to have eigs_sym() and eigs_gen() accept a dense matrix; however, that would require code modification
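For reference, a minimal sketch of asking for a fixed number of eigenvalues of a sparse matrix with Armadillo's eigs_sym(); the matrix here is random and only for illustration, and "sm" (smallest magnitude) can converge slowly:

    #include <armadillo>

    int main()
    {
      // A random symmetric sparse matrix, just for illustration.
      arma::sp_mat A = arma::sprandu<arma::sp_mat>(1000, 1000, 0.01);
      A = 0.5 * (A + A.t());

      arma::vec eigval;
      arma::mat eigvec;

      // Compute the 10 eigenvalues of smallest magnitude and their eigenvectors.
      arma::eigs_sym(eigval, eigvec, A, 10, "sm");

      eigval.print("eigenvalues:");
    }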
< rcurtin> Viserion: you've asked for help with that problem for several days now and nobody has responded... what have you tried to solve the issue?
< rcurtin> it's pretty clear to me just from reading the compiler error that there is no function with the signature PerformAction(arma::mat&, const size_t&, const size_t&, const double&)
< rcurtin> have you looked at cf_main.cpp line 399 to see if you are calling the function wrong?
< rcurtin> and have you looked at where PerformAction() is defined to see if there is even a version of it with those arguments defined?
< rcurtin> these should be relatively simple C++ errors to debug if you are willing to dig in a little bit; it shouldn't be necessary to be held up for many days on that one problem
< rcurtin> ShikharJ: thanks! there are still a few issues to work out with the website though :)
< Viserion> I have looked at it multiple times
< rcurtin> whoKnows: you'll need to go through the documentation in the library to see where things like the features you want to add are documented, and add documentation for your new features
< rcurtin> whoKnows: basically, nobody will read the source all the way through to see what functions are there; they will read the documentation instead
< rcurtin> and if there's functionality that's not mentioned in the documentation, most people won't know about it (so it may as well not exist)
< rcurtin> therefore, take a look through all the documentation you can find (maybe look around on the website) and see what might have something to do with the functionality you are adding
< Kushkk> Right now I am computing all the eigenvalues and selecting the required modes, but I am unable to get the corresponding eigenvectors
< rcurtin> Kushkk: eig_sym() and eig_gen() give you the eigenvectors also
< Viserion> I am calling the PerformAction function from another function, AssembleFactorizerType, as you can see, and getting two other params in AssembleFactorizerType
< Viserion> you could look at my forked repo: https://github.com/Zupiter/mlpack
< rcurtin> Viserion: I don't have time to do that. you'll need to take a close look at the code that you have and understand the error message so that you can fix it
< Kushkk> But eig_gen cannot solve a generalized eigenvalue problem
< rcurtin> Kushkk: okay, I see. the eig_pair() documentation shows that you can also recover the corresponding eigenvectors
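A minimal sketch of recovering both eigenvalues and eigenvectors from eig_pair() for the generalized problem A*v = lambda*B*v, then keeping the 10 smallest-magnitude ones; the matrices here are random placeholders:

    #include <armadillo>

    int main()
    {
      // Random dense matrices standing in for the real A and B.
      arma::mat A = arma::randu<arma::mat>(50, 50);
      arma::mat B = arma::randu<arma::mat>(50, 50);

      arma::cx_vec eigval;
      arma::cx_mat eigvec;

      // Solves A*v = lambda*B*v; column i of eigvec corresponds to eigval(i).
      arma::eig_pair(eigval, eigvec, A, B);

      // Keep, say, the 10 eigenvalues of smallest magnitude and their vectors.
      arma::uvec order = arma::sort_index(arma::abs(eigval));
      arma::uvec keep = order.head(10);
      arma::cx_vec smallestVals = eigval(keep);
      arma::cx_mat smallestVecs = eigvec.cols(keep);

      smallestVals.print("10 smallest-magnitude eigenvalues:");
    }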
< rcurtin> if you have found a bug or a problem, it would probably be best to file an issue on the Armadillo Gitlab repository
< Kushkk> Sorry, is there any community for Armadillo that could help me solve this issue?
< rcurtin> no problem; StackOverflow may be helpful, and if there is a bug you can file it on Gitlab
< Viserion> Thanks @rcurtin
< zoq> Viserion: In this particular case CombinedNormalization is a template class with no default type.
< rcurtin> Kushkk: see also http://arma.sourceforge.net/faq.html, it has some more information
< Kushkk> I have tried there several times but haven't gotten any response yet.
< rcurtin> that's unfortunate, I can't say why though. maybe nobody is watching it right now
< Kushkk> Even on StackOverflow, there are 2 questions I asked with no response yet.
< rcurtin> well, like I mentioned, you can manually modify eigs_gen() and eigs_sym() to work for your particular problem
< Kushkk> So i come here with a hope
< rcurtin> sorry I can't be more helpful---I do contribute to Armadillo and help maintain it, but I am mostly familiar with the sparse matrix code :(
< Kushkk> I see. May I ask for a favour? Just instruct me how to convert this code to Armadillo: http://phys.ubbcluj.ro/~tbeu/INP/include/eigsys.h
< zoq> Viserion: So, I see this could be tricky, but to solve the build issue you could use CombinedNormalization<> instead of just CombinedNormalization.
< zoq> Viserion: I hope this is helpful.
< rcurtin> Kushkk: I don't know very much about that code at all, but basically you can try to use arma::mat in place of the double** and I think that will help you get most of the way there
< rcurtin> the access operators [][] would need to be changed also
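A tiny sketch of what that conversion looks like in practice (the function name and sizes here are made up): where the old code took double** and used a[i][j], the Armadillo version takes arma::mat& and uses a(i, j).

    #include <armadillo>

    // Hypothetical routine that previously took double** a; with Armadillo the
    // matrix is passed as arma::mat& and elements are accessed with a(i, j)
    // instead of a[i][j].
    void FillIdentity(arma::mat& a)
    {
      for (arma::uword i = 0; i < a.n_rows; ++i)
        for (arma::uword j = 0; j < a.n_cols; ++j)
          a(i, j) = (i == j) ? 1.0 : 0.0;
    }

    int main()
    {
      arma::mat a(5, 5);
      FillIdentity(a);
      a.print("a:");
    }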
< Kushkk> I see
< Kushkk> Thanks again I will try.
< zoq> Viserion: Ignore the "no default type" part; this wouldn't work if there wasn't a default type.
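To illustrate the `<>` fix in isolation (a generic example, not mlpack's actual class definitions): a class template with a default template argument is still a template, not a type, so its name needs angle brackets, even empty ones, wherever a concrete type is expected.

    #include <iostream>

    // A stand-in for a class template with a default template argument.
    template<typename T = double>
    class Normalization
    {
     public:
      T Scale() const { return T(1); }
    };

    template<typename NormalizationType>
    void PerformAction()
    {
      std::cout << NormalizationType().Scale() << std::endl;
    }

    int main()
    {
      // PerformAction<Normalization>() would not compile, because a bare
      // template name is not a type; the empty angle brackets select the
      // default template argument.
      PerformAction<Normalization<>>();
    }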
Kushkk has quit [Ping timeout: 256 seconds]
< sreenik> Does mlpack have an fp16 mode?
gauravcr7rm has joined #mlpack
< gauravcr7rm> zoq: ??
gauravcr7rm has quit [Ping timeout: 256 seconds]
kanishq2 has quit [Quit: Leaving]
< rcurtin> sreenik: not at the moment, but for some algorithms you could use 'float' (float32) instead of double
< rcurtin> there are some open issues for this, to allow arbitrary precision
< rcurtin> but even if we could allow arbitrary precision, note that there are some limitations on what Armadillo can support that will restrict us
< rcurtin> and if we want to provide real fp16 or fp8 or lower support, we'll need something with the same API as Armadillo that supports lower precision
< rcurtin> (or native support in Armadillo for that)
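As a rough illustration of single precision in Armadillo (whether a given mlpack method accepts it depends on whether that method is templated on the matrix type, so treat this as a sketch):

    #include <armadillo>

    int main()
    {
      // arma::fmat holds 32-bit floats instead of 64-bit doubles.
      arma::fmat X = arma::randu<arma::fmat>(100, 10);

      // Most element-wise and linear-algebra operations work identically.
      arma::fmat cov = (X.t() * X) / float(X.n_rows);
      cov.print("covariance (float32):");
    }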
< sreenik> I see. Not quite an easy job :(
< sreenik> rcurtin: In the CLI (the one I am working on), the network provided by the user can have multiple errors. So should the error message be a simple one like "invalid model declaration", "invalid loss declaration", etc., or something more detailed, mentioning each of the errors and not only the first one?
< sreenik> Or maybe I should plan to keep it simple initially (i.e., print a simple error message) but structure it in such a way that it can be easily modified later?
pd09041999 has quit [Quit: Leaving]
Toshal has quit [Ping timeout: 244 seconds]
< rcurtin> sreenik: haven't had a chance to look at the issue again, but generally the more specific we can be with the error message, the better
< rcurtin> i.e. if we can provide a line number so the user can immediately go to that line in the file and see that something is wrong, that would be great
< sreenik> That sounds perfect. The issue is alright now. I hope to file a PR soon. You might be able to check that out this weekend if all goes right
< rcurtin> I'm booked all weekend (some friends are in town) but I'm looking forward to taking a look when I can
< rcurtin> :)
< sreenik> Haha sure whenever you can find time ^⌣^
sreenik has quit [Quit: Page closed]
< zoq> gauravcr7rm: What about proposing the two ideas in one proposal, mentioning the advantages and disadvantages?
Viserion has quit [Ping timeout: 256 seconds]
jeffin has quit [Ping timeout: 250 seconds]
Awesommesh has joined #mlpack
Awesommesh has left #mlpack []