verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
TD has joined #mlpack
< TD> I built my project but when I run it I receive the following message: Error LNK1112 module machine type 'x64' conflicts with target machine type 'X86'. If I change Visual Studio to 'x64' then it doesn't recognize the mlpack library
< TD> Has anyone encountered this problem?
< rcurtin> TD: did you build mlpack in 32-bit mode? maybe it needs to be built in 64-bit mode?
< rcurtin> when you choose your CMake generator it should give the option of 32-bit Visual Studio or 64-bit Visual Studio, it sounds like you want to pick the latter
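rcurtin's suggestion could look like this on the command line; the generator name is just an example for Visual Studio 2015, so substitute whichever VS version is installed. LNK1112 goes away when the mlpack build and the consuming project target the same architecture:

```shell
# Run from a fresh build directory inside the mlpack source tree.
# The "Win64" suffix selects the 64-bit toolchain; without it CMake
# generates a 32-bit (x86) Visual Studio solution.
mkdir build
cd build
cmake -G "Visual Studio 14 2015 Win64" ..
```

The same choice is available in the cmake-gui generator drop-down when configuring the project.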
< TD> K, thank you! CMake is going to be the death of me
< rcurtin> you are not its only victim :(
< TD> Do not add CMake to the system PATH, or add CMake to the system PATH?
< rcurtin> I think there's no problem adding it to the system path
TD has quit [Quit: Page closed]
Mathnerd314 has quit [Ping timeout: 250 seconds]
Mathnerd314 has joined #mlpack
nilay has quit [Ping timeout: 250 seconds]
Mathnerd314 has quit [Ping timeout: 246 seconds]
nilay has joined #mlpack
nilay has quit [Quit: Page closed]
< zoq> nilay: Hello, can you open a new PR for the feature extraction part and the feature extraction test?
marcosirc has joined #mlpack
Mathnerd314 has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#999 (master - 8d7e5db : Marcus Edel): The build passed.
travis-ci has left #mlpack []
ayush_ has joined #mlpack
ayush_ has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay> zoq: Hi, I just saw your message, i'll update it in a minute
< nilay> Also, if you have time right now can we discuss how to go about the next task?
< zoq> nilay: Sure, I guess, as you said in the last status update, finishing and testing the discretize function should be our top priority. Also, I think we have to clean up and add some comments to the feature extraction part before we can merge it.
< nilay> yes, I thought I would do that after you review it and confirm it is "mergeable"
< zoq> Okay, I'll take a look at the new PR, tonight.
< zoq> Also, once the discretize function is finished, sticking everything together should be straightforward. I really like the Hoeffding tree code, it's really well written, clean, etc... so maybe we could test it, and see if that works for us, before we modify the decision stump. If you like, I can run some tests once you've finished the discretize function.
< zoq> Btw. I'm not sure you've seen it ... you can use the mlpack PCA function for the discretize function.
< nilay> zoq: seen what?
< zoq> The authors also tested k-Means, if you like you can test that as well.
nilay has quit [Ping timeout: 250 seconds]
< zoq> We could use a template parameter, to let the user choose the mapping rule (PCA, k-Means), something like we did for the KernelRule here: https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/kernel_pca/kernel_pca.hpp
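zoq's idea could be sketched roughly as below; Discretize, PCARule, and the Apply signature are all illustrative names, not mlpack API. The mapping rule becomes a template policy, mirroring how KernelRule parameterizes kernel_pca.hpp:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical policy class; in mlpack this would wrap the PCA method
// (a KMeansRule with the same interface could wrap k-means).
struct PCARule
{
  // Map a high-dimensional label vector to d representative dimensions (stub).
  static std::vector<double> Apply(const std::vector<double>& z, std::size_t d)
  {
    return std::vector<double>(z.begin(), z.begin() + d);
  }
};

// The discretizer is parameterized on the mapping rule, the same way
// KernelPCA is parameterized on KernelRule.
template<typename MappingRule = PCARule>
class Discretize
{
 public:
  explicit Discretize(std::size_t dims) : dims(dims) { }

  std::vector<double> Run(const std::vector<double>& labels) const
  {
    return MappingRule::Apply(labels, dims);
  }

 private:
  std::size_t dims;
};
```

A user would then pick the rule at compile time, e.g. `Discretize<PCARule> d(5);`, with no runtime branching between PCA and k-means.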
nilay has joined #mlpack
< nilay> right now what i am looking at is: after calculating the binary vector, for the subset of m dimensions that we choose from it, do we choose them randomly or do we choose the most representative m dimensions?
< nilay> i have seen the pca method. authors also say pca > kmeans
< nilay> for the task we are doing here
< zoq> We choose the most representative dimensions, like the first 3.
< nilay> zoq: not the principal components
< nilay> before applying pca
< nilay> we have a 256C2 vector
< nilay> and we take a size 256 vector from this
< nilay> and then apply pca to it
< nilay> so i am talking about the size 256 vector
< zoq> ah, I tested it some time ago, random selection works
< nilay> i think i am very confused right now
< zoq> you are talking about: ind = N.argmin(N.sum(zs * zs, axis=1)) right?
< nilay> yes
< nilay> i don't know the need of this thing
< zoq> yeah, I absolutely agree, you could just randomly select n samples, that should work
< zoq> hm
< zoq> nah, I'm almost sure it works by randomly selecting n samples
< nilay> randomly select m samples and apply pca right
< zoq> yes
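The step just agreed on (randomly pick m of the 256C2 dimensions, then run PCA on the result) could be sketched like this; PickRandomDims is a hypothetical helper, not part of the feature extraction code, and the PCA call on the selected rows is left out:

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <numeric>
#include <random>
#include <vector>

// Randomly choose m dimension indices out of n (here n would be 256 choose 2);
// the selected rows of the label matrix would then be passed to mlpack's PCA.
std::vector<std::size_t> PickRandomDims(std::size_t n,
                                        std::size_t m,
                                        std::mt19937& rng)
{
  std::vector<std::size_t> all(n);
  std::iota(all.begin(), all.end(), 0);  // 0, 1, ..., n - 1

  std::vector<std::size_t> chosen;
  chosen.reserve(m);
  // std::sample (C++17) draws m indices without replacement.
  std::sample(all.begin(), all.end(), std::back_inserter(chosen), m, rng);
  return chosen;
}
```

This replaces the `ind = N.argmin(N.sum(zs * zs, axis=1))` selection from the reference implementation with plain random sampling, which zoq reports worked in his earlier tests.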
< nilay> and also why do we have to tweak segs
< nilay> we are calculating discrete label for a 16*16 seg only
< zoq> tweak?
< nilay> so we return that seg and corresponding label
< zoq> yes
< nilay> segs = segs[ind]
< nilay> doing reshapes on segs
< nilay> so firstly these segs are structured labels. he has confusing variable names
< nilay> and so we have 256*20000 label
< nilay> and we return 5*20000 class label (assuming we take 5 principal components)
< zoq> yes, right, we could also use 256*20000, but it would take much longer to train the tree, so we "tweak" (reduce) the dimension of the labels
< zoq> If I remember right, we could end up training for days for bigger datasets. So it's a good idea to reduce the dimension of the data.
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay> having network problems
< marcosirc> Hi, is stereomatchingkiss in irc? is he using the same nickname?
< nilay> It is the GitHub handle of tham
< zoq> nilay: oh, I get your pain, there was a country-wide disruption in the german Telekom mobile network caused by a weird problem in the central database ... last weekend.
< marcosirc> nilay: ok thanks.
< nilay> zoq: oh must be inconvenient. this is everyday thing here, so I am pretty much accustomed to it. :)
< nilay> marcosirc: you're welcome
< marcosirc> Hi @zoq , how are you? I would like to plot the progress of a specific metric for different values of a method parameter.
< marcosirc> In particular, I would like to plot the number of base cases for different values of approximation error epsilon (a parameter).
< marcosirc> I think this would be also useful with other metrics. For example, plot the runtime of KNN with different values for k parameter. with k=1 , k=5 , k=10, etc.
< marcosirc> Do you think this could be easily added to the current benchmarking system?
< zoq> marcosirc: Can't complain, how are you? Btw, the project looks really good.
< zoq> A plot would be interesting; I guess you'd like to use a line plot?
< zoq> Something like the Historical runtime plot: http://www.mlpack.org/benchmark.html
< zoq> If that's the case, all we need to do is to create a new benchmark view: https://github.com/zoq/benchmarks/tree/master/reports/js/benchmarks
< zoq> nilay: oh, every day ... in this case, the situation isn't really comparable :(
< nilay> zoq: so i think we could finish discretize in the meantime, while i implement one tree
< nilay> that would make it more intuitive for me
tham has joined #mlpack
< tham> nilay zoq : could we change the options to member functions?
< tham> I think this is easier to use
< nilay> tham: you mean initialize options in a separate function ?
< tham> yes
< tham> options["num_images"] = 2;
< tham> change it to
< tham> NumImages(size_t value);
< nilay> but that would make a lot of parameters
< tham> Yap
< tham> I think using a map also needs setting up a lot of parameters
< tham> with a map, the users may set up parameters with wrong names
< nilay> wouldn't that be uncomfortable, carrying around so many parameters in all functions
< tham> with member functions, users do not need to remember which names they chose
< nilay> ok now i get what you mean
< tham> you can hide the implementation details in map
< tham> NumImages(size_t value){options["num_images"] = value; }
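tham's one-liner, expanded into a compilable sketch with illustrative names (Options, NumImages, RowSize are not the actual mlpack feature extraction API): setter member functions hide the string-keyed map, so callers cannot misspell an option name.

```cpp
#include <cstddef>
#include <map>
#include <string>

// The map stays an implementation detail; users only see type-checked setters.
class Options
{
 public:
  Options& NumImages(std::size_t value)
  {
    options["num_images"] = value;
    return *this;  // returning *this lets setter calls be chained
  }

  Options& RowSize(std::size_t value)
  {
    options["row_size"] = value;
    return *this;
  }

  std::size_t Get(const std::string& name) const { return options.at(name); }

 private:
  std::map<std::string, std::size_t> options;
};
```

In main(), `opts.NumImages(2).RowSize(16);` then replaces the error-prone `options["num_images"] = 2;` style, and a typo in a setter name fails at compile time instead of silently creating a new map key.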
< nilay> now i don't get you again. So by member function do you mean a field of a class?
< tham> I think this is not what I mean; I will paste an example on pastebin
< nilay> ok
< marcosirc> zoq: thanks! Yeah, a line plot will be fine.
< marcosirc> Ok, I will add a new benchmark view.
< marcosirc> Right now, I am updating allknn programs to include approximation (an epsilon parameter).
< tham> marcosirc : I am stereomatchingkiss, I picked this complicated name because many names were taken when I tried to create one
< tham> any problem?
< marcosirc> Hi @tham!
< tham> hi @marcosirc
< tham> nice to meet you
< marcosirc> Well I was trying to talk with you, to see if you can help me with a problem compiling on Windows... but only if you are available of course.
< tham> ok, no problem
< marcosirc> thanks! you too.
< marcosirc> I don't have a windows machine. The problem is with appveyor...
< marcosirc> But it is a similar problem to an old github issue.
< tham> Which issue are you referring to?
< tham> Could you tell me the number?
< nilay> tham: so we call these member functions in main() function instead of writing options["num_images"] = 2;
< marcosirc> this is the PR failing: https://github.com/mlpack/mlpack/pull/693
< marcosirc> this is the similar issue: https://github.com/mlpack/mlpack/issues/476
< tham> Ok, I will give it a try and tell you the results; the VC compiler does not support the standard well
< marcosirc> Ok. Thank you very much!
< tham> If it did, we could use expression SFINAE to replace some C++98 "hacks" still used by mlpack
< tham> vc6.0 and vc2008 performed very well, but VC compilers became less and less attractive after that
< tham> but VC still remains the de facto standard compiler on Windows
< tham> nilay : yes
< marcosirc> tham: Mmm ok. I don't have experience with vc.
< tham> marcosirc : this will take some time to figure out; I will reply to you once I find out what is happening
< nilay> tham: ok, i will do it this way then.
< marcosirc> tham: Ok, thank you for your time. No problem, I can continue working on other issues.
< tham> marcosirc : I will try to figure it out within one~two days
< tham> after that, I may ask you how to install mlpack on Linux, I hope this would be very easy on Linux (Ubuntu)
< marcosirc> tham: great.
< marcosirc> Sure! Are you planning to install mlpack on a new linux pc?
tsathoggua has joined #mlpack
tsathoggua has quit [Client Quit]
< tham> marcosirc : yap
< marcosirc> Ok. Yes, at least for me it was really easy. I installed mlpack in ubuntu 16.04 with no problem.
< tham> nilay : thanks
< tham> marcosirc : could you tell me which file in the tests folder will call the class NeighborSearch? Thanks
< rcurtin> tham: that'll be knn_test.cpp and kfn_test.cpp
< rcurtin> maybe also serialization_test.cpp
< rcurtin> (which should probably be split into many files)
< marcosirc> tham: knn_test.cpp , kfn_test.cpp and rectangle_tree_test.cpp
< tham> rcurtin marcosirc : thanks
< rcurtin> ah yeah, rectangle_tree_test.cpp too, I forgot that one :)
< tham> The test case KNNModelTest of knn_test.cpp fails
< tham> It is due to the apply_visitor in the BuildModel function of ns_model
< tham> marcosirc : vc2015 complains on this line--boost::apply_visitor(tn, nSearch);
< tham> I think the problem is vc2015 cannot specialize the alias template in this case
< tham> Declaring it as NeighborSearch may solve the problem (but your sanity will suffer=_=)
< tham> maybe we could define a class to mimic the alias template
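A reduced illustration of the workaround tham describes, with toy types standing in for the real NeighborSearch machinery (none of this is the actual ns_model code): instead of an alias template, which the VC2015 front end could fail to specialize in contexts like apply_visitor, a class template with a nested typedef names the same specialization.

```cpp
#include <type_traits>

// Toy stand-in for NeighborSearch<SortPolicy, TreeType>.
template<typename SortPolicy, typename TreeType>
struct NeighborSearch { };

struct NearestNS { };
struct KDTree { };
struct BallTree { };

// The alias template that old MSVC reportedly choked on:
template<typename TreeType>
using NSType = NeighborSearch<NearestNS, TreeType>;

// The workaround: a class template that "mimics" the alias template
// through a nested typedef, which MSVC handles more reliably.
template<typename TreeType>
struct NSTypeT
{
  typedef NeighborSearch<NearestNS, TreeType> type;
};

// Both spellings name exactly the same specialization.
static_assert(
    std::is_same<NSType<KDTree>, NSTypeT<KDTree>::type>::value,
    "alias and class workaround must agree");
static_assert(
    std::is_same<NSType<BallTree>, NSTypeT<BallTree>::type>::value,
    "alias and class workaround must agree");
```

Call sites then write `NSTypeT<TreeType>::type` (with `typename` in dependent contexts) instead of `NSType<TreeType>`, trading brevity for compiler compatibility.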
tham has quit [Quit: Page closed]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1003 (master - ae6c9e6 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []