verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
TD has joined #mlpack
< TD>
I built my project, but when I run it I receive the following message: "Error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'". If I change Visual Studio to 'x64' then it doesn't recognize the mlpack library.
< TD>
Has anyone encountered this problem?
< rcurtin>
TD: did you build mlpack in 32-bit mode? maybe it needs to be built in 64-bit mode?
< rcurtin>
when you choose your CMake generator it should give the option of 32-bit Visual Studio or 64-bit Visual Studio, it sounds like you want to pick the latter
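For reference, picking the 64-bit generator from the command line can look like this (the Visual Studio version below is only an example; substitute whichever version is installed):

    cmake -G "Visual Studio 14 2015 Win64" C:\path\to\mlpack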
< TD>
K, thank you! CMake is going to be the death of me
< rcurtin>
you are not its only victim :(
< TD>
Do not add CMake to the system PATH, or add CMake to the system PATH?
< rcurtin>
I think there's no problem adding it to the system path
TD has quit [Quit: Page closed]
Mathnerd314 has quit [Ping timeout: 250 seconds]
Mathnerd314 has joined #mlpack
nilay has quit [Ping timeout: 250 seconds]
Mathnerd314 has quit [Ping timeout: 246 seconds]
nilay has joined #mlpack
nilay has quit [Quit: Page closed]
< zoq>
nilay: Hello, can you open a new PR for the feature extraction part and the feature extraction test?
marcosirc has joined #mlpack
Mathnerd314 has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#999 (master - 8d7e5db : Marcus Edel): The build passed.
< nilay>
zoq: Hi, I just saw your message, i'll update it in a minute
< nilay>
Also, If you have time right now can we discuss how to go about the next task?
< zoq>
nilay: Sure, I guess, as you said in the last status update, finishing and testing the discretize function should be our top priority. Also, I think we have to clean up and add some comments to the feature extraction part before we can merge it.
< nilay>
yes, I thought I would do that after you review it and say it is "mergeable"
< zoq>
Okay, I'll take a look at the new PR tonight.
< zoq>
Also, once the discretize function is finished, sticking everything together should be straightforward. I really like the Hoeffding tree code, it's really well written, clean, etc... so maybe we could test it and see if that works for us, before we modify the decision stump. If you like, I can run some tests once you've finished the discretize function.
< zoq>
Btw. I'm not sure if you've seen it ... you can use the mlpack PCA function for the discretize function.
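A minimal sketch of that call, assuming the mlpack 2.x PCA API (placeholder data; one sample per column):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/pca/pca.hpp>

    int main()
    {
      // Placeholder label matrix: 256 dimensions, 20000 samples.
      arma::mat labels(256, 20000, arma::fill::randu);

      // Reduce to 5 dimensions in place; the return value is the fraction of
      // variance retained.
      mlpack::pca::PCA p;
      const double varRetained = p.Apply(labels, 5);

      mlpack::Log::Info << "Variance retained: " << varRetained << std::endl;
      return 0;
    }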
< nilay>
right now what I am looking at is: after calculating the binary vector, for the subset of m dimensions that we choose from it, do we choose them randomly or do we choose the most representative m dimensions?
< nilay>
I have seen the PCA method. The authors also say PCA > k-means
< nilay>
for the task we are doing here
< zoq>
We choose the most representative dimensions, like the first 3.
< nilay>
zoq: not the principal components
< nilay>
before applying pca
< nilay>
we have a 256C2 vector
< nilay>
and we take a size 256 vector from this
< nilay>
and then apply pca to it
< nilay>
so i am talking about the size 256 vector
< zoq>
ah, I tested it some time ago, selecting randomly works
< nilay>
I think I am very confused right now
< zoq>
you are talking about: ind = N.argmin(N.sum(zs * zs, axis=1)) right?
< nilay>
yes
< nilay>
I don't see the need for this
< zoq>
yeah, I absolutely agree, you could just randomly select n samples, that should work
< zoq>
hm
< zoq>
nah, I'm almost sure it works by randomly selecting n samples
< nilay>
randomly select m samples and apply PCA, right?
< zoq>
yes
< nilay>
and also why do we have to tweak segs
< nilay>
we are calculating a discrete label for a 16*16 seg only
< zoq>
tweak?
< nilay>
so we return that seg and corresponding label
< zoq>
yes
< nilay>
segs = segs[ind]
< nilay>
doing reshapes on segs
< nilay>
so firstly these segs are structured labels. he has confusing variable names
< nilay>
and so we have a 256*20000 label matrix
< nilay>
and we return a 5*20000 class label matrix (assuming we take 5 principal components)
< zoq>
yes, right, we could also use 256*20000, but it would take much longer to train the tree, so we "tweak" (reduce) the dimension of the label
< zoq>
If I remember right, we could end up training for days for bigger datasets. So it's a good idea to reduce the dimension of the data.
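Putting the above together, a rough sketch of that reduction; all names here are made up, and zs stands for the pairwise-difference label matrix with one column per 16*16 segmentation patch:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/pca/pca.hpp>

    // zs: (256 choose 2) x nPatches pairwise-difference labels.  Pick m
    // (e.g. 256) random dimensions, then use PCA to reduce them to nDims
    // (e.g. 5) discrete label dimensions.
    arma::mat DiscretizeSketch(const arma::mat& zs,
                               const size_t m,
                               const size_t nDims)
    {
      // Shuffle the row (dimension) indices and keep the first m of them.
      arma::uvec order(zs.n_rows);
      for (size_t i = 0; i < zs.n_rows; ++i)
        order[i] = i;
      order = arma::shuffle(order);
      const arma::uvec chosen = order.head(m);

      // Subset of the label matrix: m x nPatches.
      arma::mat reduced = zs.rows(chosen);

      // Project down to nDims principal components in place (mlpack 2.x API).
      mlpack::pca::PCA p;
      p.Apply(reduced, nDims);

      return reduced;  // nDims x nPatches.
    }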
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay>
having network problems
< marcosirc>
Hi, is stereomatchingkiss on IRC? Is he using the same nickname?
< nilay>
It is the GitHub handle of tham
< zoq>
nilay: oh, I feel your pain; there was a country-wide disruption in the German Telekom mobile network caused by a weird problem in the central database ... last weekend.
< marcosirc>
nilay: ok thanks.
< nilay>
zoq: oh, that must be inconvenient. This is an everyday thing here, so I am pretty much accustomed to it. :)
< nilay>
marcosirc: you're welcome
< marcosirc>
Hi @zoq, how are you? I would like to plot the progress of a specific metric for different values of a method parameter.
< marcosirc>
In particular, I would like to plot the number of base cases for different values of approximation error epsilon (a parameter).
< marcosirc>
I think this would also be useful with other metrics. For example, plot the runtime of KNN with different values for the k parameter: k=1, k=5, k=10, etc.
< marcosirc>
Do you think this could be easily added to the current benchmarking system?
< zoq>
marcosirc: Can't complain, how are you? Btw, the project looks really good.
< zoq>
A plot would be interesting; I guess you'd like to use a line plot?
< marcosirc>
Right now, I am updating allknn programs to include approximation (an epsilon parameter).
< tham>
marcosirc: I am stereomatchingkiss, I picked this complicated name because many names were already taken when I tried to create one
< tham>
any problem?
< marcosirc>
Hi @tham!
< tham>
hi @marcosirc
< tham>
nice to meet you
< marcosirc>
Well, I was trying to talk with you, to see if you can help me with a problem compiling on Windows... but only if you are available, of course.
< tham>
ok, no problem
< marcosirc>
thanks! you too.
< marcosirc>
I don't have a Windows machine. The problem is with AppVeyor...
< marcosirc>
But it is a similar problem to an old github issue.
< tham>
Which issue are you referring to?
< tham>
Could you tell me the number?
< nilay>
tham: so we call these member functions in the main() function instead of writing options["num_images"] = 2;
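If it helps, a tiny self-contained sketch of the difference in question; the class and method names here are hypothetical stand-ins, not the actual code from the PR:

    #include <cstddef>

    // Hypothetical stand-in for the feature extraction class.
    class FeatureExtractor
    {
     public:
      // Setter/getter member functions, mlpack style, instead of a
      // string-keyed options map.
      void NumImages(const size_t n) { numImages = n; }
      size_t NumImages() const { return numImages; }

     private:
      size_t numImages = 1;
    };

    int main()
    {
      FeatureExtractor extractor;
      extractor.NumImages(2);  // replaces: options["num_images"] = 2;
      return 0;
    }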