verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
KeonKimMobile has quit [Ping timeout: 246 seconds]
sumedhghaisas has joined #mlpack
keon has joined #mlpack
keon has quit [Ping timeout: 248 seconds]
witness_ has quit [Quit: Connection closed for inactivity]
anatolemoreau4 has joined #mlpack
wasiq has joined #mlpack
not_mathaddict has quit [Quit: Leaving]
not_mathaddict has joined #mlpack
mentekid has quit [Ping timeout: 248 seconds]
not_mathaddict is now known as MathAddict
anatolemoreau has joined #mlpack
anatolemoreau4 has quit [Ping timeout: 246 seconds]
anatolemoreau1 has joined #mlpack
anatolemoreau has quit [Ping timeout: 268 seconds]
anatolemoreau2 has joined #mlpack
anatolemoreau1 has quit [Ping timeout: 246 seconds]
agobin has quit [Quit: Connection closed for inactivity]
adahp has quit [Ping timeout: 244 seconds]
sumedhghaisas has quit [Ping timeout: 244 seconds]
sumedhghaisas has joined #mlpack
ank_95_ has joined #mlpack
wasiq has quit [Ping timeout: 252 seconds]
anatolemoreau2 has quit [Ping timeout: 264 seconds]
Nilabhra has joined #mlpack
pin3da has joined #mlpack
sumedhghaisas has quit [Remote host closed the connection]
adahp has joined #mlpack
nilay has joined #mlpack
ftuesca has quit [Quit: Leaving]
chrishenx has quit [Ping timeout: 244 seconds]
nilay has quit [Ping timeout: 250 seconds]
chrishenx has joined #mlpack
palashahuja has joined #mlpack
< palashahuja> zoq, rcurtin: would we need a parallelization architecture for GoogLeNet?
< palashahuja> because GoogLeNet would require a lot of computational resources
skon has joined #mlpack
chrishenx has quit [Ping timeout: 244 seconds]
chrishenx has joined #mlpack
mentekid has joined #mlpack
Nilabhra has quit [Ping timeout: 276 seconds]
mentekid has quit [Read error: Connection reset by peer]
mentekid has joined #mlpack
Nilabhra has joined #mlpack
Rishabh has joined #mlpack
Rishabh is now known as Guest83258
Guest83258 has quit [Remote host closed the connection]
Rishabh_ has joined #mlpack
chrishenx has quit [Remote host closed the connection]
Rishabh_ has quit [Read error: Connection reset by peer]
pin3da has quit [Quit: Page closed]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
palashahuja has joined #mlpack
Rishabh has joined #mlpack
Rishabh is now known as Guest90481
Guest90481 has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
< Nilabhra> can anyone tell me the difference between the name.cpp files and the name_impl.cpp files?
< Nilabhra> I mean, what factors differentiate them?
< mentekid> Nilabhra: I think you mean .hpp, right? From what I have figured:
< mentekid> * name.hpp contains the function/class declarations
< mentekid> * name_impl.hpp is the implementation, i.e. the code for each function/class
< Nilabhra> mentekid: no, I meant .cpp
< mentekid> * name_main.cpp is the source compiled to make the executables in the build/bin/ folder
< Nilabhra> mentekid: have a look at mlpack/methods/cf/
Rishabh_ has joined #mlpack
< Nilabhra> mentekid: it has cf.cpp and cf_impl.cpp
< mentekid> I only see cf.cpp and cf_main.cpp; the impl is a .hpp in my folder
< palashahuja> I think the impl leans more towards the implementation that is more user-oriented
< palashahuja> because it has the program code with all the documentation, and it also takes input from the user
< palashahuja> I am talking only about cf here
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
na1taneja2821 has joined #mlpack
< palashahuja> cf_main has int main(..) in it
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
< Nilabhra> palashahuja: yeah, I realized that, but I thought there would be more factors of differentiation. The "impl" is a little elusive. Thanks, all of you, for your replies :)
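To make the convention above concrete: a minimal sketch of the layout being discussed, using a hypothetical Widget class rather than any real mlpack file. Declarations live in name.hpp, template definitions in name_impl.hpp (included at the bottom of the header), and the command-line program in name_main.cpp; a plain name.cpp, as in cf.cpp, holds non-template code that can be compiled once into the library.

    // widget.hpp -- declarations only (hypothetical example, not an mlpack file).
    template<typename T>
    class Widget
    {
     public:
      T Transform(const T& x) const;
    };
    // Pull in the template definitions so every including file can instantiate them.
    #include "widget_impl.hpp"

    // widget_impl.hpp -- template definitions, split out of the header for readability.
    template<typename T>
    T Widget<T>::Transform(const T& x) const { return x; }

    // widget_main.cpp -- source for the executable placed in build/bin/.
    // int main(int argc, char** argv) { /* parse options, use Widget */ }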
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh has joined #mlpack
Rishabh is now known as Guest91462
Guest91462 has quit [Read error: Connection reset by peer]
wiking has quit [Quit: leaving]
Mathnerd314 has quit [Ping timeout: 252 seconds]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has joined #mlpack
Rishabh_ has quit [Read error: Connection reset by peer]
Rishabh_ has quit [Read error: Connection reset by peer]
< uzipaz> It seems that I cannot train a multi-layered perceptron model using the mlpack_perceptron program; can anybody confirm this? Also, does the mlpack library have support for multi-layered perceptrons?
Awcrr has quit [Quit: away for a while...]
Rishabh_ has joined #mlpack
apirrone has joined #mlpack
< apirrone> Hello there
< apirrone> I'm trying to build mlpack, but I have an error and I can't figure out how to solve it
< uzipaz> describe the error
< apirrone> the error is: "c++: error: unrecognized command line option '-ftemplate-backtrace-limit=0'"
< apirrone> when building discrete_distribution.cpp.o
< apirrone> (which is weird, btw)
< apirrone> armadillo, cmake, and boost are installed, and I followed the procedure described in the GitHub README to build
< uzipaz> try updating your g++ compiler
< uzipaz> a quick search on Google tells me it's probably the compiler being outdated
< apirrone> really? I couldn't find anything about "-ftemplate-backtrace-limit=0"
< apirrone> thanks, I'll try
< uzipaz> try updating your g++ compiler
mentekid has quit [Ping timeout: 276 seconds]
< apirrone> I think I'll have to update my Debian distribution... thanks anyway
< zoq> palashahuja: Could be interesting to parallelize the architecture using OpenMP.
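For a sense of what loop-level OpenMP parallelism looks like, here is a minimal, self-contained sketch; the Forward() function below is just a placeholder for per-sample work (it is not mlpack's API), and a real network would need thread-safe per-worker state.

    #include <omp.h>
    #include <vector>

    // Placeholder for one unit of per-sample work (not an mlpack function).
    static double Forward(const double input) { return 2.0 * input; }

    int main()
    {
      std::vector<double> inputs(1000, 1.0), outputs(1000);

      // Independent samples can be processed on separate threads.
      #pragma omp parallel for
      for (int i = 0; i < (int) inputs.size(); ++i)
        outputs[i] = Forward(inputs[i]);

      return 0;
    }

Build with -fopenmp; without it the pragma is ignored and the loop simply runs serially.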
uzipaz has joined #mlpack
< uzipaz> It seems that I cannot train a multi-layered perceptron model using the mlpack_perceptron program; can anybody confirm this? Also, does the mlpack library have support for multi-layered perceptrons?
Bartek has quit [Ping timeout: 248 seconds]
< zoq> uzipaz: It's a single-layer perceptron.
ank_95_ has quit [Quit: Connection closed for inactivity]
< rcurtin> uzipaz: that perceptron code should probably be replaced with the ANN code in the long run anyway, so I am hoping that in the future it will go away, but I don't think it's time for that yet
< uzipaz> zoq: is there a way to specify a multi-layered perceptron? Is there support in the library? I am going through the source code but am unable to find anything about multiple hidden layers in ann
MathAddict has quit [Ping timeout: 244 seconds]
uzipaz_ has joined #mlpack
< zoq> uzipaz_: You could use the FFN class from the master branch.
< uzipaz_> zoq: thanks for answering. I was originally using WEKA to train this dataset, which comprises about 1500 features and 2000 samples, with an ANN, but it was just too slow
< uzipaz_> zoq: hence, I am trying to do the same with the mlpack library, figuring that it will speed up the process because it's implemented in C++
uzipaz has quit [Ping timeout: 250 seconds]
< rcurtin> uzipaz_: do make sure you compile without debugging and profiling symbols if you want it to be fast :)
< rcurtin> when you call cmake, you can just specify -DDEBUG=OFF -DPROFILE=OFF
chick_ has quit [Quit: Connection closed for inactivity]
< uzipaz_> rcurtin: thank you for the tip. I don't know how to specify multiple hidden layers with the ann code in mlpack; mlpack_perceptron suggests that it only works for linearly separable data, and hence it may not work
< zoq> uzipaz_: Yeah, weka isn't the fastest library :)
< zoq> uzipaz_: If you tell me your network architecture, I can probably tell you how to replicate it using the mlpack code.
< uzipaz_> zoq: I do not have an explicit network architecture to work with; I was thinking that I would specify the number of hidden layers between the input and output layers, and also specify the number of neurons for each of those hidden layers
< uzipaz_> zoq: I want to experiment with architectures of different complexities and see how they work on this particular dataset
< rcurtin> uzipaz_: this may only be a place to start, but you might take a look at the tests in src/mlpack/tests/feedforward_network_test.cpp
< rcurtin> and maybe you can base your code on the tests there
< rcurtin> I think this is less good than the answer zoq can give, but maybe it is still helpful :)
< uzipaz_> rcurtin: I appreciate the help :) I will check the source code
< zoq> nah, it's probably the best way to start :)
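For orientation, a rough sketch of the kind of two-hidden-layer feedforward network being discussed, written against the FFN interface of a later mlpack release; the layer names and constructor arguments may not match the master branch from this conversation exactly, and the layer sizes, data, and labels are arbitrary placeholders, so feedforward_network_test.cpp remains the authoritative reference for that code.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Toy data: 1500-dimensional points, one column per sample (the
      // Armadillo convention), with class labels in 1..3.
      arma::mat data(1500, 200, arma::fill::randu);
      arma::mat labels = arma::floor(arma::randu<arma::mat>(1, 200) * 3) + 1;

      // Two hidden layers of arbitrary size, softmax output over 3 classes.
      // Defaults: NegativeLogLikelihood loss, random weight initialization.
      FFN<> model;
      model.Add<Linear<>>(1500, 128);
      model.Add<SigmoidLayer<>>();
      model.Add<Linear<>>(128, 64);
      model.Add<SigmoidLayer<>>();
      model.Add<Linear<>>(64, 3);
      model.Add<LogSoftMax<>>();

      model.Train(data, labels);
      return 0;
    }

Compile and link against mlpack and Armadillo; swap in your own dataset and layer widths to experiment with different architectures.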
skon46 has joined #mlpack
tsathoggua has joined #mlpack
skon46 has quit [Read error: Connection reset by peer]
skon46 has joined #mlpack
skon46 has quit [Read error: Connection reset by peer]
chick_ has joined #mlpack
Bartek has joined #mlpack
uzipaz_ has quit [Ping timeout: 250 seconds]
anatolemoreau2 has quit [Ping timeout: 248 seconds]
skon46 has joined #mlpack
skon46 has quit [Client Quit]
palashahuja has joined #mlpack
agobin has quit [Quit: Connection closed for inactivity]
< palashahuja> zoq, hi
< zoq> palashahuja: Hello
< palashahuja> is there a requirement for a parallelization architecture for GoogLeNet?
< palashahuja> Should I include it in the proposal?
< zoq> palashahuja: No, but it could be interesting.
< palashahuja> Also, how are we going to do the training?
< palashahuja> the ILSVRC dataset is on the order of 800 GB
< zoq> Yeah, it's huge; I guess we could test it on a small subset.
< zoq> The interesting part is the inception layer.
< palashahuja> Yes, I think I should focus on that for now.
< zoq> And of course the special network structure; we have to figure out how to split the network.
ank_95_ has joined #mlpack
Mathnerd314 has joined #mlpack
< zoq> palashahuja: Btw. I merged the DropConnect code. Thanks for the contribution.
< palashahuja> Thanks for merging the PR
Awcrr has quit [Remote host closed the connection]
< palashahuja> or we could do something on Azure
mentekid has quit [Ping timeout: 252 seconds]
< palashahuja> or a platform as a service
Awcrr has joined #mlpack
Awcrr_ has joined #mlpack
< zoq> palashahuja: yeah, I guess we could do that, to get a pretrained model.
< chrishenx> Hi there, I have submitted my draft; can I get a review of it?
anatolemoreau2 has joined #mlpack
pin3da has joined #mlpack
Rishabh_ has quit [Ping timeout: 248 seconds]
Rishabh_ has joined #mlpack
agobin has joined #mlpack
< rcurtin> chrishenx: I will look if there is time
< chrishenx> rcurtin: thanks!
mentekid has joined #mlpack
pin3da has quit [Ping timeout: 250 seconds]
Rishabh_ has quit [Ping timeout: 264 seconds]
_Rishabh has joined #mlpack
anatolemoreau2 has quit [Ping timeout: 268 seconds]
mentekid has quit [Quit: Leaving]
mentekid has joined #mlpack
_Rishabh has quit [Ping timeout: 260 seconds]
Rishabh_ has joined #mlpack
< toshad> rcurtin: In the "Implement tree types" project, can I consider adding multi-vantage-point trees rather than (or along with) vantage point trees?
chick_ has quit [Ping timeout: 250 seconds]
< rcurtin> sure, please provide a reference in your proposal to the literature on multi-vantage point trees
< rcurtin> no time now, but thanks for the link :)
chick_ has joined #mlpack
miku has joined #mlpack
< miku> Hey, I was interested in a few of the proposed ideas for GSoC, but haven't been able to contribute a patch until now. Will projects like "Fast k-centers Algorithm & Implementation", which seem more research-based than implementation-based, require such a contribution?
< rcurtin> yeah, all of the GSoC projects do require a code component
< rcurtin> so although the k-centers algorithm is research-based, we'll have to have at least some prototype implementations and code by the end of the semester; it can't all be theory (unfortunately)
< miku> Oh, that's fine!
Jim___ has joined #mlpack
< miku> I meant, I haven't been able to contribute anything as of yet. Would you expect that before considering the application?
< rcurtin> ah, sorry, I misunderstood
< rcurtin> no, contributions aren't required before submitting an application
< Jim___> hello. yesterday I submitted a draft via the Google Summer of Code website. should I send it to you, or would you just review it from there?
< miku> Alright, thanks :)
Titu has joined #mlpack
< rcurtin> Jim___: there are currently about 50 proposals and about 2 mentors actively reviewing them in their spare time, so I wouldn't expect much of a response, unfortunately
< Jim___> so I should send it anyway?
< rcurtin> I don't know what you mean
< Jim___> I mean
< Jim___> I won't wait for a review of the draft
< Jim___> I'll send my proposal directly to Google anyway
< rcurtin> ah, okay, yeah
< Jim___> right?
< rcurtin> if we need any clarifications, we can ask you questions after the deadline
< rcurtin> I'm sorry that I can't put time into looking over your proposal; I'd like to, there are just too many of them and not enough time :(
< miku> Where can I find the current list of proposals? Thanks.
< rcurtin> miku: those are private; I think as a student you can only see what you have submitted
ank_95__ has joined #mlpack
< miku> Oh, alright :) Given that I'm rather late in applying, I would have liked to know which projects are less crowded. Is there any way I could get that information?
< rcurtin> miku: I don't think I'm supposed to share that, sorry. you might take a look at the mailing list to see what has been talked about a lot and what has been talked about less
< miku> Cool, not an issue. Thanks for the help :)
ank_95_ has quit [Ping timeout: 250 seconds]
Rishabh_ has quit [Quit: No Ping reply in 180 seconds.]
ank_95__ is now known as ank_95_
Rishabh has joined #mlpack
Rishabh is now known as Guest25770
Guest25770 is now known as Rishabh_
< rcurtin> miku: sure, no problem
WowCode has joined #mlpack
wasiq has joined #mlpack
Titu has quit [Quit: Page closed]
< rcurtin> palashahuja: if you want, add your name/email to the list of contributors in src/mlpack/core.hpp and COPYRIGHT.txt and I'll merge it in. but don't feel obligated to if for some reason you don't want to :)
< palashahuja> rcurtin, I'll do it right away
< rcurtin> okay, sure, no hurry :)
adahp has quit [Remote host closed the connection]
< toshad> Well, the Hilbert R-tree does look pretty involved, but I couldn't find any good literature on UB trees
< toshad> could you suggest anything?
< rcurtin> not more than what I saw on Google Scholar when I typed in "ub tree"
DipeshCS has joined #mlpack
< rcurtin> you could probably follow some of the references there to find the original paper that introduced the UB tree
< toshad> fine :)
< toshad> thanks
< rcurtin> sorry I can't be more helpful
< na1taneja2821> @rcurtin Hi, I would like to know how we can test the implemented tree types. I have found some parameters related to nearest neighbour search. I don't think any specific tests are implemented for the other trees, apart from the unit tests.
< rcurtin> how you test the trees is dependent on the tree itself
< rcurtin> you may also consider taking a look at tests that use the trees in other algorithms
< rcurtin> like the NSModelTest and RSModelTest, for instance
< na1taneja2821> ok, thanks
< na1taneja2821> Also, do we need to specify the unit tests that we will be implementing for the tree types?
< rcurtin> it would be helpful to provide details on the tests that you plan to implement
< na1taneja2821> ok
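As an illustration of testing a tree through another algorithm, a common pattern in the existing test suite is to check that tree-based k-nearest-neighbor search agrees with brute-force search on the same data. A rough sketch along those lines follows; it assumes mlpack's AllkNN typedef and the Boost unit test harness used by mlpack_test, and the exact constructor arguments should be checked against your checkout.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/neighbor_search/neighbor_search.hpp>
    #include <boost/test/unit_test.hpp>

    using namespace mlpack::neighbor;

    // Meant to be dropped into mlpack's existing Boost test suite, which
    // supplies the test module's main().
    BOOST_AUTO_TEST_CASE(TreeVsNaiveKNNTest)
    {
      // Random reference set: 5 dimensions, 500 points, one point per column.
      arma::mat dataset(5, 500, arma::fill::randu);

      // Tree-based (dual-tree) search and brute-force search over the same data.
      AllkNN treeSearch(dataset);
      AllkNN naiveSearch(dataset, true /* naive */);

      arma::Mat<size_t> treeNeighbors, naiveNeighbors;
      arma::mat treeDistances, naiveDistances;
      treeSearch.Search(3, treeNeighbors, treeDistances);
      naiveSearch.Search(3, naiveNeighbors, naiveDistances);

      // The tree-based results should match brute force.
      for (size_t i = 0; i < naiveDistances.n_elem; ++i)
        BOOST_REQUIRE_CLOSE(treeDistances[i], naiveDistances[i], 1e-5);
    }

The same comparison can be repeated with any new tree type plugged into NeighborSearch, which is why the NSModelTest/RSModelTest style of test is a useful starting point.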
na1taneja2821 has quit [Quit: Page closed]
DipeshCS has quit [Ping timeout: 250 seconds]
Rishabh_ has quit [Ping timeout: 260 seconds]
Rishabh has joined #mlpack
Nilabhra has quit [Remote host closed the connection]
Rishabh is now known as Guest4391
Guest4391 has quit [Remote host closed the connection]
Rishabh_ has joined #mlpack
wasiq has quit [Ping timeout: 248 seconds]
ank_95_ has quit [Quit: Connection closed for inactivity]
chrishenx has quit [Ping timeout: 248 seconds]
Bartek has quit [Ping timeout: 248 seconds]
wiking has joined #mlpack
Bartek has joined #mlpack
Rishabh_ has quit [Quit: No Ping reply in 180 seconds.]
Rishabh has joined #mlpack
Rishabh is now known as Guest94490
miku has quit [Ping timeout: 250 seconds]
lolsec_ has joined #mlpack
awhitesong has joined #mlpack
chrishenx has joined #mlpack
mentekid has quit [Remote host closed the connection]
agobin has quit []
agobin has joined #mlpack
lolsec_ has quit [Ping timeout: 250 seconds]
Jim___ has quit [Quit: Page closed]
Bartek has quit [Ping timeout: 252 seconds]
chrishenx has quit [Ping timeout: 268 seconds]
anatolemoreau2 has quit [Ping timeout: 276 seconds]