ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< zoq>
rcurtin: Will open an issue.
< rcurtin>
sounds good
< rcurtin>
let me check if I can reproduce it first
< rcurtin>
s/first/also/
< rcurtin>
interesting, it doesn't happen with every dataset, but it does with optdigits
aman_p has joined #mlpack
aloo has joined #mlpack
aloo has quit [Client Quit]
KimSangYeon-DGU has joined #mlpack
ahmedmaheralamir has joined #mlpack
ahmedalamir has quit [Ping timeout: 268 seconds]
aman_p has quit [Ping timeout: 245 seconds]
ayesdie has joined #mlpack
aman_p has joined #mlpack
Medabid has joined #mlpack
Medabid has quit [Client Quit]
KimSangYeon-DGU_ has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
< rajiv_>
zoq: After I added Dense<> in layer_types.hpp, in using LayerTypes = boost::variant<...>, I still get a similar error: https://www.dropbox.com/s/1fza02w3j1024rv/build_log.txt?dl=0 I used arma::cube instead of arma::mat in the Dense class, as I had to concatenate slices, for which I used arma::join_slices. Could that be an issue?
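For context, a minimal sketch of the boost::variant pattern under discussion follows; the Linear/Dense structs and the visitor below are hypothetical stand-ins, not mlpack's actual layer_types.hpp. One plausible source of such build errors is a visitor that was never updated to handle the newly added type (or its arma::cube member).

// Minimal sketch of the boost::variant layer pattern (hypothetical types,
// not mlpack's actual layer_types.hpp).
#include <boost/variant.hpp>
#include <armadillo>
#include <iostream>

// Two toy "layers"; the real mlpack layers are templates stored as pointers.
struct Linear { arma::mat weights; };
struct Dense  { arma::cube weights; };  // arma::cube instead of arma::mat --
                                        // every visitor must handle this type.

using LayerTypes = boost::variant<Linear*, Dense*>;

// A visitor that touches each layer's parameters; if any visitor lacks an
// overload for the new type, the compiler emits errors for every use site.
struct ParameterCount : boost::static_visitor<size_t>
{
  size_t operator()(Linear* l) const { return l->weights.n_elem; }
  size_t operator()(Dense* d)  const { return d->weights.n_elem; }
};

int main()
{
  Dense d{arma::cube(2, 2, 3, arma::fill::ones)};
  LayerTypes layer = &d;
  std::cout << boost::apply_visitor(ParameterCount(), layer) << std::endl;
}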
rajiv_ has quit [Quit: Page closed]
AyushS has joined #mlpack
hetshah has quit [Ping timeout: 256 seconds]
AyushS has quit [Quit: Page closed]
AyushSAyushS has joined #mlpack
kaushik_ has joined #mlpack
AyushSAyushS has quit [Ping timeout: 256 seconds]
ayesdie has quit [Quit: Connection closed for inactivity]
< ShikharJ>
rcurtin: I think they're the standard practice for any C++-based project. Plus, it would double up as a good first issue.
< zoq>
ShikharJ: Personally, I think the C++ cast style is somewhat bloated, especially for numeric casts.
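For illustration, the two cast styles being weighed, in a trivial self-contained example; nothing mlpack-specific is assumed here.

#include <cstddef>

int main()
{
  double ratio = 0.75;
  // C++-style cast: explicit and easy to grep, but verbose for plain numerics.
  std::size_t a = static_cast<std::size_t>(ratio * 100);
  // C-style cast: terser, which is the "bloat" trade-off being debated.
  std::size_t b = (std::size_t)(ratio * 100);
  return (a == b) ? 0 : 1;
}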
aman_p has joined #mlpack
anuragsarkar250 has joined #mlpack
< anuragsarkar250>
join
< anuragsarkar250>
Hello everyone, I'm Anurag. I was exploring the different mlpack project ideas and would love to discuss them further.
Shubhangi has quit [Ping timeout: 256 seconds]
anuragsarkar250 has quit [Quit: Page closed]
siddhant has joined #mlpack
vaibhav_smooth_o has joined #mlpack
aman_p has quit [Ping timeout: 268 seconds]
xyz_ has joined #mlpack
vaibhav_smooth_o has quit [Quit: Page closed]
xyz_ has quit [Client Quit]
< siddhant>
Hi everyone. I would like to work on the mlpack-to-TensorFlow translator project.
vivekp has quit [Ping timeout: 255 seconds]
vivekp has joined #mlpack
siddhant has quit [Ping timeout: 256 seconds]
favre49 has joined #mlpack
Nisarg has joined #mlpack
mayuri has joined #mlpack
< mayuri>
Hi
< mayuri>
I wanted to know how to contact the mentor of the project I am interested in, and how to contribute to it.
kaushik_ has quit [Quit: Connection closed for inactivity]
< Nisarg>
I read a few research papers on constrained Particle Swarm Optimisation and also on multi-objective problems. I want to send a proposal for this project. As mentioned in the ideas list, a good GSoC proposal should contain the possible places where the code has to be changed. Please guide me on the best way to go about it. Also, could you please send the format for the GSoC proposal to my email ID nisargbipinshah171me15
< favre49>
zoq: I noticed in a mail that you mentioned that NEAT would have to work with arbitrary functions. However, in most of the "learning to learn" papers you pointed me towards, they take a history of gradients as an input, or use CPPNs. How would we make NEAT work with arbitrary functions?
< favre49>
The only way I can think of right now is if you were looking for just a direct mapping from input to output through NEAT; that couldn't be used for a different objective or a different starting point.
waahm7 has joined #mlpack
favre49 has quit [Quit: Page closed]
Nisarg has quit [Ping timeout: 256 seconds]
waahm7 has quit [Quit: Page closed]
Deepika has joined #mlpack
Deepika is now known as Guest87421
Guest87421 has quit [Client Quit]
ahmedmaheralamir has quit [Ping timeout: 240 seconds]
mayuri_ has joined #mlpack
mayuri has quit [Ping timeout: 256 seconds]
Omar has joined #mlpack
mayuri has joined #mlpack
< mayuri>
Hi
< mayuri>
I want to know how to contribute to the project idea in which I am interested
< mayuri>
If anyone could please tell me!
mayuri_ has quit [Ping timeout: 256 seconds]
ahmedmaher has joined #mlpack
Omar has quit [Quit: Page closed]
< zoq>
favre49: You are right, the arbitrary function type didn't make sense in this case; I think you would have to write another optimizer that wraps NEAT, since the usage is somewhat different.
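As a rough sketch of what "another optimizer that wraps NEAT" could look like, assuming the conventional Optimize(function, iterate) signature; the NEATType interface, its Train() and BestGenome() methods, and the fitness negation are all hypothetical illustrations, not a real mlpack API.

// Hypothetical sketch: an optimizer-style wrapper that delegates to NEAT.
// Only the Optimize() signature follows the usual mlpack/ensmallen
// convention; everything on NEATType is invented for illustration.
#include <armadillo>

template<typename NEATType>
class NEATOptimizer
{
 public:
  NEATOptimizer(NEATType& neat) : neat(neat) { }

  // Evaluate candidate parameter matrices via f.Evaluate() and leave the
  // best genome found by evolution in `iterate`.
  template<typename FunctionType>
  double Optimize(FunctionType& f, arma::mat& iterate)
  {
    // Use the function's Evaluate() as the (negated) fitness for evolution.
    auto fitness = [&](const arma::mat& candidate)
        { return -f.Evaluate(candidate); };
    neat.Train(fitness, iterate.n_elem);   // hypothetical API
    iterate = neat.BestGenome();           // hypothetical API
    return f.Evaluate(iterate);
  }

 private:
  NEATType& neat;
};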
< zoq>
anuragsarka: Please feel free to ask questions here or over the mailing list.
< zoq>
rajiv_: What does the type you added to boost::variant look like?
< zoq>
siddhant: Feel free to send an application.
favre49 has joined #mlpack
< favre49>
zoq: Thanks for the clarification :) I'll start looking for something applicable
favre49 has quit [Client Quit]
< zoq>
favre49: As for now, we could focus on arbitrary functions.
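For readers unfamiliar with the term: an "arbitrary function" in this context is just any class exposing Evaluate() and Gradient(). A minimal self-contained example follows, assuming the ensmallen-style optimizer API; the SimpleQuadratic class is invented for illustration.

// A minimal differentiable "arbitrary function": any class exposing
// Evaluate() and Gradient() like this can be handed to the optimizers.
#include <ensmallen.hpp>

class SimpleQuadratic
{
 public:
  // f(x) = ||x - 3||^2, minimized at x = 3.
  double Evaluate(const arma::mat& x)
  { return arma::accu(arma::square(x - 3.0)); }

  void Gradient(const arma::mat& x, arma::mat& g)
  { g = 2.0 * (x - 3.0); }
};

int main()
{
  SimpleQuadratic f;
  arma::mat coordinates(3, 1, arma::fill::randu);
  ens::L_BFGS lbfgs;
  lbfgs.Optimize(f, coordinates);  // coordinates converge toward 3.
  coordinates.print("minimizer:");
}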
favre49 has joined #mlpack
favre49_ has joined #mlpack
favre49 has quit [Ping timeout: 256 seconds]
< favre49_>
zoq: I understand this is in a way spoonfeeding, but are there any papers you can point me towards that do this sort of thing? I've been struggling to find any. Even NEMO has no implementations as of yet, so it seems risky to implement without having an idea of its real performance.
favre49_ has quit [Client Quit]
riaash04 has joined #mlpack
< riaash04>
Hi, I went through Bang Liu's implementation of NEAT (not very deeply, just to understand the flow and major functions), and it seems very good. I was wondering, though: is it expected to build on that (like Kartik did), or could a completely new implementation be done for this year's GSoC idea?
mayuri has quit [Ping timeout: 256 seconds]
< riaash04>
Also, in the proposal, should I mention my PR (if it gets merged), even though it's not directly related to this project?
riaash04 has quit [Quit: Page closed]
< rcurtin>
favre49_: I think for a lot of the optimizers that we implement, we don't actually have a guarantee that they will work well
< rcurtin>
here's a fun fact: the vast, vast majority of papers introducing new optimizer techniques compare against other optimizers using only a *single* random initialization
< rcurtin>
therefore, even though their optimizer may converge faster for *that particular starting point* they use in their experiments, there's no understanding of whether it will do so across lots of different problems
< rcurtin>
so for a lot of the things we implement, we can't be totally sure they are good, and we have to do some comparisons and simulations afterwards :)
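To make that concrete, a hedged sketch of the kind of comparison rcurtin means: the same toy objective, many random starting points, and mean final objective values. The Bowl function and trial count are arbitrary illustrations, assuming the ensmallen API; this is not mlpack's actual benchmarking code.

#include <ensmallen.hpp>
#include <iostream>

// A toy objective: f(x) = sum_i (i + 1) * x_i^2 (anisotropic bowl).
class Bowl
{
 public:
  double Evaluate(const arma::mat& x)
  {
    double sum = 0.0;
    for (size_t i = 0; i < x.n_elem; ++i)
      sum += (i + 1) * x(i) * x(i);
    return sum;
  }

  void Gradient(const arma::mat& x, arma::mat& g)
  {
    g.set_size(arma::size(x));
    for (size_t i = 0; i < x.n_elem; ++i)
      g(i) = 2.0 * (i + 1) * x(i);
  }
};

int main()
{
  const size_t trials = 50;
  double gdTotal = 0.0, lbfgsTotal = 0.0;
  Bowl f;

  for (size_t t = 0; t < trials; ++t)
  {
    // Use the same random starting point for both optimizers in each trial.
    arma::mat start(5, 1, arma::fill::randn);

    arma::mat x1 = start, x2 = start;
    ens::GradientDescent gd(0.1, 200, 1e-10);
    ens::L_BFGS lbfgs;
    gdTotal += gd.Optimize(f, x1);
    lbfgsTotal += lbfgs.Optimize(f, x2);
  }

  std::cout << "mean final objective over " << trials << " starts:\n"
            << "  gradient descent: " << gdTotal / trials << "\n"
            << "  L-BFGS:           " << lbfgsTotal / trials << "\n";
}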
shadowfox has joined #mlpack
< shadowfox>
is knowing ML a prerequisite for your projects?
Xyz_ has joined #mlpack
< shadowfox>
?
< shadowfox>
there?
shadowfox has quit [Quit: Page closed]
Xyz_ has quit [Ping timeout: 256 seconds]
junaidnz has joined #mlpack
junaidnz has quit [Client Quit]
ahmedmaher has quit [Read error: Connection reset by peer]