ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< rcurtin>
mulx10: thanks again for the fix, I can see that the Python imports are correct now that they are regenerated:
jeffin has quit [Read error: Connection reset by peer]
rf_sust2018 has quit [Quit: Leaving.]
rf_sust2018 has joined #mlpack
rf_sust2018 has quit [Quit: Leaving.]
rf_sust2018 has joined #mlpack
rf_sust2018 has quit [Ping timeout: 245 seconds]
rf_sust2018 has joined #mlpack
rf_sust2018 has quit [Quit: Leaving.]
jeffin has joined #mlpack
seewishnew has joined #mlpack
pd09041999 has quit [Ping timeout: 246 seconds]
jeffin has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Ping timeout: 245 seconds]
rf_sust2018 has joined #mlpack
< rcurtin>
nice, succeeding nightly and weekly builds
< rcurtin>
the monthly build looks like it hung up on just one job
< rcurtin>
mulx10: sure, I'll try and take a look when I have a chance
pd09041999 has joined #mlpack
IRC-Source_66359 has joined #mlpack
IRC-Source_66359 has quit [Client Quit]
atulim has joined #mlpack
rf_sust20181 has joined #mlpack
< atulim>
@rcurtin @zoq After going through the mailing list, I would like to ask how many pages the GSoC proposal should be, since I know that you won't read much if it's lengthy. Can you please specify? It would be helpful.
rf_sust2018 has quit [Ping timeout: 245 seconds]
rf_sust2018 has joined #mlpack
rf_sust20181 has quit [Ping timeout: 246 seconds]
< zoq>
atulim: We will read every application regardless of the number of pages, but ideally you can keep it short; if the application covers everything from the application guide and you feel you added all the "necessary" information, you are good to go.
< atulim>
thank you @zoq
jeffin143 has joined #mlpack
< atulim>
@rcurtin if you have time: I completed that pull request three to four days back, sir; kindly take a look at it.
< rcurtin>
atulim: like I've said before, I'll take a look when I am able to. there are now more than 60 PRs open...
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
< sreenik>
I had a short query. The activation functions are spread around, with some of them having typedefs in base_layer.hpp
< sreenik>
I mean to ask, why is it done that way? Moreover, some are defined as layers too.
seewishnew has quit [Ping timeout: 250 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Excess Flood]
pd09041999 has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
< zoq>
sreenik: The typedefs that are in the base activation layer don't take additional parameters at construction time, so they can all be combined (to keep the code size low).
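To illustrate the pattern zoq describes, here is a rough, self-contained sketch (the real LogisticFunction, BaseLayer, and friends live in mlpack's ann code and differ in detail): parameter-free activations expose static functions, so one generic layer can wrap them all and the named "layers" are just typedefs of it.

#include <armadillo>

// Toy stand-ins for mlpack's activation-function classes; each only needs
// a static Fn() (and, in the real code, a derivative) and no constructor state.
class LogisticFunction
{
 public:
  static void Fn(const arma::mat& x, arma::mat& y) { y = 1.0 / (1.0 + arma::exp(-x)); }
};

class TanhFunction
{
 public:
  static void Fn(const arma::mat& x, arma::mat& y) { y = arma::tanh(x); }
};

// One generic layer: nothing is passed at construction time, so every
// parameter-free activation can share this single implementation.
template<class ActivationFunction>
class BaseLayer
{
 public:
  void Forward(const arma::mat& input, arma::mat& output)
  {
    ActivationFunction::Fn(input, output);
  }
};

// The named layers are then only typedefs, which keeps the code size low.
using SigmoidLayer = BaseLayer<LogisticFunction>;
using TanHLayer = BaseLayer<TanhFunction>;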
< sreenik>
Oh I see :>
pd09041999 has quit [Ping timeout: 246 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
seewishnew has quit [Remote host closed the connection]
pd09041999 has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 268 seconds]
pd09041999 has quit [Ping timeout: 245 seconds]
saksham189 has joined #mlpack
pd09041999 has joined #mlpack
vivekp has joined #mlpack
pd09041999 has quit [Ping timeout: 268 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
rf_sust2018 has quit [Quit: Leaving.]
pd09041999 has joined #mlpack
rob has joined #mlpack
rob is now known as Guest36667
< Guest36667>
can the existing neuroevolution genetic algorithms (like CNE) included in ensmallen be used to tune hyperparameters for neural networks? Like the number of layers, etc.
< Guest36667>
because from what I can see, they're basically used as an alternative to backprop for training, just using GAs to adjust the weights and biases
< zoq>
Guest36667: Hello, that's not implemented.
< Guest36667>
I think there was a PR a few years ago for 2.0.1 or something
< Guest36667>
not sure what happened to it
< Guest36667>
probably incomplete
< zoq>
Guest36667: Probably using NEAT instead of CNE.
< Guest36667>
Yeah I think so
< Guest36667>
because NEAT evolves both topologies and weights
< zoq>
Guest36667: Right, perhaps we can finish this one over the summer (GSoC).
< ShikharJ>
KimSangYeon-DGU: Ack, sorry I had been very busy lately. I'll get back on this.
< Guest36667>
zoq: do you think it would be put in ensmallen or just mlpack?
< zoq>
Guest36667: NEAT would be part of ensmallen, but the tuning would be part of mlpack; integration into the existing hpt model would be great.
< Guest36667>
zoq: sweet. looking forward to it if it does happen
< Guest36667>
in the meantime I will see if that PR is actually buildable
< KimSangYeon-DGU>
ShikharJ: No problem!! I know you are so busy. :)
< KimSangYeon-DGU>
Thanks for your hard work!!
Guest36667 has quit [Quit: Page closed]
pd09041999 has quit [Quit: Leaving]
sreenik has quit [Quit: Page closed]
saksham189 has quit [Ping timeout: 256 seconds]
saksham189 has joined #mlpack
< ShikharJ>
KimSangYeon-DGU: Yeah, it's okay, since no one has a strong opinion on it.
jeffin143 has quit [Remote host closed the connection]
robb_ has joined #mlpack
< robb_>
is it possible to specify my own fitness function for an ensmallen optimizer?
saksham189 has quit [Ping timeout: 256 seconds]
< rcurtin>
robb_: yeah, that's the whole idea, just call Optimize() with the function you'd like to minimize
< rcurtin>
it needs to have an Evaluate() and Gradient() function (maybe more or less depending on what optimizer you want to use)
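For reference, a minimal sketch of what rcurtin describes: a class exposing Evaluate() and Gradient(), handed to an ensmallen optimizer's Optimize(). The quadratic objective and the choice of L_BFGS below are only illustrative.

#include <ensmallen.hpp>

// A toy differentiable objective: f(x) = x'x, minimized at the origin.
class QuadraticFunction
{
 public:
  double Evaluate(const arma::mat& x) { return arma::dot(x, x); }
  void Gradient(const arma::mat& x, arma::mat& g) { g = 2 * x; }
};

int main()
{
  QuadraticFunction f;
  arma::mat coordinates("1.0; -2.0; 3.0");  // starting point

  ens::L_BFGS optimizer;
  optimizer.Optimize(f, coordinates);  // coordinates now hold the minimizer

  coordinates.print("solution:");
}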
< KimSangYeon-DGU>
ShikharJ: Thanks! :)
< robb_>
rcurtin: Got it, thanks
robb_ has quit [Quit: Page closed]
jeffin143 has joined #mlpack
robb8 has joined #mlpack
< robb8>
where can I find an example of using an ensmallen optimizer to optimize a neural network? any test code possibly?
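No pointer to test code appears here, but a rough sketch of the usual pattern, assuming an mlpack version whose FFN::Train() accepts an ensmallen optimizer (the layer sizes, Adam settings, and random data below are placeholders):

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>
#include <ensmallen.hpp>

using namespace mlpack::ann;

int main()
{
  // Placeholder data: 10-dimensional inputs, 1-dimensional targets.
  arma::mat trainData(10, 1000, arma::fill::randu);
  arma::mat trainTargets(1, 1000, arma::fill::randu);

  // A small feedforward network using parameter-free sigmoid activations.
  FFN<MeanSquaredError<>> model;
  model.Add<Linear<>>(10, 8);
  model.Add<SigmoidLayer<>>();
  model.Add<Linear<>>(8, 1);

  // Any ensmallen optimizer meeting the network's requirements can be passed in.
  ens::Adam optimizer(0.01 /* step size */, 32 /* batch size */);
  model.Train(trainData, trainTargets, optimizer);
}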