ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
rf_sust2018 has joined #mlpack
rf_sust2018 has quit [Ping timeout: 255 seconds]
i8hantanu has joined #mlpack
Shantanu has joined #mlpack
Shantanu has quit [Client Quit]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 240 seconds]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
i8hantanu has quit [Quit: Connection closed for inactivity]
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 264 seconds]
i8hantanu has joined #mlpack
Soonmok has joined #mlpack
< Soonmok>
Is there a way to get feedback on my proposals from mlpack contributors? I've shared my GSoC proposals, and I wonder if I have to just wait or email someone.
< zoq>
Soonmok: Hello, you'll have to wait; we will give feedback once we have a chance.
< Soonmok>
Thank you! I will wait.
< zoq>
rajiv_: Already glanced over the proposal, but haven't left any comments yet.
i8hantanu has quit [Quit: Connection closed for inactivity]
gauravcr7rm has joined #mlpack
< gauravcr7rm>
zoq: Hey Marcus, I have already uploaded my proposal on my dashboard, and I have given you access to comment on it, as you asked. It would be great if you could give some feedback on it. Thanks in advance :)
gauravcr7rm has quit [Ping timeout: 256 seconds]
pd09041999 has joined #mlpack
pd09041999 has quit [Ping timeout: 246 seconds]
seewishnew has joined #mlpack
LAYMANN has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
LAYMANN has quit [Client Quit]
vallabh007 has joined #mlpack
vallabh007 has quit [Client Quit]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 268 seconds]
abhinavsagar has joined #mlpack
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
sreenik has joined #mlpack
seewishnew has joined #mlpack
pd_09041999 has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
pd09041999 has quit [Ping timeout: 245 seconds]
pd_09041999 has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
saksham189 has joined #mlpack
i8hantanu has joined #mlpack
robb9 has joined #mlpack
< robb9>
how do I specify my own error function for an NN? Like my own MSE?
< zoq>
mean_squared_error.hpp and mean_squared_error_impl.hpp are the files for the MSE loss function
< robb9>
oh. They were both .hpp so I thought they were just header files
< robb9>
thanks
robb9 has quit [Quit: Page closed]
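A minimal sketch of the kind of class zoq is pointing at, modeled loosely on mean_squared_error.hpp / mean_squared_error_impl.hpp: the class name MyMeanSquaredError and the exact argument types are assumptions here, and the real signatures differ between mlpack versions, so the current headers are the reference.

    // Hypothetical custom loss following the Forward()/Backward() layout of
    // mlpack's MSE loss; only Armadillo is needed for this standalone sketch.
    #include <armadillo>

    class MyMeanSquaredError
    {
     public:
      // Forward(): return the scalar loss for a prediction/target pair
      // (plain mean squared error here).
      double Forward(const arma::mat& prediction, const arma::mat& target)
      {
        return arma::accu(arma::square(prediction - target)) / target.n_elem;
      }

      // Backward(): fill 'output' with the derivative of the loss with
      // respect to the prediction -- the "error w.r.t. the current input"
      // that backpropagation starts from.
      void Backward(const arma::mat& prediction,
                    const arma::mat& target,
                    arma::mat& output)
      {
        output = 2.0 * (prediction - target) / target.n_elem;
      }
    };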
paul_mlpack has joined #mlpack
pd09041999 has quit [Ping timeout: 250 seconds]
robb9 has joined #mlpack
< robb9>
What is the Backward() function for?
pd09041999 has joined #mlpack
< robb9>
what if I just want to define a "fitness" function for how far it got on a certain task? What would I make Backward() do?
< zoq>
you have to return an error; one idea might be to take the maximum goal minus the achievement.
< robb9>
I thought Forward() determined the error
pd09041999 has quit [Ping timeout: 255 seconds]
< zoq>
true, in your setting this would be the same
< robb9>
ah, so I can just return the same output for both
< robb9>
I'm probably going to do (1/"fitness")
< robb9>
so higher fitness means lower error
< robb9>
because technically there's no maximum fitness value
< robb9>
zoq: thanks
pd09041999 has joined #mlpack
robb9 has quit [Ping timeout: 256 seconds]
Soonmok has quit [Quit: Connection closed for inactivity]
robb9 has joined #mlpack
< robb9>
sorry, I have another question. I'm assuming Backward() helps the network learn, so how would backprop work if the output (error) is the same for Forward() and Backward()?
pd09041999 has quit [Ping timeout: 250 seconds]
< robb9>
do the ensmallen optimizers use the error function I provided automatically, or do I have to explicitly state it?
krgopal has joined #mlpack
< krgopal>
HELP
pd09041999 has joined #mlpack
krgopal has quit [Quit: Page closed]
i8hantanu has quit [Quit: Connection closed for inactivity]
Shubhangi has joined #mlpack
< robb9>
What if I want to evaluate the overall network, and I can't do it incrementally like Evaluate() does?
< robb9>
because using my fitness function I cannot return the "error" of a single specified data point
pd09041999 has quit [Ping timeout: 246 seconds]
sreenik has quit [Quit: Page closed]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
robb9 has quit [Ping timeout: 256 seconds]
robb9 has joined #mlpack
ReemGody has joined #mlpack
< ReemGody>
hello everyone. I just want to apologize: I was having problems with my mail application because it required activation, so when I activated it, it sent all the mails I had written. You may find the same mail sent from me multiple times on the mailing list.
< ReemGody>
Really sorry
< ReemGody>
I have a question. I was exploring the tutorials in the library's repo and found that there were no tutorials for using the reinforcement learning API, so I thought maybe I could add a tutorial for it as I learn about the functionality.
< ReemGody>
I could submit a pull request with that if you like the idea
ReemGody has quit [Quit: Page closed]
Shubhangi has quit [Ping timeout: 256 seconds]
paul_mlpack has quit [Quit: Page closed]
< zoq>
ReemGody: Hello, if you'd like to write something up, please feel free.
< zoq>
robb9: ensmallen will only see the function you pass; depending on the optimizer, it calls Evaluate(...), which returns the loss, and Gradient(...).
robb9 has quit [Ping timeout: 256 seconds]
< zoq>
robb9: If you set batchSize = number of samples, it could work.
< zoq>
robb9: Backward() should return the error w.r.t. the current input; I think if you take a look at the MSE layer it might become clearer.
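For reference, roughly the shape of a function an ensmallen optimizer can consume via Evaluate(...) and Gradient(...), as described above; the quadratic objective and the choice of ens::L_BFGS are illustrative assumptions. SGD-style optimizers instead expect a batched Evaluate/Gradient taking a begin index and a batchSize, and setting batchSize to the number of samples gives the whole-dataset evaluation mentioned above.

    // Standalone sketch: a differentiable objective f(x) = ||x - 1||^2
    // handed directly to an ensmallen optimizer.
    #include <ensmallen.hpp>

    class QuadraticFunction
    {
     public:
      // Evaluate(): return the loss at the given coordinates.
      double Evaluate(const arma::mat& x)
      {
        return arma::accu(arma::square(x - 1.0));
      }

      // Gradient(): fill 'g' with the gradient of the loss at 'x'.
      void Gradient(const arma::mat& x, arma::mat& g)
      {
        g = 2.0 * (x - 1.0);
      }
    };

    int main()
    {
      QuadraticFunction f;
      arma::mat coordinates(3, 1, arma::fill::randu);

      // The optimizer only sees 'f'; it calls Evaluate() and Gradient()
      // itself, so the loss does not have to be stated anywhere else.
      ens::L_BFGS optimizer;
      optimizer.Optimize(f, coordinates);

      coordinates.print("optimum (should be all ones)");
      return 0;
    }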
< zoq>
Really like the new stackoverflow design :)
< rcurtin>
the sparkles help me concentrate on the answer :)