verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
kris1 has joined #mlpack
mikeling has joined #mlpack
partobs-mdp has joined #mlpack
kris1 has quit [Quit: kris1]
< partobs-mdp>
How can I extract the gradient from the SGD optimizer history? (a complete list of gradients, or maybe extract it on every iteration)
< partobs-mdp>
(I need it for unit-testing gradient clipping)
< partobs-mdp>
(Why would I ever want to send a message to MYSELF in a public chat room oO)
< zoq>
partobs-mdp: Unfortunately, there is no option to get the gradient from the optimizer, but I think you could just test the clipping policy by passing a generated gradient.
mikeling has quit [Quit: Connection closed for inactivity]
kris1 has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
< lozhnikov>
kris1: I just looked through your blog post. Actually, the 8th week is over, so I suggest replacing "Week 7" with "Week 8".
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
< kris1>
Okay, I will change that accordingly.
< partobs-mdp>
zoq: I thought about it a bit and found a way to write a proper unit test for clipping. True, we can't sneak a peek at the gradient inside SGD, but we're not forced to use the SGD interface - we can just call the Update methods directly and see if they correctly update the parameter values.
< partobs-mdp>
zoq: For vanilla updates it works
< partobs-mdp>
Compiling & testing for momentum updates
< partobs-mdp>
About the CrossEntropy test: should it be a test with a simple neural network, just to check that it works end-to-end, or should it be a test checking the outputs for given inputs (e.g., checking that a [.5, .5, .5, .5] input yields a 4 ln(2) error value)?
< partobs-mdp>
What do you think?
< zoq>
partobs-mdp: Right, the optimizer class is basically just a wrapper for the parameters, the gradient, and some additional information about the optimization process.
< zoq>
partobs-mdp: For me, testing it on some generated data is fine, e.g. as you said, test it on [.5, .5, .5, .5], but if you like, you could also add another test that uses a simple network.
< partobs-mdp>
zoq: And where can I find a test for MeanSquareError? I couldn't find anything like that in ann_layer_test.hpp.
< zoq>
partobs-mdp: I think there is no test, I should fix that.
< partobs-mdp>
zoq: When you write a test for MeanSquaredError, could you show it to me? I can't quite figure out how to write a test for CrossEntropy (a MeanSquaredError test would really help, since they're both loss functions)
< zoq>
partobs-mdp: Okay, I can write a test later today, should be straightforward to adapt the test for the CE layer.
partobs-mdp has quit [Remote host closed the connection]
partobs-mdp has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
shikhar has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
sntgpt has joined #mlpack
< sntgpt>
hi
< sntgpt>
I would like to contribute in some manner. I am not sure how to start.
< ironstark>
zoq: I was having a look at dlib-ml's documentation
< zoq>
ironstark: Sounds good, what do you think?
< ironstark>
They only have svm and kmeans implemented, is that correct?
< zoq>
I think I have seen neighbor search and logistic regression as well.
< zoq>
If we go for dlib-ml we should make some sort of list.
< ironstark>
http://dlib.net/ml.html Here I can see only the SVMs and some clustering methods
< zoq>
ironstark: Also, I think we don't have to implement "everything" at least not now.
< zoq>
rcurtin: Do you have any method preferences?
< zoq>
ironstark: I'll have to step out, I'll take a look at the link once I get back.
< ironstark>
zoq: Sure. In the meantime, I will work on understanding the library. Also, should we implement the benchmarks in C++ or Python?
< ironstark>
as both are available
shikhar has quit [Read error: Connection reset by peer]
sntgpt has quit [Quit: Page closed]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
partobs-mdp has quit [Read error: Connection reset by peer]
shikhar has joined #mlpack
< zoq>
ironstark: So looks like there is a function for approximate nearest neighbors search and nearest neighbors search: http://dlib.net/term_index.html
< rcurtin>
zoq: ironstark: sorry for the slow responses, I was unavailable from Friday through Sunday and I am just now catching up
< rcurtin>
I think benchmarking R is important but I don't think the order is important so if you do dlib-ml before R that's fine with me
< rcurtin>
give me a little while and I will have an in-depth response for what methods I think are good to benchmark from dlib-ml
< zoq>
rcurtin: Sounds good, I guess kmeans, neighbor search, "svm" are methods that I think would be interesting.
< zoq>
About Python or C++: I would go with C++, but if you think Python is easier, go with Python.
< ironstark>
rcurtin: zoq: So I will start working on these three implementations first.
< ironstark>
About Python or C++: I will look up the documentation and implementation examples and implement accordingly.
shikhar has quit [Ping timeout: 260 seconds]
shikhar has joined #mlpack
shikhar has quit [Read error: Connection reset by peer]
< rcurtin>
ironstark: there is a deep neural net toolkit too, so maybe the MLP implementation could be useful also
< ironstark>
rcurtin: okay, I will look into it.
< rcurtin>
the other three ideas would be good, but for nearest neighbor search I would use all of dlib's variants: the exact NN search, the approximate one, and the LSH one (it has three implementations)
< ironstark>
okay
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
chenzhe has joined #mlpack
chenzhe has quit [Remote host closed the connection]