verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
alsc has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
vivekp has joined #mlpack
witness has quit [Quit: Connection closed for inactivity]
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
alsc has quit [Quit: alsc]
alsc has joined #mlpack
< alsc> rcurtin: the KNN part with kernel options is more or less here. it was prettier with just the “rectangular” kernel, but I had to use maps in the end… surely there is a better method, but I had to code this quickly
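(A rough illustration of the kind of map-based kernel dispatch alsc seems to describe; the kernel names, bandwidths, and overall structure below are assumptions, not alsc's actual code. It uses mlpack's KNN and kernel classes as they were named in the mlpack 2/3 era.)

    #include <mlpack/core.hpp>
    #include <mlpack/core/kernels/gaussian_kernel.hpp>
    #include <mlpack/core/kernels/epanechnikov_kernel.hpp>
    #include <mlpack/methods/neighbor_search/neighbor_search.hpp>

    #include <functional>
    #include <map>
    #include <string>

    using namespace mlpack;

    int main()
    {
      arma::mat reference = arma::randu<arma::mat>(3, 1000);  // 1000 3-d points.
      arma::mat query = arma::randu<arma::mat>(3, 10);

      // Map from kernel name to a weight function of the neighbour distance;
      // "rectangular" reduces to plain (unweighted) KNN.
      kernel::GaussianKernel gaussian(0.5);
      kernel::EpanechnikovKernel epan(1.0);
      std::map<std::string, std::function<double(double)>> kernels = {
        { "rectangular",  [](double) { return 1.0; } },
        { "gaussian",     [&](double d) { return gaussian.Evaluate(d); } },
        { "epanechnikov", [&](double d) { return epan.Evaluate(d); } }
      };

      neighbor::KNN knn(reference);
      arma::Mat<size_t> neighbors;
      arma::mat distances;
      knn.Search(query, 5, neighbors, distances);

      // Weight each neighbour's contribution by the chosen kernel.
      const auto& weight = kernels.at("gaussian");
      double total = 0.0;
      for (size_t q = 0; q < query.n_cols; ++q)
        for (size_t i = 0; i < neighbors.n_rows; ++i)
          total += weight(distances(i, q));

      return 0;
    }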
< alsc> when I come back at the end of next week we could decide whether it's worth doing some work for a possible PR
< alsc> the other two things I'll plug in quickly will be an “extern bool stopNow;” in scd_impl.h, because I need the possibility of stopping the descent at some point even if the tolerance hasn't been met, plus a parameter that sets the number of iterations for which the difference between consecutive losses must remain under the tolerance… as it is now, the loop very often returns because there has been one single case of the same loss, even though the loss is still high and descending quickly
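(A minimal sketch of that second idea, written as a generic descent loop rather than mlpack's actual SCD code; the name tolerancePatience and the loop structure are assumptions. The point is that termination only happens once the loss difference has stayed under the tolerance for several consecutive iterations, instead of after a single flat iteration.)

    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <limits>

    double Optimize(const std::function<double(size_t)>& descentStep,
                    const size_t maxIterations,
                    const double tolerance,
                    const size_t tolerancePatience)
    {
      double lastLoss = std::numeric_limits<double>::infinity();
      size_t flatIterations = 0;  // Consecutive iterations under the tolerance.

      for (size_t i = 0; i < maxIterations; ++i)
      {
        const double loss = descentStep(i);  // One descent update; returns loss.

        // Count how long the improvement has stayed below the tolerance.
        if (std::abs(lastLoss - loss) < tolerance)
          ++flatIterations;
        else
          flatIterations = 0;  // Progress resumed; reset the counter.

        // Only terminate after the plateau has persisted long enough.
        if (flatIterations >= tolerancePatience)
          return loss;

        lastLoss = loss;
      }
      return lastLoss;
    }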
< alsc> so if interested, we could decide whether these are worthwhile features… the “extern” hack could be done better with an overridable callback that gives access to the model and the possibility of saving it
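(A minimal sketch of that callback alternative; the OptimizeWithCallback name and the callback signature are hypothetical, not an existing mlpack interface. The caller gets the current iterate each iteration and returns true to request an early stop, e.g. after saving a checkpoint.)

    #include <armadillo>
    #include <cstddef>
    #include <functional>

    // Called once per iteration; return true to stop the optimization early.
    using IterationCallback =
        std::function<bool(const arma::mat& iterate, size_t iteration, double loss)>;

    template<typename FunctionType>
    double OptimizeWithCallback(FunctionType& function,
                                arma::mat& iterate,
                                const size_t maxIterations,
                                const IterationCallback& callback)
    {
      double loss = function.Evaluate(iterate);
      for (size_t i = 0; i < maxIterations; ++i)
      {
        // ... one coordinate-descent update of `iterate` would go here ...
        loss = function.Evaluate(iterate);

        // Hand the current model to the caller; a `true` return stops early.
        if (callback && callback(iterate, i, loss))
          break;
      }
      return loss;
    }

    // Example use: save the parameters and stop when an external flag flips,
    // instead of reaching into the optimizer with an extern variable.
    //   bool stopRequested = false;  // set from another thread or a signal handler
    //   OptimizeWithCallback(f, params, 10000,
    //       [&](const arma::mat& p, size_t, double)
    //       {
    //         if (stopRequested)
    //           p.save("checkpoint.bin");
    //         return stopRequested;
    //       });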
< alsc> see u at the end of next week then
< alsc> * I'd use that to monitor the validation loss in parallel with the training loss… with this feature, this ANN implementation becomes really usable and useful!