verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
robertohueso has left #mlpack []
travis-ci has joined #mlpack
< travis-ci>
yashsharan/models#10 (master - 186d997 : yash sharan): The build has errored.
witness has quit [Quit: Connection closed for inactivity]
< Prabhat-IIT>
rcurtin: I've noticed something while debugging kmeans: if `initial_centroids` are all equal, i.e. all centroids start at the same point, then the result is all zeros, i.e. every point is assigned to cluster 0, both with `allow_empty_clusters` and with `kill_empty_clusters`. In the normal mode I suspect points are assigned rather arbitrarily, since the training time is almost negligible compared to the other modes. Shouldn't we handle this case?
< rcurtin>
Prabhat-IIT: a user initializing with all zeros is a really bad initialization, and we don't need to take that case into account, since handling it would cost a lot of extra computation
< Prabhat-IIT>
rcurtin: not all zeros, but the same point used as the centroid of every cluster
< Prabhat-IIT>
but yeah, it's a very bad initialization
< Prabhat-IIT>
So, I agree with you :)
< Prabhat-IIT>
Moreover, after a whole night of hard work I was able to debug the issue in the kmeans test: I have to explicitly pass the value of `allow_empty_clusters` as true or false in every test. I don't know why, but that's not the case in the CLI, where this is an optional parameter :(
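To make the degenerate-centroid behavior discussed above concrete, here is a minimal 1-D sketch of nearest-centroid assignment in plain C++ (an illustration only, not mlpack's actual implementation): with the usual strict `<` comparison, identical centroids tie on every distance, so every point keeps the first index and the whole assignment collapses to cluster 0.

```cpp
#include <cstddef>
#include <vector>

// Nearest-centroid assignment with strict "<" tie-breaking: when all
// centroids coincide, every distance ties, so each point keeps index 0.
std::vector<std::size_t> assignClusters(const std::vector<double>& points,
                                        const std::vector<double>& centroids)
{
  std::vector<std::size_t> assignments(points.size(), 0);
  for (std::size_t i = 0; i < points.size(); ++i)
  {
    double best = (points[i] - centroids[0]) * (points[i] - centroids[0]);
    for (std::size_t j = 1; j < centroids.size(); ++j)
    {
      const double d = (points[i] - centroids[j]) * (points[i] - centroids[j]);
      if (d < best) { best = d; assignments[i] = j; }
    }
  }
  return assignments;
}
```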
manthan has joined #mlpack
manthan has quit [Quit: Page closed]
Sayan98 has joined #mlpack
Sayan98 has quit [Client Quit]
Prabhat-IIT has quit [Ping timeout: 260 seconds]
Trion has joined #mlpack
caladrius has joined #mlpack
caladrius has quit [Quit: Page closed]
ank04 has joined #mlpack
ank04 has quit [Client Quit]
witness has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
yashsharan/models#11 (master - 67ce57c : yash sharan): The build has errored.
< scrcro>
Looks like it expects an output for every timestep. Is that true? In my case it doesn't make sense to have an output at every timestep; I'm only interested in the last one
govg has quit [Ping timeout: 252 seconds]
< scrcro>
Is there a way to accomplish this?
travis-ci has joined #mlpack
< travis-ci>
yashsharan/models#18 (master - 16e4f40 : yash sharan): The build has errored.
< zoq>
scrcro: I see the problem. One simple solution is to use repmat to modify the responses. But I'll keep that in mind and change the class accordingly.
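The repmat workaround zoq suggests can be sketched without Armadillo: the loop below mimics what `repmat(responses, rho, 1)` produces for a single-row response matrix in column-major order, repeating the one response per sequence across all `rho` timesteps (a plain-C++ illustration, not mlpack's RNN API).

```cpp
#include <cstddef>
#include <vector>

// Tile each response `rho` times, mimicking repmat(responses, rho, 1)
// on a row of responses: the single target per sequence is repeated so
// that every timestep carries the same response.
std::vector<double> repeatResponses(const std::vector<double>& responses,
                                    std::size_t rho)
{
  std::vector<double> tiled;
  tiled.reserve(responses.size() * rho);
  for (double r : responses)
    for (std::size_t t = 0; t < rho; ++t)
      tiled.push_back(r);
  return tiled;
}
```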
< zoq>
ShikharJ: Hello.
< caladrius[m]>
zoq: Hi! I have submitted a PR for FReLU. I hope that's fine. Also, I think no work has been done on atrous convolution. Can I take it up then?
< ShikharJ>
zoq: Hi, nevermind, I posted my query in one of the PRs I am working on. Please take a look whenever you have some time :)
< zoq>
caladrius[m]: Yes, please feel free. Maybe we can find a way to provide a single convolution class that handles the different ideas; one option is to use the policy design pattern.
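The policy design pattern zoq mentions could look roughly like this for standard versus atrous (dilated) convolution; `StandardStep` and `AtrousStep` are hypothetical names for illustration, not classes in mlpack.

```cpp
#include <cstddef>

// Policy classes decide how kernel taps map to input offsets: the
// standard policy steps by 1, the "atrous" policy by a dilation factor.
struct StandardStep
{
  static std::size_t Offset(std::size_t k) { return k; }
};

struct AtrousStep
{
  static constexpr std::size_t dilation = 2;
  static std::size_t Offset(std::size_t k) { return k * dilation; }
};

// A toy 1-D valid convolution at one position, parameterized by the
// step policy; swapping the policy swaps standard for atrous behavior.
template<typename StepPolicy>
double ConvolveAt(const double* input, const double* kernel,
                  std::size_t kernelSize, std::size_t pos)
{
  double sum = 0.0;
  for (std::size_t k = 0; k < kernelSize; ++k)
    sum += input[pos + StepPolicy::Offset(k)] * kernel[k];
  return sum;
}
```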
< zoq>
ShikharJ: Okay :)
< caladrius[m]>
I'll look into that. For the time being, could I implement it as a separate class? I'll merge the two afterwards
< zoq>
caladrius[m]: Yeah, I'm not even sure it's a good idea to merge everything into a single layer.
< caladrius[m]>
Yeah, most of the other libraries have it as a separate class.
< caladrius[m]>
Also, I implemented the Huber loss function. I guess it's useful for deep regression and some reinforcement learning applications. Can I submit a PR for that?
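For reference, the Huber loss caladrius[m] describes is quadratic for small residuals and linear for large ones, which is why it is popular for regression targets with outliers; this is a generic sketch, not the PR's actual code.

```cpp
#include <cmath>

// Huber loss: quadratic for |error| <= delta, linear beyond, so large
// residuals are penalized less aggressively than with squared error.
double HuberLoss(double prediction, double target, double delta = 1.0)
{
  const double a = std::fabs(prediction - target);
  if (a <= delta)
    return 0.5 * a * a;
  return delta * (a - 0.5 * delta);
}
```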
travis-ci has joined #mlpack
< travis-ci>
yashsharan/models#19 (master - ff4e859 : yash sharan): The build has errored.