ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
varuns has joined #mlpack
< KimSangYeon-DGU> zoq: The issue of compatibility with OpenAI Gym seems to be resolved. :) I'll open a pull request to fix the problem.
< KimSangYeon-DGU> in gym_tcp_api
KimSangYeon-DGU has quit [Quit: Page closed]
pradyumn has quit [Ping timeout: 256 seconds]
akfluffy has joined #mlpack
< akfluffy> Hey, I have a question. I'm doing a simple FFNN and I was wondering if each data point had to have a label? What if I wanted to predict time series
< akfluffy> Nevermind, that was a stupid question. I think I can just add rows onto the matrix for multiple data points in one label
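The idea akfluffy arrives at, reshaping a time series into (input window, multi-row label) pairs so each column is one data point, can be sketched like this. This is a plain-C++ illustration, not mlpack code; with mlpack one would fill an `arma::mat` (one column per point, several response rows per label) in the same sliding-window fashion. The function name `MakeWindows` is made up for this sketch.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Turn a univariate series into sliding windows: each entry of `inputs`
// is `inLen` past values (the network input) and each entry of `labels`
// is the `outLen` following values (a multi-row label).
void MakeWindows(const std::vector<double>& series,
                 std::size_t inLen, std::size_t outLen,
                 std::vector<std::vector<double>>& inputs,
                 std::vector<std::vector<double>>& labels)
{
  for (std::size_t t = 0; t + inLen + outLen <= series.size(); ++t)
  {
    inputs.emplace_back(series.begin() + t, series.begin() + t + inLen);
    labels.emplace_back(series.begin() + t + inLen,
                        series.begin() + t + inLen + outLen);
  }
}
```

Each `inputs[i]` then becomes one column of the data matrix and `labels[i]` one column of the (multi-row) responses matrix.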
akfluffy has quit [Client Quit]
KimSangYeon-DGU has joined #mlpack
varuns has quit [Ping timeout: 268 seconds]
< KimSangYeon-DGU> When using `for` loops, which one is preferable from a design perspective for mlpack: `for (size_t i = 0; i < 10; i++)` or `for (size_t i = 0; i < 10; ++i)`? It might be a fussy question.
heytitle has joined #mlpack
< heytitle> Hi, I've made a first Dockerfile for mlpack's benchmarks.
< heytitle> could you please give me comments?
< heytitle> currently, it only works with Shogun.
heytitle has quit [Client Quit]
sundar has joined #mlpack
< ayesdie> KimSangYeon-DGU: in one of the code snippets in the Design Guidelines, the for loop has `++i`, so I think that should be the answer to your question (I may be wrong, so I would like to know about that too).
< ayesdie> The highlighting doesn't show correctly on the logs, so to be clear: `++i` was what I meant.
picklerick has joined #mlpack
< picklerick> @rcurtin any resource on how knn works with spill tree ...
< KimSangYeon-DGU> ayesdie: Yeah, thanks!! :)
< KimSangYeon-DGU> I think it is a bit fussy but interesting.
< ayesdie> Yea, I've also seen `i++` being used in many places in the existing code.
sundar has quit [Ping timeout: 256 seconds]
< KimSangYeon-DGU> Yes, so I was curious whether other developers think about that the same way I do.
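For a plain `size_t` counter the two forms compile to identical code, so the choice really is style, as the discussion here concludes. The conventional argument for `++i` comes from iterator-like types, where postfix `i++` must return the old value and therefore makes a copy. A toy counter that tracks copies makes this visible (the `Counter` type below is invented for this sketch; it requires C++17 for the `static inline` member):

```cpp
#include <cassert>

// Iterator-like counter that records how many copies are made, showing
// why `++i` is the conventional choice for non-trivial types: postfix
// `i++` must return the previous value, which forces a copy.
struct Counter
{
  int value = 0;
  static inline int copies = 0;

  Counter() = default;
  Counter(const Counter& other) : value(other.value) { ++copies; }

  Counter& operator++() { ++value; return *this; }  // ++i: no copy made
  Counter operator++(int)                           // i++: copies old value
  {
    Counter old(*this);
    ++value;
    return old;
  }
};
```

For built-in integers a modern compiler emits the same instructions for both, which matches the comments here that either form is fine in mlpack.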
robertohueso has quit [Ping timeout: 258 seconds]
picklerick has quit [Ping timeout: 258 seconds]
KimSangYeon-DGU_ has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
< jenkins-mlpack2> Project docker mlpack nightly build build #246: STILL UNSTABLE in 3 hr 47 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/246/
rcurtin has quit [Ping timeout: 245 seconds]
rcurtin has joined #mlpack
KimSangYeon-DGU_ has quit [Quit: Page closed]
sumant has joined #mlpack
varuns has joined #mlpack
< sumant> I would like to contribute to mlpack. I am thinking about implementing capsule networks and data augmentation techniques. I would also like to implement gradient boosting as it is one of the most widely used algorithms out there.
< sumant> I'm fairly new to the codebase, so I'll start contributing right after I become familiar with it. Can anyone let me know what you think about this idea?
sumant has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
< zoq> sumant: Hello, I would recommend focusing on a single idea; writing meaningful tests takes time.
< zoq> heytitle: Thanks, will take a look once I have a chance.
< zoq> KimSangYeon-DGU: Great!
< zoq> KimSangYeon-DGU: From my side it's fine to use both forms.
travis-ci has joined #mlpack
< travis-ci> Soonmok/models#1 (master - 405f85c : Soonmok): The build failed.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> Soonmok/models#1 (master - c9bc2da : Soonmok): The build failed.
travis-ci has left #mlpack []
varuns has quit [Ping timeout: 250 seconds]
varuns has joined #mlpack
varuns has quit [Client Quit]
sic_parvis_magna has joined #mlpack
sic_parvis_magna has quit [Client Quit]
sumant has joined #mlpack
< sumant> @zoq thank you for your advice. I'll focus only on capsule networks.
sumant_ has joined #mlpack
sumant has quit [Client Quit]
sumant_ is now known as sumant
sumedhghaisas has joined #mlpack
KimSangYeon-DGU has joined #mlpack
< KimSangYeon-DGU> zoq: Thanks!! :)
< rcurtin> heytitle: thanks for opening the PR with the Dockerfile; I'll try to look at it when I have a chance
< rcurtin> KimSangYeon-DGU: I don't have a strong preference between ++i and i++; both work, but personally I'd typically write '++i'
< KimSangYeon-DGU> Okay!! :)
sreenik has joined #mlpack
mulx10 has joined #mlpack
< mulx10> Hello!
< mulx10> zoq: Please review my PR (https://github.com/zoq/gym_tcp_api/pull/13)
< mulx10> Thank you!
mulx10 has quit [Client Quit]
< rcurtin> mulx10: there is no need for a reminder, there are tons of open PRs that we have to review and we're aware of each of them :)
sreenik has quit [Ping timeout: 256 seconds]
Hemal has joined #mlpack
< Hemal> @shardulparab97, saw your message just now. About the status of the triplet loss function: I've completed `Forward()` and am working on `Backward()`.
< Hemal> @zoq, for the backward pass of the triplet loss function, I saw https://stackoverflow.com/questions/33330779/whats-the-triplet-loss-back-propagation-gradient-formula which says that there would be three different gradients to be calculated and sent to the output (all the other loss functions return only one value from the backward pass). So am I going right by returning an arma::mat holding the three gradients?
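The three gradients Hemal mentions fall out of the triplet loss definition. A minimal stdlib-only sketch, not mlpack's actual loss API, using squared Euclidean distance, `L = max(0, ||a-p||^2 - ||a-n||^2 + margin)`: the backward pass naturally produces one gradient block each for the anchor, positive, and negative embeddings, which is why returning a matrix with three parts (rather than one value) makes sense. All function names here are invented for the sketch.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

// Squared Euclidean distance between two equal-length vectors.
double SqDist(const Vec& x, const Vec& y)
{
  double d = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i)
    d += (x[i] - y[i]) * (x[i] - y[i]);
  return d;
}

// Triplet loss: L = max(0, ||a - p||^2 - ||a - n||^2 + margin).
double TripletForward(const Vec& a, const Vec& p, const Vec& n, double margin)
{
  return std::max(0.0, SqDist(a, p) - SqDist(a, n) + margin);
}

// Backward pass: three gradient blocks (zero when the loss is inactive):
//   dL/da = 2 (n - p),  dL/dp = 2 (p - a),  dL/dn = 2 (a - n).
void TripletBackward(const Vec& a, const Vec& p, const Vec& n, double margin,
                     Vec& gradA, Vec& gradP, Vec& gradN)
{
  const bool active = (SqDist(a, p) - SqDist(a, n) + margin) > 0.0;
  gradA.assign(a.size(), 0.0);
  gradP.assign(a.size(), 0.0);
  gradN.assign(a.size(), 0.0);
  if (!active)
    return;
  for (std::size_t i = 0; i < a.size(); ++i)
  {
    gradA[i] = 2.0 * (n[i] - p[i]);
    gradP[i] = 2.0 * (p[i] - a[i]);
    gradN[i] = 2.0 * (a[i] - n[i]);
  }
}
```

In mlpack the three blocks could be stacked into one `arma::mat` (e.g. three column blocks), matching the approach Hemal describes.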
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
niteya has joined #mlpack
Hemal has quit [Quit: Leaving.]
niteya has quit [Client Quit]
Hemal has joined #mlpack
Hemal has left #mlpack []
picklerick has joined #mlpack
zoq_ has joined #mlpack
zoq has quit [Read error: Connection reset by peer]
iti_ has joined #mlpack
Milind has joined #mlpack
Milind has quit [Client Quit]
iti_ has quit [Ping timeout: 268 seconds]
zoq_ is now known as zoq
Milind has joined #mlpack
Milind has left #mlpack []
< picklerick> in the spill tree traversal, are we actually implementing hybrid spill trees?
< rcurtin> picklerick: I believe that is correct, but I don't remember exactly for sure
< rcurtin> what we have implemented doesn't look *exactly* the same as in the paper, since we built it on mlpack's dual-tree framework
< picklerick> well, if we were only doing plain spill tree traversal, we would only have done defeatist search
< rcurtin> and that paper I linked you to didn't consider dual-tree algorithms, just single-tree
< rcurtin> right---so you can do defeatist search with it, but you can also do search with backtracking
< rcurtin> (which one you use depends on the SingleTreeTraverser and DualTreeTraverser that's used)
< rcurtin> oh, and also, we are accepted into GSoC this year :)
< picklerick> i considered SingleTreeTraverser and looked through the code; so if defeatist is true and the node is overlapping too, then we are doing defeatist tree search
< picklerick> in the paper the same is done for the hybrid spill tree, i guess
< rcurtin> I believe so, yeah---but it has been a long time, so I am not 100% sure
< rcurtin> what you wrote sounds about right though :)
< picklerick> also i think we are optimizing it further by pruning too, right, by not considering farther nodes?
< rcurtin> trust the code more so than me though :)
< rcurtin> right, but pruning is only needed if we're doing any backtracking (i.e. if we are not doing defeatist search)
< picklerick> yea, and the error is occurring due to that, i guess; i'm thinking of commenting out the backtracking and looking into it
< picklerick> any other suggestions for finding the bug?
< picklerick> @rcurtin for the NEAT idea, would we have to implement HyperNEAT or just the NEAT algorithm? also i think it was implemented in a previous year's GSoC too?
prateek0001 has joined #mlpack
< rcurtin> I don't know anything about NEAT or HyperNEAT so I can't say about that
< rcurtin> but if I'm remembering right, basically what we need to do to fix the issue where we don't get enough results from the spill tree when using defeatist mode,
< zoq> picklerick: The focus is on NEAT, there is an unfinished PR, which kinda worked.
< rcurtin> is that we need to stop descending the tree and do BaseCase() on all point pairs before node.NumPoints() < k
< picklerick> yea the tree isn't encountering enough nodes
< rcurtin> another way would be to descend to the leaf, run the base cases, then do a little backtracking
< rcurtin> i.e. backtrack to other nodes until the total number of points encountered is >= k
petris_ is now known as petris
< rcurtin> heh, I see that TensorFlow got accepted to GSoC... but their Ideas list is literally just a link to their github issues page
< picklerick> instead of running BaseCase() on all node.NumPoints() points, can we do something like min(node.NumPoints(), k)?
< picklerick> @rcurtin thanks for the pointers i will look into it and work on it :)
< zoq> interesting
< rcurtin> picklerick: sorry for the slow response---I stepped out for a minute
< rcurtin> I'm not sure what you mean by 'running the basecase to node.Numpoints', but I think if you terminate the defeatist search early it can work
< rcurtin> one tricky part is, the defeatist traversal rules don't know anything about k
< rcurtin> so you may have to add some member to NeighborSearchRules or something like this that specifies the minimum number of BaseCase() calls that are needed
< rcurtin> (and you may need to specify that in the other Rules classes too?)
< rcurtin> I'm not totally sure. try it and see what happens :)
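The fix rcurtin sketches, stop the defeatist descent before the current subtree holds fewer than k points, can be illustrated with a self-contained toy traversal. This is not mlpack's actual spill tree or traverser code; `Node`, `DefeatistDescend`, and `minBaseCases` are invented for the sketch, standing in for the "minimum number of BaseCase() calls" member he suggests adding to the Rules classes.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy tree node: a subtree knows which point indices it holds.
struct Node
{
  std::vector<std::size_t> points;  // indices held by this subtree
  Node* left = nullptr;
  Node* right = nullptr;
};

// Defeatist descent: follow the child chosen by `goLeft(node)`, but stop
// early once the chosen child would hold fewer than `minBaseCases`
// points, so a k-NN search always has at least k candidates to score.
// Returns the node whose points should all get BaseCase() calls.
template <typename ChoiceFn>
const Node* DefeatistDescend(const Node* node, std::size_t minBaseCases,
                             ChoiceFn goLeft)
{
  while (node->left != nullptr && node->right != nullptr)
  {
    const Node* child = goLeft(node) ? node->left : node->right;
    if (child->points.size() < minBaseCases)
      break;  // descending further would leave fewer than k candidates
    node = child;
  }
  return node;
}
```

The alternative rcurtin mentions, descending to the leaf and then backtracking until at least k points have been seen, would replace the early `break` with a post-descent walk back up through sibling nodes.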
picklerick has quit [Ping timeout: 252 seconds]