ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
robb_ has quit [Quit: Page closed]
robb_ has joined #mlpack
< robb_> hey, where can I find an example of a CNN being used?
< robb_> also, is it possible to specify the depth of a certain convolutional layer?
robb_ has quit [Quit: Page closed]
< rcurtin> robb_: src/mlpack/tests/convolutional_network_test.cpp and the CNN example in the models/ repo are the best we have to show right now
< rcurtin> not sure about the depth of layers though (I'm not familiar enough with the code)
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Client Quit]
KimSangYeon-DGU_ has joined #mlpack
< rcurtin> robertohueso: I spent some time with the code tonight, and identified 3 issues that I left as comments on your GitHub branch
< rcurtin> let me know if you can't find those comments... they're not in a PR (just in the commits) so I don't know how easy that is to find
< rcurtin> when I fixed these things on my local branch, I found that the test code you used had 0 bound failures in the test case (for a couple of runs that I tried)
< rcurtin> maybe that is a little bit low, but it's also reasonable that the KDE bounds are pretty loose, so most datasets won't come close to the bound values
< rcurtin> I haven't tried with any datasets other than the one in the test set
< rcurtin> two other notes:
< rcurtin> (1) when you decrease the bandwidth size in the test, if you watch the Score() function, you can see it start to recurse (bw = 0.05 produces a lot of recursion, but bw = 0.3 produces none because the kernel values are so close for all reference points)
< rcurtin> I have no idea what the speed of that recursion is, but it still seems to satisfy the bounds
< rcurtin> (2) when we do recurse, you can see we cut beta in half at each iteration... this can mean a lot of recursion happens
< rcurtin> the paper briefly mentions the idea of "reclaiming probability" in just a single paragraph, but that's actually a really powerful idea (and not just for the MC sampling)
< rcurtin> I'm not sure how much the earlier reference [10] details it, but I would imagine that that can provide significant speedup
< rcurtin> I'll probably read through [10] tomorrow to see if it's what I think it is
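To make the beta-halving in note (2) concrete, here is a rough sketch of the idea under discussion (the names and structure here are hypothetical, not the actual KDE code):

    // Hypothetical sketch of the recursion rcurtin describes: when the Monte
    // Carlo estimate for a node cannot be accepted with failure probability
    // beta, we recurse and give each child half of the budget, so a union
    // bound over the children keeps the overall failure probability at beta.
    double Score(const TreeNode& node, const double beta)
    {
      if (CanApproximate(node, beta))
        return MonteCarloKernelSum(node, beta); // Valid w.p. >= 1 - beta.

      // Cut beta in half at each level of recursion.
      return Score(node.Left(), beta / 2.0) + Score(node.Right(), beta / 2.0);
    }

"Reclaiming probability" would then mean handing unused portions of this budget back, so that deeper recursions are less tightly constrained.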
< jenkins-mlpack2> Project docker mlpack nightly build build #354: STILL UNSTABLE in 3 hr 42 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/354/
xiaohong has joined #mlpack
< xiaohong> Hi, sorry to bother you. I have a question: why does the compiler say there is no matching constructor for this code? It's weird, because I do pass the parameter to the loss function.
< xiaohong> The code is in here: https://paste.ubuntu.com/p/NksyrkNyDB/
< xiaohong> The compiler output is in here: https://paste.ubuntu.com/p/nMNGCJyxgm/
xiaohong has quit [Ping timeout: 256 seconds]
Yashwants19_ has joined #mlpack
< Yashwants19_> Hi xiaohong: I think you must provide a default value for epsilon.
< Yashwants19_> const double epsilon = 0.2 in "surrogate_loss.hpp" might work.
< Yashwants19_> :)
< Yashwants19_> Hi rcurtin: Do we have CLI bindings for cross-validation or F1 score?
< Yashwants19_> Thank You :)
Yashwants19_ has quit [Client Quit]
Suryo has joined #mlpack
< Suryo> zoq: following our discussion on the holder table function, I have currently removed the tests for holder table. With respect to the cross-in-tray function, I'm able to achieve better convergence by selecting a different (though reasonably far away) starting point
< Suryo> And for the Schaffer N.4 function, I've been able to reduce the failure rate from 80% to 50%. I'm not sure it's possible to do better than this, because the function is such that the difference between two close local minima is really small, but the differences progressively build up as you traverse towards (0, 0) along any direction.
< Suryo> The difference between the global minimum and the local minima thus finally becomes significantly large. It is an interesting function!!
< Suryo> But I strongly think that we should leave these last two - Holder table and Schaffer N.4 - for testing at a later time, when PSO is ready.
< Suryo> Let me know what you think.
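For reference, the Schaffer N.4 function under discussion is commonly written (in LaTeX) as:

    f(x, y) = 0.5 + \frac{\cos^2\left(\sin\left(|x^2 - y^2|\right)\right) - 0.5}{\left(1 + 0.001\,(x^2 + y^2)\right)^2}

with its global minimum (approximately 0.292579) usually reported near (0, ±1.25313), which is consistent with the many near-identical local minima Suryo describes.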
Suryo has quit [Client Quit]
xiaohong has joined #mlpack
< xiaohong> Yashwants19_: Hi, yes, that works. But I'm not clear on why this happened.
Yashwants19 has joined #mlpack
< Yashwants19> Here in FFN<SurrogateLoss<>, GaussianInitialization>
< Yashwants19> SurrogateLoss<> requires a default constructor.
< Yashwants19> But we don't have that.
< Yashwants19> I think this must be the only reason for the error.
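A minimal sketch of the fix Yashwants19 is suggesting (the class layout here is assumed, not the actual surrogate_loss.hpp):

    // Giving epsilon a default argument makes SurrogateLoss<> default-
    // constructible, which FFN<SurrogateLoss<>, GaussianInitialization>
    // needs when no loss object is passed in explicitly.
    template<typename InputDataType = arma::mat,
             typename OutputDataType = arma::mat>
    class SurrogateLoss
    {
     public:
      SurrogateLoss(const double epsilon = 0.2) : epsilon(epsilon) { }

     private:
      double epsilon;
    };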
Yashwants19 has quit [Quit: Page closed]
< xiaohong> Okay, I got it. Thank you for your detailed explanation.
favre49 has joined #mlpack
< favre49> zoq: I'm currently working on reproduction in NEAT, and I was wondering which reproduction method we should choose?
< favre49> Or should I implement them as different templatized policies?
< favre49> I mean selection method*
< favre49> rank selection, tournament selection, roulette selection, etc.
xiaohong has quit [Ping timeout: 256 seconds]
< favre49> I think providing different policies is ideal, but let me know what you think.
favre49 has quit [Quit: Page closed]
vivekp has joined #mlpack
akfluffy has joined #mlpack
< akfluffy> hey, do we have a 3D convolutional layer?
ImQ009 has joined #mlpack
< zoq> I really like the template policy idea, makes it easy to add a new strategy later.
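A hypothetical sketch of what that policy design could look like (none of these names exist in the NEAT code yet):

    // Each selection strategy becomes a small policy class...
    class TournamentSelection
    {
     public:
      // Returns the index of the selected parent, given genome fitnesses.
      static size_t Select(const arma::vec& fitnesses);
    };

    // ...and NEAT takes the strategy as a template parameter, so adding rank
    // or roulette selection later only means writing another policy class.
    template<typename SelectionPolicyType = TournamentSelection>
    class NEAT
    {
     public:
      void Reproduce()
      {
        const size_t parentIndex = SelectionPolicyType::Select(fitnesses);
        // ... crossover and mutation using the selected parent ...
      }

     private:
      arma::vec fitnesses;
    };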
< zoq> Suryo: Agreed, let's remove the tests for now; 50% is still pretty high, and PSO should provide more stable solutions.
< zoq> akfluffy: no
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
akfluffy has left #mlpack []
moraleja has joined #mlpack
< moraleja> Hi everyone! I have a question related to ensmallen, is there a community around that project? Or should I just shoot it here?
moraleja has quit [Quit: moraleja]
moraleja has joined #mlpack
< zoq> moraleja: Hello there, here is just fine.
ImQ009 has quit [Read error: Connection reset by peer]
moraleja has quit [Quit: moraleja]
moraleja has joined #mlpack
< moraleja> Ok, thanks in advance. I'm trying to do a very simple optimization problem: it's a quadratic function with some constraints. For reference, it is virtually the same as the one shown here: https://cvxopt.org/examples/tutorial/qp.html. I would like to port my code to ensmallen for performance (I know it's a trivial problem, but I solve it over and over, thousands of times). After reading the docs (correct me if I'm wrong), my best shot is to use a Frank-Wolfe
< moraleja> optimizer. But after seeing the examples in the repository (in the tests), I feel clueless as to how I'm supposed to feed the problem into it. So basically, I'm looking for any kind of pointer in the right direction regarding this :P
< zoq> There is a test case that is really similar to your problem.
< zoq> So, at this point you have to adapt the Evaluate/Gradient function and reuse everything else from the test.
< zoq> Does this help?
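For the cvxopt example moraleja linked, the Evaluate()/Gradient() pair zoq mentions might look roughly like this (the constraint handling would still be reused from the Frank-Wolfe test; only the objective function is shown):

    #include <armadillo>

    // Objective from the cvxopt tutorial:
    // f(x) = 2 x1^2 + x2^2 + x1 x2 + x1 + x2.
    class QuadraticFunction
    {
     public:
      double Evaluate(const arma::mat& x)
      {
        return 2 * x(0) * x(0) + x(1) * x(1) + x(0) * x(1) + x(0) + x(1);
      }

      // Analytic gradient: df/dx1 = 4 x1 + x2 + 1, df/dx2 = x1 + 2 x2 + 1.
      void Gradient(const arma::mat& x, arma::mat& gradient)
      {
        gradient.set_size(2, 1);
        gradient(0) = 4 * x(0) + x(1) + 1;
        gradient(1) = x(0) + 2 * x(1) + 1;
      }
    };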
< moraleja> Looks good. This one is constrained to an lp-ball; if I wanted to use arbitrary constraints like the ones in that example (<=, >=, =), would I need to write my own LinearConstrSolverType?
< moraleja> Sorry, by "that example" I mean the example in the link from cvxopt
akfluffy has joined #mlpack
< akfluffy> Does anyone have suggestions on how to deal with 3 color channels in a convolutional neural network? I want input from all channels, but we don't have 3D convolutional layers implemented