ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
mlpackuser100 has joined #mlpack
< mlpackuser100> Hi! I've been running various types of knn and all have encountered a stack overflow with 3 dimensional data, 10^3 test points, and as few as 10^6 reference points. I expect it's a simple mistake. Does anybody have any ideas or suggestions?
< mlpackuser100> This is rectified by using naive mode, so I suspect it is due to the recursive calls in the trees.
favre49 has joined #mlpack
< favre49> zoq: Yup, I did something like this https://gist.github.com/favre49/b2062ae84d84be245ea3e638cbc81a9f
mlpackuser100 has quit [Quit: Page closed]
favre49 has quit [Client Quit]
< jenkins-mlpack2> Project docker mlpack nightly build build #339: STILL UNSTABLE in 3 hr 33 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/339/
mulx10 has joined #mlpack
< mulx10> mlpackuser100: did you try increasing the stack limit?
< zoq> favre49: Right, so does that work?
mulx10 has quit [Client Quit]
Toshal has joined #mlpack
< Toshal> ShikharJ: Sorry for missing the last meeting
< Toshal> And yes I am starting my campaign with label smoothing.
< Toshal> Let me know your thoughts regarding same.
< rcurtin> mlpackuser100: can you post some more details about the code you are using?
jeffin143 has joined #mlpack
< jeffin143> lozhnikov : we could use map<string,size_t> instead of keeping a vector of tokens and then storing their string_view in a map, since that vector would take up some space to store the strings
vivekp has joined #mlpack
KimSangYeon-DGU has joined #mlpack
sreenik has joined #mlpack
Toshal has quit [Read error: Connection reset by peer]
Toshal has joined #mlpack
< KimSangYeon-DGU> sumedhghaisas_: Hey Sumedh~
sumedhghaisas has joined #mlpack
< sumedhghaisas> KimSangYeon-DGU: Hey Kim
< KimSangYeon-DGU> sumedhghaisas: Hey Sumedh :)
< KimSangYeon-DGU> I've found an interesting point in our experiments
< sumedhghaisas> cool. Whats up?
< KimSangYeon-DGU> As you said, I tested the theta 90 degrees
< KimSangYeon-DGU> and I found interference phenomena
< KimSangYeon-DGU> more easily, because we drew the probability space in 3D
< KimSangYeon-DGU> I'll upload them
< sumedhghaisas> hmm... I thought theta 90 should give classical GMM right?
< KimSangYeon-DGU> Ahh...
< sumedhghaisas> cosine term will collapse when theta is 90
< KimSangYeon-DGU> oops
< KimSangYeon-DGU> I misunderstood
< KimSangYeon-DGU> I set the theta pi/2
< KimSangYeon-DGU> Sorry
< sumedhghaisas> also, i think you are calculating cosine based on alphas right?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> so rather than setting theta you are changing alphas right?
< KimSangYeon-DGU> No, I changed the theta and alpha at the same time
< sumedhghaisas> hmm... but they are linked. with the cosine equation given in the paper
< sumedhghaisas> they should be changed in a proper way I think
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I just tested it with a changed theta, keeping alpha fixed
< KimSangYeon-DGU> I'll try setting theta to 90 degrees.
< KimSangYeon-DGU> and check if both of them are the same
Toshal has quit [Remote host closed the connection]
< KimSangYeon-DGU> sumedhghaisas: I just tested setting theta to 90 degrees, and found that both of them are the same
< sumedhghaisas> Good that seems more like it.
< KimSangYeon-DGU> Yeah, I wanted to show you them
< sumedhghaisas> Couple of questions I had about your implementation though.
< sumedhghaisas> Sure send the links over here.
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I'm ready to answer your question
< KimSangYeon-DGU> Please go ahead
< sumedhghaisas> In the paper, section 3.2
< sumedhghaisas> they give the constraints over the cosine
< sumedhghaisas> in that I think the summation should be an integral
< KimSangYeon-DGU> Yes
< KimSangYeon-DGU> Do you mean the equation 18?
< sumedhghaisas> basically we want to make sure the area under the quantum mixture distribution is 1
< sumedhghaisas> precisely... equation 18.
< sumedhghaisas> I have no idea why they put summation there
< KimSangYeon-DGU> Actually, I tried to use the integral in python, but there are some issues...
< KimSangYeon-DGU> So, I opted to use a summation as an approximation.
< KimSangYeon-DGU> Is it the wrong way to go?
< sumedhghaisas> hmm... I saw the approximation. It's not wrong, but I think in this case we would require accurate values if we can get them
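For reference, the normalization constraint under discussion (equation 18 read with an integral in place of the summation) can be sketched for a two-component quantum mixture as follows; the symbols G1, G2 (component Gaussian densities), α1, α2 (mixing amplitudes), and θ (phase difference) are this sketch's notation and may differ from the paper's:

```latex
\int_{-\infty}^{\infty} \Big[ \alpha_1^2\, G_1(x) \;+\; \alpha_2^2\, G_2(x)
    \;+\; 2\,\alpha_1 \alpha_2 \sqrt{G_1(x)\, G_2(x)}\,\cos\theta \Big]\, dx \;=\; 1
```

Since each G_i integrates to 1, this reduces to α₁² + α₂² + 2α₁α₂ cos θ ∫√(G₁G₂) dx = 1; at θ = π/2 the interference term vanishes and the constraint becomes α₁² + α₂² = 1, the classical-GMM case mentioned above.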
< sumedhghaisas> I have an idea in that way
< KimSangYeon-DGU> What??
< sumedhghaisas> So in Equation 13
< sumedhghaisas> we have 2 probability distributions right?
< KimSangYeon-DGU> Yes
< sumedhghaisas> integral of both of them should be also 1
< KimSangYeon-DGU> then the sum of them should be 1
< sumedhghaisas> so if you write those integrals out
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> Agreed
< sumedhghaisas> ahh no i mean something else
< sumedhghaisas> I mean P(X, 1|{params})
< sumedhghaisas> equals some right hand side
< sumedhghaisas> wait... let me think about it a sec
< sumedhghaisas> I am just thinking out loud now
< sumedhghaisas> if I integrate out X
< sumedhghaisas> that should give me the prior P(1) right?
< sumedhghaisas> but we don't have the closed form solution of this ... hmmm
< KimSangYeon-DGU> hmm..
< sumedhghaisas> basically we are missing integral (G1 * G2)... that's right?
< KimSangYeon-DGU> Yes
< sumedhghaisas> hmm... did you try expanding this integral?
< KimSangYeon-DGU> Yes, I tried to use the integral that Python provides, but there is a data space issue
< sumedhghaisas> I mean substituting expression of G1 and G2 and using https://en.wikipedia.org/wiki/Gaussian_integral?
< sumedhghaisas> Ohh i mean on the paper
< sumedhghaisas> I am not sure it has a closed form but maybe there could be
< KimSangYeon-DGU> Ohh, I didn't try it
< KimSangYeon-DGU> Wait a minute, I'll look into it
< sumedhghaisas> I suspect after expansion there will be some normal form
< KimSangYeon-DGU> Hmm...
< KimSangYeon-DGU> Surely, it seems a useful approach
< KimSangYeon-DGU> Hmm, but the Gaussian we use is not a normal distribution
< KimSangYeon-DGU> So I'm worried about it
< KimSangYeon-DGU> as you said.
< sumedhghaisas> let me know how it goes... I think it's doable, but I just went over it in my head
< KimSangYeon-DGU> Yes, I'll try it
< KimSangYeon-DGU> Thanks for knowing it
< KimSangYeon-DGU> *letting me know
< sumedhghaisas> what were you worried about?
< KimSangYeon-DGU> The Gaussian we use is not normal Gaussian
< sumedhghaisas> You will need to expand the terms in the multivariate definition
< KimSangYeon-DGU> Yes
< KimSangYeon-DGU> Agreed, I'll try it
< sumedhghaisas> Since most of the terms are in the exponent, the multiplication of G1 and G2 will become an addition
< sumedhghaisas> if you club all the terms involving equal powers of X then integrate them separately
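Writing out the univariate case of that expansion: both the plain product G1·G2 and the square root √(G1·G2) complete the square into a single Gaussian in x, so the Gaussian integral gives closed forms. These are standard identities for normal densities G_i = N(x | μ_i, σ_i²); the multivariate version expands the same way with covariance matrices:

```latex
\int_{-\infty}^{\infty} G_1(x)\, G_2(x)\, dx
  = \frac{1}{\sqrt{2\pi\,(\sigma_1^2 + \sigma_2^2)}}
    \exp\!\left( -\frac{(\mu_1 - \mu_2)^2}{2\,(\sigma_1^2 + \sigma_2^2)} \right)

\int_{-\infty}^{\infty} \sqrt{G_1(x)\, G_2(x)}\; dx
  = \sqrt{\frac{2\,\sigma_1 \sigma_2}{\sigma_1^2 + \sigma_2^2}}\,
    \exp\!\left( -\frac{(\mu_1 - \mu_2)^2}{4\,(\sigma_1^2 + \sigma_2^2)} \right)
```

As a sanity check, setting σ₁ = σ₂ and μ₁ = μ₂ in the second identity gives 1, as it must since √(G·G) = G.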
< KimSangYeon-DGU> sumedhghaisas: Ah, got it
< KimSangYeon-DGU> sumedhghaisas: Actually, the current time in Korea is 12:35 AM. Is it okay to work on it tomorrow?
< sumedhghaisas> Sure. Take your time.
< KimSangYeon-DGU> Thanks
sumedhghaisas has quit [Ping timeout: 256 seconds]
KimSangYeon-DGU has quit [Quit: Page closed]
jeffin143_ has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
favre49 has joined #mlpack
< lozhnikov> jeffin143_: No, you can't use different key types in std::map.
< favre49> zoq: Yeah I got it to work. The test suite is mostly done now, just need to go through the code again.
< zoq> favre49: Great, if you open a PR I'll take a look.
< favre49> In the paper, they've made the action space of the Double Pole balancing continuous rather than discrete. Should I make an addition to the PR for a continuous version of the environment?
< zoq> favre49: Sounds like a good idea to me, I implement this as another class.
< zoq> *I guess implement this as another class.
< favre49> Yup I'll make a NEAT WIP PR tomorrow. I'll add to the multiple pole balancing PR as well.
< favre49> Thanks :)
< zoq> ohh, great :)
favre49 has quit [Quit: Page closed]
< sakshamB> zoq ShikharJ I was working on my highway networks PR and seeing a build failure after a rebase. After some debugging it seems that the problem might be that boost::variant can only handle up to 50 different types, and after adding the highway layer type this limit is exceeded. Let me know if this is correct and how I should proceed.
< sakshamB> hmm.. I tried doing that but I think 50 is the max size?
< sakshamB> that is supported
< zoq> hmm, not sure there is a limit
< zoq> can you push the changes to the PR?
< sakshamB> zoq: I get "boost/mpl/list/list60.hpp" not found
< zoq> same for 51?
< zoq> hm, okay, so either we implement the workaround mentioned in the post or we find another solution
vivekp has quit [Read error: Connection reset by peer]
< zoq> sakshamB: For now, let's remove the RecurrentAttention and VRClassReward layer from the list.
< sakshamB> alright I will do that for now and will try to find a solution later. thanks for the help :)
< zoq> will take a look as well
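On the boost::variant limit: Boost.MPL ships preprocessed headers only up to size 50, which is why `boost/mpl/list/list60.hpp` does not exist. The documented workaround is to disable the preprocessed headers and raise the limit (which must be a multiple of 10) before any Boost header is included — a build-configuration sketch; where these defines live (e.g. a central header like mlpack's `core.hpp`) is an assumption:

```cpp
// Must appear before the first Boost include in every translation unit.
#ifndef BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
  // Tell Boost.MPL to generate sequence headers instead of using the
  // preprocessed ones, which only exist up to size 50.
  #define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
#endif

#ifndef BOOST_MPL_LIMIT_LIST_SIZE
  // Raise the maximum MPL list size; the value must be a multiple of 10.
  #define BOOST_MPL_LIMIT_LIST_SIZE 60
#endif

#include <boost/variant.hpp>
```

Because the limit affects every inclusion of the Boost headers, putting the defines anywhere but ahead of the very first Boost include leads to ODR-style mismatches between translation units.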
vivekp has joined #mlpack
sumedhghaisas_ has quit [Ping timeout: 256 seconds]
sreenik has quit [Quit: Page closed]
jeffin143_ has quit [Remote host closed the connection]
xiaohong has joined #mlpack