ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 260 seconds]
favre49 has joined #mlpack
< favre49> zoq: The Pendulum RL task doesn't have an isTerminal() method. I think it should; is there any reason it does not?
Suryo has joined #mlpack
< Suryo> favre49: hey, do you have a website or a public profile? :)
< favre49> Suryo nope, unless social media counts as a public profile :]
< Suryo> Sure! Do you mind sharing a link to any kind of a social media profile with me?
< favre49> sure, give me a sec
< Suryo> Thanks!
favre49 has quit [Remote host closed the connection]
favre49 has joined #mlpack
< favre49> It also looks like the value of action has been changed from double[1] to double in Pendulum in PR #1931. Shouldn't this be changed in all continuous environments?
< favre49> I can open a PR to make these changes
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Remote host closed the connection]
< sreenik[m]> akhandait: Thanks for asking. My health hasn't improved much yet. Temperature is still in the range of 100-104 F. I'm on medication, hope to recover soon :)
< akhandait> sreenik[m]: Ahh that's rough. Take good rest and get well soon!
< sreenik[m]> Thanks. Trying my best
KimSangYeon-DGU has joined #mlpack
< zoq> favre49: You are right, the task is missing an IsTerminal method; would you like to add the missing function? Don't feel obligated, I can do this as well. Also, agreed, we should open a PR to adapt the rest as well.
Suryo has quit [Remote host closed the connection]
< favre49> zoq: No issues, I'll make all the changes and create a PR.
favre49 has quit [Remote host closed the connection]
< jenkins-mlpack2> Project docker mlpack nightly build build #367: STILL UNSTABLE in 3 hr 29 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/367/
KimSangYeon-DGU has quit [Remote host closed the connection]
KimSangYeon-DGU has joined #mlpack
< zoq> favre49: Okay, great, thanks!
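(For reference, a minimal sketch of what the missing IsTerminal() could look like for the Pendulum environment. Pendulum has no natural terminal state, so the only termination condition would be an optional step limit, mirroring mlpack's other environments; the maxSteps/stepsPerformed member names below are assumptions borrowed from those environments and may not match the eventual PR.)

    // Hypothetical addition to the Pendulum environment class.
    bool IsTerminal(const State& /* state */) const
    {
      // Terminate only if a step limit is set and has been reached.
      if (maxSteps != 0 && stepsPerformed >= maxSteps)
      {
        Log::Info << "Episode terminated due to the maximum number of steps "
            << "being taken." << std::endl;
        return true;
      }
      return false;
    }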
sumedhghaisas has joined #mlpack
< KimSangYeon-DGU> sumedhghaisas: Hi Sumedh, I'm ready :)
< sumedhghaisas> KimSangYeon-DGU: Hi SangYeon
< sumedhghaisas> How's it going
< sumedhghaisas> ?
< KimSangYeon-DGU> I've finished implementing GMM in Python and wrote the QGMM code in Python using the equation from my final proposal, which modified the original paper's equation.
< KimSangYeon-DGU> But it diverged rather than converging. So, after checking it, I plan to try NLL + lambda * approximate constraint.
< KimSangYeon-DGU> Hmm.. the paper's equation doesn't work as you mentioned
< sumedhghaisas> I suspected so as well. So the QGMM code that you implemented wasn't converging at all, right?
< KimSangYeon-DGU> Yeah...
< sumedhghaisas> okay, so that's out of the way then.
< sumedhghaisas> Have you posted the implementation on Github?
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> Wait a moment.
< KimSangYeon-DGU> The GMM works correctly, but the QGMM doesn't work
< KimSangYeon-DGU> The trained parameters diverge
< sumedhghaisas> Cool. Again, just to clarify: does the QGMM not converge at all, or does it converge to incorrect values?
< KimSangYeon-DGU> It doesn't converge
< sumedhghaisas> that's bad
< sumedhghaisas> let's see if NLL + lambda * approximate constraint does any better
< KimSangYeon-DGU> Hmm.. I plan to double-check the equation and the code
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I'll try it
< sumedhghaisas> I was giving it some more thought; I think the NLL + lambda * constraint formulation is very weak, so it would be very sensitive to initialization
< sumedhghaisas> just something to keep in mind
< KimSangYeon-DGU> Yeah, got it
< sumedhghaisas> if it doesn't work, try different initial clusters
< sumedhghaisas> Couldn't find a better formulation yet, sorry :(
< KimSangYeon-DGU> I'll do my best to find a better one
< KimSangYeon-DGU> thanks for your advice
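(For context, the "NLL + lambda * approximate constraint" idea above is the standard constrained-optimization trick: writing the QGMM parameters as theta and the approximate normalization constraint as c(theta) = 0, the objective becomes L(theta) = NLL(theta) + lambda * c(theta), or lambda * c(theta)^2 in the penalty-method variant. The exact form of c(theta) is whatever approximation of the QGMM normalization condition ends up being used; this is only a sketch of the general idea, not the formulation from the proposal.)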
< sumedhghaisas> Except for that do you have any other questions? :)
< sumedhghaisas> Did you find time to read up on the Lagrangian?
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I read some references
< sumedhghaisas> great... You will enjoy the next task then.
< KimSangYeon-DGU> Yeah, thanks
< sumedhghaisas> I have never trained with NLL + lambda * approximate constraint before, so this is definitely new for me as well
< sumedhghaisas> Great. If you have any other questions, ping me. You can also set up a similar meeting anytime during the week if you feel like it.
< KimSangYeon-DGU> Yeah, I sent an invitation for the Friday meeting :)
< KimSangYeon-DGU> Have a nice day!
< sumedhghaisas> very short meeting indeed. I hope you don't find any bugs in your implementation.
< KimSangYeon-DGU> Yeah, I'll keep that in mind.
< sumedhghaisas> I think their formulation has more problems than I thought
< sumedhghaisas> let's see how ours does then
< sumedhghaisas> okay. Have a great day.
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> sumedhghaisas: Oh sorry
< KimSangYeon-DGU> sumedhghaisas: Can you check the python implementation?
< sumedhghaisas> Yes surely. I am going to do that. :)
< KimSangYeon-DGU> Ahh, thanks. Actually, I wanted to tell you some great results today...
< KimSangYeon-DGU> but the research didn't go smoothly :(....
< KimSangYeon-DGU> I plan to work overnight.
< KimSangYeon-DGU> I'm really sorry for the short meeting...
< sumedhghaisas> ohh don't worry. Research doesn't always guarantee good results.
< sumedhghaisas> Take it slowly.
< KimSangYeon-DGU> Thanks, Sumedh
< KimSangYeon-DGU> I'll also check if there are any bugs in my implementation
abernauer has joined #mlpack
< abernauer> zoq: Any tips on resolving generated makefile errors? Working on the mlpack bindings to R this summer.
abernauer has quit [Remote host closed the connection]
< KimSangYeon-DGU> sumedhghaisas: When I tried setting the phis to zero, the parameters converged and showed quite correct results. I'll dig into it for some hours.
< sumedhghaisas> KimSangYeon-DGU: hmm
< sumedhghaisas> interesting
< sumedhghaisas> so for phi zero, cos(phi) will be 1
< KimSangYeon-DGU> ahh, sorry cos(phis) = 0
< sumedhghaisas> so both clusters will be supporting each other
< sumedhghaisas> cos(phi) = 0 ahh yes
< sumedhghaisas> but that's just the GMM case
< KimSangYeon-DGU> Yeah
< sumedhghaisas> that's expected
< KimSangYeon-DGU> Just the GMM case
< sumedhghaisas> that means your implementation is correct :)
< sumedhghaisas> Actually we should have thought about this as a test in the first place
< sumedhghaisas> haha
< KimSangYeon-DGU> Ah~ Agreed :)
< sumedhghaisas> actually could you also try convergence for 180?
< sumedhghaisas> that should cover the testing
< KimSangYeon-DGU> I'll try it
< sumedhghaisas> ohh wait sorry
< sumedhghaisas> what am I saying
< sumedhghaisas> cos 180 is -1
< sumedhghaisas> sorry for that
< sumedhghaisas> I meant 270
< KimSangYeon-DGU> Ah, yeah
< sumedhghaisas> then we can move onto NLL optimization
< KimSangYeon-DGU> Yes
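(For context, the model being tested appears to be an interference-style mixture of the form P(x) = sum_i alpha_i^2 G_i(x) + 2 * sum_{i<j} alpha_i * alpha_j * sqrt(G_i(x) * G_j(x)) * cos(phi_i - phi_j), where the G_i are Gaussian components. Choosing the phases so that cos(phi_i - phi_j) = 0 makes the cross terms vanish, reducing the model to an ordinary GMM; convergence in that setting is therefore a sanity check on the implementation rather than a test of the interference terms themselves. This is a reconstruction from the conversation, not the exact equation from the proposal or the paper.)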
< KimSangYeon-DGU> I'll reconnect after 1 hour
KimSangYeon-DGU has quit [Remote host closed the connection]
favre49 has joined #mlpack
< favre49> zoq: I have a question about the NEAT algorithm in the paper. It just occurred to me, why do we need innovation IDs?
< favre49> The paper says it provides historical information about the genome, but I'm not sure when we use this information. Why would we match genes by their history instead of their structure (i.e. source and target) during crossover?
< favre49> Moreover, the paper says that if the same mutation occurs in different genomes, it is given the same global innovation ID for that generation. The distance metric given in the paper uses "matching" genes and disjoint genes classified by their ID, but that would mean genes with similar structure but different genetic history could be speciated into different groups (since the same connection would have different IDs)
< favre49> wouldn't this mean that speciation would not be dividing by topologies anymore?
< favre49> When I stopped clearing the mutation buffer (meaning equivalent connections would have the same ID across generations), I gained a significant increase in speed, and so far I can't tell much of a difference in performance
< favre49> The speciation was taking the most time, as far as I can tell, since the innovation IDs would blow up over generations.
Suryo has joined #mlpack
< Suryo> Hi guys! I wrote an article based on my contributions and experience with ensmallen. Here's the link to it: https://medium.com/@suryodaybasak/unit-testing-the-cogwheels-driving-open-source-machine-learning-libraries-a9c971fe2ae4
< Suryo> Any thoughts or suggestions are welcome and greatly appreciated! :)
Suryo has quit [Client Quit]
< favre49> It seems that the Python implementation does not use innovation IDs at all, and I think SharpNEAT uses them solely for the sake of k-means speciation.
KimSangYeon-DGU has joined #mlpack
favre49 has quit [Remote host closed the connection]
vivekp has joined #mlpack
gmanlan has joined #mlpack
gmanlan has quit [Remote host closed the connection]
gmanlan has joined #mlpack
gmanlan has quit [Remote host closed the connection]
gmanlan has joined #mlpack
gmanlan has quit [Remote host closed the connection]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 272 seconds]
< zoq> favre49: Just for identification purposes; in fact, in a previous attempt we used the innovation ID for sorting the genes/connections. But you are right, there is not really a need.
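(For readers unfamiliar with the detail being discussed: in the original NEAT paper (Stanley and Miikkulainen, 2002), the compatibility distance used for speciation is delta = c1 * E / N + c2 * D / N + c3 * Wbar, where E and D are the numbers of excess and disjoint genes classified by innovation ID, Wbar is the average weight difference of matching genes, and N is the size of the larger genome. favre49's alternative is to match genes by structure instead of history; a minimal sketch of the two matching keys, with hypothetical, illustrative-only type and field names:)

    // Hypothetical connection gene; field names are illustrative only.
    struct ConnectionGene
    {
      size_t innovationID;  // historical marking assigned when the gene first appeared
      size_t source;        // source neuron ID
      size_t target;        // target neuron ID
      double weight;
    };

    // Matching by history (NEAT paper): genes match iff they share an innovation ID.
    bool MatchByHistory(const ConnectionGene& a, const ConnectionGene& b)
    {
      return a.innovationID == b.innovationID;
    }

    // Matching by structure (favre49's suggestion): genes match iff they connect
    // the same source/target pair, regardless of genetic history.
    bool MatchByStructure(const ConnectionGene& a, const ConnectionGene& b)
    {
      return a.source == b.source && a.target == b.target;
    }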
gmanlan has joined #mlpack
< gmanlan> hi there, does anybody have an example of how to use the hyperparameter tuner on Random Forest?
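(A rough sketch of how that could look with mlpack's hyper-parameter tuning module, following the documented pattern for other learners; whether RandomForest satisfies every requirement of the tuner has not been verified here, and the file names below are placeholders, so treat this as a starting point rather than a confirmed answer.)

    #include <mlpack/core.hpp>
    #include <mlpack/core/cv/metrics/accuracy.hpp>
    #include <mlpack/core/cv/simple_cv.hpp>
    #include <mlpack/core/hpt/hpt.hpp>
    #include <mlpack/methods/random_forest/random_forest.hpp>

    #include <tuple>
    #include <vector>

    using namespace mlpack;
    using namespace mlpack::cv;
    using namespace mlpack::hpt;
    using namespace mlpack::tree;

    int main()
    {
      // Load the dataset: one column per point, labels in [0, numClasses).
      arma::mat data;
      arma::Row<size_t> labels;
      data::Load("data.csv", data, true);
      data::Load("labels.csv", labels, true);
      const size_t numClasses = 3;

      // Hold out 20% of the data for validation and score candidates by accuracy.
      HyperParameterTuner<RandomForest<>, Accuracy, SimpleCV> tuner(
          0.2, data, labels, numClasses);

      // Candidate values for the constructor arguments that follow numClasses
      // (for RandomForest: numTrees, then minimumLeafSize).
      std::vector<size_t> numTrees{10, 50, 100};
      std::vector<size_t> minimumLeafSizes{1, 5, 10};

      size_t bestNumTrees, bestMinimumLeafSize;
      std::tie(bestNumTrees, bestMinimumLeafSize) =
          tuner.Optimize(numTrees, minimumLeafSizes);
    }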