ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> lozhnikov: how should I consolidate my work? Do I have to make it something similar to the GSoC application?
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has quit [Ping timeout: 264 seconds]
jeffin143 has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 258 seconds]
xiaohong has joined #mlpack
jeffin143 has quit [Ping timeout: 264 seconds]
jeffin143 has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< jenkins-mlpack2> Project docker mlpack nightly build build #423: STILL UNSTABLE in 3 hr 36 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/423/
< lozhnikov> jeffin143: You should write a final post at http://mlpack.org/gsocblog/
favre49 has joined #mlpack
< favre49> To be certain, the final work product looks something like http://www.mlpack.org/gsocblog/Haritha2018Summary.html ?
< zoq> favre49: Right, that is one example.
< favre49> Okay thanks, I'll get started on that
favre49 has quit [Remote host closed the connection]
< xiaohong> zoq: can we introduce a new dependency when implementing something new?
< xiaohong> I found that the LunarLander implementation in gym uses Box2D, and it is hard to figure out all of those Box2D functions' implementations.
< xiaohong> My idea was to use Box2D from C++ to implement the LunarLander environment. What do you think?
< zoq> Personally, I don't like the idea of adding another dependency for a single env. Do we really need a physics engine?
< zoq> We don't have to replicate the exact env, the C++ implementation referenced in the PR is okay as well.
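For context, mlpack's existing RL environments (e.g. CartPole) follow a common shape; a minimal sketch of a LunarLander-style env under that assumed convention might look like the following. All names and the trivial physics here are illustrative stand-ins, not actual mlpack code:

```cpp
// Hypothetical LunarLander-style environment following the pattern of
// mlpack's existing RL environments; illustrative only, not mlpack API.
#include <mlpack/prereqs.hpp>

class LunarLander
{
 public:
  // Continuous state: e.g. position, velocity, angle, angular velocity.
  class State
  {
   public:
    State() : data(4, arma::fill::zeros) { }
    arma::colvec data;
  };

  // Discrete actions, mirroring the gym env.
  class Action
  {
   public:
    enum actions { noOp, fireLeft, fireMain, fireRight };
    Action::actions action;
    static const size_t size = 4;
  };

  // Advance the simulation one step and return the reward.
  double Sample(const State& state,
                const Action& /* action */,
                State& nextState)
  {
    nextState = state;  // A real env would apply the physics update here.
    return 0.0;         // ... and compute the reward (discussed below).
  }

  State InitialSample() { return State(); }

  // A real env would terminate on landing or crashing.
  bool IsTerminal(const State& /* state */) const { return false; }
};
```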
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 272 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< xiaohong> zoq: Do you mean this C++ implementation?
< xiaohong> But I cannot clearly define the reward for the agent.
< zoq> xiaohong: In the case of lunar lander, you can start with a positive initial reward; every time the model performs an action (e.g. activating the main engine or the left/right thrusters), it receives a small negative reward. Also, the model gets a negative reward for moving further away from the landing zone. For landing, you can add a positive reward as well, but that's not necessary.
< zoq> xiaohong: Does this make sense?
< zoq> xiaohong: The finalAnalysis method is a good starting point.
< xiaohong> I am a little confused; the finalAnalysis method analyzes the final velocity of the agent.
< xiaohong> Do you mean the landing corresponds to the finalAnalysis part?
KimSangYeon-DGU has joined #mlpack
< xiaohong> If we follow the idea you mentioned, how do we define when the problem is solved?
sumedhghaisas has joined #mlpack
< sumedhghaisas> KimSangYeon-DGU: Hey Kim
< KimSangYeon-DGU> sumedhghaisas: Hey
< KimSangYeon-DGU> I'm ready
< zoq> xiaohong: That depends on the initial reward: if we start with e.g. 1000, a good final result could be 400, so everything above 400 counts as solved. We have to run some simulations to figure out what the final result should be.
< sumedhghaisas> I checked the document, good work.
< KimSangYeon-DGU> Thanks
< zoq> xiaohong: The velocity is just an indicator of how good we are, e.g. a velocity > 10 is way too fast.
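Pulling zoq's points together, a rough sketch of such a per-step reward could look like this; every name and constant below is an illustrative assumption, not mlpack API:

```cpp
#include <cmath>

// Hypothetical state/action types for the sketch.
struct LanderState
{
  double x;         // Horizontal offset from the landing zone.
  double velocity;  // Current speed; zoq notes > 10 is way too fast.
  bool landed;      // Whether the lander has touched down.
};

enum class LanderAction { NoOp, FireLeft, FireMain, FireRight };

// Per-step reward following the scheme above: a small cost for every
// engine activation, a penalty for drifting away from the landing
// zone, and an optional landing bonus.
double StepReward(const LanderState& prev, const LanderState& next,
                  const LanderAction action)
{
  double reward = 0.0;
  if (action != LanderAction::NoOp)
    reward -= 0.3;   // Main engine or left/right thruster costs a little.
  if (std::abs(next.x) > std::abs(prev.x))
    reward -= 1.0;   // Moving further from the landing zone is penalized.
  if (next.landed)
    reward += 100.0; // Optional; zoq notes this isn't strictly necessary.
  return reward;
}
```

The episode would then start from a positive budget (zoq's example: 1000), and the "solved" threshold (e.g. everything above 400) would be tuned by running simulations.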
< sumedhghaisas> The interesting thing you mentioned was about the intersection between the failed cases of QGMM and GMM.
< KimSangYeon-DGU> Right, there is no intersection.
< KimSangYeon-DGU> between QGMM with initial phi 0 and GMM
< sumedhghaisas> And comparing augmented vs. normal with phi 0, augmented is better, right?
< KimSangYeon-DGU> In the phi 90 case, there is an intersection.
< KimSangYeon-DGU> Right
< sumedhghaisas> ahh I see, this is interesting because phi 90 is basically GMM
< sumedhghaisas> so this behaviour should be expected
< KimSangYeon-DGU> Ahh, yes
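For readers following the discussion: phi is the phase between the two mixture components. Under the usual two-component quantum mixture form (the notation below is an assumption of mine, not quoted from the project), the density carries an interference term:

```latex
% Sketch of a two-component quantum mixture density (assumed form):
P(x) \propto \lvert \alpha_1 \psi_1(x) + \alpha_2 \psi_2(x) \rvert^2
           = \alpha_1^2 \lvert \psi_1(x) \rvert^2
           + \alpha_2^2 \lvert \psi_2(x) \rvert^2
           + 2 \alpha_1 \alpha_2 \lvert \psi_1(x) \rvert
             \lvert \psi_2(x) \rvert \cos\phi
```

At phi = 90 degrees the cosine vanishes, the interference term drops out, and the density reduces to an ordinary two-component GMM, which matches the remark above.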
< sumedhghaisas> okay just something that came to my mind
< sumedhghaisas> could you check, in all those 100 cases, if the initial phi is changed?
< sumedhghaisas> from phi 0, I mean
< KimSangYeon-DGU> Okay
< KimSangYeon-DGU> phi_1 from 0 to 90, and phi_2 from 90 to 0, right?
< sumedhghaisas> ummm sorry I didn't get that
< sumedhghaisas> phi 0 is the difference right?
< KimSangYeon-DGU> Ahh
< xiaohong> zoq: Can you explain why running simulations can give us that result?
< KimSangYeon-DGU> Do you mean whether the initial difference changed, right?
< KimSangYeon-DGU> Currently, I have tested 0 and 90.
< sumedhghaisas> yes, I want to see if the initial difference is changed in the process
< sumedhghaisas> yeah
< KimSangYeon-DGU> Ahh okay
< KimSangYeon-DGU> I got it
< sumedhghaisas> great. So we now have empirical evidence of convergence against GMM
< sumedhghaisas> There are 2 problems with our method:
< sumedhghaisas> we need to investigate what is happening with the initial phi
< sumedhghaisas> and how to control it,
< sumedhghaisas> and find out how to generalize this case to multiple clusters
< KimSangYeon-DGU> Right,
< sumedhghaisas> I suspect we will do worse than GMM in the multiple-cluster case.
< xiaohong> zoq: or can we discuss it in the GitHub pull request?
< KimSangYeon-DGU> I'll try
< sumedhghaisas> But now we should concentrate on preparing the final document
< KimSangYeon-DGU> I agree
< sumedhghaisas> We have a lot of results so far; let's put them briefly in one document.
< KimSangYeon-DGU> Okay
< sumedhghaisas> I suggest just concentrating on the 2-cluster case.
< KimSangYeon-DGU> Yeah
< sumedhghaisas> okay, let's see what we have achieved so far:
< sumedhghaisas> we tried the paper's method
< sumedhghaisas> it didn't work
< KimSangYeon-DGU> Right
< sumedhghaisas> we came up with an objective function that can be optimized with gradient descent directly
< sumedhghaisas> we checked the validity of that
< sumedhghaisas> we found out the problems
< sumedhghaisas> we checked the same objective function against the crazy dataset
< sumedhghaisas> analyzed the results
< sumedhghaisas> we compared it to GMM results empirically
< sumedhghaisas> found out a lot of interesting points
< sumedhghaisas> does that sound like our timeline in short?
< KimSangYeon-DGU> Great!
< KimSangYeon-DGU> In addition, actually, I tried to control phi
< sumedhghaisas> ahh yes... that all comes under the validity of the objective function
< KimSangYeon-DGU> However, I didn't find any improvement, so I couldn't write it
< KimSangYeon-DGU> into the document.
< sumedhghaisas> controlling different parameters
< sumedhghaisas> ahh okay, that's fine
< zoq> xiaohong: You mean a velocity > 10? If you rotate the lander, e.g. by using the right action, and activate the booster non-stop, that will accelerate the lander into the ground.
< sumedhghaisas> for now we can concentrate on the phi 0, phi 90, and phi 180 cases
< sumedhghaisas> but we should mention what they mean and what happens in each one of them
< sumedhghaisas> I mean clusters getting closer, farther, etc.
< KimSangYeon-DGU> Okay
< sumedhghaisas> Also give a brief description of what you think QGMM is and what it could be used for.
< KimSangYeon-DGU> Yeah
< zoq> xiaohong: Maybe it makes sense to run some simulations first: https://github.com/openai/gym/blob/master/examples/agents/keyboard_agent.py
< zoq> xiaohong: To get a feeling of how the env works.
< sumedhghaisas> KimSangYeon-DGU: Although the timeline looks short, 6 days are remaining.
< sumedhghaisas> Try to complete as much as you can. :)
< KimSangYeon-DGU> sumedhghaisas: Okay, then, would it be a good idea to concentrate on writing the final document for now?
< sumedhghaisas> Yup.
< KimSangYeon-DGU> Okay
< sumedhghaisas> The final document would be important to see where we are and where we need to go from here
< sumedhghaisas> what problems we are facing and what success we have so far
< sumedhghaisas> we will take some time to analyze it and see what should be the next step
< KimSangYeon-DGU> Ahh, really thanks for the information
< KimSangYeon-DGU> That makes sense
< xiaohong> zoq: Great, I will try it. It is definitely better than reading the LunarLander implementation.
< sumedhghaisas> No worries. Ping me anytime you have questions.
< KimSangYeon-DGU> Thanks :)
< sreenik[m]> zoq: Say I have a declaration LayerTypes<> layer = new LeakyReLU<>(0.8);
< sreenik[m]> If I want to access the value returned by layer.Alpha(), how do I do it? Directly calling layer.Alpha() is a compilation error.
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< zoq> sreenik[m]: Right, we could instantiate a LeakyReLU object and cast it once we have set the parameters. We could also write another visitor to set the parameter. I guess we could also use the constructor of LeakyReLU to set the parameter at construction time.
< sreenik[m]> I don't really need to modify or set the alpha parameter, just need to access the value. I had tried a simple type cast like auto alpha = ((LeakyReLU<>)layer).Alpha(); but that is a compilation error too. Is the cast wrong somewhere?
jeffin143 has joined #mlpack
< jeffin143> lozhnikov: there are 3 files: preprocess_string_util.hpp (header file with the function declarations), preprocess_string_util_impl.cpp (source file with the implementation), and preprocess_string_main.cpp (main file that calls the functions).
< jeffin143> Now I am including the .hpp file in both preprocess_string_main.cpp and preprocess_string_util_impl.cpp.
< jeffin143> Do I have to add #include "preprocess_string_util_impl.cpp" at the end of the header file?
< lozhnikov> jeffin143: No, .cpp files are not supposed to be included.
< zoq> sreenik[m]: What about: double alpha = reinterpret_cast<LeakyReLU<>*>(&layer)->Alpha();
< sreenik[m]> Oh let me try. I guess I am pretty weak with these things. Thanks :)
< jeffin143> Then how does the compiler know where the implementation is written?
< sreenik[m]> zoq: Wow that did work!! Thanks :D
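For reference, a minimal self-contained sketch of both access patterns, assuming LayerTypes<> is mlpack's boost::variant over layer pointers (header paths as in mlpack 3.x):

```cpp
#include <iostream>

#include <mlpack/methods/ann/layer/layer_types.hpp>
#include <mlpack/methods/ann/layer/leaky_relu.hpp>

using namespace mlpack::ann;

int main()
{
  // sreenik's declaration: the variant holds a LeakyReLU<>* here.
  LayerTypes<> layer = new LeakyReLU<>(0.8);

  // boost::get<> extracts the stored pointer with a type check; this is
  // also why the plain (LeakyReLU<>)layer cast could not compile.
  const double alpha = boost::get<LeakyReLU<>*>(layer)->Alpha();

  // zoq's reinterpret_cast reads the held pointer straight out of the
  // variant's storage; it skips the type check and relies on layout.
  const double alpha2 = reinterpret_cast<LeakyReLU<>*>(&layer)->Alpha();

  std::cout << alpha << " " << alpha2 << std::endl;

  // Clean up the heap-allocated layer.
  delete boost::get<LeakyReLU<>*>(layer);
  return 0;
}
```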
< lozhnikov> The compiler translates each .cpp file into an object file and then links the object files into an executable.
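As a minimal illustration of what lozhnikov describes (the file contents are hypothetical stand-ins, not the actual binding code): the header carries only declarations, the .cpp is compiled to an object file, and the linker resolves calls across the objects, which is why the .cpp has to appear among CMake's sources rather than be #included:

```cpp
// preprocess_string_util.hpp -- declarations only; nothing is #included
// from any .cpp file here.
#ifndef PREPROCESS_STRING_UTIL_HPP
#define PREPROCESS_STRING_UTIL_HPP

#include <string>
#include <vector>

// Hypothetical utility, for illustration.
std::vector<std::string> Tokenize(const std::string& input);

#endif

// preprocess_string_util_impl.cpp -- compiled on its own into an object
// file. If this file is missing from the SOURCES list in CMakeLists.txt,
// the definition never reaches the linker and every call to Tokenize()
// fails with "undefined reference".
#include "preprocess_string_util.hpp"

#include <sstream>

std::vector<std::string> Tokenize(const std::string& input)
{
  std::vector<std::string> tokens;
  std::istringstream stream(input);
  std::string token;
  while (stream >> token)
    tokens.push_back(token);  // Split on whitespace.
  return tokens;
}
```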
jeffin143 has quit [Ping timeout: 250 seconds]
xiaohong has quit [Read error: Connection timed out]
jeffin143 has joined #mlpack
xiaohong has joined #mlpack
< jeffin143> Yeah, but it throws an "undefined reference" error with the function name,
< jeffin143> for some reason.
< jeffin143> Do I have to add the .cpp file in CMake?
< lozhnikov> jeffin143: You probably didn't include the implementation in CMakeLists.txt.
< jeffin143> If I include it in the CMake file, it throws an error: unknown binding type.
< jeffin143> It is taking it as a binding type.
< lozhnikov> jeffin143: Did you add the filename to the SOURCES variable?
< jeffin143> Yes
< lozhnikov> jeffin143: To be certain: did cmake throw errors when you wrote the following: https://pastebin.com/M7deCcmw ?
< jeffin143> Umm, I didn't add the main.cpp to SOURCES,
< jeffin143> since none of the main.cpp files were added; all of those are of the binding type.
< lozhnikov> Yes, you are right, the main.cpp file is redundant.
< jeffin143> Umm, I will push a commit; can you please try to run make mlpack_test? I have tried for days to figure this out; I am sure I have made some silly mistake, but I couldn't find it.
< lozhnikov> ok
< Toshal> ShikharJ: zoq: Any feedback or suggestions from your side on my final project submission?
xiaohong has quit [Read error: Connection timed out]
< zoq> Toshal: Will take a look later today.
xiaohong has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
jeffin143 has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
< lozhnikov> jeffin143: I pointed out the error. https://github.com/mlpack/mlpack/pull/1980#discussion_r315769743
xiaohong has quit [Read error: Connection reset by peer]
xiaohong_ has joined #mlpack
xiaohong has joined #mlpack
xiaohong_ has quit [Ping timeout: 246 seconds]
< sakshamB> ShikharJ: for my final work submission report, do you want me to write a blog post or create a GitHub repo?
jeffin143 has quit [Read error: Connection reset by peer]
jeffin143 has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
ImQ009 has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
< sreenik[m]> zoq: There's another thing I'm finding difficult to solve. With reference to the layerstring.hpp that you had created as an example, if I add more functions, it results in a compilation error when called from main() or from another .cpp file. The error is more than a couple of pages long, but what I understood from it is that it probably expects every possible case of the LayerTypes<> class. I have
< sreenik[m]> mentioned the code in a comment on layerstring.hpp: https://gist.github.com/zoq/595906a62690befce85e3935ccc84f9f
jeffin143 has quit [Ping timeout: 244 seconds]
favre49 has joined #mlpack
< favre49> zoq: I've shared a draft of the final report by mail; please check it out whenever you can.
favre49 has quit [Remote host closed the connection]
ImQ009 has quit [Quit: Leaving]
< jenkins-mlpack2> Project docker mlpack weekly build build #61: FAILURE in 4 days 22 hr: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/61/
< jenkins-mlpack2> Project docker mlpack monthly build build #11: STILL FAILING in 3 mo 7 days: http://ci.mlpack.org/job/docker%20mlpack%20monthly%20build/11/
< rcurtin> I'm cleaning up all the docker images on Jenkins, so expect failures for a little while...
< rcurtin> (I'll have to rebuild them all, possibly because I accidentally removed them ;))
< zoq> favre49: Great, will take a look later.
< zoq> sreenik[m]: Let me comment on the gist.
< jenkins-mlpack2> Project docker mlpack monthly build build #12: ABORTED in 14 min: http://ci.mlpack.org/job/docker%20mlpack%20monthly%20build/12/
< jenkins-mlpack2> * Ryan Curtin: Try to get more output for why it's taking so long.
< jenkins-mlpack2> * Marcus Edel: Update to XML version 2 (cppcheck).
KimSangYeon-DGU has joined #mlpack
lozhnikov has quit [Quit: ZNC 1.7.3 - https://znc.in]
lozhnikov has joined #mlpack