ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143>
lozhnikov: how should I consolidate my work? Do I have to make it something similar to the GSoC application?
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has quit [Ping timeout: 264 seconds]
jeffin143 has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 258 seconds]
xiaohong has joined #mlpack
jeffin143 has quit [Ping timeout: 264 seconds]
jeffin143 has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
favre49 has quit [Remote host closed the connection]
< xiaohong>
zoq, can you introduce many dependency when implementing something?
< xiaohong>
can we
< xiaohong>
Sorry, typo. Can we introduce a new dependency when implementing something new?
< xiaohong>
Since I find that the implementation of LunarLander in Gym uses Box2D, it is hard to figure out all those Box2D functions' implementations.
< xiaohong>
My idea was to use Box2D in C++ to implement the LunarLander environment. What do you think of it?
< zoq>
Personally, I don't like the idea of adding another dependency for one env. Do we really need a physics engine?
< zoq>
We don't have to replicate the exact env, the C++ implementation referenced in the PR is okay as well.
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 272 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< xiaohong>
zoq: Do you mean this C++ implementation?
< xiaohong>
But I cannot clearly define the agent's reward.
< zoq>
xiaohong: In the case of lunar lander, you can start with a positive initial reward; every time the model performs an action (e.g. activate the main engine, left/right), it will receive a small negative reward. Also, the model gets a negative reward for moving further away from the landing zone. For landing, you can add a positive reward as well, but that's not necessary.
< zoq>
xiaohong: Does this make sense?
< zoq>
xiaohong: The finalAnalysis method is a good starting point.
< xiaohong>
I am a little confused; the finalAnalysis method analyzes the final velocity of the agent.
< xiaohong>
By landing, do you mean the finalAnalysis method part?
KimSangYeon-DGU has joined #mlpack
< xiaohong>
If we follow the idea you mentioned, how do we define when the problem is solved?
sumedhghaisas has joined #mlpack
< sumedhghaisas>
KimSangYeon-DGU: Hey Kim
< KimSangYeon-DGU>
sumedhghaisas: Hey
< KimSangYeon-DGU>
I'm ready
< zoq>
xiaohong: That depends on the initial reward. So if we start with e.g. 1000, a good final result could be 400, so everything above 400 is solved. We have to run some simulations to figure out what the final result should be.
< sumedhghaisas>
I checked the document; good work.
< KimSangYeon-DGU>
Thanks
< zoq>
xiaohong: The velocity is just an indicator of how good we are, e.g. a velocity > 10 is way too fast.
< sumedhghaisas>
The interesting thing you mentioned was the intersection between the failed cases of QGMM and GMM.
< KimSangYeon-DGU>
Right, there is no intersection.
< KimSangYeon-DGU>
between QGMM with initial phi 0 and GMM
< sumedhghaisas>
And comparing augmented vs. normal with phi 0, augmented is better, right?
< KimSangYeon-DGU>
In the phi 90 case, there is an intersection.
< KimSangYeon-DGU>
Right
< sumedhghaisas>
Ahh I see, this is interesting because phi 90 is basically GMM,
< sumedhghaisas>
so this behaviour should be expected
< KimSangYeon-DGU>
Ahh, yes
< sumedhghaisas>
okay just something that came to my mind
< sumedhghaisas>
could you check, in all those 100 cases, whether the initial phi changes?
< sumedhghaisas>
from phi 0 I mean
< KimSangYeon-DGU>
Okay
< KimSangYeon-DGU>
phi_1 from 0 to 90, and phi_2 from 90 to 0, right?
< sumedhghaisas>
ummm sorry I didn't get that
< sumedhghaisas>
phi 0 is the difference right?
< KimSangYeon-DGU>
Ahh
< xiaohong>
zoq: Can you explain how the simulation determines that result?
< KimSangYeon-DGU>
You mean whether the initial difference changed, right?
< KimSangYeon-DGU>
Currently, I have run 0 and 90.
< sumedhghaisas>
yes I want to see if initial difference is changed in the process
< sumedhghaisas>
yeah
< KimSangYeon-DGU>
Ahh okay
< KimSangYeon-DGU>
I got it
< sumedhghaisas>
Great. So we now have empirical evidence of convergence against GMM.
< sumedhghaisas>
There are 2 problems with our method
< sumedhghaisas>
we need to investigate what is happening with the initial phi
< sumedhghaisas>
and how to control it
< sumedhghaisas>
and find out how to generalize this case for multiple clusters
< KimSangYeon-DGU>
Right,
< sumedhghaisas>
I suspect we will do worse than GMM in the multiple-cluster case
< xiaohong>
zoq: or can we discuss it in Github pull requests?
< KimSangYeon-DGU>
I'll try
< sumedhghaisas>
But now we should concentrate on preparing the final document
< KimSangYeon-DGU>
I agree
< sumedhghaisas>
We have a lot of results so far; let's put them briefly in one document.
< KimSangYeon-DGU>
Okay
< sumedhghaisas>
I suggest just concentrating on the 2-cluster case.
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
Okay, let's see what we have achieved so far:
< sumedhghaisas>
we tried the paper method
< sumedhghaisas>
didn't work
< KimSangYeon-DGU>
Right
< sumedhghaisas>
we came up with an objective function that can be optimized with gradient descent directly
< sumedhghaisas>
we checked the validity of that
< sumedhghaisas>
we found out the problems
< sumedhghaisas>
we checked the same objective function against a crazy dataset
< sumedhghaisas>
analyzed the results
< sumedhghaisas>
we compared it to GMM results empirically
< sumedhghaisas>
found out a lot of interesting points
< sumedhghaisas>
does that sound like our timeline in short?
< KimSangYeon-DGU>
Great!
< KimSangYeon-DGU>
In addition, actually, I tried to control phi
< sumedhghaisas>
ahh yes... that all comes under the validity of the objective function
< KimSangYeon-DGU>
However, I didn't find any improvement, so I couldn't write it
< KimSangYeon-DGU>
into the document.
< sumedhghaisas>
controlling different parameters
< sumedhghaisas>
ahh okay thats fine
< zoq>
xiaohong: You mean a velocity > 10? If you rotate the lander, e.g. by using the right action, and activate the booster non-stop, that will accelerate the lander into the ground.
< sumedhghaisas>
for now we can concentrate on the phi 0, phi 90 and phi 180 cases
< sumedhghaisas>
But we should mention what they mean and what happens in each one of them.
< sumedhghaisas>
I mean clusters getting closer, farther, etc.
< KimSangYeon-DGU>
Okay
< sumedhghaisas>
Also give a brief description of what you think QGMM is and what it could be used for.
< zoq>
xiaohong: To get a feeling of how the env works.
< sumedhghaisas>
KimSangYeon-DGU: Although the timeline looks short, 6 days are remaining.
< sumedhghaisas>
Try to complete as much as you can. :)
< KimSangYeon-DGU>
sumedhghaisas: Okay, then, would it be a good idea to concentrate on writing the final document for now?
< sumedhghaisas>
Yup.
< KimSangYeon-DGU>
Okay
< sumedhghaisas>
The final document would be important to see where we are and where we need to go from here
< sumedhghaisas>
what problems we are facing and what success we have so far
< sumedhghaisas>
we will take some time to analyze it and see what should be the next step
< KimSangYeon-DGU>
Ahh, really thanks for the information
< KimSangYeon-DGU>
That makes sense
< xiaohong>
zoq: Great, I will try it. It is definitely better than reading the implementation of LunarLander.
< sumedhghaisas>
No worries. Ping me anytime you have questions.
< KimSangYeon-DGU>
Thanks :)
< sreenik[m]>
zoq: Say I have a declaration LayerTypes<> layer = new LeakyReLU<>(0.8);
< sreenik[m]>
If I want to access the value returned by layer.Alpha(), how do I do it? Directly calling layer.Alpha() is a compilation error.
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< zoq>
sreenik[m]: Right, we could instantiate a LeakyReLU object and cast it once we have set the parameters. We could also write another visitor to set the parameter. I guess we could also use the constructor of LeakyReLU to set the parameter at construction time.
< sreenik[m]>
I don't really need to modify or set the alpha parameter, just access the value. I had tried a simple type cast like auto alpha = ((LeakyReLU<>)layer).Alpha(); but that is a compilation error too. Is the cast wrong somewhere?
jeffin143 has joined #mlpack
< jeffin143>
lozhnikov: there are 3 files: preprocess_string_util.hpp (header file with the function declarations), preprocess_string_util_impl.cpp (source file with the implementation), and preprocess_string_main.cpp (main file calling the functions).
< jeffin143>
Now I am including the .hpp file in both preprocess_string_main.cpp and preprocess_string_util_impl.cpp.
< jeffin143>
Do I have to add #include "preprocess_string_util_impl.cpp" at the end of the header file?
< lozhnikov>
jeffin143: No, .cpp files are not supposed to be included.
< zoq>
sreenik[m]: What about: double alpha = reinterpret_cast<LeakyReLU<>*>(&layer)->Alpha();
< sreenik[m]>
Oh let me try. I guess I am pretty weak with these things. Thanks :)
< jeffin143>
Then how does the compiler know where the implementation is written?
< sreenik[m]>
zoq: Wow that did work!! Thanks :D
< lozhnikov>
The compiler translates each .cpp file into an object file and then links the object files into an executable.
jeffin143 has quit [Ping timeout: 250 seconds]
xiaohong has quit [Read error: Connection timed out]
jeffin143 has joined #mlpack
xiaohong has joined #mlpack
< jeffin143>
Yeah, but it throws an "undefined reference" error with the function name
< jeffin143>
For some reason
< jeffin143>
Do I have to add the .cpp file in CMake?
< lozhnikov>
jeffin143: Probably you didn't include the implementation in CMakeLists.txt.
< jeffin143>
If I include it in the CMake file, it throws an error: unknown binding type.
< jeffin143>
It is taking it as a binding type.
< lozhnikov>
jeffin143: Did you add the filename to the SOURCES variable?
< jeffin143>
Yes
< lozhnikov>
jeffin143: To be certain: does cmake throw errors if you write the following: https://pastebin.com/M7deCcmw ?
< jeffin143>
Umm, I didn't add the main.cpp to SOURCES.
< jeffin143>
Since none of the main.cpp files are added there; all of those are binding types.
< lozhnikov>
Yes, you are right; the main.cpp file is redundant.
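The resolution, then, is roughly this shape of CMakeLists.txt entry (a hypothetical sketch following the set(SOURCES ...) pattern discussed above; the exact file list depends on the actual directory):

```cmake
# Hypothetical sketch; append only the util header and implementation.
set(SOURCES
  ${SOURCES}
  preprocess_string_util.hpp
  preprocess_string_util_impl.cpp
  # preprocess_string_main.cpp is intentionally omitted: *_main.cpp files
  # are picked up by the binding machinery, not the SOURCES list.
)
```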
< jeffin143>
Umm, I will push a commit; can you please try to run make mlpack_test? I have tried for days to figure it out; I am sure I have made some silly mistake, but I couldn't find it.
< lozhnikov>
ok
< Toshal>
ShikharJ: zoq: Any feedback or suggestion from your side on my final project submission?
xiaohong has quit [Read error: Connection timed out]
< zoq>
Toshal: Will take a look later today.
xiaohong has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
jeffin143 has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
xiaohong has quit [Read error: Connection reset by peer]
xiaohong_ has joined #mlpack
xiaohong has joined #mlpack
xiaohong_ has quit [Ping timeout: 246 seconds]
< sakshamB>
ShikharJ: for my final work submission report, do you want me to write a blog or create a GitHub repo?
jeffin143 has quit [Read error: Connection reset by peer]
jeffin143 has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
ImQ009 has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
< sreenik[m]>
zoq: There's another thing I'm finding difficult to solve. With reference to the layerstring.hpp that you had created as an example, if I add more functions, it is resulting in a compilation error when called from a main() or from another cpp file. The error is more than a couple of pages long but what I understood from it is that it probably expects each and every case possible considering the LayerTypes<> class. I have