ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Remote host closed the connection]
KimSangYeon-DGU has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> lozhnikov : the TF-IDF class introduces a new data member, IDFdict, to hold the IDF values; can we take care of that using policy-based design?
jeffin143 has quit [Ping timeout: 260 seconds]
favre49 has joined #mlpack
< favre49> I was wondering, how do you guys keep up with the newest advancements in machine learning, deep learning, etc.,
< favre49> considering the volume of new stuff published and the speed of advancement these days?
favre49 has quit [Remote host closed the connection]
KimSangYeon-DGU has quit [Remote host closed the connection]
KimSangYeon-DGU has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> lozhnikov : https://pastebin.com/e6qnjNPh : I tried implementing something; I'm not sure how good the policy-based design is.
< jeffin143> Please take a look and let me know.
< jeffin143> I am still learning, and I would like to seek some help initially, just to be sure.
< jeffin143> I have some doubts about line 74, and also about the implementation at line 92.
jeffin143 has quit [Ping timeout: 260 seconds]
< jenkins-mlpack2> Project docker mlpack nightly build build #376: STILL UNSTABLE in 3 hr 32 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/376/
xiaohong has joined #mlpack
< zoq> favre49: Personally, I focus on a few topics; that helps keep the number somewhat 'low', but I often still end up with a backlog of papers/posts (that I'd like to read) that doesn't fit on a single page. arxiv sanity is also really helpful.
< xiaohong> Hi, I have a dumb question about the NN model: when does the model compute the loss, given that we passed in the loss function?
< xiaohong> For example, in q_learning_impl.hpp.
< xiaohong> The Forward() function is the normal forward pass; I think for the backward pass, the loss is already computed and stored in target.
< zoq> xiaohong: You are right, the Backward() function returns the loss: double loss = model.Backward(...), but the Forward() function doesn't, since you don't have a target/label to calculate the loss with.
< zoq> xiaohong: The Q-learning code doesn't store or return the loss.
< zoq> jeffin143 lozhnikov: Not sure, but what about doing the update step inside the policy?
< zoq> xiaohong: Does that help?
< xiaohong> zoq: Yes, thank you.
< xiaohong> For the forward pass, the predicted output is stored in target; lines 156-167 compute the training target. Both use the same variable.
< xiaohong> How do we compute the loss if needed?
< xiaohong> I mean, just like in classification we pass the true label, `target`, into the backward function, but I don't see the predicted label.
< lozhnikov> jeffin143: Yes, it's ok. You just need another policy function in order to resize the output.
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
< lozhnikov> Regarding line #92: It's ok, but I think it's better to move the line to the previous loop.
< lozhnikov> The policy can contain some data, provided that you add an instantiated object of the policy to the base class.
< lozhnikov> zoq: It won't work in the case when the output is vector<vector<size_t>> since we do the encoding in only one pass. In that case we have to deal with the tokenizer and hence move the entire Encode() function to the policy.
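(For illustration, a minimal sketch of the policy-based design under discussion; the names StringEncoding and EncodingPolicyType are hypothetical, not mlpack's actual API.)

#include <string>
#include <utility>
#include <vector>

template<typename EncodingPolicyType>
class StringEncoding
{
 public:
  // The policy is stored by value, so it can carry its own state
  // (e.g. a dictionary or an IDF map).
  StringEncoding(EncodingPolicyType policy = EncodingPolicyType()) :
      policy(std::move(policy))
  { }

  template<typename OutputType, typename TokenizerType>
  void Encode(const std::vector<std::string>& input,
              OutputType& output,
              TokenizerType tokenizer)
  {
    // For outputs like vector<vector<size_t>>, the policy needs the
    // tokenizer itself, hence the entire Encode() step moves into the
    // policy.
    policy.Encode(input, output, tokenizer);
  }

 private:
  EncodingPolicyType policy;
};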
xiaohong has quit [Ping timeout: 260 seconds]
< zoq> xiaohong: So in the case of the Q-learning method, you don't have the true target/label, so the naming may be somewhat confusing here: in learningNetwork.Forward(sampledStates, target); target contains the prediction, while in learningNetwork.Backward(target, gradients); target should be the true target/label in the case of classification, but since we are not interested in the loss, we don't care.
< zoq> lozhnikov: I see, thanks for the clarification.
xiaohong has joined #mlpack
< xiaohong> zoq: Thank you for your clarification. I got it now.
< zoq> xiaohong: If you have the true label, you can pass it to the backward function and get a meaningful loss.
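(For illustration, a rough fragment of the pattern described above, based on the calls quoted from q_learning_impl.hpp; `learningNetwork` and `sampledStates` come from that file, and exact signatures may differ between mlpack versions.)

arma::mat target, gradients;

// Forward pass: no true label exists here, so no loss can be computed.
// Despite the name, `target` receives the network's prediction (Q-values).
learningNetwork.Forward(sampledStates, target);

// ... lines 156-167: rewrite `target` into the training target using the
// sampled rewards and the target network's Q-values ...

// Backward pass: the first argument plays the role of the true label;
// if a real label is passed, the returned value is a meaningful loss.
double loss = learningNetwork.Backward(target, gradients);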
jixiaohong has joined #mlpack
jixiaohong has quit [Client Quit]
jixiaohong has joined #mlpack
jixiaohong is now known as xiaohong_
< xiaohong> So the gradients are the derivative of the target, not of the loss?
< xiaohong> In this situation.
xiaohong_ has quit []
KimSangYeon-DGU has joined #mlpack
< zoq> xiaohong: Correct.
jeffin143 has joined #mlpack
< xiaohong> zoq: Thank you for the clarification.
< zoq> xiaohong: Happy to help :)
jixiaohong has joined #mlpack
jixiaohong has quit [Client Quit]
xiaohong_ has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong_ is now known as xiaohong
< jeffin143> zoq : you meant lines 57, 94?
< jeffin143> 57 to 94?
< zoq> jeffin143: 78 - 94
< jeffin143> the problem is that then I have to pass a different data member/variable depending on the type of the template parameter
< jeffin143> take case 1, for example: encode(dataset.size(), colsize)
< jeffin143> but for the other it would be encode(dataset.size(), mapping.size())
< zoq> jeffin143: Right, one solution would be to pass everything and only use the required parameters in the policy class.
< zoq> there might be a better solution in this case
< zoq> SFINAE and enable_if might be another option, if you'd like to support different functionality based on the type or the functions a class implements.
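(For illustration, a hedged sketch of the enable_if idea; the trait UsesColumnSize, the policy names, and OutputSize are all hypothetical, chosen to match the two encode() calls mentioned above.)

#include <cstddef>
#include <type_traits>

// Example policy tags; everything here is hypothetical.
struct DictionaryPolicy { static constexpr bool UsesColumnSize = true; };
struct BagOfWordsPolicy { static constexpr bool UsesColumnSize = false; };

// Overload selected for policies that size the output by the column size.
template<typename PolicyType>
typename std::enable_if<PolicyType::UsesColumnSize, std::size_t>::type
OutputSize(const std::size_t colSize, const std::size_t /* mappingSize */)
{
  return colSize;
}

// Overload selected for policies that size the output by the mapping size.
template<typename PolicyType>
typename std::enable_if<!PolicyType::UsesColumnSize, std::size_t>::type
OutputSize(const std::size_t /* colSize */, const std::size_t mappingSize)
{
  return mappingSize;
}

// Usage: encode(dataset.size(), OutputSize<PolicyType>(colsize, mapping.size()));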
xiaohong has quit [Remote host closed the connection]
< jeffin143> enable_if looks promising, I will try with it
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
favre49 has joined #mlpack
< favre49> zoq arxiv sanity looks good, thanks for the recommendation
< favre49> Also, I tested it on the OpenAI Gym, but NEAT is still performing poorly there, as well as on double pole balancing with no velocities.
< favre49> I've yet to figure out what's going wrong. If you get some time, please take a look.
favre49 has quit [Remote host closed the connection]
< zoq> favre49: Sure, I will take a look later today; does the other double pole balancing env look good?
ImQ009 has joined #mlpack
jeffin14316 has joined #mlpack
< jeffin14316> lozhnikov : suppose the base class has a data member, and the policy class needs to access that data member; how should it access it?
< lozhnikov> jeffin143: The base class has to pass the data to the policy.
< jeffin14316> So the policy should have the same data member, pointing to the one in the base class?
favre49 has joined #mlpack
< lozhnikov> jeffin143: If the base class uses an instantiated object of the policy, you can pass the data to the constructor of the policy.
< favre49> zoq Yup, but that doesn't mean as much, since NEAT can solve the Markovian version with no hidden nodes.
< lozhnikov> jeffin143: If the policy needs some data in a static function, you have to pass the data to this very function.
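(For illustration, a minimal sketch of the two options described above; StatefulPolicy, StatelessPolicy, and mappings are hypothetical names.)

#include <cstddef>
#include <string>
#include <unordered_map>

// Option 1: the base class holds an instantiated policy object and hands
// the data over through the policy's constructor.
class StatefulPolicy
{
 public:
  explicit StatefulPolicy(
      const std::unordered_map<std::string, std::size_t>& mappings) :
      mappings(mappings)
  { }

 private:
  // The policy's own copy of the base class's data.
  std::unordered_map<std::string, std::size_t> mappings;
};

// Option 2: the policy exposes only static functions, so the base class
// must pass the data to that very function on each call.
struct StatelessPolicy
{
  static std::size_t Encode(
      const std::unordered_map<std::string, std::size_t>& mappings,
      const std::string& token)
  {
    return mappings.at(token);
  }
};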
< jeffin14316> lozhnikov : Then, that means the base class data member wouldn't hold any data...
< jeffin14316> Ok
< lozhnikov> jeffin14316: "Data member wouldn't hold any data." I am not sure that I understand you correctly; could you elaborate a bit?
< jeffin14316> Sorry, I got it cleared up. Also, I will make the change in PR #1814 and introduce the dictionary encoding policy in that PR; then in the other PR I will make the necessary changes and introduce the other policies.
< jeffin14316> Also, can we merge the PRs as we finish and go? I guess it would be easier for me this way.
< favre49> For reference, GoNEAT runs the non-Markovian version for 100000 steps in its tests, and SharpNEAT (whose fitness function I used, since it was an improvement over Gruau's) can get fitnesses of 5k-10k according to an archived email (I'm unable to run its tests myself).
favre49 has quit [Remote host closed the connection]
< lozhnikov> jeffin14316: Do you mean to merge #1814 as soon as you implement DictionaryEncodingPolicy?
< jeffin14316> Yes; if you let me know of any issues with this approach, I'm happy to use the other one.
< lozhnikov> jeffin14316: Ok, no problem.
KimSangYeon-DGU has quit [Remote host closed the connection]
jeffin14316 has quit [Remote host closed the connection]
jeffin14310 has joined #mlpack
< jeffin14310> zoq : What is this error about?
< jeffin14310> : /usr/bin/ld: warning: libboost_program_options.so.1.69.0, needed by /usr/local/lib/libmlpack.so, not found (try using -rpath or -rpath-link)
< jeffin14310> : /usr/bin/ld: warning: libboost_unit_test_framework.so.1.69.0, needed by /usr/local/lib/libmlpack.so, not found (try using -rpath or -rpath-link)
jeffin14310 has quit [Remote host closed the connection]
jeffin14343 has joined #mlpack
< jeffin14343> lozhnikov : are you still there?
< jeffin14343> The TF-IDF class now has a new data member, which is an unordered_map, and it stores the IDF values.
< jeffin14343> Now, should I have it in the base class or the policy class?
< jeffin14343> Also, Word2vec would involve a much sketchier implementation, since it would use a model to predict the next word.
< jeffin14343> I agree there is a lot of code redundancy between the BoW and dictionary classes, but the other classes wouldn't be as similar; the implementation would change.
jeffin14343 has quit [Remote host closed the connection]
< lozhnikov> jeffin14343: Regarding TF-IDF: in the policy class.
< lozhnikov> jeffin14343: Regarding Word2vec: Let me think about it. If they differ significantly, then we'll implement two completely different classes.
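(For illustration, a minimal sketch of keeping the IDF values in the policy, as suggested; TfIdfEncodingPolicy and idfDict are hypothetical names based on the description above, not mlpack's actual API.)

#include <string>
#include <unordered_map>

class TfIdfEncodingPolicy
{
 public:
  // Record the IDF value for a token once it has been computed from the
  // corpus.
  void SetIdf(const std::string& token, const double idf)
  {
    idfDict[token] = idf;
  }

  // TF-IDF = term frequency * inverse document frequency.
  double TfIdf(const std::string& token, const double termFrequency) const
  {
    return termFrequency * idfDict.at(token);
  }

 private:
  // The data member under discussion: the IDF values live in the policy,
  // keeping the base encoder agnostic of TF-IDF specifics.
  std::unordered_map<std::string, double> idfDict;
};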
< lozhnikov> jeffin14310: Regarding the error: Looks like boost is not installed or its version has been changed. Try to install boost or rebuild the entire project.
< zoq> lozhnikov jeffin14310: Agreed, we should make sure the boost version mentioned in the issue is available.
ImQ009 has quit [Quit: Leaving]