ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#32 (mc_kde_error_bounds - f0aa09d : Roberto Hueso Gomez): The build is still failing.
< zoq>
favre49: At which stage do you get the error?
favre49 has joined #mlpack
< favre49>
zoq: It happens randomly, I couldn't tell what caused it
yoyo has quit [Ping timeout: 260 seconds]
< zoq>
favre49: Can you give it another try?
< zoq>
favre49: Also I think the results for the CartPole task are good.
< favre49>
Yup, but I still haven't gotten to the issue with the double pole balancing
< favre49>
Also, should I write a guide for NEAT? I don't think its usage is that self-explanatory
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< zoq>
favre49: Yeah, good idea.
< favre49>
Seems like the style linter doesn't recognize do-while loops; it treats them as two separate loops.
< zoq>
favre49: Looks like a while loop will do as well?
< favre49>
Ah, you're right, the first gene should always be from the bias node
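(A generic illustration of the point above, not the actual NEAT code: a do-while loop can be replaced by a plain while loop whenever its condition is guaranteed to hold before the first iteration -- here, "the first gene always comes from the bias node" plays that role.)

    #include <cstdio>

    int main()
    {
      int i = 0;

      // do-while form: the body runs once before the condition is checked.
      do
      {
        std::printf("gene %d\n", i);
        ++i;
      } while (i < 3);

      // Equivalent while form, valid only because the condition (i < 3)
      // already holds on entry -- analogous to the bias-node invariant.
      i = 0;
      while (i < 3)
      {
        std::printf("gene %d\n", i);
        ++i;
      }

      return 0;
    }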
favre49 has quit [Remote host closed the connection]
jeffin143 has joined #mlpack
< jeffin143>
Could anyone tell me how SFINAE stops a function?
< jeffin143>
I mean the std::enable_if statement. How does it block the statement?
< jeffin143>
In the case where the statement is true, it will allow the function to do the required thing, but what happens when it is false?
< jeffin143>
What is the error it throws? Does it even throw any error, or does it compile the code and just skip over the function without throwing an error?
< jeffin143>
I mean, does the project compile successfully, without any error?
< jeffin143>
sorry for the typo: I mean the std::enable_if statement, how does it block the statement --> I mean the std::enable_if statement, how does it block the function*
< jeffin143>
:)
< lozhnikov>
jeffin143: It's hard to explain; I think it's easier to show with an example.
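(A minimal, self-contained sketch of the mechanism, with made-up names rather than the actual mlpack code: when the enable_if condition is false, substitution fails, the overload is silently dropped from the candidate set, and the compiler only errors if no other overload matches the call.)

    #include <cstddef>
    #include <type_traits>
    #include <vector>

    struct Policy1 { };
    struct Policy2 { };

    // Hypothetical trait: true only for the policy we want to allow.
    template<typename T>
    struct AllowsVectorOutput : std::false_type { };

    template<>
    struct AllowsVectorOutput<Policy2> : std::true_type { };

    // This overload exists only when the trait is true; otherwise
    // substitution fails and the overload is removed without an error
    // (SFINAE: "substitution failure is not an error").
    template<typename PolicyType>
    typename std::enable_if<AllowsVectorOutput<PolicyType>::value, void>::type
    Encode(std::vector<std::vector<size_t>>& output, PolicyType /* policy */)
    {
      output.push_back(std::vector<size_t>());
    }

    int main()
    {
      std::vector<std::vector<size_t>> out;
      Encode(out, Policy2());    // OK: trait is true, overload participates.
      // Encode(out, Policy1()); // Error: no matching function -- the only
                                 // candidate was dropped by enable_if.
      return 0;
    }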
< jeffin143>
Is SFINAE really working in the above code?
xiaohong has quit [Remote host closed the connection]
< lozhnikov>
jeffin143: I think yes. The function will work with Policy2. But the compiler should throw errors with Policy1 because you commented out the general template.
k3nz0 has joined #mlpack
xiaohong has joined #mlpack
< jeffin143>
Ok, so strangely this was happening even when the class had a template argument!
< jeffin143>
and now, when I removed the template argument from the class, added it to the function, and made the struct specialization false,
< jeffin143>
it again throws an error, so should I take it that SFINAE is doing its job?
< lozhnikov>
jeffin143: Well, it's because you commented out the general template.
< jeffin143>
In our case there is no general template; we want to allow the Encode() function only for the DictionaryEncoding class, and the compiler should throw an error for any other class. Right?
< lozhnikov>
The compiler rejects the function at line 51 for Policy1. So if you uncomment lines 39--43, then the compiler will use this general template with Policy1.
< lozhnikov>
No, the general template is a function that accepts MatType. It should work for any policy.
< lozhnikov>
But we should reject a function that accepts vector<vector<size_t>> for each policy other than DictionaryEncodingPolicy.
< jeffin143>
Ok, then the issue here is: if the compiler rejects the other function, it will fall back to the template function, but the template function calls output.zeros() and that is not possible for vector<vector<size_t>>,
< jeffin143>
since it doesn't have any such method
< lozhnikov>
Yes, the general function is not supposed to accept vector<vector<size_t>>. That's why it throws errors.
< jeffin143>
Ahaa, so any idea how we should dodge that?
< lozhnikov>
There is nothing we should fix. The only policy that works with vector<vector<size_t>> as the output is EncodingPolicy. If you pass vector<...> and EncodingPolicy to the Encode() function the compiler should choose the specialization that accepts vector<...> since it's the most special case.
< jeffin143>
ok, so if a user passes something like Encode(vector<vector<size_t>>, .. , data::TfIdf()), then the compiler will throw an error
< lozhnikov>
If you pass arma::(sp)_mat and any policy to the Encode() function the compiler should choose the general Encode() function since it's the only function that matches the arguments.
< lozhnikov>
But if you try to pass vector<...> and a policy other than DictionaryEncodingPolicy then the compiler should throw some errors since the specialization that accepts vector<...> is blocked by enable_if and the general function doesn't accept vector<...>.
< lozhnikov>
jeffin143: Yes.
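(A hedged sketch of the overload layout lozhnikov is describing -- trait and policy names are illustrative, not the actual mlpack source. The general template handles matrix types; the enable_if-guarded overload for vector<vector<size_t>> participates only for the dictionary-encoding policy, and partial ordering picks it as the more specialized candidate when both match.)

    #include <cstddef>
    #include <type_traits>
    #include <vector>
    #include <armadillo>

    struct DictionaryEncodingPolicy { };
    struct OtherPolicy { };

    // Hypothetical trait: true only for policies that can fill
    // vector<vector<size_t>>.
    template<typename T>
    struct OutputsVector : std::false_type { };

    template<>
    struct OutputsVector<DictionaryEncodingPolicy> : std::true_type { };

    // General template: meant for arma::mat / arma::sp_mat. If it were
    // instantiated with vector<vector<size_t>>, the zeros() call below
    // would fail -- that is the confusing error discussed above.
    template<typename MatType, typename PolicyType>
    void Encode(MatType& output, PolicyType /* policy */)
    {
      output.zeros(2, 2);
    }

    // Restricted overload: SFINAE removes it for every policy other than
    // DictionaryEncodingPolicy; when it survives, it is the more
    // specialized match for vector<...> and wins overload resolution.
    template<typename PolicyType>
    typename std::enable_if<OutputsVector<PolicyType>::value, void>::type
    Encode(std::vector<std::vector<size_t>>& output, PolicyType /* policy */)
    {
      output.push_back(std::vector<size_t>());
    }

    int main()
    {
      arma::mat m;
      std::vector<std::vector<size_t>> v;

      Encode(m, OtherPolicy());              // General template.
      Encode(v, DictionaryEncodingPolicy()); // Restricted overload.
      // Encode(v, OtherPolicy());  // Error: the restricted overload is
                                    // dropped, and the general template
                                    // cannot call zeros() on a vector.
      return 0;
    }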
< jeffin143>
Ok, I understood this because you walked me through it, but the error is very bad; I mean the user won't really understand what is happening or why the error came up.
< jeffin143>
Is it ok?
< jeffin143>
If you pass arma::(sp)_mat and any policy to the Encode() function the compiler should choose the general Encode() function since it's the only function that matches the arguments. -->> Is this only true for sp_mat, or would arma::mat also work, right?
< lozhnikov>
jeffin143: The user should look at the documentation and find out that the general function doesn't work with anything other than arma::mat or arma::sp_mat and the specialization doesn't work with anything other than DictionaryEncodingPolicy.
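(One possible way to soften that error -- an assumption on my part, not what the mlpack code actually does -- is a static_assert in the general template, so the user sees a readable message instead of a failed zeros() call. arma::is_arma_type and arma::is_arma_sparse_type are Armadillo's own type traits.)

    // Hedged sketch: a static_assert in the general Encode() template turns
    // the cryptic "no member named 'zeros'" failure into a plain-English
    // message. Illustrative only, not the actual mlpack implementation.
    template<typename MatType, typename PolicyType>
    void Encode(MatType& output, PolicyType /* policy */)
    {
      static_assert(arma::is_arma_type<MatType>::value ||
                    arma::is_arma_sparse_type<MatType>::value,
                    "Encode(): output must be arma::mat or arma::sp_mat; "
                    "vector<vector<size_t>> output is only supported by "
                    "DictionaryEncodingPolicy.");

      output.zeros(2, 2);
      // ...
    }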
< jeffin143>
ok
< lozhnikov>
>> Is this only true for sp_mat or arma::mat would also work ryt ? << No, actually we didn't restrict it. But generally mlpack works only with the armadillo matrices.
< jeffin143>
Thank you so much for the detailed explanation. I made the changes of removing the template from the class and adding it to the function, and committed them. Do take a look when you are free, and let me know if that works fine :)
< lozhnikov>
jeffin143: ok.
jeffin143 has quit [Ping timeout: 260 seconds]
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
< Toshal>
ShikharJ: I am here
KimSangYeon-DGU has quit [Remote host closed the connection]
< ShikharJ>
Toshal: Okay let's start.
< ShikharJ>
Toshal: Sorry for the delay. I was cooking; it's early morning here, and I'm not the best judge of time in the morning :(
< ShikharJ>
sakshamB: Are you there?
< ShikharJ>
Toshal: How's the work on visitors going? I assume you were planning on implementing that?
< Toshal>
ShikharJ: Yes
< Toshal>
Regarding the visitors, I am still digging into it.
< Toshal>
I have completed work on the Inception layer.
< Toshal>
I will make a PR about it today
< ShikharJ>
Toshal: Okay, so is that PR ready for review?
< ShikharJ>
I still see a WIP?
< Toshal>
Yes it's for FID
< Toshal>
For that, the Inception layer is required, which I am currently working on.
< Toshal>
For the visitor, I think I will need to add a trait
< ShikharJ>
Toshal: Will the layer be implemented in the same PR?
< Toshal>
to check whether we have a non-bias weight term
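(A rough sketch of one way such a trait could look -- the member name Bias() and the trait name are hypothetical, not mlpack's actual API: the classic C++11 detection idiom flips to true_type only when the layer type exposes the member in question.)

    #include <type_traits>
    #include <utility>

    // Hypothetical detection trait: HasBias<T>::value is true only when T
    // has a callable Bias() member. A visitor could branch on this to tell
    // whether a non-bias weight term exists.
    template<typename T, typename = void>
    struct HasBias : std::false_type { };

    template<typename T>
    struct HasBias<T, decltype(void(std::declval<T&>().Bias()))>
        : std::true_type { };

    // Example layer types (made up for illustration).
    struct LinearLayer { double& Bias(); };
    struct PoolingLayer { };

    static_assert(HasBias<LinearLayer>::value, "LinearLayer has Bias()");
    static_assert(!HasBias<PoolingLayer>::value, "PoolingLayer has no Bias()");

    int main() { return 0; }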
< Toshal>
ShikharJ: No, it will not.
< Toshal>
I am making a different PR for it. What do you think?
< ShikharJ>
How big do you reckon the code would be? FID is pretty small for now, so it might make sense to implement it in the same PR.
< sakshamB>
ShikharJ: yes I am here now
< Toshal>
Ah, it's not that big currently, but it may get big, because I have just implemented the basics of it.
< ShikharJ>
Toshal: Okay, feel free to go for another PR then. Also, try to speed things up; the longer we spend here, the less time we will have for implementing CGANs and LSGANs.
< ShikharJ>
sakshamB: How's the work on the padding layer going?
< Toshal>
ShikharJ: Yes I will speed up.
< sakshamB>
ShikharJ: regarding the padding layer, I will make the PR. However, it only does padding for arma::mat, so currently I make multiple `Forward` calls to pad a cube.
< ShikharJ>
sakshamB: It might make sense to have a separate routine for cubes, where you first set the size and then copy the smaller cube into the sub-cube. Try out the runtimes for this approach, and see if it's better?
< sakshamB>
ShikharJ: hmm not sure if there would be any improvement because the current `Pad` overload for cubes also calls the `Pad` overload for arma::mat
< ShikharJ>
sakshamB: I meant creating a new temporary cube.
< sakshamB>
ShikharJ: I am not sure I understand what you mean. I guess I can open the PR and then we can discuss there?
< ShikharJ>
sakshamB: Okay, let's discuss there.
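(A rough Armadillo sketch of the temporary-cube idea being discussed -- the function name and padding parameters are made up, and this is not the actual PR code: allocate the padded cube once and copy the input into the interior subcube, instead of one 2-D pad per slice.)

    #include <armadillo>

    // Hypothetical helper: zero-pads each slice of a cube by padH rows and
    // padW columns on every side, using one allocation and one subcube copy
    // instead of a per-slice 2-D Pad()/Forward() call.
    arma::cube PadCube(const arma::cube& input,
                       const size_t padW,
                       const size_t padH)
    {
      arma::cube output(input.n_rows + 2 * padH,
                        input.n_cols + 2 * padW,
                        input.n_slices,
                        arma::fill::zeros);

      // Copy the whole input into the interior in a single call.
      output.subcube(padH, padW, 0,
                     padH + input.n_rows - 1,
                     padW + input.n_cols - 1,
                     input.n_slices - 1) = input;

      return output;
    }

    int main()
    {
      arma::cube x(4, 4, 3, arma::fill::randu);
      arma::cube padded = PadCube(x, 1, 2); // 8 x 6 x 3 result.
      padded.print("padded:");
      return 0;
    }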
< ShikharJ>
sakshamB: Toshal: I would also hope you guys continue writing blog posts. That way I and others can stay abreast of your work.
< ShikharJ>
sakshamB: Also, I left some comments on MiniBatch PR, if you can address them, then we can merge.
< sakshamB>
ShikharJ: Okay, will do so. I have to leave for now. Have a good weekend :D
< ShikharJ>
sakshamB: Toshal: Okay, have a good one guys :)
ImQ009 has joined #mlpack
vivekp has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
abernauer has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143>
Do we have something sort of like ensemble methods?
< jeffin143>
I mean, can we combine algorithms in mlpack?
abernauer has quit [Remote host closed the connection]
jeffin143 has quit [Remote host closed the connection]
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#33 (mc_kde_error_bounds - e4d507d : Roberto Hueso Gomez): The build is still failing.