ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
abernauer has joined #mlpack
< abernauer> rcurtin: Yeah, I am not positive what the problem is; I'm leaning towards sending an email to Dirk and James.
abernauer has quit [Remote host closed the connection]
abernauer has joined #mlpack
< abernauer> I looked at the stack again and printed programName in the r_util.cpp file; I got the following: $2 = 0x5555570754a8 "\220".
< abernauer> Looks like it's the memory address of the R function appended with the value of the variable name from the file cli.cpp.
abernauer has left #mlpack []
xiaohong has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> robertohueso/mlpack#58 (pca_tree - 2dbf84a : Roberto Hueso Gomez): The build was fixed.
travis-ci has left #mlpack []
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
xiaohong has joined #mlpack
favre49 has joined #mlpack
< favre49> rcurtin zoq Is there a way to use the advanced constructor to convert a vector of arma::mat into a cube? I seem to remember someone saying that at some point, though I could be mistaken
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< rcurtin> favre49: you could go from a cube to a vector of arma::mat, but the thing is that when you have a vector of arma::mat, the memory of each matrix is in a different place
< rcurtin> so it isn't possible to go from vector<mat> to cube, since we don't have one contiguous memory pointer we can give to the cube advanced constructor that represents all the memory of each of the matrices :(
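A minimal sketch of the copy-based workaround implied here, assuming all matrices in the vector share the same dimensions (the sizes below are made up for illustration):

    #include <armadillo>
    #include <vector>

    int main()
    {
      // Three matrices with identical dimensions; each owns its own buffer.
      std::vector<arma::mat> mats(3, arma::mat(4, 5, arma::fill::randu));

      // No single contiguous buffer spans all of them, so the advanced cube
      // constructor cannot alias the data; copy each matrix into a slice.
      arma::cube c(mats[0].n_rows, mats[0].n_cols, mats.size());
      for (size_t i = 0; i < mats.size(); ++i)
        c.slice(i) = mats[i];

      return 0;
    }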
favre49 has joined #mlpack
< favre49> rcurtin Oh that's unfortunate, I think I'll have to just use a for loop to assign them then. I thought vectors stored memory contiguously though?
jeffin143 has joined #mlpack
< jeffin143> https://pastebin.com/yN2htzmV : can anyone please help me with this code?
< jeffin143> I have created a small neural network, but it throws an error while training:
< jeffin143> error: Mat::operator(): index out of bounds; terminate called after throwing an instance of 'std::logic_error'; what(): Mat::operator(): index out of bounds
< jeffin143> did I miss something ?
< rcurtin> favre49: do you mean std::vector<>? it might store memory contiguously (depends on the implementation), but each individual mat will allocate its own memory
< rcurtin> actually I guess you have to mean std::vector<> since an arma::vec can't hold arma::mat inside of it :)
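A small sketch of the distinction being drawn here, using matrices large enough that Armadillo heap-allocates their buffers (very small matrices may use in-object storage):

    #include <armadillo>
    #include <iostream>
    #include <vector>

    int main()
    {
      std::vector<arma::mat> mats(2, arma::mat(100, 100, arma::fill::zeros));

      // The arma::mat objects themselves sit contiguously in the vector...
      std::cout << &mats[0] << " " << &mats[1] << std::endl;

      // ...but each matrix's element buffer is a separate allocation, so no
      // single pointer covers all of the elements.
      std::cout << mats[0].memptr() << " " << mats[1].memptr() << std::endl;

      return 0;
    }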
< rcurtin> I am too tired maybe, need to go to bed now ...
jeffin143 has quit [Ping timeout: 260 seconds]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< jenkins-mlpack2> Project docker mlpack nightly build build #418: STILL UNSTABLE in 3 hr 29 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/418/
< lozhnikov> jeffin143: I think the shape of the linear layers is incorrect.
< lozhnikov> model.Add<Linear<> >(x.n_rows,200);
< lozhnikov> The first argument should be the number of columns rather than the number of rows.
< lozhnikov> jeffin143: Looks like you don't initialize the features and the labels.
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< favre49> rcurtin Yup, sorry I wasn't clear
< favre49> By "allocate its own memory", do you mean the mat would exist as a pointer to another memory location?
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
KimSangYeon-DGU has joined #mlpack
transwert has joined #mlpack
ImQ009 has joined #mlpack
transwert is now known as Transwert
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
Transwert has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
KimSangYeon-DGU has quit [Remote host closed the connection]
jeffin143 has joined #mlpack
< jeffin143> https://pastebin.com/TmpGnZcT : lozhnikov
< jeffin143> If you run the above code it will throw an error
< jeffin143> but if you uncomment lines 123-124, the code works normally; I am not sure why this happens, since both matrices are the same size
< jeffin143> and thus the network is not the issue; it is the matrix that is causing the issue
< jeffin143> also, in mlpack rows are the features and columns are the data points, and hence the number of rows would be the first argument of the first layer
< jeffin143> also, for the above code I have randomly initialized the matrix just to check; the actual logic is not this
< lozhnikov> jeffin143: Again, you passed an incorrect argument to model.Add<Linear<> >()
< jeffin143> but if the arguments were incorrect, then it should not have run for trainX and trainY?
< lozhnikov> that's why the code throws the error. If you uncomment lines 123-124, then the features matrix becomes square.
< lozhnikov> trainX is a square matrix.
< jeffin143> x is also a square matrix
< jeffin143> token.size() and tokencount are both 5
Transwert has joined #mlpack
< lozhnikov> jeffin143: You didn't initialize the labels correctly.
< lozhnikov> Open ann/loss_functions/negative_log_likelihood.hpp
< lozhnikov> The documentation states: "The layer also expects a class index, in the range between 1 and the number of classes, as target when calling the Forward function."
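A minimal sketch of what that documentation implies, with made-up layer sizes and three classes: the targets are class indices in [1, numClasses], one per data point (column), rather than raw values or one-hot vectors.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/loss_functions/negative_log_likelihood.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Five data points with ten features each; mlpack stores one point per
      // column.
      arma::mat trainX(10, 5, arma::fill::randu);

      // One class index per point, in the range [1, 3] for three classes.
      arma::mat trainY = { { 1, 3, 2, 1, 3 } };

      FFN<NegativeLogLikelihood<> > model;
      model.Add<Linear<> >(trainX.n_rows, 200);
      model.Add<SigmoidLayer<> >();
      model.Add<Linear<> >(200, 3);
      model.Add<LogSoftMax<> >();

      model.Train(trainX, trainY);

      return 0;
    }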
< jeffin143> Thank you so very much
< jeffin143> The error was thrown since it was trying to access target(i), which is not there, and hence that was the issue; I was confused since I thought the index-out-of-bounds error was about the input matrix
Transwert has quit [Remote host closed the connection]
< jeffin143> Thanks once again
jeffin143 has quit [Ping timeout: 260 seconds]
KimSangYeon-DGU has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> here is the initial class implementation
< jeffin143> it needs a lot of refactoring, and that is why I didn't make a PR for it
< jeffin143> I will once again go through some papers to check the correctness of the implementation and other things :)
< jeffin143> zoq, rcurtin: do we have something like categorical cross entropy for categorical data?
< jeffin143> like where the output vectors are one-hot encoded (OHE) vectors: [[0,0,1],[1,0,0],[1,0,0],[0,0,1],[0,1,0]]
< jeffin143> or anything similar to categorical cross entropy for categorical data
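One possible workaround, sketched under the assumption that the targets arrive one-hot encoded: convert each one-hot column into the 1-based class index that NegativeLogLikelihood expects.

    #include <armadillo>

    int main()
    {
      // One-hot targets, one point per column (3 classes, 5 points).
      arma::mat oneHot = { { 0, 1, 1, 0, 0 },
                           { 0, 0, 0, 0, 1 },
                           { 1, 0, 0, 1, 0 } };

      // Take the row index of the 1 in each column and shift to 1-based.
      arma::rowvec labels = arma::conv_to<arma::rowvec>::from(
          arma::index_max(oneHot, 0)) + 1;

      labels.print("labels");  // 3 1 1 3 2

      return 0;
    }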
< lozhnikov> jeffin143: I sent you an email. Looks like Google was granted a patent for the word2vec algorithm.
< jeffin143> lozhnikov: I am not sure how this works, but if Google has a patent, then we can't use it?
< jeffin143> I mean, we can't rewrite it?
< lozhnikov> Well, I am not quite sure, but it's possible that we can't.
< rcurtin> it's possible some of the work we have done here might be worth submitting to this NeurIPS workshop
< rcurtin> the focus tends to be larger-scale systems, though, so individual algorithm implementations (and the tricks that went into them) might not fit as well
< rcurtin> actually lozhnikov and jeffin143, perhaps the string-to-matrix conversion framework might be interesting, though it doesn't fit the CFP *exactly*
< jeffin143> OK, fine. Then I will proceed with the CLI binding for string encoding.
< lozhnikov> rcurtin: I'll think it through.
< rcurtin> it's also possible that the "beyond first-order methods" workshop might be a place to submit some ensmallen work: https://neurips.cc/Conferences/2019/Schedule?showEvent=13156
< rcurtin> however, it's always tricky for these workshops, since most of what we do is implementation more so than novel algorithms or research, which are typically what these workshops focus on
< lozhnikov> rcurtin: Perhaps we could focus on the implementation differences (if any) and the difficulties.
< rcurtin> yeah, that could be interesting, it depends a lot on what the reviewers are looking for :)
< rcurtin> I have submitted too many papers that are a little "outside the box", in that they focus on implementation details more than novel theory or anything like that, and many times reviewers aren't sure what to do with them
< rcurtin> typically doesn't hurt to try though :)
jeffin143 has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 272 seconds]
ImQ009 has quit [Quit: Leaving]