ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> error while loading shared libraries: libboost_program_options.so.1.69.0: cannot open shared object file: No such file or directory
< jeffin143> Does anyone know why this error occurs?
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has quit [Ping timeout: 260 seconds]
< lozhnikov> jeffin143: Perhaps boost isn't installed or has been updated. Try to install boost or rebuild the project from scratch.
< jenkins-mlpack2> Project docker mlpack nightly build build #417: STILL UNSTABLE in 3 hr 35 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/417/
ImQ009 has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> lozhnikov : I have tried building the project from scratch a lot of times, but it still throws the error. I have written the word2vec class; it just needs some testing
< jeffin143> but I cannot do that since I am unable to rectify this issue
< jeffin143> I will download mlpack again and then rerun it
< zoq> jeffin143: Can you make sure libboost_program_options.so.1.69.0 does exist?
< jeffin143> zoq : I guess the issue is that while running cmake, mlpack detects my Boost version as 1.69
< jeffin143> but when I run this in C++:
< jeffin143> std::cout << "Using Boost " << BOOST_VERSION / 100000 << "." << BOOST_VERSION / 100 % 1000 << "." << BOOST_VERSION % 100 << std::endl; // major.minor.patch
< jeffin143> it shows 1.65.1
< jeffin143> for some odd reason
< zoq> jeffin143: can you delete the build folder and rerun the cmake step?
< jeffin143> dpkg -s libboost-dev | grep 'Version'
< jeffin143> this gives my version as 1.65.1
< jeffin143> but I don't know why cmake reports it as 1.69
< zoq> jeffin143: Maybe you have both versions installed?
< zoq> jeffin143: I guess you could just remove the boost package and install it again?
< jeffin143> zoq : https://pastebin.com/dgedKCH9 : line 29
< zoq> jeffin143: Are the boost libs in /usr/lib/?
< jeffin143> there is no boost directory in /usr/lib
< jeffin143> it only has libboost_program_options.so.1.67.0
< jeffin143> I mean, it has some more libs
< zoq> jeffin143: Strange, let's remove those boost packages and install the latest version?
< jeffin143> yeah, most probably my Boost installation has been messed up badly
< jeffin143> I wasted my whole day compiling the 1.69 tarball and then getting these errors
< jeffin143> I will get back to you in a while
jeffin143 has quit [Ping timeout: 260 seconds]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> Do we have a softmax function?
jeffin143 has quit [Remote host closed the connection]
jeffin143 has joined #mlpack
< jeffin143> suppose I have the following model : https://ibb.co/rvXwtmV
< jeffin143> one input layer, one hidden layer and one output layer
< jeffin143> output has softmax as activation
< jeffin143> is the following model correct : https://pastebin.com/yYBUUrtc
jeffin14362 has joined #mlpack
< zoq> jeffin143: Yes, looks correct; I think you intentionally left out another activation function after the first linear layer?
jeffin143 has quit [Ping timeout: 260 seconds]
jeffin14362 has quit [Ping timeout: 260 seconds]
AndroUser has joined #mlpack
< AndroUser> Zoq: for some reason, the output layer does not have size 1*V but size 1*1; I mean the layer has only 1 output node
< AndroUser> Whereas I want V nodes
AndroUser is now known as jeffin143
< zoq> jeffin143: Hm, will check later, for now you could use the Sigmoid function instead.
< jeffin143> Yeah, I will go with sigmoid
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 245 seconds]
< jeffin143> Huh, finally resolved the issue
< jeffin143> Installing the rpm package fixed it
< jeffin143> Lozhnikov : I am almost done with the implementation. It is a raw implementation, not at all optimised and not very good; I will make an initial commit once I am done with the testing of the class
< jeffin143> I haven't gone through the exact code of Google's word2vec implementation, since I am not able to understand its code structure, so I went through some other implementations of it and found that they use softmax; but we have logsoftmax, so I will go with that.
Yashwants19 has joined #mlpack
< Yashwants19> Hi rcurtin: While working on Go-bindings I am facing some issues.
< Yashwants19> Go doesn't provide an easy way to load or save a CSV
< Yashwants19> It is very difficult to convert those CSVs into mat.Dense because Go reads a CSV as a string array.
< Yashwants19> So should we develop something for loading and saving CSVs in our desired format?
< Yashwants19> And when I was working with preprocess_split, it takes labels as mat.Dense and splits them accordingly, but the others don't take labels as mat.Dense; they take labels as mat.VecDense.
< Yashwants19> Can we convert all our Go-bindings functionality to use mat.Dense rather than mat.VecDense?
Yashwants19 has quit [Remote host closed the connection]
favre49 has joined #mlpack
< favre49> Another update - Seems like my laptop's problem isn't that trivial, and the repair technician here can't fix it. I either have to go to Delhi to see if a certified service center can repair it, or buy a new one
< favre49> Considering the poor build quality of my current laptop (I've heard a lot of complaints from another guy who has the same one), I may buy a new one; not sure a motherboard repair is worth the investment. Any suggestions?
favre49 has quit [Ping timeout: 260 seconds]
< Toshal> zoq: I need your help to decide the adjustment. There are two ways to make the adjustments in the visitors. One is to add one more overload of the `()` operator as mentioned in the link. And the other is to use SFINAE for overloading the `()` Operator. What do you think will be the best?
< Toshal> I personally prefer SFINAE, as in the future more such additions may be required.
< lozhnikov> jeffin143: What do you mean by logsoftmax? I am not sure it's correct, it depends on the loss function.
< lozhnikov> jeffin143: We are not obliged to write exactly the same as the original google implementation. But I think we must follow the papers. Did you read the papers I sent you? Does your implementation correspond to the ideas proposed in the papers?
< lozhnikov> jeffin143: As far as I know mlpack doesn't provide the log softmax activation function.
KimSangYeon-DGU has joined #mlpack
< rcurtin> zoq: I guess I could just do it differently for future tags. Probably nobody is going to go download old versions :)
< rcurtin> Yashwants19: hmmm, good find. I'm quite surprised Go has no CSV reading or loading functionality into matrices
< rcurtin> I'm not opposed to providing a converter from what Go's CSV utilities already provide but I would prefer to avoid writing and maintaining a Go CSV reader :)
< rcurtin> for the mat.Dense and mat.VecDense issue, what we have done in other languages is to make the function acceot either type (if possible) and then cast to the necessary type
< rcurtin> *accept (not acceot, sorry---awful phone keyboard :))
< rcurtin> so for instance a VecDense is just a Dense matrix with one dimension of size 1
< rcurtin> and also, a Dense matrix that has one dimension of size 1 is a VecDense
< rcurtin> this typically requires runtime checks on the matrix/vector dimensions and an exception to be thrown if something is wrong
Yashwants19 has joined #mlpack
< Yashwants19> Cool I will work on casting. For now I will skip the csv part :)
< Yashwants19> Thank You.
Yashwants19 has left #mlpack []
< jeffin143> lozhnikov : yes I went through the paper
< jeffin143> So there are two methods to implement it: CBOW and skip-gram
< jeffin143> The only difference is: suppose there are X and y; then for CBOW, X is the input and y is the output
< jeffin143> whereas for skip-gram, y is the input and X is the output
< rcurtin> Yashwants19: sounds good, thanks for checking it out :)
< jeffin143> Both follow a single-hidden-layer neural network
< jeffin143> One input, one output, and one hidden layer; the output layer has a softmax function; you calculate the error and then backpropagate, just like a simple neural network. The intermediate weights are the vectors (the embedding vectors)
< lozhnikov> jeffin143: Yes, I see. I didn't look through the layers.
< lozhnikov> jeffin143: Okay, I glanced over only one of the papers. I am going to glance over the other papers and over the original implementation. Then I'll look through your code.
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
< zoq> favre49: Maybe you can go with a similar setup as Ryan uses, Phone (ssh + screen/tmux + vim), not sure this is something for everyone but I like the idea :)
< zoq> favre49: Honestly, not familiar enough with notebooks, that I could recommend something, I do have an xps and a macbook and both are okay; don't really like the touchpad on the xps, but I do like the keyboard, on the macbook the touchpad is good but the keyboard needs some time to get used to it.
< zoq> Toshal: hmm, you have to adapt SFINAE once you add a new type, for the first idea we just have to add another overload. Do you think the second idea is faster?
< zoq> rcurtin: If the modification of old tags is simple I guess we could do it anyway.
< Toshal> zoq: No I don't know whether the second idea is faster.
< Toshal> I will stick to the overload then.
< zoq> Toshal: I think the optimizer will optimize out the extra overload, so we might end up with what we get using SFINAE.
< zoq> Toshal: If you already started, we can go with SFINAE.
KimSangYeon-DGU has quit [Remote host closed the connection]
abernauer has joined #mlpack
< abernauer> rcurtin: I ran R through gdb and loaded the shared object, set some breakpoints, and got a better stack trace.
jeffin143 has quit [Read error: Connection reset by peer]
abernauer has left #mlpack []
abernauer has joined #mlpack
< abernauer> rcurtin: Wanted to give you some more insight on that stack trace. I printed the variable name; after passing "Principal Components Analysis", it had the value "\220", which is the octal escape for that question-mark symbol in ASCII.
abernauer has quit [Remote host closed the connection]
ImQ009 has quit [Quit: Leaving]
< rcurtin> abernauer: sounds good, looks like you've identified the issue---it seems to me like strings are not being passed from R to C++ correctly
< rcurtin> (however I am not sure the details of why that is or how it can be fixed; my guess is maybe an error in how the types are being passed)
< rcurtin> zoq: yeah, I'll see how easy changing the tags is when I have some time