ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
k3nz0 has joined #mlpack
k3nz0 has quit [Ping timeout: 258 seconds]
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
Yashwants19 has joined #mlpack
< Yashwants19>
Hi rcurtin: Can we add association rule learning (Apriori or Eclat algorithms) to mlpack?
< Yashwants19>
Or CAR or MCAR algorithms
< Yashwants19>
..??
Yashwants19 has quit [Remote host closed the connection]
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
< sakshamB>
Toshal: let me know if the weight dimension visitor would work for you or not.
< Toshal>
sakshamB: A weight dimension visitor would be useful. But I would say that instead of a visitor only for getting the dimension, it would be great to have a visitor for getting and modifying the weights.
< Toshal>
It would not only save processing the separation of the weights from the biases, but it is also sustainable for the future; one may need only the weights for something. This is just an idea, so let me know your thoughts.
< sakshamB>
I do require the weight dimension visitors. For getting the weights you can just do layer->Parameters() and then take the submat to separate the bias. For modifying the weights use the WeightSetVisitor. Let me know what you think
chandramouli_r has quit [Remote host closed the connection]
sreenik[m] has quit [Remote host closed the connection]
Sergobot has quit [Remote host closed the connection]
aleixrocks[m] has quit [Remote host closed the connection]
chandramouli_r has joined #mlpack
yoyo has joined #mlpack
< yoyo>
how to install cmake
petris has quit [Ping timeout: 252 seconds]
petris has joined #mlpack
< KimSangYeon-DGU>
yoyo: If you use ubuntu, type the command 'sudo apt-get install cmake' on a terminal.
< yoyo>
I have done that
< yoyo>
some packages are giving an error while installing
< KimSangYeon-DGU>
What errors occur?
< KimSangYeon-DGU>
Have you run the command 'sudo apt-get update' before?
aleixrocks[m] has joined #mlpack
Sergobot has joined #mlpack
sreenik[m] has joined #mlpack
seawishnew has joined #mlpack
seawishnew has quit [Client Quit]
< yoyo>
$ wget https://www.mlpack.org/files/mlpack-3.1.1.tar.gz
$ tar -xvzpf mlpack-3.1.1.tar.gz
$ mkdir mlpack-3.1.1/build && cd mlpack-3.1.1/build
$ cmake ../
$ make -j4  # The -j is the number of cores you want to use for a build.
$ sudo make install
< yoyo>
i followed these steps to install mlpack
< yoyo>
after installing cmake
< yoyo>
make -j4 # The -j is the number of cores you want to use for a build.
< yoyo>
i run this command
< yoyo>
then it gives me this error:
< yoyo>
make: *** No targets specified and no makefile found. Stop.
< KimSangYeon-DGU>
You tried the command 'cmake ../' in the mlpack-3.1.1/build directory, right?
< KimSangYeon-DGU>
That error occurs when there is no Makefile in the current working directory.
< KimSangYeon-DGU>
You can generate the 'Makefile' using the command 'cmake ../' in mlpack-3.1.1/build directory.
< KimSangYeon-DGU>
Then, after generating it, try 'make -j4' again.
< KimSangYeon-DGU>
in build directory.
< yoyo>
yes
< yoyo>
okay
< yoyo>
ill try
< yoyo>
i typed 'cmake ../' first
< yoyo>
then i typed 'make -j4'
< yoyo>
make: *** No targets specified and no makefile found. Stop.
< yoyo>
i got this
< KimSangYeon-DGU>
Was a Makefile generated in your current directory after running 'cmake ../'?
< yoyo>
no
< yoyo>
it gave me something like this
< yoyo>
Not building Markdown bindings.
-- Found Python: /home/karthik/anaconda3/bin/python
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Configuring incomplete, errors occurred!
See also "/home/karthik/mlpack-3.1.1/build/CMakeFiles/CMakeOutput.log".
See also "/home/karthik/mlpack-3.1.1/build/CMakeFiles/CMakeError.log".
< KimSangYeon-DGU>
That's the error you need to fix
< yoyo>
how to solve then
< KimSangYeon-DGU>
Is the Doxygen error the only one?
< KimSangYeon-DGU>
If so, run 'sudo apt-get install doxygen'
petris has quit [Ping timeout: 250 seconds]
< yoyo>
okay
< yoyo>
it gave me this as well
< yoyo>
Unable to find the requested Boost libraries.
petris has joined #mlpack
< KimSangYeon-DGU>
Have you tried the command 'sudo apt-get install libmlpack-dev'?
< yoyo>
yes
< yoyo>
Unable to find the requested Boost libraries.
< yoyo>
Not building Markdown bindings.
< KimSangYeon-DGU>
I don't think Boost is installed properly
< yoyo>
what should i do then
< KimSangYeon-DGU>
Please install Boost
< yoyo>
Unable to locate package boost
< yoyo>
getting this
yoyo is now known as karthik
< karthik>
tell me a suggestion for boost
< zoq>
karthik: You are on ubuntu right?
< karthik>
yes
karthik is now known as Guest15575
< zoq>
What command did you try to install boost?
< Guest15575>
sudo apt-get install boost
< zoq>
I see, so on ubuntu it's: sudo apt-get install libboost-all-dev
< Guest15575>
sudo apt-get install libboost-all-dev i tried this as well
< zoq>
What did the command return?
< Guest15575>
it installed all the packages
< Guest15575>
and unpacked them
< Guest15575>
but after that i still couldn't locate the boost package
< zoq>
what does 'whereis boost' return?
< Guest15575>
boost: /usr/include/boost
< zoq>
okay
< zoq>
can you check if 'ls /usr/lib/x86_64-linux-gnu/' does list some boost libs?
< Guest15575>
yes
< Guest15575>
it does
< zoq>
okay, so I think the necessary boost packages are there
< zoq>
I guess you already unpacked mlpack into some directory?
< Guest15575>
what should i do then?
< zoq>
so, go to the uncompressed mlpack folder; there should be another directory called build?
< Guest15575>
okay
< Guest15575>
then
< Guest15575>
paste the boost p
< Guest15575>
okay
< zoq>
we will start from scratch, so let's remove the build folder and create a new one
k3nz0 has joined #mlpack
< Guest15575>
okay
< zoq>
once created go into the build folder and run 'cmake ..'
< Guest15575>
done
< zoq>
would be great if you could post the output of the command on pastebin or something like that, and post the link here
< zoq>
as mentioned in the output "No space left on device"
< zoq>
the build process does need some free space
< zoq>
how much free space do you have?
< karthik1972>
around 1gb
< zoq>
do you have any way to extend that?
< karthik1972>
yeah
< karthik1972>
just checking it out
KimSangYeon-DGU has quit [Remote host closed the connection]
< akhandait>
sreenik[m]: Hey! You there?
< karthik1972>
done with make command
< karthik1972>
zoq are you there?
< sakshamB>
Toshal: I was thinking that we add getter method Weight that returns the weight (without bias) and just doing layer->Weight() without using a visitor. Would this work for you?
karthik1972 has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 244 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
< sreenik[m]>
akhandait: Hey
< sreenik[m]>
I also wanted to let you know that I think I have found a solution to the problem regarding the dimensions that I had told you. It still gives an error but I think I am on the right track now. And regarding the batchnorm and selu modifications, I have done the batchnorm one but am thinking of how to test it. The selu modification is left but it is just an extra function overload I suppose
< sreenik[m]>
karthik1972: If you are new to mlpack you might face errors while compiling, as there are a lot of libraries to link against. If necessary, you can search for the error in the mlpack issues section on GitHub or ask here
< akhandait>
sreenik[m]: Oh, that's great.
< akhandait>
Can you share with me the source/paper which you implemented for the momentum parameter?
< akhandait>
Oh, it is just this I think: running_mean = running_mean * momentum + mean * (1 - momentum)
< sreenik[m]>
Yes exactly
< sreenik[m]>
Same for variance
< sreenik[m]>
Only comes into play while calculating the predictions. That's what zoq and I were discussing a couple of days back
< akhandait>
Hmm, let me think
< akhandait>
I think a simple test should suffice, in which you give some small input whose output you calculate manually and see if it matches. In this case, you would want the running mean and running variance to be a certain value after a forward pass with deterministic=False
< sreenik[m]>
Oh yes, should be fine. Thanks
< akhandait>
Anything else?
< sreenik[m]>
This is it for now. I hope the convolution part works, there is some minor mismatch that is causing a segmentation fault. With that we would have a working converter
< sreenik[m]>
Excited to finish it!
< akhandait>
Me too!
< sreenik[m]>
Oh and one more thing
< sreenik[m]>
I have not researched thoroughly but I could not find a way to create an onnx model in c++. I am asking this for the "mlpack to onnx" converter. What I have concluded is that we can use caffe if it is not possible with onnx and then convert it from caffe
< sreenik[m]>
But just to make sure that creating an onnx model is not possible in c++, could you also just check it once?
< akhandait>
Oh, I would try and find something if I can.
< sreenik[m]>
Okay. Thanks :)
< akhandait>
I had a thing I wanted to ask about the onnx_to_mlpack.cpp file. onnx seems to be breaking the linear layer into Matmul and Add
< akhandait>
But I think you are ignoring Add completely
< akhandait>
Did I miss something?
< sreenik[m]>
Yes I am ignoring Add and treating Matmul as Linear. What I can do otherwise is to find certain pairs of layers (like matmul and add) and then merge them to a single layer (say linear) but it would mean the same thing because I am not mapping an individual Add layer to anything
< sreenik[m]>
I don't think individual Matmul or Add layers will exist in a model. I have also ignored a lot many layers for which I could not find the purpose they serve. Maybe with more testing over time we will have to un-ignore a couple of layers
< akhandait>
Hmm, so while copying the weights into mlpack's linear layer, Are you assuming that the layer after the Matmul is going to be Add?
< sreenik[m]>
Apparently, yes
< sreenik[m]>
Should I consider a validation?
< akhandait>
I think you should check, matmul without add is used sometimes
< akhandait>
We have linearNoBias in mlpack as well
< sreenik[m]>
Ohh yes thanks for pointing out, that was something important I missed
< sreenik[m]>
akhandait: Hey, let me know if anything else needs attention. I am going for dinner, will be back in a while and see your texts then :)
< akhandait>
Will do
< akhandait>
sreenik[m]: We do have an IdentityLayer defined in base_layer.hpp. I think we should just map the Identity operator of onnx to this and not ignore it.
< akhandait>
It might seem pointless but this layer is generally needed in framework specific technicalities.
< akhandait>
So, say if we convert a model from tensorflow to onnx and then to mlpack, make some changes and then again ->onnx->tensorflow, then we will lose the Identity layer in onnx which might be needed in tensorflow
yoyo has joined #mlpack
yoyo is now known as karthik1972
< karthik1972>
after building the mlpack packages, how do i get started with it
< zoq>
karthik1972: You can run the executables, look into some tutorials, etc.
< karthik1972>
okay
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#28 (mc_kde_error_bounds - 8147e8c : Roberto Hueso Gomez): The build is still failing.
< zoq>
the interesting part is: typename std::enable_if<HasGradientCheck<T, arma::mat&(T::*)()>::value, size_t>::type
< zoq>
note the HasGradientCheck macro from the first "step"
< favre49>
This looks like exactly what I want, glad it's already been implemented to be this simple
< zoq>
The T is the type e.g. the class we like to inspect, arma::mat&(T::*)() is the form of the method in this case we like to check if it implements arma::mat& Gradient().
< zoq>
In case you like to check if a class implements e.g. int Gradient(size_t, arma::mat&), the form will look like: int(T::*)(size_t, arma::mat&)
< zoq>
does this make sense?
< favre49>
Yup, I get it
< favre49>
I'll get back to you if I have any issues. Thanks for the help!
< zoq>
Okay, great.
favre49 has quit [Remote host closed the connection]
karthik1972 has quit [Ping timeout: 260 seconds]
< sreenik[m]>
akhandait: Oh okay will do that as well. I had decided to ignore it as I had converted a keras model to onnx (without an identity layer) but onnx decided to add an identity layer at the end on its own
< akhandait>
Oh, okay. Still safer to add it. :)
< sreenik[m]>
Yeah sure
< akhandait>
I checked the dimension problem you mentioned,
< akhandait>
The mnist_mode.onnx has a kind of multi-column structure. The weights and biases are not in order; you were right.
< akhandait>
Could you pick some simpler model for testing?
< akhandait>
Create it yourself perhaps?
< akhandait>
I would suggest using some tool like Netron to visualize the onnx models.
jeffin14327 has joined #mlpack
< jeffin14327>
Sorry for not using my exact handle, since I forgot to sign out from my office system
< jeffin14327>
Next, favre49, thanks for asking about SFINAE
< jeffin14327>
Also thanks zoq for answering; I had the same question, will go through it and get back to you
< jeffin14327>
lozhnikov: I made some changes to 1814, can you take a look and let me know if the changes are apt?
< sreenik[m]>
akhandait: Yes the mnist model is not a good choice for conversion. The "onnx_conv_model.onnx" that you see is made by me. I am building the converter based on that one. Trying Netron seems to be a good idea
< akhandait>
Okay, in the onnx_conv_model, the dimensions of the weights seem fine to me, except the maxpool layer
jeffin14327 has quit [Remote host closed the connection]
< akhandait>
Hmm, on opening it on Netron, there's some very weird stuff going on
< sreenik[m]>
Yeah, the maxpool layer has an empty weight matrix since it has no trainable parameters. But even I am unsure if it unfolds to the right number of neurons (9216 as far as I remember)
< akhandait>
hmm, while creating this, did you directly go from maxpool to linear? Or did you use some other layers in between?
< sreenik[m]>
Had a reshape layer in between
< sreenik[m]>
Wait, I'll just upload the relevant ipynb that I used
< akhandait>
Ahh, it seems like onnx uses multiple "Cast" and "Slice" operators to perform this reshaping
< akhandait>
I just sent you screenshots from the netron visualization