verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
sgupta has quit [Ping timeout: 255 seconds]
mikeling has joined #mlpack
chenzhe has quit [Ping timeout: 245 seconds]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 260 seconds]
chenzhe has joined #mlpack
aashay has quit [Quit: Connection closed for inactivity]
vivekp has quit [Ping timeout: 272 seconds]
chenzhe has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
chenzhe has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
mikeling has joined #mlpack
kris2 has joined #mlpack
Trion has joined #mlpack
chenzhe has quit [Ping timeout: 260 seconds]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 245 seconds]
chenzhe has joined #mlpack
kris2 has left #mlpack []
kris2 has joined #mlpack
< kris2> boost::apply_visitor(ForwardVisitor(std::move(input), std::move(output), linear);
< kris2> ForwardVisitor is not working for the linear layer
< kris2> No matching function call!!! Can anyone help?
< kris2> Linear has no member named apply_visitor
< lozhnikov> kris2: void Forward(const arma::Mat<eT>&& input, arma::Mat<eT>&& output);
< lozhnikov> maybe input or output doesn't match this template?
< kris2> input and output are arma::mat
< kris2> error: ‘class mlpack::ann::Linear<arma::Mat<double>, arma::Mat<double> >’ has no member named ‘apply_visitor’
< kris2> this is the actual error; the confusing part is that FFN does use ForwardVisitor
< lozhnikov> can you send me the whole code?
< kris2> okay, I reproduced the same error with a simpler version of the code https://gist.github.com/kris-singh/ba6c1f2b62214215ec45ea0587e103fd
< kris2> here is the gist
< lozhnikov> You forgot a bracket :)
< lozhnikov> boost::apply_visitor(ForwardVisitor(std::move(input), std::move(output) /* here should be a bracket ')' */, linear);
< kris2> ohh sorry, I did not update the gist after making that change
< kris2> Have a look now
aashay has joined #mlpack
< zoq> kris2: LayerTypes linear = new Linear<> (10, 10); should work, or you can just do linear.Forward(...).
mentekid has quit [Ping timeout: 255 seconds]
< zoq> but the visitor should also work, have to take a look
aashay has quit [Read error: Connection timed out]
aashay has joined #mlpack
shikhar has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
mentekid has joined #mlpack
< kris2> zoq: That is also not working. I don't want to just use ForwardVisitor in my "real" program; I also want to use WeightSetVisitor and the other visitors
< lozhnikov> kris2: you should add #include <mlpack/methods/ann/layer/layer.hpp>
< lozhnikov> instead of #include <mlpack/methods/ann/layer/linear.hpp>
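For reference, a minimal sketch of what the corrected call can look like once it compiles, assuming the git-master ANN API under discussion; the visitor include paths, the random initialization, and the DeleteVisitor cleanup are assumptions for illustration, not taken from the gist:

    // Build with something like:
    //   g++ -std=c++11 test.cpp -lmlpack -larmadillo -lboost_serialization
    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>  // the full variant, not just linear.hpp
    #include <mlpack/methods/ann/visitor/weight_set_visitor.hpp>
    #include <mlpack/methods/ann/visitor/reset_visitor.hpp>
    #include <mlpack/methods/ann/visitor/forward_visitor.hpp>
    #include <mlpack/methods/ann/visitor/delete_visitor.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Hold the layer through the boost::variant typedef so that
      // boost::apply_visitor can dispatch to it.
      LayerTypes linear = new Linear<>(10, 10);

      // Point the layer's weights at a parameter matrix and set up the
      // weight/bias aliases, mirroring what FFN::Reset() does.
      arma::mat parameters(10 * 10 + 10, 1, arma::fill::randu);
      boost::apply_visitor(WeightSetVisitor(std::move(parameters), 0), linear);
      boost::apply_visitor(ResetVisitor(), linear);

      arma::mat input(10, 1, arma::fill::randu);
      arma::mat output;

      // Note the closing parenthesis after std::move(output); the missing
      // bracket was the cause of the original "no matching function" error.
      boost::apply_visitor(ForwardVisitor(std::move(input), std::move(output)),
                           linear);

      output.print("output:");
      boost::apply_visitor(DeleteVisitor(), linear);
      return 0;
    }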
chenzhe has quit [Ping timeout: 268 seconds]
< shikhar> Having a little bit of an issue here
< shikhar> I'm not able to get Armadillo to link properly on my system
< shikhar> Getting weird linker errors, with undefined reference to 'dgemm_', 'ddot_'... until I explicitly specify -lblas with -larmadillo
< kris2> lozhnikov: With that I am getting linker errors: undefined reference to `mlpack::Log::Assert'
< lozhnikov> kris2: try to add -lmlpack
< lozhnikov> to the linker options
< kris2> g++ -std=c++11 -lmlpack temp.cpp -larmadillo -lboost_serialization -o temp.o
< lozhnikov> hm... I used the following
< lozhnikov> g++ test.cpp -I src/ -I ../src/ -I ../src/mlpack/methods/ann/ -L lib/ -lmlpack -larmadillo
< lozhnikov> And that works fine
< lozhnikov> Do you have an installed version of mlpack (via a package manager) on your system?
< kris2> No, I built it from source and did make install
< lozhnikov> maybe that version conflicts with the development one
< kris2> I have the latest version, 2.2.3 I guess
< kris2> which one do you have?
< lozhnikov> I have built the git master version
< mentekid> shikhar: are you trying to compile an mlpack program or an armadillo program?
< shikhar> mentekid: Just solved the issue. I had another version of arma in $LD_LIBRARY_PATH. -Wl,--verbose to the rescue :)
< mentekid> ah. Good catch, sorry I wasn't more help!
shikhar has quit [Quit: WeeChat 1.7]
< kris2> lozhnikov: Yup, it was just the version conflict
mentekid has quit [Ping timeout: 255 seconds]
mentekid has joined #mlpack
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
sgupta has joined #mlpack
< sgupta> Hi Ryan, you there?
< rcurtin> sgupta: yeah, for a little while
< sgupta> I installed bash and ran the make command. It completed. But when I run mlpack_test -t KNNtest it gives the error: "Error loading shared library libarmadillo.so.7: No such file or directory (needed by bin/mlpack_test)".
< sgupta> I checked in /usr/lib64/ and it is indeed present there.
< rcurtin> how was Armadillo installed?
< sgupta> I installed it with tar.gz file
< sgupta> armadillo is not in the package list for alpine.
< rcurtin> right, so ./configure then make then make install?
< sgupta> yeah
< sgupta> I followed the README.txt file that came with the source files.
< rcurtin> did it install into /usr/lib64 or /usr/local/lib64?
< sgupta> in /usr/lib64
< rcurtin> can you check the ldconfig search path:
< rcurtin> $ ldconfig -v 2>/dev/null | grep -v ^$'\t'
< sgupta> outputs nothing
< rcurtin> does 'ldconfig -v' output anything at all?
< sgupta> no. Still no output.
< rcurtin> you are running this command inside the container, right?
< sgupta> yes
< rcurtin> ok
< rcurtin> try running 'ldconfig' to rebuild the runtime linker library list
< rcurtin> then run ldconfig -v again
< rcurtin> or ldconfig -p could work also
< sgupta> Looks like "ldconfig" is not doing anything
< rcurtin> let me read a little bit
< rcurtin> ah, I see that Alpine ships with uclibc not glibc
< rcurtin> and I think this will cause many other issues and produce a build environment too far away from what our users typically have
< rcurtin> so I think either we install glibc in the alpine container (probably ugly to do) or find another minimal base image with glibc
< sgupta> Just to let you know, I created a debian image and it is around 387 MB in size.
< rcurtin> ok
< rcurtin> that alpine container you have built... what if you base it against 'busybox' instead?
< sgupta> And this one is 205 MB in size as of now. Yes, I can try that.
< rcurtin> use 'busybox:glibc'
< rcurtin> I also see there are some Dockerfiles for Alpine with glibc on github too, you could also try those
< rcurtin> size savings of 2x is pretty good in my opinion :)
< sgupta> Okay.
< sgupta> So, I will base the container on an image with glibc and alpine
< sgupta> Install the libraries just as I am doing it right now.
< sgupta> Is there something else I have to keep in mind?
< rcurtin> I think that sounds fine, let's see if it works :)
< sgupta> okay sure thanks :)
sgupta has quit [Ping timeout: 260 seconds]
sgupta has joined #mlpack
Trion has quit [Quit: Have to go, see ya!]
mentekid has quit [Quit: Leaving.]
< ironstark> facing certain errors while installing shogun on my system
< ironstark> please suggest what can be done
sumedhghaisas has joined #mlpack
< zoq> ironstark: Do you use the install script that comes with the benchmarks?
< ironstark> yes
< zoq> hm, maybe it's easier to install shogun by using the package manager: https://github.com/shogun-toolbox/shogun/blob/develop/doc/readme/INSTALL.md#ubuntu
sgupta has quit [Ping timeout: 260 seconds]
sgupta has joined #mlpack
< ironstark> actually on running the command sudo apt-get install libshogun17 I get Unable to locate package
< ironstark> libshogun16 works, but only with python2 and not python3, and the make command uses python3 by default
< zoq> ah, right, you have to build against python3
< zoq> I haven't seen this error before, maybe it's a good idea to join #shogun and ask if anyone has a simple solution.
< ironstark> okay
< ironstark> thanks
< sumedhghaisas> zoq: Hey Marcus
< sumedhghaisas> How are you?
mikeling is now known as mikeling|afk
< zoq> sumedhghais: Fine, thanks. You?
< zoq> ironstark: Maybe switching to clang helps: something like:
< zoq> export CC=/usr/bin/clang
< zoq> export CXX=/usr/bin/clang++
< ironstark> getting the same error
< sumedhghaisas> zoq: great. Okay, so I will update you on the work I was doing this week.
< sumedhghaisas> I have implemented the GRU layer. I will push that as soon as I write tests
< sumedhghaisas> I'm a little confused about that
< sumedhghaisas> also, I was working on the separate update policy method for neural networks
< sumedhghaisas> solved most of the problems there... with some trait implementations most of the problems can be solved without a lot of code changes
< sumedhghaisas> although one design problem remains
kris2 has quit [Ping timeout: 255 seconds]
< sumedhghaisas> So I would like to discuss these two things with you when you are free
< zoq> sumedhghais: Sounds good, I guess you can test the GRU layer on the reber grammar task and probably reuse some of the tests from the ann_layer_test cases.
< zoq> sumedhghais: Sure go ahead.
< sumedhghaisas> reber grammar task? okay... do we already have that for LSTM or I should add the dataset?
kris1 has joined #mlpack
< zoq> it's implemented in the recurrent_network_test
< sumedhghaisas> okay. So I figured out a way of using a different update rule for RMSProp when using the ANN code
< sumedhghaisas> using traits
< sumedhghaisas> now the new update rule wraps the original update rule
< sumedhghaisas> the plan is to call the original update rule for every parameter separately
< sumedhghaisas> am I clear enough?
< zoq> yes, sounds like we have to modify each optimizer if we like to support the single layer gradient calculation right?
< sumedhghaisas> yes but not a lot
< sumedhghaisas> so each optimizer has to implement a typedef called DefaultUpdatePolicy
< sumedhghaisas> and rather than directly passing this DefaultUpdatePolicy, we just call Trait::UpdatePolicy
< sumedhghaisas> the default trait is the OptimizerType::DefaultUpdatePolicy
< sumedhghaisas> this way when no special trait is there the DefaultUpdatePolicy will be used
< sumedhghaisas> when a special class... like the ANN... partially specializes the trait, the special trait's UpdatePolicy will be used
< sumedhghaisas> whatever the specialized trait decides
< sumedhghaisas> so only these 2 small changes are required
< sumedhghaisas> adding a typedef ... and using the trait's update policy
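A rough standalone sketch of the trait dispatch being described; all of the type names here (UpdatePolicyTrait, VanillaUpdate, PerParameterUpdate, the bare FFN tag) are illustrative placeholders, not actual mlpack identifiers:

    #include <type_traits>

    // An optimizer exposes its default update rule through a typedef.
    struct VanillaUpdate { /* elementwise SGD-style step */ };

    struct RMSProp
    {
      typedef VanillaUpdate DefaultUpdatePolicy;
    };

    // By default the trait simply forwards to the optimizer's own policy...
    template<typename OptimizerType, typename FunctionType>
    struct UpdatePolicyTrait
    {
      typedef typename OptimizerType::DefaultUpdatePolicy UpdatePolicy;
    };

    // ...unless a special function type (e.g. a network) partially
    // specializes the trait and substitutes a policy that applies the
    // original rule to each parameter separately.
    struct PerParameterUpdate { /* wraps the original update rule */ };
    struct FFN { };

    template<typename OptimizerType>
    struct UpdatePolicyTrait<OptimizerType, FFN>
    {
      typedef PerParameterUpdate UpdatePolicy;
    };

    int main()
    {
      // RMSProp on an ordinary function keeps its default policy; on the
      // network type the specialized trait swaps in the other policy.
      static_assert(std::is_same<UpdatePolicyTrait<RMSProp, int>::UpdatePolicy,
                                 VanillaUpdate>::value, "default policy");
      static_assert(std::is_same<UpdatePolicyTrait<RMSProp, FFN>::UpdatePolicy,
                                 PerParameterUpdate>::value, "specialized policy");
      return 0;
    }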
s1998 has joined #mlpack
< sumedhghaisas> the bigger problem is ahead
< zoq> sounds really straightforward, still a little bit worried about the change, but I can't think of anything that avoids modifying the optimizer classes.
< sumedhghaisas> some optimizers are complicated and they store some statistics ... such as RMSProp
< sumedhghaisas> how to deal with that
< sumedhghaisas> because the stats for each parameter will have to be separate
< sumedhghaisas> so each parameter should have its own updatePolicy object
< zoq> right, good point
< sumedhghaisas> can't think of any elegant way to handle this...
< zoq> we have to instantiate one for each "layer" if we like to reuse the original update rule
< sumedhghaisas> yes ... indeed
< sumedhghaisas> one way is to shift the responsibility to the layers
< sumedhghaisas> and use static polymorphism to implement some common code
< sumedhghaisas> ohh wait... not just one for each layer
< sumedhghaisas> we have to keep the possibility open that there may be any number of parameters for each layer
< zoq> I guess, what we could do is to instantiate one optimizer for each layer e.g. inside the FFN class, would that solve the problem?
< sumedhghaisas> I also thought about that... but no... for example
< sumedhghaisas> linear layer has 2 parameters
< sumedhghaisas> weights and bias
< sumedhghaisas> so we need 2 objects there
< zoq> do we? we could use the same optimizer for both parameters
< zoq> linear.Parameters() returns the parameters for bias and weights in a single matrix
< sumedhghaisas> but their gradient statistics will differ, right?
< sumedhghaisas> ahh you mean that... but that's the whole transformation I was trying to avoid
< zoq> the statistics are elementwise anyway, at least for RMSProp
< sumedhghaisas> yes... I was just thinking about avoiding vectorizing things
< zoq> I mean it's not ideal, but we would save memory
< sumedhghaisas> but the separate policy objects will take the same amount of memory, as they are storing the same amount of statistics... in one object or multiple
shikhar has joined #mlpack
< sumedhghaisas> it's the management of these multiple objects that may cause mayhem
< zoq> I agree
< sumedhghaisas> zoq: if I am able to create a single object for each parameter
< zoq> for the statistics?
< sumedhghaisas> yup... like a layer has to register each of its parameters at initialization with the BaseLayer... this BaseLayer may keep track of these objects
< sumedhghaisas> but again the problem remains... when updating, the BaseLayer has to know which object to choose
< zoq> would be easier if we would write a new update rule for this case :)
< sumedhghaisas> haha... I agree
< kris1> zoq: For Reset() in ffn you do this: offset += boost::apply_visitor(WeightSetVisitor(std::move(parameter), offset), network[i]); boost::apply_visitor(resetVisitor, network[i]); I understand the WeightSetVisitor will initialise the parameters of the model, but I did not understand boost::apply_visitor(resetVisitor, network[i]); why are we resetting the parameters?
< sumedhghaisas> but there as well I need to somehow keep different statistics for each parameter ... arghhhhhhh
< zoq> sumedhghais_ I think it would be a good idea to open a separate issue for that and collect the ideas, maybe someone else has a good idea. What do you think?
< sumedhghaisas> yeah... I agree... it's just so difficult to explain this problem in words ... haha :P
< sumedhghaisas> nonetheless ... I shall create an issue
< s1998> REGISTER noob sranjan.sud@gmail.com
< zoq> At the end, of course, we could write a special optimizer for the ann code, but maybe we can avoid that.
< sumedhghaisas> yeah... not a fan of that solution either
< sumedhghaisas> zoq: we can take the idea of variable identifiers from TensorFlow
< sumedhghaisas> maybe that's how they keep track
< zoq> kris1: The ResetVisitor calls the Reset function of a specific layer if it's implemented. E.g. for the linear layer it's used to initialize the weight and bias parameters based on the parameter matrix. All network parameters are stored in a single matrix (contiguous memory); if some layer has more than one parameter, we have to merge them.
s1998 has quit [Quit: Page closed]
< zoq> sumedhghais: That might be an option, yes.
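To make the per-parameter statistics problem above concrete, here is an illustrative RMSProp-style update policy (a hypothetical class, not the actual mlpack optimizer) that carries per-instance state; because the mean squared gradient must match the parameter matrix it updates, two separately updated parameter sets need two policy objects:

    #include <armadillo>

    class RMSPropUpdate  // hypothetical name, for illustration only
    {
     public:
      RMSPropUpdate(const double stepSize = 0.01,
                    const double alpha = 0.99,
                    const double eps = 1e-8) :
          stepSize(stepSize), alpha(alpha), eps(eps) { }

      void Update(arma::mat& parameter, const arma::mat& gradient)
      {
        // Lazily size the statistics to match this particular parameter set.
        if (meanSquaredGradient.n_elem != parameter.n_elem)
          meanSquaredGradient.zeros(parameter.n_rows, parameter.n_cols);

        meanSquaredGradient *= alpha;
        meanSquaredGradient += (1 - alpha) * (gradient % gradient);
        parameter -= stepSize * gradient /
            (arma::sqrt(meanSquaredGradient) + eps);
      }

     private:
      double stepSize, alpha, eps;
      arma::mat meanSquaredGradient;  // per-instance statistics
    };

    int main()
    {
      // E.g. a layer's weight and bias: sharing one policy instance would mix
      // their statistics, hence one object per parameter set.
      arma::mat weight(10, 10, arma::fill::randu), bias(10, 1, arma::fill::randu);
      arma::mat weightGrad(10, 10, arma::fill::randn), biasGrad(10, 1, arma::fill::randn);

      RMSPropUpdate weightUpdate, biasUpdate;
      weightUpdate.Update(weight, weightGrad);
      biasUpdate.Update(bias, biasGrad);
      return 0;
    }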
< kris1> zoq: I do understand that part, but why initialise and then reset the parameters? I am confused about that part
< kris1> why call WeightSetVisitor and then reset the parameters
s1998 has joined #mlpack
< kris1> Do you get my point??
< zoq> yes, so let's do this for the linear layer, WeightSetVisitor will initialize the parameter matrix (layer.Parameters()), the parameter matrix holds the layer parameter e.g. the weights + bias
< zoq> but what we like to do is to call output = (weight * input) + bias;
< zoq> there is no easy way to get the output, e.g. output = input * parameter doesn't work, or is not what we like
s1998_ has joined #mlpack
< zoq> so we have to split up the parameter matrix into weight and bias
< zoq> we don't allocate new memory for both parameters, we just use the parameter matrix
< zoq> so the parameter matrix looks like [1, 2, 3, 4, 6]
< zoq> the reset function just points the weight matrix to the right position of the parameter matrix
< zoq> e.g. weight -> [1, 2, 3, 4]
s1998 has left #mlpack []
< zoq> and bias -> [6]
s1998_ has quit [Client Quit]
s1998_ has joined #mlpack
< zoq> does this make sense?
s1998_ has quit [Client Quit]
s1998 has joined #mlpack
< kris1> Hmmm, yes, it does make sense now. I thought reset there meant that we were initializing the parameters again. Now I see the main purpose of Reset is to set the weight and bias parameters appropriately using the parameter matrix
< zoq> yes, correct
< kris1> Is that correct?
< kris1> Just a question: if I am creating a new layer, can I have one matrix for the weights and one matrix for the bias, or is that discouraged?
< zoq> it's basically only necessary if you have more than one parameter that should be optimized
< zoq> you can do that
< kris1> though we also optimise two parameters, the weights and bias, right?
< zoq> the parameter structure is independent from the whole parameter matrix
< zoq> right, you still have to implement the Reset function, I can tell you how, once you are there.
< zoq> basically it's nothing more than: weight = arma::mat(weights.memptr(), outSize, inSize, false, false);
< zoq> this creates a weight matrix with size outSize, inSize using the allocated memory of weights
< kris1> Yes, I did see that. I will create a gist and let you know
< zoq> okay, sounds good :)
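A small standalone Armadillo sketch of the aliasing trick described above (sizes and values made up for illustration): the weight and bias matrices are just views into the layer's single parameter matrix, so nothing is copied and optimizer updates to the parameter matrix are immediately visible through them.

    #include <armadillo>

    int main()
    {
      const arma::uword inSize = 2, outSize = 2;

      // All of the layer's parameters in one contiguous matrix:
      // outSize * inSize weights followed by outSize bias values.
      arma::mat parameters(outSize * inSize + outSize, 1);
      parameters.col(0) = arma::linspace<arma::vec>(1, 6, 6);

      // "Reset": alias weight and bias into that memory without copying
      // (copy_aux_mem = false), using the advanced constructor quoted above.
      arma::mat weight(parameters.memptr(), outSize, inSize, false, false);
      arma::mat bias(parameters.memptr() + weight.n_elem, outSize, 1, false, false);

      // What the linear layer's Forward() then computes.
      arma::vec input = {1.0, 1.0};
      arma::mat output = weight * input + bias;
      output.print("output:");

      // Updating the parameter matrix updates the aliased weight in place.
      parameters.zeros();
      weight.print("weight after zeroing parameters:");
      return 0;
    }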
< rcurtin> ironstark: did you install libgomp? to me it looks like that is what is missing
< rcurtin> s1998: you might want to pick a different password for your irc account now :)
< s1998> rcurtin: I did that :)
< rcurtin> ah good :)
< sgupta> Hi Ryan, I used an image of alpine with glibc. Still getting the same error.
< rcurtin> see what ldconfig says, and if that output is still weird it might be worth trying the busybox:glibc base
< sumedhghaisas> zoq: hey marcus... just posted it as an issue... will continue thinking about it.
< ironstark> rcurtin: It's already installed
shikhar has quit [Remote host closed the connection]
mikeling|afk has quit [Quit: Connection closed for inactivity]
< rcurtin> ironstark: is it the correct version? and how was it installed?
< ironstark> libgomp1 is already the newest version (6.3.0-12ubuntu2).
< ironstark> this is the message I am getting
< ironstark> on running sudo apt-get install libgomp1
< rcurtin> the same version (6.3.0) is installed on the benchmarking build slaves also
< rcurtin> ah wait hang on, that error is not that libgomp doesn't exist, that's that Shogun isn't linking against it properly
< rcurtin> this would definitely be a question for their channel then
< ironstark> I have already posted a query on their channel saying that I am having trouble installing shogun
< ironstark> awaiting response
< rcurtin> yeah, I see the message you put there, it was about using apt to install shogun
< rcurtin> I would instead ask about the specific build error you were having with linking against libgomp
< ironstark> okay
< ironstark> I will put the pastebin link there
< rcurtin> for using apt to install shogun, my guess is that you just have the package name wrong
< ironstark> I just pasted the commands from their docs
< zoq> sumedhghais: nice, me too
sumedhghaisas has quit [Ping timeout: 260 seconds]
s1998 has left #mlpack []
< ironstark> Since I work with anaconda python distribution I ran the command https://paste.ubuntu.com/24763936/
< ironstark> but now getting the following errors https://paste.ubuntu.com/24763963/
chenzhe has joined #mlpack
aashay has quit [Quit: Connection closed for inactivity]
kris1 has quit [Quit: Leaving.]
rajatkb has joined #mlpack
< rajatkb> hi
< zoq> rajatkb: Hello there!
< rajatkb> hi :) . Came here through GSOCs page.
< rajatkb> I just wanted to know what things I need to be good at, apart from C++, to apply to mlpack for next year's GSoC and have a chance of getting selected to contribute.
< rajatkb> Thanks for those links :-) let me check them out :-D
rajatkb has quit [Quit: Page closed]
< zoq> rajatkb: Starting early is definitely a good idea; a lot of time to dive into the codebase.