verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
jeremy has joined #mlpack
jeremy is now known as Guest48550
Guest48550 has quit [Client Quit]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
chenzhe1 has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
chenzhe1 is now known as chenzhe
kris1 has quit [Quit: kris1]
govg has joined #mlpack
kris1 has joined #mlpack
mikeling has joined #mlpack
chenzhe has quit [Quit: chenzhe]
partobs-mdp has joined #mlpack
sumedhghaisas has quit [Ping timeout: 255 seconds]
sgupta has quit [Ping timeout: 260 seconds]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
zoq_ has joined #mlpack
zoq has quit [Read error: Connection reset by peer]
kris1 has quit [Quit: kris1]
mentekid has joined #mlpack
< rcurtin>
sikoizon: how do you have armadillo configured? it looks like the wrapper is not enabled
< rcurtin>
it's almost always easier if you install armadillo via the package manager, i.e. 'apt-get install libarmadillo-dev'
zoq_ is now known as zoq
< zoq>
partobs-mdp: Hello, can you rebase the PR against the latest master branch? That should resolve the merge issues.
aashay has quit [Quit: Connection closed for inactivity]
< partobs-mdp>
zoq: Hello! Trying to rebase, getting an error message. What am I doing wrong?
< zoq>
partobs-mdp: strange; I can also try to resolve the issue
< partobs-mdp>
zoq: if possible, let's also discuss the parameter grid for the week 4 report
< zoq>
partobs-mdp: just let me know
< partobs-mdp>
zoq: trying to rebase, still nothing :(
< zoq>
partobs-mdp: Sure, let's discuss some parameters; I guess it might take some time to test a bunch of reasonable ones. Also, I'm wondering whether the end marker as used in the NTM paper is essential for the repeat copy task.
< zoq>
So I would say, let's keep the repeat copy task out for the moment. Just to save some time.
< zoq>
partobs-mdp: Also, I can run the parameter search for you if you don't have a machine to do it; maybe we can also use masterblaster for some of the experiments - not sure, I have to see what rcurtin has to say.
shikhar has joined #mlpack
< partobs-mdp>
zoq: First of all, about the search procedure: should we make yet another C++ executable or a shell script for running the program on our grid? (Personally, I like the second option, but I would like to hear your opinion)
aashay has joined #mlpack
< zoq>
partobs-mdp: I think for now a bash script is enough. As I said in the comments, Kirill is working on a hyperparameter framework that I'd like to use in the future. So, if you'd like to do a parameter search now, something simple like a bash script is good enough.
< partobs-mdp>
Ok, nice, but what about parameter values? For example, we will run the CopyTask benchmark only for nRepeats = 1, right?
< partobs-mdp>
zoq: Also, we can pick maxLen = 2, 3, ..., 10, as in the last report - what do you think?
< zoq>
Yes, sounds good. What else do we test? The hidden layer size, and if we use SGD, the learning rate?
< partobs-mdp>
You mean testing not only task parameters, but also learner parameters?
< zoq>
I'm pretty sure the model does not converge for every learning rate, so if we use SGD, that might be something we should look into.
< zoq>
If we use Adam or RMSProp, we can just test the task parameters for now.
< partobs-mdp>
Looked at the code; we use Adam now
< zoq>
yeah, I got better results using a tuned SGD optimizer, but I think Adam is good if we'd like to save some time.
< partobs-mdp>
zoq: So we also have AddTask and SortTask, what maximum binary length would you suggest?
< zoq>
I would start with like 4 and maybe go up to 10 and see if it makes sense to go up further.
< zoq>
What do you think?
< partobs-mdp>
"Up to 10" - seems nice, "start with 4" - what if we start from some truly basic cases like 2 or even 1 - just to see that our model can rote the inputs?
< zoq>
sure, let's start with 1
< partobs-mdp>
Ok, so we use range 1..10 for binary length
< partobs-mdp>
But we also have sequence length in SortTask
< partobs-mdp>
I suggest setting the range this way: total bit count <= 32 (e.g., for bitLen = 4, seqLen = 8)
< partobs-mdp>
What is your opinion?
< zoq>
hm, okay, sounds reasonable to me. I guess you could include some debug output; maybe we can stop the experiments if we see the model doesn't converge.
< zoq>
I have the feeling running with all the parameters will take some time.
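For illustration, a minimal sketch of enumerating the grid implied above, assuming the constraint is bitLen * seqLen <= 32 and using the 1..10 range agreed on for the binary length; the driver itself is hypothetical:

    #include <iostream>

    int main()
    {
      // Enumerate all (bitLen, seqLen) pairs with at most 32 bits total,
      // matching the ranges discussed above.
      for (int bitLen = 1; bitLen <= 10; ++bitLen)
        for (int seqLen = 1; bitLen * seqLen <= 32; ++seqLen)
          std::cout << "bitLen = " << bitLen
                    << ", seqLen = " << seqLen << std::endl;
      return 0;
    }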
< partobs-mdp>
So I think that we're (so far) set with the ranges. Now just a technical question: can I somehow feed the final score directly to the shell script (without parsing)? Modifying the program also seems valid to me.
< zoq>
hm, you could pipe the results and parse them later on; I guess if it makes things easier you can also adapt the program.
< partobs-mdp>
zoq: I managed to write the CopyTask shell script. Right now doing the two others - shouldn't be hard
< partobs-mdp>
zoq: By the way, is it okay if I push them to the models repo?
< zoq>
partobs-mdp: Nice, sure go ahead.
< zoq>
Excited to see the results.
< partobs-mdp>
zoq: I pushed all three scripts (they are all run from the directory that contains the executables)
< partobs-mdp>
zoq: Now I'll try to run them, but I'm not sure my computer can handle such stress :)
< zoq>
As I said I can run the code for you, just let me know.
< partobs-mdp>
zoq: About results: I get 48% on maxLen = 4 in CopyTask. For reference, what was the score from your experiment?
< zoq>
partobs-mdp: 100% I think my model looked like:
< zoq>
btw. I used the latest rnn class, which uses the NetworkInitialization to initialize the parameters.
< partobs-mdp>
Also got 100%, thanks! Running the experiment further
< partobs-mdp>
Now I get 30% on length = 5. Did you also get 100% there?
< zoq>
Yes, 100% but the initialization part is important, I used GaussianInit instead of RandomInit.
< zoq>
ah, and I used way more samples, something between 5000 and 250000
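A rough sketch of the setup zoq describes, assuming the refactored ann RNN class with a pluggable initialization rule and mlpack's GaussianInit; the layer sizes, loss, and rho value are placeholders, and the training/optimizer setup is omitted:

    #include <mlpack/methods/ann/rnn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/init_rules/gaussian_init.hpp>

    using namespace mlpack::ann;

    int main()
    {
      const size_t rho = 10; // placeholder BPTT horizon

      // Swap RandomInitialization for GaussianInit via the second template
      // parameter; the parameters are then drawn from a Gaussian instead of
      // a uniform distribution.
      RNN<MeanSquaredError<>, GaussianInit> model(rho);
      model.Add<Linear<>>(3, 40);     // placeholder sizes
      model.Add<LSTM<>>(40, 40, rho);
      model.Add<Linear<>>(40, 3);
      model.Add<SigmoidLayer<>>();
      return 0;
    }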
Sikhansoi has joined #mlpack
< Sikhansoi>
23:00 < sikoizon> while downloading version 2.2.3 of mlpack on linux 64 machine with ubuntu 16.04 LTS
23:00 < sikoizon> and using the tutorial for build on mlpack .. i am getting an error
23:03 < sikoizon> https://pastebin.com/YCkTnHSM
23:03 < sikoizon> help
23:04 < sikoizon> this is the page i referred to
23:04 < sikoizon> http://www.mlpack.org/docs/mlpack-2.2.3/doxygen.php?doc=build.html#build
< Sikhansoi>
Can somebody guide me?
< zoq>
Sikhansoi: Hello there, have you seen rcurtin's message?
< zoq>
hm, strange, have you installed armadillo before?
< shikhar>
Sikhansoi: I had a similar error a few days ago. The issue here was that I had an older version of Armadillo in my $LD_LIBRARY_PATH, which was being picked up.
kris1_ has joined #mlpack
< Sikhansoi>
No, I haven't
< Sikhansoi>
I installed a fresh Ubuntu 16.04 and then did it
< Sikhansoi>
It is version 2.2.3
< Sikhansoi>
shikhar: you mean exactly like what I have?
< shikhar>
If I understand correctly, you installed Armadillo from the official repo, so no other copies of the library exist.
Sikhansoi has quit [Ping timeout: 260 seconds]
< shikhar>
Here, my issue occurred when I upgraded from 16.04 to 17.04, having an older Armadillo in the linker's path. What happened was that Armadillo's dependencies (like BLAS and LAPACK) were re-installed, which required Armadillo to be recompiled.
Shikhankoi has joined #mlpack
< Shikhankoi>
shikhar: what is the path?
< Shikhankoi>
I will then delete all the files
< Shikhankoi>
And make new ones
< shikhar>
You can compile a simple Armadillo program and pass -Wl,--verbose on the command line to the compiler to see which library files are being picked up.
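A minimal test program for that check, assuming g++ and a system-wide Armadillo install; the -Wl,--verbose flag makes the linker print every candidate library file, and ldd shows which shared object is used at run time:

    // arma_check.cpp - print the Armadillo version actually picked up.
    // Build (assuming g++):
    //   g++ arma_check.cpp -o arma_check -larmadillo -Wl,--verbose 2>&1 | grep armadillo
    // Then check the runtime library with: ldd ./arma_check | grep armadillo
    #include <iostream>
    #include <armadillo>

    int main()
    {
      std::cout << "Armadillo version: "
                << arma::arma_version::as_string() << std::endl;
      return 0;
    }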
< partobs-mdp>
zoq: running AddTask - the model failed to even add two 1-bit numbers (I get a 27% score for bitLen = 1)
< zoq>
partobs-mdp: hm, okay, I'll take a look at the task later tonight; the baseline result from the HAM paper for the Add task is something like 61%
< zoq>
partobs-mdp: Can you update #1005 and integrate the reshape into the task itself?
< partobs-mdp>
zoq: sure, I'm working on it, but I keep catching weird bugs - I think I'd rather get some sleep and fix them in the morning :)
< partobs-mdp>
it'll be 9pm soon here
< zoq>
partobs-mdp: okay, sounds fine to me, let me know if you need any help solving the issues.
< partobs-mdp>
zoq: btw, I've already written a report for week 4. I've already queued the three tasks for execution - so tomorrow I'll insert the results into my report
< zoq>
partobs-mdp: yeah, I've seen the post, no need to rush here, we can update the report once we have the results.
< partobs-mdp>
so, let's call it a day then. bye everyone! :)
partobs-mdp has quit [Quit: Leaving]
< zoq>
Shikhankoi: So it's a fresh Ubuntu that you didn't upgrade from e.g. 14.04 or something like that?
shikhar has quit [Quit: WeeChat 1.7]
kris1_ has quit [Quit: kris1_]
kris1_ has joined #mlpack
Shikhankoi has quit [Ping timeout: 260 seconds]
kris1_ has quit [Quit: kris1_]
kris1_ has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
dfne has joined #mlpack
< dfne>
hi
< rcurtin>
dfne: hi, I saw your messages some days back but I had no chance to respond (and still don't have time to really dig in)
< rcurtin>
if the decomposition fails, it usually means the matrix itself is not positive definite
< rcurtin>
so be sure you aren't specifying -P, and consider adding a larger value of noise (with -N)
dfne has quit [Ping timeout: 260 seconds]
< kris1_>
does armadillo have any method for fast multiplication of a diagonal matrix and, say, a dense matrix?
< lozhnikov>
kris1_: just multiply each column or each row by the corresponding diagonal element
< kris1_>
Yes, I know; I just wanted to know if armadillo has such support, otherwise I would do that
< lozhnikov>
kris1_: were you able to write the blog post for the fourth week?
< kris1_>
Also, I see that arma::Cube of sparse matrices is not supported, so do we have to use std::vector<arma::sp_mat>?
< kris1_>
Ahhh, sorry, it totally slipped my mind; I will write it now
< lozhnikov>
Why do you need sparse cubes?
< kris1_>
Well, the slab variables are basically diagonal matrices, so I think it is better if I declare them of type sp_mat
< kris1_>
rather than arma::mat
< lozhnikov>
I guess it is better to declare these variables as arma::vec
< kris1_>
Ohhh yes….
< kris1_>
Mikhail, can I use the image of the MNIST example that was generated when you ran the tests?
< rcurtin>
typically, avoiding arma::sp_mat where possible is a good idea; at least with the current implementation, it is only valuable to use sp_mat if the matrices are extremely huge and very, very sparse
< rcurtin>
for diagonal matrix * dense matrix, you could do...
< rcurtin>
arma::diagmat(vector) * matrix
< rcurtin>
and armadillo should do this efficiently
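A small self-contained comparison of the two approaches mentioned here (the diagmat expression versus scaling each row by hand); the sizes are arbitrary:

    #include <iostream>
    #include <armadillo>

    int main()
    {
      const arma::vec d = arma::randu<arma::vec>(4);    // diagonal entries
      const arma::mat M = arma::randu<arma::mat>(4, 5); // dense matrix

      // Option 1: diagmat() yields an expression, so Armadillo can apply
      // the diagonal scaling without forming a full 4x4 diagonal matrix.
      const arma::mat A = arma::diagmat(d) * M;

      // Option 2: scale the rows manually, as lozhnikov suggested;
      // %= is element-wise multiplication applied down each column.
      arma::mat B = M;
      B.each_col() %= d; // row i of M is multiplied by d(i)

      std::cout << "max difference: " << arma::abs(A - B).max() << std::endl;
      return 0;
    }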
< kris1_>
Ahhh thanks rcurtin
< lozhnikov>
kris1_: Do you mean the parameters matrix?
< kris1_>
No, I mean the images that you generated with the code for comparing against the deeplearning.net example
< lozhnikov>
Ah, of course
< kris1_>
The ssRBM paper does not explain the FreeEnergy function. Should I derive it and implement it, or should we not implement it at all?
< lozhnikov>
The function could be derived from the energy
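For reference (a sketch of the standard definition, not the ssRBM paper's exact notation): for an energy-based model, the free energy is obtained by marginalizing the Boltzmann factor e^{-E} over the latent variables and taking the negative log. With binary spikes h and continuous slabs s as in the ssRBM, that is

    F(v) = -\log \sum_{h} \int \exp\big(-E(v, s, h)\big) \, ds,
    \qquad p(v) = \frac{e^{-F(v)}}{Z}.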
< kris1_>
okay, I will try my hand at it then
kris1_ has quit [Quit: kris1_]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]