verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Quit: Ex-Chat]
sumedhghaisas_ has joined #mlpack
vivekp has joined #mlpack
mikeling has joined #mlpack
kris1 has joined #mlpack
sumedhghaisas_ has quit [Ping timeout: 260 seconds]
kris1 has quit [Quit: kris1]
partobs-mdp has joined #mlpack
kris1 has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
< zoq>
Also, maybe you should make the parameter part of the constructor; that way you don't have to adjust anything.
< partobs-mdp>
zoq: I tried to add a resetPolicy parameter there, but it was just giving similar error messages
< partobs-mdp>
zoq: Part of the SGD constructor?
< zoq>
yes
< zoq>
By "here", do you mean the hpp and the cpp file (regularized_svd_function)?
< zoq>
Ah, I missed something: if you put the parameter in the constructor, you also have to expose some function to set the reset parameter.
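A minimal sketch of what zoq suggests here, assuming a simplified SGD-like class (the real mlpack SGD constructor takes more parameters; the accessor pair follows mlpack's usual convention):

    // Hypothetical sketch: take resetPolicy in the constructor and also
    // expose accessors so callers can change it after construction.
    class SGD
    {
     public:
      SGD(const double stepSize = 0.01, const bool resetPolicy = true) :
          stepSize(stepSize), resetPolicy(resetPolicy) { }

      //! Get whether the optimizer state is reset before each Optimize() call.
      bool ResetPolicy() const { return resetPolicy; }
      //! Modify whether the optimizer state is reset before each Optimize() call.
      bool& ResetPolicy() { return resetPolicy; }

     private:
      double stepSize;
      bool resetPolicy;
    };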
shikhar has quit [Ping timeout: 240 seconds]
shikhar has joined #mlpack
shikhar has quit [Ping timeout: 240 seconds]
shikhar has joined #mlpack
shikhar_ has joined #mlpack
shikhar has quit [Ping timeout: 240 seconds]
kris1 has quit [Quit: kris1]
rajiv_ has joined #mlpack
rajiv__ has joined #mlpack
< rajiv__>
Hello! I have so far contributed once to this repo and am currently working on another issue. But I find the code base difficult to understand. How should I go about understanding the code?
rajiv__ has quit [Client Quit]
rajiv_ has quit [Quit: Page closed]
rajiv_ has joined #mlpack
rajiv_ has quit [Client Quit]
kris1 has joined #mlpack
kris1 has quit [Ping timeout: 268 seconds]
kris1_ has joined #mlpack
kris1_ has quit [Ping timeout: 276 seconds]
kris1 has joined #mlpack
shikhar_ has quit [Quit: WeeChat 1.7]
< rcurtin>
zoq: ironstark: I can install libsvm-dev via apt on all the benchmark systems if you think that will help solve #89
< ironstark>
rcurtin: I think it should help. Let's try that, but I will keep #89 clean (only MATLAB implementations). I'll open another PR for the shogun problem.
< rcurtin>
sure
kris1 has quit [Quit: kris1]
< rcurtin>
ok, done
< rcurtin>
now if you do 'make setup' again and rebuild all the libraries, shogun should be built against libsvm
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
shikhar has joined #mlpack
shikhar_ has joined #mlpack
partobs-mdp has quit [Remote host closed the connection]
shikhar has quit [Ping timeout: 246 seconds]
kris1 has joined #mlpack
shikhar_ has quit [Quit: WeeChat 1.7]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
< ironstark>
okay thanks :)
pretorium[m] has quit [Ping timeout: 264 seconds]
pretorium[m] has joined #mlpack
kartik_ has joined #mlpack
< kartik_>
zoq: for the neural network, I'm considering each link as a dimension and then using CMA-ES for it to converge. The problem that persists is that sometimes the network converges with error less than the threshold
< kartik_>
and sometimes it doesn't
< kartik_>
I think it's always hard for an algorithm based on probability to converge in high-dimensional spaces
< kartik_>
where the vanilla network I was using was large enough
< kartik_>
for the Rosenbrock function too, many instantiations of it converged, except for some
< kartik_>
I've been trying it for two days. I think I'll be able to fix the vanilla issue. But for Rosenbrock, can I go for a lower number of function evaluations than I did previously?
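For reference, the generalized Rosenbrock function discussed above; a minimal Armadillo sketch (the benchmark versions used in the optimizer tests may differ in details):

    #include <armadillo>
    #include <cmath>

    // Generalized Rosenbrock function:
    //   f(x) = sum_{i=1}^{n-1} [ 100 * (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ].
    // The global minimum is f(1, ..., 1) = 0; the long curved valley is
    // what makes it a hard target for stochastic optimizers like CMA-ES.
    double Rosenbrock(const arma::vec& x)
    {
      double f = 0.0;
      for (size_t i = 0; i + 1 < x.n_elem; ++i)
      {
        f += 100.0 * std::pow(x(i + 1) - x(i) * x(i), 2) +
            std::pow(1.0 - x(i), 2);
      }
      return f;
    }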
kartik_ has quit [Ping timeout: 260 seconds]
rajiv_ has joined #mlpack
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
rajiv_ has quit [Client Quit]
shikhar has joined #mlpack
< zoq>
kartik_: I left a comment on the PR.
andrzejk_ has joined #mlpack
< rcurtin>
zoq: pretorium[m]: the HAM paper uses mini-batch optimization also, so I agree it would be better to use MiniBatchSGD for now
< rcurtin>
(sorry for the slow response, I wasn't really available over the weekend)
< zoq>
rcurtin: It probably makes sense to refactor SGD or MiniBatchSGD to support update policies, so that we can use Adam, RMSProp, etc. with batches. Maybe after #1047 is resolved.
< rcurtin>
yeah, agreed
< rcurtin>
I was thinking the same thing
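A rough sketch of the update-policy refactoring being discussed, with hypothetical names (the eventual mlpack design may well differ):

    #include <armadillo>

    // A vanilla policy; Adam or RMSProp would keep extra state (moment
    // estimates, squared-gradient averages) as members and use it here.
    class VanillaUpdate
    {
     public:
      void Update(arma::mat& iterate,
                  const double stepSize,
                  const arma::mat& gradient)
      {
        iterate -= stepSize * gradient;
      }
    };

    // SGD parameterized by the policy: the optimizer owns the loop and the
    // gradient computation; the policy decides how a gradient becomes a step.
    template<typename UpdatePolicyType = VanillaUpdate>
    class SGDType
    {
     public:
      template<typename FunctionType>
      double Optimize(FunctionType& function, arma::mat& iterate)
      {
        for (size_t i = 0; i < maxIterations; ++i)
        {
          arma::mat gradient;
          function.Gradient(iterate, gradient);
          updatePolicy.Update(iterate, stepSize, gradient);
        }
        return function.Evaluate(iterate);
      }

     private:
      UpdatePolicyType updatePolicy;
      double stepSize = 0.01;
      size_t maxIterations = 100000;
    };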
< zoq>
But for now, MiniBatchSGD works just fine, at least in my experiments.
< rcurtin>
for all three tasks?
< zoq>
Unfortunately no, the network wasn't able to learn the Add task; as Konstantin said, the performance gets stuck after a few iterations. I'll take a deeper look at the task tomorrow.
< zoq>
Maybe we should split the copy task from the other tasks and move on with the HAM implementation; once we figure out why the add task isn't working, we can merge it.
< rcurtin>
agreed, that would probably be a good idea
< rcurtin>
does the sort task fail also?
shikhar_ has joined #mlpack
< zoq>
Haven't tested the full parameter range yet, but it worked for the parameters I tested.
shikhar has quit [Read error: Connection reset by peer]
shikhar_ has quit [Ping timeout: 276 seconds]
< zoq>
Nice catch!
< rcurtin>
you can thank chenzhe, he is the one who pointed it out :)
< zoq>
chenzhe: Nice catch!
< chenzhe>
Sure. ^_^
< chenzhe>
I know this might not be the proper place to ask C++ questions, but I guess this question would be very easy for you guys~
< chenzhe>
If I want to output something through an input argument, say foo(arma::mat& x)
< chenzhe>
do I need to allocate memory before I call this function?
< chenzhe>
It seems that the compiler doesn't check this, because I get a runtime error
< chenzhe>
What I have is:
< chenzhe>
x = arma::vec(N); x_new = std::move(x); foo(x);
< rcurtin>
it's ok, you can ask C++ questions here :)
< chenzhe>
in foo(), I assume that the size of x is known
< rcurtin>
so, foo() will modify x, right?
< chenzhe>
yes
< rcurtin>
ok
< rcurtin>
in the code you gave, when foo(x) is called, x will be an empty matrix
< rcurtin>
this is because of the x_new = std::move(x), which moves (instead of copies) the contents of x into x_new
< chenzhe>
so the memory of x is not allocated?
< chenzhe>
so I need to allocate memory for x again in foo()
< rcurtin>
no, when foo(x) is called, the size of x will be 0x0
< rcurtin>
er, 0 rows by 0 columns, that is
< rcurtin>
so inside foo(x), you'll need to call x.set_size(rows, cols)
< chenzhe>
I see
< chenzhe>
then, before I call foo(x), I actually don't need to use arma::vec x(N), I can just use arma::vec x
< chenzhe>
otherwise it will be a waste?
< rcurtin>
sorry for the slow response---I got distracted...
< rcurtin>
it seems that you could just do:
< rcurtin>
x_new.set_size(N);
< rcurtin>
foo(x);
< rcurtin>
and that would be equivalent to the code you had before
< rcurtin>
Armadillo handles the actual memory allocation, so you don't need to worry about pointers or anything, you just need to call set_size() or do a matrix operation where the size can be deduced
< rcurtin>
and when you call "x = std::move(y)" where both x and y are Armadillo objects, then after that call x will hold all of the data that y previously did, and y will hold nothing (it will be equivalent to an empty-constructed object, i.e., y = arma::mat())
< chenzhe>
OK, I think I understand. Thanks~
< rcurtin>
sure, happy to help :)
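Putting that explanation together; a self-contained sketch, where foo() stands in for chenzhe's hypothetical function:

    #include <armadillo>
    #include <iostream>

    // foo() writes its output into x. Since x may arrive empty (for example
    // after being moved from), it sets the size itself before writing.
    void foo(arma::mat& x)
    {
      x.set_size(10, 1); // rows x cols; required if x is 0x0 on entry.
      x.fill(1.0);
    }

    int main()
    {
      arma::mat x(5, 1, arma::fill::randu);
      arma::mat xNew = std::move(x); // steals the data: x is now 0x0.
      std::cout << "after move: " << x.n_rows << "x" << x.n_cols << std::endl;
      foo(x); // foo() allocates via set_size() before writing.
      std::cout << "after foo: " << x.n_rows << "x" << x.n_cols << std::endl;
      return 0;
    }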
< chenzhe>
another question: in the example I gave before, because of the API, I need to declare x as arma::mat outside the function call; however, inside foo(x), x is actually just an arma::vec. So could I just call x.set_size(N) in foo(x), and get x as an arma::vec of size N?
< rcurtin>
I don't think that you can call foo(x) with x of type arma::mat when the signature of foo is foo(arma::vec& x)
< rcurtin>
so do you mean the other way around? where x is of type arma::vec and the signature of foo is foo(arma::mat& x)?
< rcurtin>
in either situation I would try to make the type of the passed x and the type of the parameter to foo() the same type, otherwise there will be some difficulties
< chenzhe>
what I did is x is of arma::mat, and the declaration of foo(arma::mat& x)
< rcurtin>
yeah, so in that case I think there is no problem
< rcurtin>
if you want x to be a vector, then you could just call x.set_size(N, 1)
< chenzhe>
ok, so inside Armadillo, arma::vec is just an arma::mat with 1 column?
< rcurtin>
arma::vec is a subclass of arma::mat which is hardcoded to have one column, yeah
< chenzhe>
:)
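A quick illustration of that relationship (a minimal sketch):

    #include <armadillo>

    int main()
    {
      // arma::vec (arma::Col<double>) derives from arma::Mat<double> and
      // is constrained to a single column.
      arma::vec v(5, arma::fill::zeros);
      arma::mat& asMat = v;  // valid: derived-to-base reference binding.

      // Resizing through the base is fine as long as it stays one column;
      // set_size(8, 2) on a vec would raise a runtime error instead.
      asMat.set_size(8, 1);
      return 0;
    }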
chenzhe has quit [Remote host closed the connection]