ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
akfluffy has quit [Read error: Connection reset by peer]
akfluffy has joined #mlpack
akfluffy has quit [Client Quit]
akfluffy has joined #mlpack
Hemal has quit [Quit: Leaving.]
akfluffy has quit [Remote host closed the connection]
akfluffy has joined #mlpack
< akfluffy> hey, is anyone available to take a look at a few lines of code? I keep getting the same Mat::SubMat() error with my RNN. I feel like it's a simple fix but I can't figure it out: https://gitlab.com/hexrays/my-error
< akfluffy> I'm thinking it has something to do with my rho or the size of the input cube
< akfluffy> so far I've tried changing basically all of the network and input parameters, to no avail
akfluffy has quit [Remote host closed the connection]
pd09041999 has joined #mlpack
seewishnew has joined #mlpack
braceletboy has joined #mlpack
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
braceletboy has quit [Quit: Ping timeout (120 seconds)]
braceletboy has joined #mlpack
pd09041999 has quit [Ping timeout: 268 seconds]
seewishnew has quit [Remote host closed the connection]
seewishnew has joined #mlpack
seewishnew has quit [Ping timeout: 250 seconds]
braceletboy has quit [Remote host closed the connection]
pd09041999 has joined #mlpack
pd09041999 has quit [Excess Flood]
pd09041999 has joined #mlpack
pd09041999 has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
Masstran_ has joined #mlpack
rf_sust2018 has joined #mlpack
rf_sust2018 has quit [Quit: Leaving.]
Masstran_ has quit [Ping timeout: 246 seconds]
witness has joined #mlpack
rajiv_ has joined #mlpack
< rajiv_> zoq: Thank you for reviewing the proposal! I have made the changes you suggested and also replied to the queries. Please let me know if any more changes have to be made :)
< rajiv_> rcurtin: It would be great if you could also give your feedback on the proposal :)
rajiv_ has quit [Client Quit]
rf_sust2018 has joined #mlpack
< Sergobot> zoq: hi ;) just updated my proposal, hope you take a look soon. thanks in advance :)
Soonmok has joined #mlpack
Kiyani has joined #mlpack
no has joined #mlpack
Kiyani has quit [Ping timeout: 256 seconds]
no has quit [Quit: Page closed]
akfluffy has joined #mlpack
trinity has joined #mlpack
< trinity> ss
< trinity> can anyone give me the GitHub link for mlpack?
< Masstran> Hello! May I ask a question about GSoC here, or is it better to do it by email?
sayan__ has quit [Ping timeout: 256 seconds]
chandramouli_r has joined #mlpack
< chandramouli_r> Hi guys
saksham189 has joined #mlpack
< chandramouli_r> I want to participate in GSoC 2019 this year
< chandramouli_r> I would like to submit a proposal
< chandramouli_r> should I first contribute and then write a proposal, or should I write a proposal now and then start contributing?
chandramouli_r has quit [Quit: Page closed]
Masstran_ has joined #mlpack
chandramouli_r has joined #mlpack
Masstran has quit [Ping timeout: 246 seconds]
< chandramouli_r> I would like to work on reinforcement learning
< chandramouli_r> what could be a first contribution for this project?
ayesdie has joined #mlpack
favre49 has joined #mlpack
favre49 has quit [Client Quit]
ayesdie has quit [Remote host closed the connection]
Masstran_ has quit [Ping timeout: 250 seconds]
favre49 has joined #mlpack
< favre49> Is it fine if I just listen in on the video call? I can't talk or anything, but I just want to know what's going on :)
< rcurtin> favre49: of course, feel free
< favre49> okay thanks :)
favre49 has quit [Client Quit]
< zoq> chandramouli_r: Welcome, feel free to submit an application and start contributing; there is no requirement to contribute something beforehand, but it is helpful for sure.
< zoq> Masstran: You can ask here.
< chandramouli_r> zoq: I have built an AI in Python that plays the game 2048; it calculates its moves using a heuristic score
< chandramouli_r> and uses alpha-beta pruning to improve efficiency
< zoq> chandramouli_r: Wow, nice.
< chandramouli_r> is this similar, or kind of different?
< chandramouli_r> Will you be mentoring the project?
< zoq> The project would be more on the method side; 2048 would be a nice example to show what can be done with it. Ideally we are looking for something that is novel, or that provides something that isn't available in another toolkit.
< chandramouli_r> Is 2048 written in Python good enough for an example?
< chandramouli_r> or should it be in C++?
< chandramouli_r> Okay, what is the first step towards this project?
< zoq> hm, that could work, but this is a C++ library; we used OpenAI Gym for some examples, which we wrapped so that it can be used from C++
< chandramouli_r> like, what should I do to show I am qualified and interested?
akfluffy has quit [Remote host closed the connection]
ayesdie_ has joined #mlpack
< zoq> I can see that you are interested :), you can write a nice application, perhaps including a link to the Python project you already did
< chandramouli_r> okay, yeah, I will
< zoq> also, maybe you can find an interesting issue on GitHub
< rcurtin> ShikharJ: agreed, it kind of blew my mind a bit :)
< chandramouli_r> Toshal: Thanks, I have gone through that page, but I need the nicknames of the mentors in this channel; I don't know them yet.
favre49 has joined #mlpack
Toshal has quit [Ping timeout: 256 seconds]
< favre49> Just read the slides, that's some pretty cool stuff. I just wanted to point out a tiny mistake: I didn't add the CNE optimizer, I merely optimized it. Kartik Nighania was the one who wrote the code, I believe.
< favre49> Also, I will start working on that CMA-ES inconsistency fix after I submit my GSoC proposal. I think I've misunderstood some things, so I'll hopefully have something by mid-April
pd09041999 has quit [Ping timeout: 246 seconds]
< rcurtin> favre49: ah, thanks, I figured that there was at least one error I made :)
favre49 has quit [Quit: Page closed]
KimSangYeon-DGU has quit [Quit: Page closed]
KimSangYeon-DGU has joined #mlpack
akfluffy has joined #mlpack
< akfluffy> For an RNN: I should have 1 row for each feature, 1 column for each datapoint in a timestep, and one slice for each timestep, right?
< rcurtin> akfluffy: I believe that is right, but I am not 100% sure
< rcurtin> I've seen your posts over the past few days but I'm not entirely sure of the solution
< rcurtin> I think you said you used gdb, but have you tried 'catch throw' to catch the Armadillo exception and do a backtrace?
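The 'catch throw' workflow rcurtin describes is roughly the following gdb session (assuming the program was built with debug symbols):

```
(gdb) catch throw      # break at the point any C++ exception is thrown
(gdb) run
(gdb) backtrace        # inspect the call stack at the throw site
```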
< akfluffy> no, I will try that. thanks
< rcurtin> it's not a pretty solution, but it should at least get you closer to what's wrong
< rcurtin> make sure to post the result if you find out what is wrong... ideally we would want to give better output so people don't have to dig so deep to figure out what is wrong
< akfluffy> alright, will do
< akfluffy> someone had a similar error a while back, but it turned out it was because they had a LogSoftMax layer but gave labels not in that range
Soonmok has quit [Quit: Connection closed for inactivity]
< rcurtin> yeah, I think that may be true for NegativeLogLikelihood too; the labels need to be [1, num_classes], not [0, num_classes - 1]
< akfluffy> Unfortunately I'm not even training on the set; it does this when I evaluate it or use Predict()
chandramouli_r has quit [Ping timeout: 256 seconds]
< akfluffy> rcurtin: I think I've found the problem; it has to do with the results cube.
< akfluffy> results.slice(seqNum).submat(0, begin, results.n_rows - 1, ...) For some reason, results.n_rows here is 0, so it subtracts 1 from 0 and errors
< Masstran> The second is about the thing I'd want to do, the NEAT algorithm. I'm actually not sure it's going to work at all with the given API. I was looking through the optimization API, and it doesn't really add up with how I understand NEAT works. As I understand it, NEAT only optimizes objective functions that take neural networks as inputs (and finds the network which gives the best result on that function). This doesn't correspond with the optimization API, as it doesn't seem to place any limitations on the function input. Wouldn't this issue make it impossible to implement NEAT with those requirements?
< Masstran> We could probably solve this problem by creating a new FunctionType, but I'm not sure it's the best solution
< akfluffy> outputSize gets set to 0 and never set to anything else in rnn_impl.hpp
< akfluffy> so since outputSize is always 0, the results cube will have 0 rows, and then later down the line it will subtract 1 from the results cube rows
< rcurtin> akfluffy: makes sense, maybe the responses cube you are passing in has 0 for n_rows? just an idea, I am not sure
< akfluffy> I just used arma::cube predictions;
< rcurtin> Masstran: sorry I can't help with the NEAT project, but maybe the architecture of the network can be understood as a matrix itself, which may make optimization in ensmallen's framework more easily possible. I'm not sure, I'm not the mentor for the project; maybe there has been other discussion already
< rcurtin> I'll try and take a look at the proposal but can't promise anything; my night is already booked and I'm out of town over the weekend
< Masstran> Thanks, sure, no problem
< Masstran> I'll think a bit more about how it might be possible
< akfluffy> I tried specifying the dimensions for the output cube myself, but outputSize still ends up as 0. The RNN constructor sets it to 0 and doesn't change it AFAIK
saksham189 has quit [Ping timeout: 256 seconds]
akhandait has quit [Quit: Connection closed for inactivity]
krgopal has joined #mlpack
akfluffy has quit [Remote host closed the connection]
akfluffy has joined #mlpack
< akfluffy> imo it looks like a problem with mlpack, but I doubt it
krgopal has quit [Quit: Page closed]
zoq has joined #mlpack
jeffin143 has quit [Read error: Connection reset by peer]
jeffin143 has joined #mlpack
< zoq> one of my hdds failed ... good to have redundancy
< rcurtin> definitely
jeffin143 has quit [Ping timeout: 246 seconds]
< rcurtin> did you buy each HDD in the RAID from different suppliers? :)
< rcurtin> I always do that, but it's probably too paranoid
< rcurtin> the observation is that if you get, e.g., 5 drives off amazon, they're likely to be from the same batch, and this increases the likelihood that they will fail with the same number of hours on them
< rcurtin> I don't know if that's just paranoia or it's actually well-supported
< zoq> not at all paranoid, I do the same, and yes, they are different
< zoq> unfortunately I did something wrong with the boot partition, so it took me way more time to get this back
< zoq> hopefully the resilver process will finish before the other HDD fails
< akfluffy> lol, raid 0 here
< akfluffy> anyways, I will keep tracing with gdb, but it gets harder because I did not write rnn_impl.hpp lol
i8hantanu has quit [Quit: Connection closed for inactivity]
jenkins-mlpack2 has quit [Ping timeout: 268 seconds]
jenkins-mlpack2 has joined #mlpack
akfluffy has quit [Ping timeout: 246 seconds]
akfluffy has joined #mlpack
akfluffy has quit [Remote host closed the connection]
akfluffy has joined #mlpack
< akfluffy> ls
< akfluffy> sorry again. this client has no differentiation from a normal shell
< akfluffy> is there a way to run my program single-threaded without having to rebuild mlpack? I want to reverse-step in gdb
akfluffy has quit [Remote host closed the connection]
< rcurtin> akfluffy: you could run with OMP_NUM_THREADS=1
< rcurtin> otherwise you'd have to reconfigure CMake with -DUSE_OPENMP=OFF and then rebuild