verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
jenkins-mlpack has joined #mlpack
kris__ has quit [Quit: Connection closed for inactivity]
< ironstark>
rcurtin: zoq: This is the blog post I have written about the work done so far, for submission in the final evaluation.
< kris__>
lozhnikov: Just saw your message in the logs. I did try to vary all the parameters that you mentioned.
< kris__>
I did not vary the noise type though.
< kris__>
in the case of the digits dataset.
Ariel has joined #mlpack
Ariel is now known as Guest67887
< Guest67887>
Hi, where can I find examples of your neural network models for RNN and LSTM?
vivekp has quit [Ping timeout: 248 seconds]
< kris__>
Go to the mlpack tests folder and look for the recurrent_neural_network_test.cpp file.
< kris__>
They could serve as a starting point.
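For reference, here is a minimal sketch of what a small recurrent model looks like with the mlpack ann API; the layer choices, sizes, data layout, and Train() overload are assumptions based on that test file and may differ in your version of the library:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/rnn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      const size_t rho = 10;  // number of time steps to unroll for BPTT
      const size_t inputSize = 1, hiddenSize = 16, outputSize = 1;

      // A small LSTM network: input -> LSTM -> linear readout.
      RNN<MeanSquaredError<> > model(rho);
      model.Add<IdentityLayer<> >();
      model.Add<LSTM<> >(inputSize, hiddenSize, rho);
      model.Add<Linear<> >(hiddenSize, outputSize);

      // Each column holds one flattened sequence of rho steps (dummy
      // data here; a real task would load sequences instead).
      arma::mat input(inputSize * rho, 100, arma::fill::randn);
      arma::mat target(outputSize * rho, 100, arma::fill::randn);

      model.Train(input, target);
    }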
< Guest67887>
I have looked at it; I guess that is all I've got.
< kris__>
Well, I think this year's HAM and NTM models use RNNs a lot, so maybe it would be interesting to look at those tests as well.
< kris__>
You can find them on the GitHub page.
< Guest67887>
Thanks kris, I will have a look at those.
kris1 has quit [Quit: kris1]
Guest67887 has quit [Quit: Page closed]
kris1 has joined #mlpack
govg has quit [Ping timeout: 248 seconds]
govg has joined #mlpack
kris___ has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
partobs-mdp has joined #mlpack
< kris__>
Quick question: how is the GPU going to help right now? I don't think the present code is CUDA compatible, so how are we going to run, e.g., an RNN on a GPU?
vivekp has joined #mlpack
< lozhnikov>
kris__: If I am not mistaken, Armadillo doesn't support GPUs right now, so there is no way to run your code on a GPU.
kris1 has quit [Quit: kris1]
< zoq>
kris__: You can link against NVBLAS, which is a GPU-accelerated implementation of BLAS. NVBLAS can accelerate most BLAS Level-3 routines, so until Bandicoot is released that's all we have right now.
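For anyone trying that: NVBLAS is dropped in at run time via LD_PRELOAD plus a small config file pointing at a CPU BLAS fallback. A minimal sketch, with library paths that are assumptions for a typical Linux/CUDA install (my_mlpack_program is a placeholder for whatever binary you run):

    # nvblas.conf -- NVBLAS needs a CPU BLAS to fall back on for
    # everything it does not accelerate.
    NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
    NVBLAS_GPU_LIST ALL

    # Then preload NVBLAS so it intercepts the Level-3 BLAS calls that
    # Armadillo (and therefore mlpack) makes:
    NVBLAS_CONFIG_FILE=./nvblas.conf \
      LD_PRELOAD=/usr/local/cuda/lib64/libnvblas.so ./my_mlpack_program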
gtank has quit [Ping timeout: 276 seconds]
gtank has joined #mlpack
kris1 has joined #mlpack
< kris1>
lozhnikov: Did you see the results on the mnist7 dataset?
kris1 has quit [Quit: kris1]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 252 seconds]
vivekp has joined #mlpack
govg has quit [Ping timeout: 248 seconds]
govg has joined #mlpack
partobs-mdp has quit [Remote host closed the connection]
< lozhnikov>
yes, I did
< kris__>
Okay, what do you think?
< kris__>
The maximum number of epochs I ran for was 20.
< kris__>
Right now I have changed the noise dimension to 10, since I found we were getting cleaner images when the noise dimension was low.
< kris__>
Right now I am testing with 50 epochs; it might take 2-3 hours.
< lozhnikov>
I think the work is far from finished. Maybe it is reasonable to vary the layer types, the number of hidden units, and so on.
< kris__>
Okay, I will try those too.
< kris__>
If your machine is free, could you run some tests?
< lozhnikov>
yes, if you prepare a test then I'll run it
< kris__>
Great, I will send one within a few hours. By the way, should we also think about which test to add for the GAN in the PR?
< lozhnikov>
It seems we can't add a test for training since it requires a lot of time
< lozhnikov>
But I think we can check some properties using pretrained parameters
< kris__>
Hmmm, that seems like a good idea.
< lozhnikov>
I guess it is possible to add a simple test for a network that doesn't require a lot of parameters.
< kris__>
The 1D Gaussian example uses a pretty small network; that should be doable.
< lozhnikov>
sounds good
< kris__>
We also got good results for that: we converged to the real distribution at around 80 epochs. But I would have to write a KL divergence metric to measure the similarity between the distributions.
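A sketch of what that metric could look like: a discrete KL divergence over histograms of real and generated samples, using Armadillo. The normalization and the epsilon guard for empty bins are my own assumptions:

    #include <armadillo>
    #include <cmath>

    // KL(P || Q) between two histograms over the same bins.
    double KLDivergence(arma::vec p, arma::vec q, const double eps = 1e-12)
    {
      p /= arma::accu(p);  // normalize counts to probabilities
      q /= arma::accu(q);

      double kl = 0.0;
      for (size_t i = 0; i < p.n_elem; ++i)
      {
        // Bins where p == 0 contribute nothing; eps guards empty q bins.
        if (p[i] > 0.0)
          kl += p[i] * std::log(p[i] / (q[i] + eps));
      }
      return kl;
    }

For the 1D Gaussian example, p and q would be histograms of the real and generated samples over identical bins, and the value should shrink toward zero as the generator converges.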
< lozhnikov>
how much time should the test take?
< kris__>
It takes longer than the ssRBM for sure.
< kris__>
So we can't directly use it
< lozhnikov>
How many parameters does the test require?
< kris__>
Not many; the hidden layer size is around 6, with 3 layers in the discriminator and 1 layer in the generator.
kris1 has joined #mlpack
< lozhnikov>
hmm... in that case you could set the pretrained parameters inside the source code of the test
< kris__>
Sure, that can be done.
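A sketch of the idea for the small network just described, using a plain FFN to stand in for the generator (the GAN class from the PR would be used the same way); the weight values and the size of the pretrained matrix are placeholders:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    // Tiny generator: noise dim 10 -> 1-d sample, as in the 1D Gaussian
    // example. No Train() call: the parameters are set directly.
    FFN<MeanSquaredError<> > generator;
    generator.Add<Linear<> >(10, 1);

    // "Pretrained" values pasted into the test source; a real test would
    // hard-code the weights produced by the converged run.
    arma::mat pretrained(11, 1);  // 10 weights + 1 bias for Linear(10, 1)
    pretrained.fill(0.05);
    generator.Parameters() = pretrained;

    arma::mat noise(10, 5, arma::fill::randn), samples;
    generator.Predict(noise, samples);

    // Assert cheap properties only, e.g. output shape or rough moments.
    BOOST_REQUIRE_EQUAL(samples.n_rows, 1);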
< kris__>
By the way, our reading group discussed the W-GAN today; in the paper they actually argue that a GAN working is pretty much a matter of luck, depending on whether you get the right initial parameters.
kris1 has quit [Client Quit]
vivekp has quit [Ping timeout: 276 seconds]
vivekp has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
partobs-mdp has joined #mlpack
kris1 has joined #mlpack
govg has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
partobs-mdp has quit [Remote host closed the connection]
aravindaswmy047 has joined #mlpack
kris1 has quit [Quit: kris1]
< aravindaswmy047>
hello
< aravindaswmy047>
I am new to mlpack
< aravindaswmy047>
Can anyone tell me where I should start, after building mlpack on my system, to understand how it is designed and its functionality?
kris1 has joined #mlpack
< zoq>
aravindaswm: Hello and welcome! If you are looking for how to get started, these pages may be helpful:
< zoq>
In addition, the tutorials page should provide some useful documentation on getting comfortable with the library, using the command-line executables, and understanding the code: http://www.mlpack.org/tutorial.html
< kris__>
hey lozhnikov: Check the results out....
< zoq>
aravindaswm: I hope this is helpful, don't hesitate to ask.
< aravindaswmy047>
Thank you, I will start looking into it.
< kris__>
These are the parameters: ./gan_keras.o -i train7.txt -m 1000 -e 50 -n100 -N5 -D1024 -G1024 -b 8 -x 2 -r 0.001 -o epoch1_output.txt -v
< zoq>
kris__: This is looking better and better.
< kris__>
zoq: Yes, finally some good results...
< kris__>
lozhnikov: Should we stop with this example and test the CNN example, or should I continue?
< kris__>
zoq: Any suggestions on the ssRBM PR timings? Where could we possibly reduce the time?
vivekp has quit [Ping timeout: 248 seconds]
< zoq>
kris__: Let me take a look.
vivekp has joined #mlpack
shikhar has joined #mlpack
shikhar has quit [Quit: WeeChat 1.4]
aravind047 has joined #mlpack
aravind047 has quit [Client Quit]
aravinaswamy047 has joined #mlpack
aravinaswamy047 has quit [Client Quit]
aravindaswmy047_ has joined #mlpack
< aravindaswmy047_>
Hello again
< zoq>
welcome back :)
< aravindaswmy047_>
While building mlpack on my system, what are the options that I should activate during the build?
< aravindaswmy047_>
Should I switch on any options with the -D flag?
< zoq>
The default values are just fine; if you'd like to debug the code, e.g. using gdb, you should build with -DDEBUG=ON to get debug symbols.
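i.e., assuming the usual out-of-source CMake build from the repository root:

    mkdir build && cd build
    cmake -DDEBUG=ON ..
    make -j4    # adjust the job count to your machine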
< aravindaswmy047_>
oh ok
< aravindaswmy047_>
thanks
< aravindaswmy047_>
sorry for asking very primitive questions
< zoq>
nah, we are here to help.
aravindaswmy047_ has quit [Quit: Page closed]
< lozhnikov>
kris__: The picture doesn't look perfect, but your results definitely converge to sevens. In that case I agree, it is reasonable to focus on the O'Reilly example.
< kris__>
Well, the dataset only consists of 7's; I did not want to train on the full MNIST.
< kris__>
For the O'Reilly example I will require the resize layer. If you have the time today, could you review it?
< lozhnikov>
yes, I know, I just pointed out that the results look like sevens
< lozhnikov>
sure, I'll look through that again, but I didn't find any serious issues last time, so you can use it
< kris__>
Okay, I will merge from the resize layer branch directly.
< kris__>
I will stop now with the Keras example; I think this is enough. I still don't get why we end up generating the same image even though the noise is obviously different.
< lozhnikov>
if I remember right, that happens when the generator overpowers the discriminator, i.e. mode collapse