verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
< tham> I have a quick question about the test cases of convolution neural network
< tham> The example "BuildVanillaNetwork" said
< tham> 28x28x1 input layer, 24x24x6 convolution layer, 12x12x6 pooling layer
< tham> 8x8x12 convolution layer and a 4x4x12 pooling layer
< tham> Shouldn't it be
< tham> 28x28x1 input layer, 24x24x8 convolution layer, 12x12x8 pooling layer, 8x8x12 convolution layer and a 4x4x12 pooling layer?
< tham> The first convolution layer outputs 8 feature maps, so I think it should be 8, not 6
< tham> Another test case, "VanillaNetworkDropoutTest"
< tham> I think it should be
< tham> 28x28x1 input layer, 24x24x4 convolution layer, 24x24x4 dropout layer, 12x12x4 pooling layer, 12x12x8 convolution layer
< tham> 6x6x8 pooling layer
< tham> But the numbers in the comments are
< tham> 28x28x1 input layer, 24x24x6 convolution layer, 12x12x6 pooling layer, 8x8x12 convolution layer, 8x8x12 Dropout Layer and a 4x4x12 pooling layer
< tham> Could anyone explain how you calculate the dimensions of each layer?
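[For reference, the layer sizes in the comments follow from the standard formulas for a "valid" convolution (out = in - kernel + 1) and non-overlapping pooling (out = in / pool). A minimal sketch; the 5x5 kernel and 2x2 pooling sizes are assumptions inferred from the 28 → 24 → 12 → 8 → 4 progression in the log, not taken from the test code itself:]

```python
def conv_out(in_size, kernel):
    # "valid" convolution: no padding, stride 1
    return in_size - kernel + 1

def pool_out(in_size, pool):
    # non-overlapping pooling with window size `pool`
    return in_size // pool

size = 28                  # 28x28x1 input
size = conv_out(size, 5)   # 24x24 convolution layer output
size = pool_out(size, 2)   # 12x12 pooling layer output
size = conv_out(size, 5)   # 8x8 convolution layer output
size = pool_out(size, 2)   # 4x4 pooling layer output
print(size)
```

[The depth of each layer is just the number of feature maps the preceding convolution produces, which is what the 6-vs-8 question above is about.]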
tham has quit [Ping timeout: 246 seconds]
lotas has joined #mlpack
< lotas> Hi
< lotas> I followed the 'Building mlpack from source' tutorial with no issues
< lotas> however I can't get Netbeans to work with mlpack
< lotas> No matter what I do, it can't resolve libxml/parser.h
< lotas> My include directories are set to /usr/include/mlpack/;usr/include/libxml2/libxml/
< lotas> But it looks like netbeans can't find parser.h
< lotas> Besides, parsing the file (a very simple example from the mlpack website) takes ages and eventually becomes "suspended"
< lotas> It's a mlpack/libxml configuration issue, becasue other java and c++ projects on the same IDE work ok
< lotas> Can anyone help me please?
< lotas> I'm logging out, but I will be checking the log. Thanks
lotas has quit [Quit: Page closed]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#299 (master - 1190791 : Marcus Edel): The build is still failing.
travis-ci has left #mlpack []
tham has joined #mlpack
< tham> I saw the update to the cnn comments, thanks for that
< tham> But I think it may be missing a dropout layer
< tham> I have another question about the cnn: how can I predict results with it?
< tham> The validationError of the example is lower than 0.7%, but the prediction results (on the training set, not the test set) are not that good
< tham> The prediction results (true positives) of the first example are 0% for 4s, 100% for 9s
< tham> The prediction results (true positives) of the second example are 68% for 4s, 97.6% for 9s
< tham> I post the codes at http://pastebin.com/9PZFkijf
< tham> What kind of error did I make? Thank you
< tham> Is it possible to add the prediction as part of the unit test? This way users could learn how to predict and test the prediction results at the same time
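[The per-class "true positive" percentages quoted above can be computed with a small helper. This is a generic sketch independent of mlpack; the example labels and predictions are made up for illustration:]

```python
def per_class_recall(predictions, labels, cls):
    """Fraction of examples whose true label is `cls` that were
    predicted as `cls` (the per-class true-positive rate)."""
    hits = sum(1 for p, t in zip(predictions, labels) if t == cls and p == cls)
    total = sum(1 for t in labels if t == cls)
    return hits / total

labels      = [4, 4, 9, 9, 9]
predictions = [4, 9, 9, 9, 9]
print(per_class_recall(predictions, labels, 4))  # 0.5
print(per_class_recall(predictions, labels, 9))  # 1.0
```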
tham has quit [Ping timeout: 246 seconds]
tham has joined #mlpack
< tham> I just studied the source code; it looks like ValidationError does not return the %
< tham> but something similar to a cost value
< tham> However, is the way of prediction on http://pastebin.com/9PZFkijf correct?
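[The distinction noticed above, a cost value versus an error percentage, can be sketched in a few lines. The targets and outputs below are made-up illustrative numbers, and mean squared error stands in for whatever cost the mlpack trainer actually reports:]

```python
targets = [0, 1, 1, 0, 1]
probs   = [0.1, 0.8, 0.4, 0.3, 0.9]  # hypothetical model outputs

# a cost value -- roughly the kind of number a "validation error"
# computed from the loss function would report
mse = sum((p - t) ** 2 for p, t in zip(probs, targets)) / len(targets)

# a misclassification percentage -- what "< 0.7%" reads as
preds = [1 if p >= 0.5 else 0 for p in probs]
error_pct = 100.0 * sum(p != t for p, t in zip(preds, targets)) / len(targets)

print(mse, error_pct)
```

[The two numbers live on different scales, so a low cost value is not directly comparable to a low error percentage.]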
witness_ has joined #mlpack
tham has quit [Ping timeout: 246 seconds]
< rcurtin> lotas: hi there, consider using the newest version of mlpack instead of mlpack 1.0.12 (or whatever other release)
< rcurtin> in the latest git revision, the dependency on libxml2 is dropped in favor of boost serialization, so I think that will fix your issues
< rcurtin> alternately, I suspect setting your include directories to '/usr/include/mlpack/;/usr/include/libxml2/' might solve your issue with netbeans
< rcurtin> the libxml2 include directories were always a source of pain... on some distributions parser.h was in /usr/include/libxml/, on others, /usr/include/libxml/libxml2/... happy to not be depending on that anymore
< rcurtin> I'm not sure which example you're referring to about the parsing, though, so I don't know if I can be helpful there without more information
< rcurtin> I'll be online tomorrow from about 0830 UTC to 1730 UTC (long plane flight...) so maybe I can help with issues then, if you're around
< rcurtin> tham: I think you read the IRC logs too; I'll spend some more time reading what you wrote tomorrow... need to get some sleep first :)
witness_ has quit [Quit: Connection closed for inactivity]
< zoq> tham: I tested the code, but my results are completely okay even with 5 epochs instead of 50 (prediction accuracy for 5: 0.972, prediction accuracy for 8: 0.976).
< zoq> tham: I guess the problem you are running into is 'bad' initial weights. You can try another weight initialization rule or change some parameters of the current method, e.g., you could try the OrthogonalInitialization method. Another problem could be overfitting; you could track the prediction accuracy over time.
< zoq> tham: I used the following code to keep track of the training process: https://gist.github.com/zoq/e8e93df88eb5fe6c0711
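[Orthogonal initialization, as suggested above, is commonly done by taking the Q factor of the QR decomposition of a random Gaussian matrix. This numpy sketch shows the general technique only; it is not mlpack's actual OrthogonalInitialization implementation:]

```python
import numpy as np

def orthogonal_init(rows, cols, seed=None):
    """Return a rows x cols weight matrix whose shorter dimension
    is orthonormal, built from the QR decomposition of a Gaussian."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((rows, cols))
    # QR of the tall orientation, transposed back if needed
    q, r = np.linalg.qr(a if rows >= cols else a.T)
    # sign correction so the result is uniformly distributed
    q = q * np.sign(np.diag(r))
    return q if rows >= cols else q.T

W = orthogonal_init(6, 4, seed=0)
# columns are orthonormal: W.T @ W is the 4x4 identity
print(np.allclose(W.T @ W, np.eye(4)))
```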