verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ironstark has quit [Ping timeout: 276 seconds]
ironstark has joined #mlpack
< ironstark> rcurtin: Sure, I'll run them for an algorithm and see how we can display the results better. Also, I ran the command make run METHODBLOCK=NBC LOG=True on slake but the reports.db file did not get generated. How do I generate that?
govg has quit [Ping timeout: 246 seconds]
ironstark has quit [Quit: Connection closed for inactivity]
govg has joined #mlpack
vivekp has quit [Ping timeout: 255 seconds]
kris___ has joined #mlpack
vivekp has joined #mlpack
kris1 has joined #mlpack
kris1 has quit [Remote host closed the connection]
< zoq> ironstark: Did you use driver: 'sqlite' and database: 'reports.db' in the config file? Also, you can benchmark specific libraries: make run METHODBLOCK=NBC LOG=True BLOCK=mlpack,shogun CONFIG=config.yaml
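(A minimal sketch of the config block zoq is describing; only driver: 'sqlite' and database: 'reports.db' come from the message above, while the surrounding general-settings layout is an assumption rather than a verified excerpt of config.yaml.)

    # assumed general settings block in config.yaml
    library: general
    settings:
        driver: 'sqlite'
        database: 'reports.db'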
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
kris1 has joined #mlpack
< kris___> Here are the pre-processing scripts changed according to the paper.
< kris___> Though it seems incorrect to me to compare 2 algorithms with different feature vectors.
< kris___> Let me know if the feature extraction scripts are correct; I would then run the ssRBM on top of them.
< lozhnikov> kris___: the paper doesn't describe how many patches we should get from each image
< lozhnikov> right now you get only one patch from each image
< lozhnikov> that could be incorrect
< kris___> No, if you look at the file random_patch.py
< kris___> for j in range(0, random_samples):
< kris___> where random_samples is the number of random samples you want to take
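(A minimal sketch of the sampling loop being described, assuming NumPy images stored as 2-D arrays; patch_size and random_samples are stand-ins for whatever the actual random_patch.py uses.)

    import numpy as np

    def random_patches(image, patch_size, random_samples, rng=np.random):
        # Draw random_samples square patches from uniformly random positions.
        h, w = image.shape[:2]
        patches = []
        for j in range(0, random_samples):
            r = rng.randint(0, h - patch_size + 1)
            c = rng.randint(0, w - patch_size + 1)
            patches.append(image[r:r + patch_size, c:c + patch_size])
        return patches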
< lozhnikov> ah, I see
< lozhnikov> the paper doesn't describe the preprocessing part. I mean "return preprocess(allpatch[0:5000][:])"
< lozhnikov> so, I think it isn't needed
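(As an aside on the quoted line: for a list or NumPy array, allpatch[0:5000][:] just takes the first 5000 patches; the trailing [:] is a redundant full slice of that result.)

    import numpy as np

    allpatch = np.random.rand(6000, 64)          # placeholder patch matrix
    assert np.array_equal(allpatch[0:5000][:],   # same rows either way
                          allpatch[0:5000])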
< kris___> In all of the experiments, "images were pre-processed by PCA whitening retaining 99% of the variance"
< kris___> This is from the paper.
< lozhnikov> let me check the paper
< kris___> This is in paragraph 4
< lozhnikov> hmm, I see. I think the transform is incorrect. The paper states the images were pre-processed by PCA whitening, i.e. the PCA matrix is equal to sqrt(S) * U' * X in your notation
< kris___> Yes, I used ZCA whitening, because I don't want another parameter k in my calculations.
< kris___> For PCA whitening you would have to define the number of components.
< lozhnikov> could you provide the link?
< kris___> To what ??
< lozhnikov> there are no extra parameters
< rcurtin> lozhnikov: kris___: I know I have not been a part of the discussion, but maybe WhitenUsingSVD() or WhitenUsingEig() in src/mlpack/core/math/lin_alg.hpp is helpful?
< rcurtin> sorry if my comment is off base :)
< lozhnikov> rcurtin: nice idea:)
< lozhnikov> kris___: I see. Maybe ZCA whitening gives the same results, maybe not. The only thing I know: the paper uses PCA whitening.
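(For reference, the standard definitions being contrasted, written with the covariance eigendecomposition C = U \Lambda U^{\top}; this is textbook material, not a quote from the paper:

    x_{\mathrm{PCA}} = \Lambda^{-1/2} U^{\top} x, \qquad
    x_{\mathrm{ZCA}} = U \Lambda^{-1/2} U^{\top} x = U \, x_{\mathrm{PCA}}

Both use the full set of eigenvectors, so neither introduces an extra parameter unless the components are truncated, e.g. to retain 99% of the variance as the quoted paper does.)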
< kris___> hmmmm sure, I could use PCA whitening from sklearn and leave the n_components parameter empty...
< lozhnikov> I think it is reasonable to verify that you use reshape() properly. Are you sure that the function reshapes matrices in correct order?
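(A quick illustration of the ordering concern: NumPy's reshape defaults to row-major (C) order, and a column-major (Fortran) reshape of the same data gives a different matrix, so the right choice depends on how the image buffer was laid out.)

    import numpy as np

    a = np.arange(6)
    print(a.reshape(2, 3))              # row-major (C order), the NumPy default
    print(a.reshape(2, 3, order='F'))   # column-major (Fortran order)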
< lozhnikov> and the paper states "In all of the experiments, images were pre-processed". Actually, you pre-process patches rather than images
< lozhnikov> so, I think it is reasonable to sample patches from pre-processed images. What do you think?
< kris___> Ahhh sorry, I did do that in the patches.py file
< kris___> but I think I did not change that in the random_patches.py file
< kris___> I will make the changes sometime later today and send you the gist. I will also switch from ZCA to PCA whitening.
govg has quit [Ping timeout: 240 seconds]
< lozhnikov> sounds good. what do you think about Ryan's idea ("maybe WhitenUsingSVD() or WhitenUsingEig() in src/mlpack/core/math/lin_alg.hpp is helpful")?
< rcurtin> I guess I only suggested that in case it might save you the time it takes to implement a whitening method yourself
< kris___> Ahhh well, I would like to handle this in Python, mainly because I can code it quickly and I want to keep the image processing part in Python...
< rcurtin> I haven't looked through it too specifically, but I would imagine that WhitenUsingSVD() is almost identical to PCA whitening
< kris___> rcurtin: Well, sklearn already has support for PCA whitening.
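(A minimal sketch of the sklearn route being discussed: PCA(whiten=True) is sklearn's whitening option, and passing a float as n_components keeps just enough components to explain that fraction of the variance, so 0.99 matches the 99% figure quoted from the paper. The patch matrix here is placeholder data.)

    import numpy as np
    from sklearn.decomposition import PCA

    patches = np.random.rand(5000, 8 * 8)       # placeholder: one flattened patch per row
    pca = PCA(n_components=0.99, whiten=True)   # keep components explaining 99% of variance
    whitened = pca.fit_transform(patches)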
< rcurtin> ah, if you are just doing some offline preprocessing and you are already using python, maybe this is the better idea, but it is up to you; I didn't know if you needed to implement the whitening in code you were already using or what
< kris___> lozhnikov: Any updates on the conv-GAN model?
< kris___> I tried to test the discriminator alone like you said, but I would need some more time...
< lozhnikov> kris___: I'll dig into that on Saturday
kris1 has quit [Ping timeout: 240 seconds]
kris1 has joined #mlpack
< rcurtin> kris1: I read your blog post, very cool to see that the RBM implementation is 1.5x faster than scikit
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
govg has joined #mlpack
aravindaswmy047 has quit [Quit: Connection closed for inactivity]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
< kris___> rcurtin: Thanks...
Cyrinika has joined #mlpack
kris1 has quit [Quit: kris1]