ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< rcurtin> nishantkr18[m]: thanks for writing that! can't believe we are 4 weeks in already
< KimSangYeon-DGU[> Yes, so fast
< KimSangYeon-DGU[> nishantkr18 (@nishantkr18:matrix.org): Nice blog :)
< nishantkr18[m]> Glad u liked it! :)
ImQ009 has joined #mlpack
< zoq> nishantkr18[m]: Thanks for the update, two more movies for my list :)
< jeffin143[m]> > It might happen that we stop maintaining it. So just confirm it.
< jeffin143[m]> zoq (@freenode_zoq:matrix.org): @rcurtin:matrix.org, is it ok to write the blog on mlpack::blog?
< zoq> jeffin143[m]: Sure, use whatever works for you the best.
< jeffin143[m]> Thanks :) , cc @walragatver:matrix.org
< kartikdutt18[m]> Hey @zoq, did you get the training and validation accuracy at the end of the epoch?
< zoq> 0.00105969[====================================================================================================] 100% - ETA: 0s - loss: 0.00105956
< zoq> 7977/7977 [====================================================================================================] 100% - 44s 5707ms/step - loss: 0.00105969
< zoq> this is the last thing I see
< zoq> It's still doing something, CPU utilization is at 600%, but I'm not sure what.
< kartikdutt18[m]> It's calculating the metric (accuracy on train and valid dataset).
< zoq> kartikdutt18[m]: Does that part take long?
< kartikdutt18[m]> It takes more than an hour and a half on mine.
< zoq> kartikdutt18[m]: Okay, will take a look at the output in an hour.
< kartikdutt18[m]> Great, Thanks a lot.
< jeffin143[m]> > It's still doing something, CPU utilization is at 600%, but not sure what.
< jeffin143[m]> 600%
< zoq> kartikdutt18[m]: Train accuracy : 0.000000
< zoq> Validation accuracy : 0.000000
< zoq> [WARN ] Unable to open file './../weights/darknet19_1_0_001060.bin' to save object 'darknet19'.
< zoq> kartikdutt18[m]: I think there is no weights folder.
< kartikdutt18[m]> Yeah, but Train accuracy equal to 0 is not possible (I think).
< kartikdutt18[m]> Even if it predicted a single class every time it would get at least ~10% right (CIFAR-10 has 10 classes).
< zoq> kartikdutt18[m]: agreed
< zoq> especially since I see 7977/7977 [====================================================================================================] 100% - 44s 5707ms/step - loss: 0.00105969
< kartikdutt18[m]> The batch size was 1 right?
< kartikdutt18[m]> or 8?
< zoq> 1
< kartikdutt18[m]> For a batch size of 1, shouldn't the total iterations be equal to the number of images, i.e. 48k?
< zoq> Will terminate training for now, until we figure out what went wrong.
< zoq> Yes, should be equal to the number of images.
< kartikdutt18[m]> Agreed.
< kartikdutt18[m]> Could you please share the script (with the changes)?
< zoq> For this I guess we can train for like one step with 10 images and see if we can reproduce the issue.
< kartikdutt18[m]> Figured it out.
< kartikdutt18[m]> The cifar10 directory contains train and test folders. The correct path is cifar10/train. I think I suggested that yesterday.
< kartikdutt18[m]> For a batch size of 1 I got this result on my machine for a few iterations.
< zoq> kartikdutt18[m]: Changed from ./../data/cifar10-small/ to ./../data/cifar10/
< zoq> and the other change I did was the batchSize
< kartikdutt18[m]> It should be ./../data/cifar10/train/
< kartikdutt18[m]> The cifar10 directory contains train and test folders.
< zoq> I thought LoadImageDatasetFromDirectory would look for train and test
< kartikdutt18[m]> Ahh, sorry about that. LoadImageDataset will look for classes. I think the model loaded 0 images from train and all images from the test set and trained on that.
< kartikdutt18[m]> For training and testing it would have to be called twice, like in Keras.
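A rough sketch of the "called twice" idea; LoadImageDatasetFromDirectory is the in-progress loader discussed above, so the exact signature here is hypothetical:

    // Hypothetical usage: load the train and test splits separately, since
    // the loader expects a directory of class subfolders, not a train/test
    // root directory.
    arma::mat trainX, trainY, testX, testY;
    LoadImageDatasetFromDirectory("./../data/cifar10/train/", trainX, trainY);
    LoadImageDatasetFromDirectory("./../data/cifar10/test/", testX, testY);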
< kartikdutt18[m]> Also about the pretrained weights, I'm able to access pretrained weights of darknet 19 for each layer. I'm trying to do the same in mlpack.
< zoq> kartikdutt18[m]: So this is why it's faster and also why the number is somewhat low.
< zoq> kartikdutt18[m]: Nice
< zoq> I can see that it lists the train images as well
< kartikdutt18[m]> Sorry that the build wasted your time.
< zoq> No worries, just restarted it
< kartikdutt18[m]> Alternatively you could use Dataloader<> dataloader("cifar10", true); that will download and load the dataset.
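A minimal sketch of that call, based on the in-progress mlpack/models dataloader; the spelling follows the chat, and the accessor names below are assumptions, not a stable API:

    // Second argument asks the loader to download the dataset first.
    Dataloader<> dataloader("cifar10", true);
    // Hypothetical accessors for the loaded splits:
    //   dataloader.TrainFeatures(), dataloader.TrainLabels(),
    //   dataloader.TestFeatures(), ...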
< zoq> ohh yeah
< kartikdutt18[m]> And I get an error if I use Sequential.
< zoq> Hm, okay, will have to look into this, but as a workaround can you use bottleNeck->...?
< kartikdutt18[m]> I was trying to do this after the FFN was compiled.
< zoq> kartikdutt18[m]: Thanks that makes it easier to reproduce the issue.
< kartikdutt18[m]> Thanks :)
< KimSangYeon-DGU[> kartikdutt18, zoq: Great, I'm training the Darknet-19 model of the Darknet framework on the `imagenette` 320 dataset (higher resolution than CIFAR-10, https://github.com/fastai/imagenette) and the loss is decreasing little by little from 2.4 to 0.4. Maybe after 5 hours the training will be done, so let me report the validation accuracy then.
< kartikdutt18[m]> Awesome, thanks a lot! If it gives good results we can switch to this dataset.
< KimSangYeon-DGU[> Yes, the number of training images is about 9,000. However, training will take a very long time on CPU, so I think the best option is to convert the pre-trained weights to mlpack.
< kartikdutt18[m]> Makes sense, I'm working on it. The current plan I have in mind: since we can access the weights and biases of each layer for darknet, we could store them in a bin file and have a cpp file to load the correct parameters into the layer. Kindly let me know if this makes sense.
< KimSangYeon-DGU[> That makes sense to me
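A sketch of the conversion plan above; the file names are made up for illustration, but Armadillo's binary save/load is the real API:

    #include <armadillo>

    // One file per layer, exported from the reference darknet19 weights
    // (hypothetical file name):
    arma::mat convWeights;
    convWeights.load("darknet19_conv1.bin", arma::arma_binary);
    // A small loader .cpp would then copy each matrix into the matching
    // mlpack layer's parameters.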
< shrit[m]> rcurtin: are you here?
< shrit[m]> Are there any files that should have their names changed, other than the 3 files you mentioned in the review?
< rcurtin> shrit[m]: yeah, I'm here
< rcurtin> I didn't check through all of the files in the diff, but it shouldn't be too hard to look through and decide which ones need to change
< rcurtin> usually files are named after the classes that they hold, so, e.g., since the CLI class was changed to IO, then cli.hpp -> io.hpp (and cli.cpp -> io.cpp, etc.)
< shrit[m]> Hmm, ok but I kept the name for BINDINGS_CLI instead of IO
< rcurtin> right, so files there probably don't need to change
< rcurtin> I mean I would just say take a look through all of the filenames and make a decision; it's tedious, but it's the best way to do it right :)
< shrit[m]> Yes, I understand, I am just looking to find a logic that works well.
< rcurtin> yeah; hopefully what I described above makes enough sense?
< shrit[m]> Yes, it does make sense
< shrit[m]> class name = filename
< rcurtin> usually... but not always :)
< shrit[m]> OK, would it be logical to keep the binding name as cli, but change the namespace to IO?
< shrit[m]> So everything related to the bindings (classes, include guards, file names, directory) stays cli; only the namespace is switched to io
< rcurtin> why would we want to change the namespace mlpack::bindings::cli -> mlpack::bindings::io?
< rcurtin> that namespace contains information about the command-line bindings (hence 'cli')
< shrit[m]> because it has a conflict with CLI11
< rcurtin> the CLI11 code is in the "CLI" namespace, so there should be no collision
< shrit[m]> Actually the compiler complains about namespace issues, since it cannot find the functions of CLI11 inside mlpack, and vice versa.
< rcurtin> can you paste what the error is? this should be an issue we can resolve, since the namespaces don't collide
< shrit[m]> it was a while ago, when I removed the boost program_options code; now everything is named IO in my repository
< rcurtin> ok, maybe you can try changing back from mlpack::bindings::io -> mlpack::bindings::cli, and if you have trouble we can try to work together to resolve it?
< shrit[m]> I will give it a try and see what will happen
< shrit[m]> meta/mlpack/src/mlpack/bindings/cli/add_to_po.hpp:35:19: error: ‘mlpack::CLI::App’ has not been declared
< shrit[m]> 35 | CLI::App& app,
< shrit[m]> | ^~~
< shrit[m]> rcurtin: This is the error I am getting
< rcurtin> maybe a `using namespace CLI` could be useful here, then you just call it `App&`?
< shrit[m]> I will give it a try
< shrit[m]> We could always rename the program options library's namespace from CLI to CLI11, since it is third-party and a single header file; this would avoid changes in mlpack
< rcurtin> I'm still not convinced that there actually is a namespace collision issue here
< rcurtin> since namespaces are case-sensitive
< rcurtin> now, if your branch has not changed the mlpack::CLI class in src/mlpack/core/util/ to mlpack::IO, that could cause a collision
< shrit[m]> In that case I would have gotten the error earlier, while compiling the library, since cli.cpp is in core. However, I got this error when compiling the methods
< shrit[m]> I have checked the class; it is named CLI in cli.hpp
< shrit[m]> I will do some checks
< shrit[m]> I have a better understanding now.
< shrit[m]> Actually it is the CLI namespace that is conflicting with the CLI class in core/util/cli.hpp
< rcurtin> right but the idea was to rename mlpack::CLI to mlpack::IO, but leave mlpack::bindings::cli as the same namespace
< rcurtin> if you rename CLI in src/mlpack/core/util/cli.hpp to IO, I think that should work
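A minimal reproduction of the clash (not the actual mlpack sources): inside mlpack::bindings::cli, unqualified CLI finds the class mlpack::CLI in an enclosing namespace before it ever reaches CLI11's global ::CLI namespace, which produces exactly the "mlpack::CLI::App has not been declared" error pasted above:

    namespace CLI { struct App { }; }   // stand-in for CLI11's ::CLI namespace

    namespace mlpack {
    class CLI { };                      // the old mlpack::CLI class
    namespace bindings {
    namespace cli {
    // error: 'mlpack::CLI::App' has not been declared -- unqualified lookup
    // finds the class mlpack::CLI first:
    //   void AddToPO(CLI::App& app);
    void AddToPO(::CLI::App& app);      // OK: explicitly the global namespace
    } // namespace cli
    } // namespace bindings
    } // namespace mlpack

    // Renaming mlpack::CLI to mlpack::IO removes the collision entirely.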
< shrit[m]> I agree, I mixed the two of them up because of the name. Sorry about that.
< shrit[m]> I will redo the regexp work as cleanly as possible
< rcurtin> sounds good, no worries :)
favre49 has joined #mlpack
< zoq> kartikdutt18[m]: Okay, I got the code to compile. First you have to move the two Sequential entries (https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/layer/layer_types.hpp#L214-L215) into LayerTypes; you may have to move some layers out to make room, I moved AlphaDropout and DropConnect into MoreTypes.
< kartikdutt18[m]> Ahh, Thanks a lot. I'll try it.
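Roughly the shape of the change zoq describes, simplified from layer_types.hpp with illustrative stand-in types:

    #include <boost/variant.hpp>

    // Stand-ins for the real mlpack layer classes:
    struct AlphaDropoutL { };
    struct DropConnectL { };
    struct SequentialL { };
    struct LinearL { };

    // The variant is split in two because boost::variant limits how many
    // types it can hold directly:
    using MoreTypes = boost::variant<AlphaDropoutL*, DropConnectL*>;
    using LayerTypes = boost::variant<SequentialL*, LinearL*, MoreTypes>;

    // boost::get<SequentialL*>(layer) only succeeds when the type is held
    // directly by the outer variant -- hence moving Sequential<> into
    // LayerTypes and pushing other layers into MoreTypes.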
< zoq> kartikdutt18[m]: Also correct me if I'm wrong but I think the first layer of DarkNet is an IdentityLayer?
< kartikdutt18[m]> Yes.
< zoq> kartikdutt18[m]: So it's boost::get<Sequential<> *>(darknetModelA.GetModel().Model()[1])->Parameters().n_elem - index 1
< zoq> I think you had index 0
< kartikdutt18[m]> Ahh, Makes sense. Thanks a lot.
< kartikdutt18[m]> Making the changes now.
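Spelled out, the access pattern from zoq's message; darknetModelA and GetModel() come from the in-progress models code, so treat them as given by the chat rather than a stable API:

    // Model()[0] is the IdentityLayer, so the first Sequential block sits at
    // index 1 of the FFN's layer list.
    Sequential<>* block =
        boost::get<Sequential<>*>(darknetModelA.GetModel().Model()[1]);
    const size_t numWeights = block->Parameters().n_elem;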
< zoq> kartikdutt18[m]: Getting size 0, so not sure this is already correct.
< kartikdutt18[m]> Hmm, I'll just give it a try.
< zoq> kartikdutt18[m]: Okay sounds good, let me know if that at least allows you to build the code.
< kartikdutt18[m]> Sure.
favre49 has quit [Remote host closed the connection]
< abernauer[m]> Anyone recommend alternatives to GraphViz for making diagrams of neural networks?
< kartikdutt18[m]> @zoq, The code compiled with the changes. Thanks a lot. Also, it gives 0 elements in weights.
< kartikdutt18[m]> abernauer: Not sure if this is what you were looking for, but you could take a look at [Fabrik](https://github.com/Cloud-CV/Fabrik)
< shrit[m]> rcurtin: I basically went back to the original CLI for the class and also for the bindings; no modifications to these remain in the repo now
< shrit[m]> The compiler seems to be unhappy. I added using namespace CLI; and everything seems to be fine for CLI11
< rcurtin> I don't understand what you mean; you say the compiler is unhappy but everything seems to be fine?
< shrit[m]> Yes, it seems that the compiler is missing the CLI class
< rcurtin> also, I thought that you were going to change the name of the CLI class in src/mlpack/core/util/cli.hpp to IO
< rcurtin> wouldn't that need to be IO::GetParam<>?
< shrit[m]> if I add mlpack::CLI it will be fine
< abernauer[m]> @kartikdutt18:matrix.org: Thanks for the suggestion. I am mostly just using the diagrams for visual representations in my notes.
< rcurtin> shrit[m]: I don't understand what you mean. we are planning on changing it to IO anyway, so why not just go ahead and do that?
< shrit[m]> Sorry about that, no worries, I am on my way to doing this. I had to clean everything up because I had used a regexp that changed even unnecessary things (bindings, comments, etc.), my bad; so instead of modifying file by file, I had to revert everything back to CLI and start from scratch.
< rcurtin> yeah, sometimes it might be easier just to start on a new branch and try applying the same changes :)
< shrit[m]> I just found that if we use mlpack::CLI it might not be necessary to rename it to IO; just proposing
< rcurtin> but we should change the name anyway as it is out of date
< rcurtin> the CLI class is not just for command-line bindings anymore; so we should rename it IO regardless
< rcurtin> (since it is used for every binding type)
< shrit[m]> Exactly, I understand better now, thanks
< kartikdutt18[m]> @zoq, I may be wrong, but when a layer is added to the FFN, I guess the weights are moved into the FFN's weights, forming one contiguous weight matrix. I'm using this as [reference](https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/visitor/weight_set_visitor_impl.hpp)
< jeffin143[m]> Too much heat
< rcurtin> jeffin143[m]: might have a few too many processes running :-D
< jeffin143[m]> 372 Chrome tabs eat a lot of my RAM
< rcurtin> 372 :-O
< abernauer[m]> 372 yikes
< jeffin143[m]> This is my old laptop
< jeffin143[m]> Didn't manage to take this time :)
< jeffin143[m]> I was a firefox user :)
< rcurtin> I guess you don't make a habit of closing old tabs :-D
< rcurtin> I go through every couple of hours and close all tabs that aren't directly relevant, which is maybe too far on the other extreme
< jeffin143[m]> > I guess you don't make a habit of closing old tabs :-D
< jeffin143[m]> I am scared of missing out on information, or not finding it the second time I google it
< jeffin143[m]> Just ocd about it
< abernauer[m]> Bookmarks help with not losing a link
< rcurtin> that's true, I take the risk of losing stuff for sure
< abernauer[m]> Yeah, I really have to make a habit of clearing out my main email.
< rcurtin> I don't delete emails once I've read them...
< rcurtin> from mutt: "Msgs:398583"
< rcurtin> so maybe I should archive some of these
< jeffin143[m]> That's my phone :)
< jeffin143[m]> I am obsessed with them :)
< jeffin143[m]> 839 was too many
< rcurtin> 839 :-O my phone would catch on fire if I even tried to open that many tabs
< jeffin143[m]> My chrome didn't respond
< jeffin143[m]> That's upper limit for my phone
< jeffin143[m]> 😜
< rcurtin> :-D
< rcurtin> nice, I wonder how ancient of a computer you'd need so that _FP_W_TYPE_SIZE was less than 32 bits
< jeffin143[m]> Hahaha
< HimanshuPathakGi> Hey everyone, here is my blog for this week: https://medium.com/@hpathak336/gsoc-2020-week3-67eeaac425f :) Sorry for the delay :)
< HimanshuPathakGi> I have to change my habit of submitting it late.
< jeffin143[m]> Himanshu Pathak (Gitter): 👌👌
< HimanshuPathakGi> > `jeffin143 (@jeffin143:matrix.org)` Himanshu Pathak (Gitter): 👌👌
< HimanshuPathakGi> :)
< zoq> HimanshuPathakGi: Great to see the RBFN code merged :)
< zoq> kartikdutt18[m]: About the weights, that's partially correct, but they are not initialized when you add them; they are initialized when you call the ResetParameters() function in the FFN/RNN/GAN class. That function is called automatically in the first forward pass, if not done before.
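A short sketch of that behavior against the mainline mlpack 3.x ANN API:

    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    int main()
    {
      mlpack::ann::FFN<mlpack::ann::NegativeLogLikelihood<>> model;
      model.Add<mlpack::ann::Linear<>>(10, 5);
      // model.Parameters() is empty at this point; the contiguous weight
      // matrix is only allocated and initialized by:
      model.ResetParameters();
      // Train()/Forward() call this automatically on the first pass if it
      // has not been done yet -- which is why Parameters().n_elem read as
      // 0 earlier in the chat.
    }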
< HimanshuPathakGi> > `zoq on Freenode` Himanshu Pathak (Gitter): Great to see the RBFN code merged :)
< HimanshuPathakGi> Thanks, @zoq after a lot of changes it's done :)
< zoq> HimanshuPathakGi: Also I like the gif, not sure if that is from some movie, don't think I recognize that scene.
< HimanshuPathakGi> Yes, it was from the movie Microbe & Gasoline
< zoq> No, doesn't ring a bell.
< jeffin143[m]> Anybody watching WWDC? Apple
< HimanshuPathakGi> I think it's not that famous a movie; it also has a bad IMDb rating :)
ImQ009 has quit [Quit: Leaving]
< zoq> HimanshuPathakGi: The gif is nice anyway.
< zoq> jeffin143[m]: No, I guess you mean the keynote?
< KimSangYeon-DGU[> @kartikdutt18:matrix.org: I tried to check the training result, but I couldn't because the PC I used for training has been freezing and doesn't work... So, I'll re-train after formatting it
< KimSangYeon-DGU[> OMG 😅
< KimSangYeon-DGU[> Restarted