ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
gtank___ has quit [Read error: Connection reset by peer]
< rcurtin>
kartikdutt18[m]: do you want the .tar.gzs in datasets/, or should I unpack them?
< kartikdutt18[m]>
tar.gz is fine.
< kartikdutt18[m]>
I think compression would be better for the bandwidth.
< rcurtin>
ok, sounds good
< kartikdutt18[m]>
Great. Thanks a lot.
< rcurtin>
ok, should be uploaded now---let me know if there are any problems :)
< kartikdutt18[m]>
Awesome thanks a lot.
< rcurtin>
sure, no problem :)
< rcurtin>
I wish I knew how I could set it up so you could easily add stuff, but we can't easily host the datasets.tar.gz file (or its contents) on github because it's too big
< rcurtin>
if you have an idea, maybe we can cut me out of the loop so things can go faster :) but at least from my end the current setup is okay (although perhaps suboptimal)
< kartikdutt18[m]>
I think this works for now, but yes, I'll have to keep troubling you for this :)
< rcurtin>
not a problem, hopefully I can keep the response times quick
< kartikdutt18[m]>
:)
< rcurtin>
shrit[m]1: it seems to me like we can get around cereal's raw pointer restrictions by being careful to serialize, e.g., dereferenced pointers only
< rcurtin>
so, if you were serializing a tree structure, we would need specific logic in serialize() to only serialize the children if they are not NULL
< rcurtin>
and then on deserialization, we might need to reallocate space for a deserialized object and use a move constructor on what cereal gives us back
< rcurtin>
let me know if I can clarify; that should at least be an okay option though
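The approach described above can be sketched with a hypothetical tree node and plain streams (not cereal itself, and not mlpack's actual tree types): serialize a presence flag for each child, recurse only into non-NULL children, and reallocate with `new` on deserialization.

```cpp
#include <cassert>
#include <sstream>

// Hypothetical binary-tree node for illustration only; not mlpack's tree.
struct Node
{
  int value = 0;
  Node* left = nullptr;
  Node* right = nullptr;

  ~Node() { delete left; delete right; }
};

void Serialize(const Node* node, std::ostream& out)
{
  out << node->value << ' ';
  // Write a presence flag, then recurse only when the child is not NULL.
  out << (node->left != nullptr) << ' ';
  if (node->left)
    Serialize(node->left, out);
  out << (node->right != nullptr) << ' ';
  if (node->right)
    Serialize(node->right, out);
}

Node* Deserialize(std::istream& in)
{
  // Reallocate space for the deserialized object, then fill it in.
  Node* node = new Node();
  in >> node->value;
  bool hasLeft, hasRight;
  in >> hasLeft;
  if (hasLeft)
    node->left = Deserialize(in);
  in >> hasRight;
  if (hasRight)
    node->right = Deserialize(in);
  return node;
}
```

With cereal the same presence-flag logic would live inside `serialize()`, but the null-check-then-recurse shape is the key idea.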
< rcurtin>
(not the suggestions that say "why aren't you using smart pointers?"---that's outside the scope of changes we might reasonably expect to make)
< rcurtin>
shrit[m]1: also, I was able to build mlpack_pca on the mlpack_rpi branch and run it
< rcurtin>
I noticed a few problems:
< rcurtin>
1) mlpack_pca --help seems to produce CLI11's help output---then segfaults; we should use the custom mlpack help printing functionality (that's probably easy to fix)
< rcurtin>
2) I changed the handling of the `catch (const CLI::ParseError& pe)` block to call `app.exit(pe)` instead of Log::Fatal; that seemed to help it get further (but I didn't dig into why)
< rcurtin>
3) I used gdb when running just `mlpack_pca` (which segfaults), and found that it does crash in the loop you just made around line 120
< rcurtin>
but it seems like, in that iteration, `identifier` is the empty string and `option` seems to refer to an empty option, so some special handling may be needed
< rcurtin>
the same problem happens when running with any options (like `mlpack_pca -v`), so I wonder if `for (auto option : app.get_options())` is the right thing to loop over?
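One guess at the special handling suggested above, sketched with a hypothetical stand-in `Option` struct (this is not CLI11's real `CLI::Option` type): skip entries whose identifier comes back empty instead of dereferencing them.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for the objects a loop like
// `for (auto option : app.get_options())` would iterate over.
struct Option
{
  std::string identifier;
  std::string value;
};

// Keep only options that carry a usable identifier; internal or positional
// entries may come back with an empty name and would crash later handling.
std::vector<Option> UsableOptions(const std::vector<Option>& all)
{
  std::vector<Option> usable;
  for (const Option& option : all)
  {
    if (option.identifier.empty())
      continue; // skip empty/internal entries instead of dereferencing them
    usable.push_back(option);
  }
  return usable;
}
```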
< rcurtin>
I think I will leave a couple comments on github; hopefully they are helpful
< shrit[m]1>
Thanks, these are really helpful, I will see what I can do about it
< rcurtin>
just posted a comment too, hopefully it's not *too* stream-of-consciousness :)
KimSangYeon-DGU has joined #mlpack
< KimSangYeon-DGU>
Hi, Saksham[m] kartikdutt18[m], are you there?
< kartikdutt18[m]>
Hi KimSangYeon-DGU , saksham189 , I'm here.
< KimSangYeon-DGU>
I saw your comment on GitHub
< KimSangYeon-DGU>
Let me look into it
< kartikdutt18[m]>
I am trying to figure it out. Maybe if I don't use pointers I won't have that issue. I am not sure, though.
< KimSangYeon-DGU>
Ok, thank you for letting me know :)
< KimSangYeon-DGU>
Can you share any reference for Darknet-19 and 53?
< kartikdutt18[m]>
I think the implementation is complete, though; only changes would have to be made in the convolution blocks.
< KimSangYeon-DGU>
Oh, same thing in the proposal :) Thanks for sharing. And I'd like to suggest changing the variable name, `DarkNetVer` to `DarkNetVersion` (?)
< kartikdutt18[m]>
Sure, I can make the change.
< KimSangYeon-DGU>
Thanks, it makes the code self-describing
< KimSangYeon-DGU>
kartikdutt18[m]: Can you check it using `darkNet.Parameters().n_elem`?
< kartikdutt18[m]>
I think I can do that.
< kartikdutt18[m]>
Yeah, I think that can be done. I'll print at each step / layer and see what goes wrong.
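For the per-layer check described above, the expected parameter count of a standard convolution layer can be computed by hand and compared against what the network reports; `ConvLayerParams` here is a hypothetical helper, not an mlpack function.

```cpp
#include <cassert>
#include <cstddef>

// Learnable parameters in a standard convolution layer:
// one weight per (kernelWidth x kernelHeight x inChannels) patch position,
// plus one bias, for each of the outChannels filters.
std::size_t ConvLayerParams(std::size_t inChannels, std::size_t outChannels,
                            std::size_t kernelWidth, std::size_t kernelHeight)
{
  return (kernelWidth * kernelHeight * inChannels + 1) * outChannels;
}
```

Summing this over every layer should match the total from `Parameters().n_elem`; a mismatch points at the layer where the configuration went wrong.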
< zoq>
kartikdutt18[m]: 88k seems small, that would be < 1MB?
< zoq>
kartikdutt18[m]: if we use double that is, which we do
< kartikdutt18[m]>
Yes; according to [this](https://pjreddie.com/darknet/imagenet), it's smaller than AlexNet, and I have implemented a working AlexNet before, so the model size shouldn't be a problem.
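zoq's back-of-the-envelope estimate checks out: parameters stored as doubles take 8 bytes each, so roughly 88k parameters is well under 1 MB.

```cpp
#include <cassert>
#include <cstddef>

// Memory footprint of a parameter vector stored as doubles (8 bytes each).
std::size_t ModelBytes(std::size_t numParams)
{
  return numParams * sizeof(double);
}
```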
< zoq>
kartikdutt18[m]: Btw. did you see the discussion about Yolov5?
< kartikdutt18[m]>
I did see a post. It's supposed to give more than 100 fps (I haven't fully gone through it, though).
< kartikdutt18[m]>
It would be nice to have an implementation of the conversion functionality from [here](https://roboflow.ai) in the models repo as well.
< zoq>
kartikdutt18[m]: Right :)
< kartikdutt18[m]>
Hey KimSangYeon-DGU, @zoq, fixed that error, thanks. One of the input parameters was uninitialized.
< KimSangYeon-DGU>
Oh, glad to hear :)
< kartikdutt18[m]>
I am posting the output size matrix in the comments of the PR in just a sec.
< KimSangYeon-DGU>
Ok
< kartikdutt18[m]>
I added one for 256, 256 as well. I think the output sizes match.
< KimSangYeon-DGU>
Great! :)
< KimSangYeon-DGU>
It seems the uninitialization issue was in ConvolutionBlock()
< kartikdutt18[m]>
Yes. Now it works fine. I'll start training it then. I think 32 x 32 is a little small for the network; maybe increasing the size to 56 should be okay. Kindly let me know what you think.
< KimSangYeon-DGU>
Ok, let's increase
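The uninitialized-parameter bug discussed above is the kind of error that default member initializers rule out; this configuration struct is a hypothetical illustration, not the actual models-repo code.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical configuration for a convolution block. Default member
// initializers guarantee every parameter has a defined value even if a
// constructor overload forgets to set one of them.
struct ConvBlockConfig
{
  std::size_t inSize = 0;
  std::size_t outSize = 0;
  std::size_t kernelWidth = 3;
  std::size_t kernelHeight = 3;
  std::size_t stride = 1;  // without an initializer, this could hold garbage
  std::size_t padding = 0;
};
```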
< kartikdutt18[m]>
Great. Since the network is small, I should have some results (a couple of epochs) with me. I'll add them in the comment by tomorrow then.
< KimSangYeon-DGU>
Ok, and is there anything else you want to discuss?
< kartikdutt18[m]>
I think that's it from my side.
< KimSangYeon-DGU>
Yes, thanks for the meeting :)
< kartikdutt18[m]>
Great. Thanks a lot.
KimSangYeon-DGU has quit [Remote host closed the connection]
HeikoS has quit [Quit: Leaving.]
< jeffin143[m]>
@walragatver:matrix.org: need a review on pr1
< jeffin143[m]>
Or blocked by pr6
ImQ009 has quit [Quit: Leaving]
< walragatver[m]1>
> @walragatver:matrix.org: need a review on pr1
< walragatver[m]1>
I will give it tomorrow in the morning
< walragatver[m]1>
jeffin143: Slightly busy at this moment
< walragatver[m]1>
> Or blocked by pr6
< walragatver[m]1>
jeffin143: Sorry, I didn't get it. What do you want to say?
< shrit[m]1>
rcurtin: sorry to bother you a lot, but I am getting the same segfault; now it is in app.parse()
< shrit[m]1>
I am not able to get better output from gdb
< rcurtin>
shrit[m]1: no worries, that is why we are here as mentors :)
< rcurtin>
if you reconfigure with -DDEBUG=ON, it will give better output
< shrit[m]1>
it seems that it is segfaulting in the destructor call on boost::any