ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
qian has joined #mlpack
qian has quit [Remote host closed the connection]
jeffin143 has joined #mlpack
< jeffin143>
rcurtin[m] : [WARN ] Unable to determine format to save to from filename 'tensor.tsv'. Save failed.
< jeffin143>
zoq[m] : cc to you too
< jeffin143>
How do I specify the format?
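For context, a minimal workaround sketch: bypass mlpack's extension-based detection and save with Armadillo's own save(), which takes an explicit file type. (Armadillo has no built-in TSV type; raw_ascii, which is whitespace-separated, and csv_ascii, which is comma-separated, are the closest options.)

    #include <armadillo>

    int main()
    {
      arma::mat tensor(10, 5, arma::fill::randu);

      // Save with an explicit format instead of relying on the '.tsv'
      // extension, which the extension-based detection does not recognize.
      tensor.save("tensor.tsv", arma::raw_ascii);

      return 0;
    }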
ImQ009 has joined #mlpack
favre49 has joined #mlpack
< favre49>
nishantkr18[m]: zoq: Not too sure about my availability for the next couple of days, since I've just been given notice of a bunch of upcoming tests. I'll review the PRs when I can, and I'll definitely be back on the 20th.
< jonpsy[m]>
ping kartikdutt18 hi
< kartikdutt18[m]>
Hi jonpsy
< jonpsy[m]>
Remember that field-split code of yours?
< jonpsy[m]>
Did you unit test it?
< kartikdutt18[m]>
Yes
< kartikdutt18[m]>
You could take a look at models/tests/dataloader_test
< jonpsy[m]>
I see
< jonpsy[m]>
Idk why I'm getting a segmentation fault
< kartikdutt18[m]>
I saw your comment on models/#20
< KimSangYeon-DGU[>
Yes
< KimSangYeon-DGU[>
It seems the network is quite unstable
< kartikdutt18[m]>
Could you let me know what you think about this: the YOLO preprocessor is almost ready (it normalizes values between -1 and 1, though it should be between 0 and 1). Do you suggest that I complete the YOLO preprocessor and loss function first, since those can be merged?
< kartikdutt18[m]>
<KimSangYeon-DGU[ "It seems the network is quite un"> Darknet?
< KimSangYeon-DGU[>
Ahh, my Internet connection : )
< KimSangYeon-DGU[>
Haha
< kartikdutt18[m]>
:)
< KimSangYeon-DGU[>
Ok, let me check the PR for the YOLO preprocessor
< kartikdutt18[m]>
Ohh, maybe that's the issue with the Element upgrade; I wasn't able to access the channel for a couple of minutes.
< KimSangYeon-DGU[>
Yeah, I think so
< KimSangYeon-DGU[>
kartikdutt18: there are 4 PRs in the models repo and 1 PR in the mlpack repo. Can you let us know the current progress and prioritize the PRs?
< kartikdutt18[m]>
Currently I'm working on fixing the preprocessor PR. The PR in the mlpack repo isn't high priority, since the batchnorm PR works and already gives a speedup.
< KimSangYeon-DGU[>
Ok
< kartikdutt18[m]>
I could change the priorities as per your suggestion.
< KimSangYeon-DGU[>
No, that's good, I was just curious whether the weight converter from PyTorch to mlpack works or not
< kartikdutt18[m]>
I think this would make more sense: I could try to get everything ready for the preprocessor by Saturday, and then next week I could work on the weight converter and loss function (with the weight converter having higher priority). Kindly let me know what you think.
< KimSangYeon-DGU[>
Great, if the project goes like that, I think that's really wonderful
< KimSangYeon-DGU[>
:)
< KimSangYeon-DGU[>
What values do you normalize with the preprocessor?
< kartikdutt18[m]>
Great. One thing that I missed in my proposal and realized when I implemented the preprocessor: the YOLOv1 preprocessor output has shape (5 * num_bounding_boxes + num_classes) * grid_wd * grid_ht, whereas YOLOv3 has shape ((5 + num_classes) * num_bounding_boxes) * grid_wd * grid_ht.
< kartikdutt18[m]>
i.e., in YOLOv1 each grid cell is assigned a single class, whereas in YOLOv3 it may be assigned multiple classes.
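A small sketch of the two label sizes being described; the grid and box counts below are illustrative values, not from the models repo.

    #include <cstddef>
    #include <iostream>

    int main()
    {
      const std::size_t numBoxes = 2, numClasses = 20;
      const std::size_t gridWd = 7, gridHt = 7;

      // YOLOv1: each cell predicts numBoxes boxes (5 values each) plus a
      // single shared class distribution, so a cell maps to one class.
      const std::size_t v1 = (5 * numBoxes + numClasses) * gridWd * gridHt;

      // YOLOv3: every box carries its own 5 values plus per-class scores,
      // so a cell may be assigned to multiple classes.
      const std::size_t v3 = ((5 + numClasses) * numBoxes) * gridWd * gridHt;

      std::cout << "v1: " << v1 << ", v3: " << v3 << std::endl;
    }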
< kartikdutt18[m]>
<KimSangYeon-DGU[ "What value do you try to normali"> The image width and height.
< KimSangYeon-DGU[>
Ahh, Ok
< KimSangYeon-DGU[>
Let me check the PR
< kartikdutt18[m]>
Great.
< rcurtin>
I hope I can make the video meetup today, but I have to get some car service done so I might not be able to
< rcurtin>
it depends on when it finishes...
< KimSangYeon-DGU[>
kartikdutt18: Ok, do you have anything you want to discuss?
< kartikdutt18[m]>
Yes, just one more thing.
< KimSangYeon-DGU[>
About the preprocessor, so are you implementing the preprocessor for YOLOv3?
< sakshamb189[m]>
Hey guys, sorry for being late.
< KimSangYeon-DGU[>
Or should we change the operation per YOLO version?
< KimSangYeon-DGU[>
sakshamb189: No worries!
< kartikdutt18[m]>
That would just need a simple if condition. The PR has the YOLOv1 preprocessor, and I only have to change the shape and add an if condition for whether each bounding box has its own class or not.
< kartikdutt18[m]>
<KimSangYeon-DGU[ "sakshamb189: No worries!"> Hi, No worries.
< KimSangYeon-DGU[>
Yeah, but do you think we'll add YOLOv1?
< KimSangYeon-DGU[>
in the future?
< kartikdutt18[m]>
tiny YOLOv1 and YOLOv1 have the same preprocessor. And the PR I have open is for YOLOv1 (tiny and YOLOv1).
< sakshamb189[m]>
kartikdutt18: I have seen that you have opened the PR on YOLO.
< sakshamb189[m]>
I wanted to know how the progress on darknet is going. Maybe we should try to finish that first.
< kartikdutt18[m]>
Agreed. I am hoping to complete the preprocessor part of YOLO by Saturday so I can work on the weight converter after that. In the meanwhile, I can put the model on training with the new batchnorm layer.
< kartikdutt18[m]>
Or if you suggest, I could first work on the weight converter.
< KimSangYeon-DGU[>
kartikdutt18: Ahh, right
< sakshamb189[m]>
Hmm, either would be fine (assuming the preprocessor isn't going to take a lot of effort and debugging)... but IMO we should try to get one model finished and merged first, and then focus on the other model.
< kartikdutt18[m]>
The preprocessor is pretty much ready.
< sakshamb189[m]>
alright then we can finish that first
< kartikdutt18[m]>
Great.
< KimSangYeon-DGU[>
Ok; actually, I thought we'd do the Darknet PR as the next piece of work
< KimSangYeon-DGU[>
My Internet connection is really unstable...
< KimSangYeon-DGU[>
kartikdutt18: Ok, then we'll add a version parameter in the preprocessor for YOLO v1 and v3
< KimSangYeon-DGU[>
and what is the one thing you wanted to discuss?
< kartikdutt18[m]>
Right, makes sense.
< kartikdutt18[m]>
Ahh, also: should I train the model in the meanwhile?
< kartikdutt18[m]>
Darknet, I tried it for a few iterations but didn't complete an epoch.
< KimSangYeon-DGU[>
<kartikdutt18[m] "Ahh, Also in the meanwhile shoul"> That's good idea
< KimSangYeon-DGU[>
And we can complete the PR for Darknet
< kartikdutt18[m]>
Great, that makes sense.
< KimSangYeon-DGU[>
Didn't it complete even 1 epoch?
< sakshamb189[m]>
I guess we'll have to restart
< kartikdutt18[m]>
I stopped when it was about 80%, just wanted to see the speed difference.
< KimSangYeon-DGU[>
Ahah, I see
< kartikdutt18[m]>
We can't use earlier weights since their size won't match.
< KimSangYeon-DGU[>
Yeah, definitely
< KimSangYeon-DGU[>
We need to complete the Darknet 19 and 53
< kartikdutt18[m]>
<kartikdutt18[m] "I stopped when it was about 80%,"> For reference, it should take about 6-7 hours roughly for an epoch (whereas it was more than 12 earlier).
< KimSangYeon-DGU[>
Nice improvement : )
< KimSangYeon-DGU[>
Did the loss decrease?
< kartikdutt18[m]>
I think it did; it started from somewhere around 3.2 and decreased a bit.
< KimSangYeon-DGU[>
Great
< KimSangYeon-DGU[>
Is there anything to discuss further?
< kartikdutt18[m]>
Nothing more from my side.
< sakshamb189[m]>
alright then we can meet next week. Have a great week guys!
< KimSangYeon-DGU[>
Yes, have a great week and thanks for the great work guys
< kartikdutt18[m]>
Great, Have a nice week guys!
< jeffin143[m]>
There are 5 Thursdays
< jeffin143[m]>
Are we having a meeting today ?
< jeffin143[m]>
rcurtin: armadillo's hist() function doesn't have a weights parameter
< jeffin143[m]>
Just like np.histogram()'s weights param
< jeffin143[m]>
I think it would be a good addition
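A minimal sketch of the suggested feature against plain Armadillo; the function name and the half-open bin convention here are hypothetical, loosely mirroring np.histogram's weights semantics.

    #include <armadillo>

    // Sum per-bin weights instead of per-bin counts; bins are half-open
    // intervals [edges[b], edges[b + 1]).
    arma::vec WeightedHist(const arma::vec& x, const arma::vec& weights,
                           const arma::vec& edges)
    {
      arma::vec counts(edges.n_elem - 1, arma::fill::zeros);
      for (arma::uword i = 0; i < x.n_elem; ++i)
      {
        for (arma::uword b = 0; b + 1 < edges.n_elem; ++b)
        {
          if (x[i] >= edges[b] && x[i] < edges[b + 1])
          {
            counts[b] += weights[i];
            break;
          }
        }
      }
      return counts;
    }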
< KimSangYeon-DGU[>
kartikdutt18: Can you let me know where it normalizes them to between -1 and 1?
< rcurtin>
seems like June and July have just been really bad for me and video meetings... I hope August is better, I miss them
< rcurtin>
zoq: really cool, did you see that mlpack is being archived there? I didn't see where to find the full list of projects
< rcurtin>
oh, I see "On February 2, 2020, GitHub captured a snapshot of every active public repository,"
< zoq>
in the video they said they will store every open source project; I also got a new badge, "Arctic Code Vault Contributor", that lists mlpack and ensmallen
< rcurtin>
awesome :)
< rcurtin>
I'm still stuck here... so I won't be able to join at the beginning, but maybe if I am lucky I might manage to join by 1830 UTC or so
favre49 has quit [Quit: Lost terminal]
< abernauer[m]>
I would stop by the meeting but I'm busy with some personal project stuff atm. Open to setting some time aside this weekend to help with code review on that huge PR for R bindings.
< jeffin143[m]>
zoq[m], rcurtin: u there?
< jeffin143[m]>
> rcurtin : [WARN ] Unable to determine format to save to from filename 'tensor.tsv'. Save failed.
< jeffin143[m]>
Any idea why this error occurs?
< rcurtin>
ok, made it back home... joined the video meetup but it is just me for now :-D
< rcurtin>
jeffin143[m]: hmm, is that from calling `data::Save()`? that should be able to save to TSV just fine
< rcurtin>
abernauer[m]: cool, the more eyes the better
< jeffin143[m]>
Yes
< jeffin143[m]>
Tagging @rcurtin:matrix.org
< rcurtin>
jeffin143[m]: that's kind of strange, maybe you can use gdb to try and get a backtrace from where that warning is issued?
< rcurtin>
the code in data::Save() that figures out the type to save to is pretty straightforward, so I think it shouldn't be *that* hard (hopefully) to figure out what is going wrong
< jeffin143[m]>
Ok
rcurtin_ has joined #mlpack
rcurtin_ has quit [Client Quit]
< rcurtin>
himanshu_pathak[: is it possible that Jenkins is not loading the data correctly or something like that?
< himanshu_pathak[>
rcurtin: Not sure; I am using the same datasets that we are already using in other tests, so there should be no problem with loading the data.
< rcurtin>
I see, and valgrind run locally does not produce the issue...
< himanshu_pathak[>
rcurtin: Yup
< rcurtin>
now, I can see that there is a random component to `ConcentricCircleDataset`
< rcurtin>
you could try with different random seeds; maybe one of these different seeds will expose the failure
< rcurtin>
I can also see that `LinearSVMFitIntercept` has a random component too
< himanshu_pathak[>
Ok, I will try that; maybe it will show the failure.
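A sketch of that suggestion, assuming mlpack's math::RandomSeed() from core/math/random.hpp; the seed range and the test body are placeholders.

    #include <mlpack/core.hpp>

    void RunUnderManySeeds()
    {
      // Re-run the randomized test body under many seeds to try to
      // expose the intermittent failure seen on Jenkins.
      for (size_t seed = 1; seed <= 1000; ++seed)
      {
        mlpack::math::RandomSeed(seed);
        // ... run the test body (e.g., the LinearSVMFitIntercept or
        // ConcentricCircleDataset logic) and check its assertions ...
      }
    }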
< rcurtin>
the valgrind output does indicate that something is being double-freed; so maybe you should also take a look through the code and see if you can find somewhere where some memory would be deleted twice?
< rcurtin>
if you are using any aliases of matrices anywhere, this could be part of the issue
< himanshu_pathak[>
> if you are using any aliases of matrices anywhere, this could be part of the issue
< himanshu_pathak[>
That might be possible; I should look into it. Also, I tried a different random seed but no error occurred.
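To illustrate the alias pitfall mentioned above, a hedged sketch (not code from the PR in question): an Armadillo matrix built over another matrix's memory must not end up owning that memory, or the buffer gets freed twice.

    #include <armadillo>

    int main()
    {
      arma::mat owner(3, 3, arma::fill::randu);

      // copy_aux_mem = false makes 'alias' borrow owner's buffer without
      // owning it; strict = true forbids reallocation. If equivalent code
      // ever lets two objects both believe they own one buffer, each
      // destructor frees it, and valgrind reports a double free.
      arma::mat alias(owner.memptr(), owner.n_rows, owner.n_cols,
                      false, true);

      alias(0, 0) = 1.0;  // writes through to 'owner'
      return 0;
    }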
ImQ009 has quit [Quit: Leaving]
< rcurtin>
you might have to try with many different random seeds, but really I'm not sure
< rcurtin>
it could be a compiler difference, or something like that; see if you can replicate the setup of the build node as closely as possible (you can see the cmake output if you look at "Console output" in Jenkins)
< jeffin143[m]>
@walragatver:matrix.org: To log 42000 images, it took 67.6837s.
< jeffin143[m]>
Using arma::mat
< jeffin143[m]>
It isn't slow
< walragatver[m]>
Yeah correct
< walragatver[m]>
jeffin143: I am just worried about the preprocess directory. It didn't get removed in my build
< jeffin143[m]>
I will take a look at that.
< jeffin143[m]>
Were there any swap files or something?
< jeffin143[m]>
Just like last time ???
< walragatver[m]>
Maybe, not sure
< jeffin143[m]>
Ok
< walragatver[m]>
jeffin143: I think it's best if we avoid image saving. It would solve the directory-removal problems on Windows and Mac as well
< jeffin143[m]>
But I could find a way to convert an arma matrix to a binary string in memory
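A minimal sketch of that in-memory conversion, using Armadillo's stream-based save() with its arma_binary format (the helper name is hypothetical):

    #include <armadillo>
    #include <sstream>
    #include <string>

    std::string MatrixToBinaryString(const arma::mat& m)
    {
      // arma_binary is Armadillo's own binary format; writing to an
      // ostringstream avoids touching the filesystem entirely.
      std::ostringstream oss(std::ios::out | std::ios::binary);
      m.save(oss, arma::arma_binary);
      return oss.str();
    }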