ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/
< jenkins-mlpack2> Project mlpack - git commit test build #417: FAILURE in 2 hr 0 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/417/
kartikdutt18 has joined #mlpack
Saksham[m] has quit [Ping timeout: 260 seconds]
Saksham[m] has joined #mlpack
M_slack_mlpack_U has quit [Ping timeout: 260 seconds]
UmarGitter[m] has quit [Ping timeout: 260 seconds]
UmarGitter[m] has joined #mlpack
M_slack_mlpack_U has joined #mlpack
kartikdutt18 has quit [Ping timeout: 245 seconds]
ImQ009 has joined #mlpack
< jenkins-mlpack2> Yippee, build fixed!
< jenkins-mlpack2> Project mlpack - git commit test build #418: FIXED in 1 hr 26 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/418/
kartikdutt18 has joined #mlpack
UmarGitter[m] has quit [Ping timeout: 260 seconds]
UmarGitter[m] has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/examples#484 (master - 0999214 : favre49): The build passed.
travis-ci has left #mlpack []
kartikdutt18 has quit [Ping timeout: 245 seconds]
AryamanBhagatGi4 has joined #mlpack
AryamanBhagatGit has quit [Ping timeout: 246 seconds]
kartikdutt18 has joined #mlpack
kartikdutt18 has quit [Ping timeout: 245 seconds]
< himanshu_pathak[> Hey saksham189 (Gitter): when I am calculating the betas for training my RBFN, they turn out to be zero, because the sigma values are small and squaring them makes them zero (beta = 1/2*sigmas). What can be wrong here?
< HimanshuPathakGi> Beta = 1/(2*sigmas^2)
favre49 has joined #mlpack
< saksham189Gitter> @himanshupathak21061998 I have replied to you
< saksham189Gitter> I think that you can remove that parameter for now.
< HimanshuPathakGi> Thanks for replying I will try applying your suggestions :)
< walragatver[m]> jeffin143: birm: Are you here?
< jeffin143[m]> Yes
< jeffin143[m]> Let's start @walragatver:matrix.org
< walragatver[m]> Just a second
jeffin143 has joined #mlpack
< walragatver[m]> Anything you want to say?
< jeffin143> Ok, I did go through the rusting sword repo and we can surely take some ideas from that. Now coming to the proto part: is it ok if we directly take the proto part from tensorflow?
< jeffin143> I mean their .proto files, where they have declared the proto messages
< jeffin143> So is it ok to take the proto from that?
< walragatver[m]> Yeah I think we would need to take it
< walragatver[m]> To make the backend compatible with tensorboard
< jeffin143> Ok, second thing: I did discuss adding proto as a dependency to mlpack, and rcurtin was strongly of the opinion that we should avoid it, hence I guess we should make a separate repo for logging, just like ensmallen
< walragatver[m]> Yeah, you are quite correct about it
< walragatver[m]> But I think we can avoid the dependency partially
< jeffin143> I am not sure I get that. Like how?
< walragatver[m]> Basically, have you tried to run the rusting sword API locally? If we look at its build, it builds the header and source code using protoc from the .proto files.
< jeffin143> Yes, I did, and ran everything. In fact I added support for images in that too,
< walragatver[m]> But yeah, you are quite right, we cannot avoid that dependency.
< jeffin143> but there is some issue with the image code; I will talk about that later in the chat meeting, so let's keep that aside. For the other scalars and distributions it is smooth
< jeffin143> Ok, so that means we have to opt for a new repo altogether
< walragatver[m]> Yeah it would also be easier to maintain.
< jeffin143> Now that makes me a little sceptical about timelines, because I have never worked with CMake and everything, so I will probably have to look into that; but yeah, I will take care of it so that thing is done
< jeffin143> All of these were from a design point of view; now my last doubt is more C++ oriented
< walragatver[m]> okay
< jeffin143> It accepts a byte array.
< jeffin143> When you simply print the matrix as cout << matrix, it does give you pixel values
< jeffin143> But when you print each element as a single entity using a for loop, it gives absurd output
< jeffin143> Now we should pass the proto a char* array or std::string -> so do you know a way out, if that is possible?
< jeffin143> https://ibb.co/MfxVtHG : the top has 3 pixel values of the matrix, but the bottom part is where I am printing the elements using cout and it throws invalid characters
< walragatver[m]> Okay so the doubt is to convert arma::mat to std::string right?
< jeffin143> Yes, or maybe if possible we convert it to an array of char, or maybe convert the elements into valid chars one by one while iterating over them?
< jeffin143> Any of those; basically my goal is to get a char array or a std::string
< RyanBirminghamGi> @jeffin143 @walragatver did we have a meeting for the vis project today?
< jeffin143> RyanBirminghamGi : Yes :)
< walragatver[m]> birm: Yes
< walragatver[m]> jeffin143: regarding the matrix printing error, I think it might be because we need to access elements with arr(i) instead of arr[i]
< walragatver[m]> in armadillo
< birm[m]> ah, gitter wasn't updating.
< walragatver[m]> Also it's a matrix so I think because of that we might need to access with something like arr(i, j) instead of arr[i][j]
< birm[m]> I'll have to type on my phone, and catch up. a moment...
< jeffin143> I am iterating over n_elem, and hence that should work, since the memory is contiguous. I will try and let you know
< walragatver[m]> jeffin143: I am not sure, but I think that's the correct way to access armadillo matrices. It's been a while since I have worked with them.
< jeffin143> No issues
< walragatver[m]> I have a doubt now.
< walragatver[m]> birm: no problems.
< jeffin143> walragatver[m] about what ?
< walragatver[m]> Would converting to std::string suffice for our issue? If I am not wrong we need to convert it to a bytes array, right?
< jeffin143> After creating the header files from the proto files, these are the default functions possible : https://pastebin.com/1cgSz3i2
< birm[m]> what's the doubt?
< walragatver[m]> <jeffin143 "https://github.com/dmlc/tensorbo"> jeffin has asked his doubt over here.
< walragatver[m]> jeffin143: Okay so it would be converting it over here.
< walragatver[m]> We need to pass the string as an input.
< jeffin143> Yes, we should give the input as a string or char array (since it has overload support as well), and I am assuming the proto would handle the rest of the conversion internally
< walragatver[m]> jeffin143: I will reply on it soon.
< jeffin143> Thanks , I will also take a look till then
< jeffin143> Apart from that I have nothing from my side
< walragatver[m]> Okay.
< jeffin143> I am assuming there is a video chat tomorrow; if both of you make it to the video meet, I will screen share and we can see the header files and also the conversion part if possible
< jeffin143> within the last 5 min
< walragatver[m]> And when is it?
< walragatver[m]> <jeffin143 "I am assuming there is video cha"> Is it the mlpack's meet or something decided by us?
< jeffin143[m]> Mlpack meet ,every first and third Thursday
< jeffin143[m]> 11:30 pm IST
< rcurtin> jeffin143: good call, thanks! I have to send the reminder email
< rcurtin> by the way that zoom room is always open, you can use it anytime
< walragatver[m]> <walragatver[m] "Okay so the doubt is to convert "> birm: This would be the one-line doubt he asked; any ideas from your side?
< jeffin143> Write a script please
< rcurtin> jeffin143: I should!
< birm[m]> you want to serialize the matrix?
< walragatver[m]> <jeffin143[m] "11:30 pm IST"> Oh, I did not know about it.
< walragatver[m]> <birm[m] "you want to serialize the matrix"> Yeah, maybe; not sure.
< jeffin143> birm[m] : Yes, sort of -> from a matrix of unsigned char to a std::string or char [ ]
< jeffin143> Either of those should work, if possible
< birm[m]> raw_print may be useful for that
< birm[m]> rcurtin : any idea what's up with gitter?
< rcurtin> no, I had no idea there was an issue... I saw Himanshu sending messages from gitter earlier
< rcurtin> looks like there is just a little bit of lag, messages from the past ~22 minutes don't seem to have shown up yet
< rcurtin> maybe the bridge is a little slow?
< rcurtin> in any case it's matrix.org that runs the bridge, so to me it's a little opaque
< birm[m]> hm. well, we'll see if it catches up
< walragatver[m]> <birm[m] "raw_print may be useful for that"> birm: Yeah I think you might be right
< walragatver[m]> You can refer to this link.
< jeffin143> Thanks, I did go through it; I missed raw_print
< jeffin143> birm[m] : cout works perfectly as well
< jeffin143> It's just this which fails: cout << matrix(1), or similar
< walragatver[m]> jeffin143: Regarding the meet I am not sure whether I would be able to attend it.
< walragatver[m]> But I will try my best
< jeffin143> No issues we can have some other time
< birm[m]> Nice to know that the zoom room is open!
< rcurtin> I don't want to advertise on the website that it's an always-open room that can be used at any time for discussions, but, it is :)
< walragatver[m]> jeffin143: So the final thing I want to discuss: I hope the design of the backend is quite clear to you now, right?
< jeffin143> Yes
< jeffin143> Also, since it is async,
< walragatver[m]> So, can you come up with a timeline before the community period ends?
< jeffin143> then we should have a function write()
< jeffin143> which has a boost thread, and the thread keeps running,
< jeffin143> and in flush the thread is stopped using .join, and then main exits
< jeffin143> Correct?
< jeffin143> Yes sure, I will get that done before the next meet on sunday
< walragatver[m]> <jeffin143 "and in flush the thread is stopp"> For async they were using a queue.
< walragatver[m]> With flush they might be emptying the queue.
favre49 has quit [Remote host closed the connection]
< jeffin143> So when we create an object the queue would be empty, and hence nothing would be printed; but when we push something to the queue,
< walragatver[m]> Depends on the max queue size
< jeffin143> how would the write-to-file function be triggered? We need some logic in the write-to-file function which continuously checks if there is any new element in the queue and, if yes, writes it to the file
< jeffin143> Isn't the max queue size the maximum number of summaries that can be written to the queue?
< walragatver[m]> Yes when the queue gets full it gets flushed out.
< jeffin143> Oh I see, so we only write when the queue is full
< jeffin143> I see
< walragatver[m]> Yes or when the user calls it by himself.
< walragatver[m]> It's like a publisher-subscriber model in design
< walragatver[m]> There was some thread mechanism in tensorflow though.
< jeffin143> Yes, pub-sub exactly. So there should be some sort of triggering, and I am not sure if we should expect the user to trigger the write
< jeffin143> I will take a look
< jeffin143> and come up with something
< jeffin143> Will let you know in the next meet
< jeffin143> Apart from that, everything else looks good from a design point of view
< walragatver[m]> I am not sure if it works, so I will read through it and will let you know about using threads soon.
< jeffin143> Yes , Thanks :)
< jeffin143> I will also go through it
< walragatver[m]> <jeffin143 "Apart from that, everything else"> Okay, great.
< jeffin143> I will also be writing timeline and get back to you on sunday
< jeffin143> So we can close everything probably from design point of view
< walragatver[m]> Also one last thing.
< jeffin143> yes
< walragatver[m]> It's better we just focus on implementing scalars first.
< jeffin143> ok I see , I am ok with it let's go slow and implement step by step
< walragatver[m]> Then jump on to images distribution histogram etc.
< jeffin143> ok
< walragatver[m]> <jeffin143 "ok I see , I am ok with it let's"> Yeah, it would expose the loopholes to us in a better way
< walragatver[m]> So that's it I want to say
< walragatver[m]> birm: You want to say something?
< birm[m]> I'm not entirely sure what we'll do with pubsub here, but you seem to have a good concept
< walragatver[m]> <birm[m] "I'm not entirely sure what we'll"> Yeah, I am also not that sure. But my understanding is: if we were writing directly to the file, and let's say we are training on a GPU, then each writer would need to wait for the other thread to finish before it writes, leading to some unwanted waiting delay.
< walragatver[m]> That's why they used a queue, I suppose.
< walragatver[m]> Asynchronous file writing, basically.
< walragatver[m]> Pardon me if I am wrong.
< jeffin143> rcurtin : I am not sure if this helps : https://pastebin.com/4HD6zA8K : I did write it once for some of my work, just dug it now for you :)
< walragatver[m]> <walragatver[m] "Asynchronous file writing, basic"> jeffin143: birm: If you find any good article please let me know. I have been searching for one lately
< jeffin143[m]> Sure
< walragatver[m]> birm: Anything else you would like to say?
< walragatver[m]> If not let's wrap it up
< jeffin143[m]> Nothin from my side :)
< walragatver[m]> jeffin143: birm: Thanks for your time. Have a nice day. Take care.
< birm[m]> Thanks, and sorry about my connection issues!
kartikdutt18 has joined #mlpack
< kartikdutt18> Hey @rcurtin, did you get a chance to look at this message? When you get a chance, kindly let me know what you think. Thanks. > Hey @rcurtin, sorry I couldn't reply yesterday, the messages got lost. @zoq suggested using boost::iostreams filters to unzip files. That does work, but only for zipped files that contain
< kartikdutt18> a single file. So in order to use that, we would need separate zip files for train and test for datasets like MNIST and Pascal VOC, as they have a fixed training and testing set. An alternative would be adding a dependency such as libarchive. Let me know what you think.
ImQ009 has quit [Read error: Connection reset by peer]
kartikdutt18 has quit [Remote host closed the connection]
< walragatver[m]> We can refer to this while implementing.
< jeffin143[m]> They are using an EventLogger thread
< walragatver[m]> Yeah correct
< rcurtin> kartikdutt18: sorry, I didn't see the message; why not use the same kind of Python script strategy that Omar committed to the examples repo recently?
< rcurtin> maybe I don't understand the context right though
< jeffin143[m]> I will take a detailed look @walragatver:matrix.org
< walragatver[m]> jeffin143: They have used mutex to lock and unlock the queue.
< walragatver[m]> jeffin143: Sure.
jeffin143 has quit [Remote host closed the connection]
kartikdutt18[m] has joined #mlpack