ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/
vivekp has joined #mlpack
< rcurtin> robertohueso: I took a look at the Monte Carlo sampling for KDE code you wrote; I don't see any issue quite yet, but I had a couple of ideas:
< rcurtin> (1) you might want to comment out the old single-tree scoring code---right now the Monte Carlo approximation is only used if the existing bounds fail; that might mess up the approximation results
< rcurtin> (2) it might also be worth concocting a really, really simple tree, with ~20 reference points that are close enough that its value can be approximated for a single query point
< rcurtin> I guess the query point would be really far away
< rcurtin> then, you can compute on paper what the sample mean should be (even easier if your reference points came from a Gaussian where you chose the mean and variance :))
< rcurtin> and you can add print statements in the Score() function to see what's actually happening
< sakshamB> ShikharJ: yes thats fine with me
xiaohong has joined #mlpack
< xiaohong> If the defined model doesn't need a loss function, how do we handle it?
< xiaohong> I mean we calculate the loss outside the model.
< zoq> iaohong: Provide an empty loss class?
< zoq> xiaohong: Sorry got the name wrong.
< xiaohong> Never mind. Okay, I think it is reasonable for the first thought.
< xiaohong> zoq: If we need to pass an extra parameter into the forward function besides the `input` and `target` parameters, should we pack the additional information into a struct together with the `input` parameter?
< zoq> xiaohong: That would be my first idea as well; you could also adapt the layer to take another input, but I guess in this case you have to adjust multiple layers.
< xiaohong> Thx, I will implement a simple version and try to come up with more ideas.
vivekp has quit [Ping timeout: 252 seconds]
vivekp has joined #mlpack
ARYANDOSAJ has joined #mlpack
ARYANDOSAJ has quit [Ping timeout: 256 seconds]
aryandosaj has joined #mlpack
aryandosaj has quit [Quit: Leaving]
aryandosaj has joined #mlpack
< zoq> xiaohong: Sounds good.
aryandosaj has quit [Ping timeout: 252 seconds]
aryandosaj has joined #mlpack
aryandosaj has quit [Client Quit]
walragatver has joined #mlpack
< walragatver> ShikharJ: Hi
< walragatver> Are you over here?
< walragatver> We can have the meeting right now. I don't mind.
< walragatver> Sorry I just read that message right now.
sreenik has joined #mlpack
rcurtin has quit [Ping timeout: 252 seconds]
rcurtin has joined #mlpack
rcurtin_ has joined #mlpack
rcurtin has left #mlpack []
rcurtin has joined #mlpack
walragatver has quit [Quit: Page closed]
< ShikharJ> sakshamB: toshal: I'm here, let's start if you guys are also here?
< ShikharJ> Otherwise, we can begin at the regular time today, and maybe 7pm from Friday onwards.
< ShikharJ> Hmm, I don't know why IRC is logging in two messages every time.
rcurtin_ has left #mlpack []
< rcurtin> ShikharJ: oops, it's because I was in here twice and irssi was logging duplicates :)
< ShikharJ> rcurtin: Hmm, that makes sense. So basically, the logs that we see are the public messages that you receive? And these messages are saved to logs?
< ShikharJ> sakshamB: toshal: Let's start regular time today.
< rcurtin> ShikharJ: yeah, you can take a look at https://github.com/rcurtin/irclog to see how I have it set up
< robertoh1eso> rcurtin: Thanks for your advice :)
xiaohong has quit [Ping timeout: 256 seconds]
< robertoh1eso> I think the idea in Algorithm 1 is to use hard bounds when possible, but anyway I tried (1) just to look for issues
< robertoh1eso> I also tried (2) and I think all the calculations are right (using a KD-tree). If the reference points are close enough and the query point is far enough, then I get an acceptable approximation; otherwise I don't. I'm starting to think that the condition for using Monte Carlo sampling is not strict enough.
< rcurtin> robertoh1eso: interesting, let me dig in more tonight and see if I can come up with any ideas
< rcurtin> it's completely possible there is an error in the paper as it is written---it might be worth spending some time with the math in the paper and making sure that there isn't some forgotten term somewhere :)
< robertoh1eso> Apart from that, I think it's technically possible that at some point it might sample more points from the node than the number of points it would have to evaluate using the base case (since I use sampling with repetition)
< robertoh1eso> rcurtin: Thanks! :) I'll keep working on this anyway and let you know if I find anything new
< rcurtin> robertoh1eso: I thought about that; I would have figured that in such a case CanSummarize() would be false
walragatver has joined #mlpack
walragatver has quit [Client Quit]
walragatver has joined #mlpack
walragatver has quit [Ping timeout: 256 seconds]
< ShikharJ> sakshamB: toshal: Let me know if you're here.
< sakshamB> yes I am here.. just saw your last messages
< ShikharJ> sakshamB: Yeah, sorry about the delay today. I overslept :(
< sakshamB> not a problem, we can always reschedule :)
< ShikharJ> It's kinda tough to maintain the American life. You have to wake up early, get ready, make breakfast for yourself, do the dishes before you leave, clean the floor, and manage the rest of the time that's available for work really well. And in case you oversleep, like today, your entire day takes a hit. In India I hardly had to do anything; everything was so cheap that you could pay for domestic help :)
< ShikharJ> I'm getting used to everything these days.
< sakshamB> that does sound very difficult.. are you living in the dorms or have you rented an apartment?
< ShikharJ> sakshamB: Rented an apartment, I guess a dorm would have been much less work.
< ShikharJ> But I guess I'm a spoilt kid.
< sakshamB> and how about the other meals? do you eat outside or cook them yourself?
< ShikharJ> You could eat outside, but then don't expect your health to get better. So I prefer cooking at home, and pack for lunch.
< sakshamB> hmm that would be impossible for me because I don't know how to cook.
< ShikharJ> Yeah, nobody does, until hunger takes over.
< ShikharJ> Anyways, I think we've waited enough for toshal. This is the second time he's been late, so let's continue.
< sakshamB> alright
< ShikharJ> I was mostly convinced of the Highway Networks PR, just wanted to try a few ideas of my own. If they turn out to be viable, I'll let you know and you can implement them. Otherwise, I'll merge that after a final review.
< ShikharJ> I see that you pushed the Inception Score PR, so that's great.
< rcurtin> ShikharJ: yeah, when I cook for myself I cook for efficiency, I just make some really simple pasta :)
< rcurtin> a lot of people will, e.g., cook for an entire week at once and then simply reheat things
< ShikharJ> sakshamB: Is there anything you're stuck at currently / any other review that you're looking forward to?
< sakshamB> I think you could review the MinibatchDiscrimination PR.. it is mostly done. The numerical gradient test is passing.
< ShikharJ> rcurtin: Yeah, and cooking for the entire week is not healthy either. My parents never kept anything in the fridge for longer than two days :)
< toshal> Hi everyone
< sakshamB> ShikharJ: I don't have a test for the Inception Score and I'm not sure how we could test that
< ShikharJ> sakshamB: Okay, let me check that. Let's try and get everything in by the end of this week.
< sakshamB> ShikharJ: thanks. I will maybe try to work on virtual batch normalisation in the meantime.
< ShikharJ> sakshamB: Yeah, sure go ahead. But we're still on track, so don't feel obligated to :)
< rcurtin> ShikharJ: yeah, depends on what it is. but cooking can be a huge time sink---I find myself eating out most of the time, but in that case it costs money, so it's a time-money tradeoff...
< sakshamB> ShikharJ: also what do you normally prefer to cook? I guess you must be limited to groceries in the US.
< ShikharJ> sakshamB: I packed a lot of stuff from India, and there are Indian grocery stores in LA. So that's good. My goto dish is a normal cheese parantha, and sometimes potato, beans and rice in the afternoon. Dinner is usually heavier.
< ShikharJ> toshal: Are you comfortable with the 7pm timings from Friday onwards?
< toshal> Yes
< ShikharJ> Okay great. I looked at the GAN serialization PR, but I wasn't totally confident of the implementation. Hence I needed more time. Sorry for the delay there.
< toshal> ShikharJ: It's great to hear your experience.
< toshal> Okay.
< ShikharJ> What's the progress on label smoothing? Do you think that's ready for review? Or are you stuck somewhere there?
< toshal> No it's good to go.
< toshal> You can check it out.
< toshal> It also contains commits from serialization PR.
< ShikharJ> rcurtin: Yeah, but then I don't find many healthy + delicious options in LA that are easy on the wallet. The salads are too bland for me. I like eating hot food.
< ShikharJ> toshal: Okay, in the meantime, have you decided on a topic you're going to pick up next?
< toshal> I am thinking of continuing the Dual Optimizer PR. I was planning to start testing it.
< toshal> Let me know what you think
< rcurtin> ShikharJ: agreed, "american food" is really boring. mexican food can be pretty spicy, also thai food should be near you in LA and other asian :)
< ShikharJ> toshal: I think you should ideally pick up a feature to implement instead. Dual Optimizer is too ambitious to be completed within the week.
< ShikharJ> Like Saksham has planned on implementing virtual batch normalization.
< toshal> Actually it's already done; just the testing and WGAN-GP's implementation are remaining.
< toshal> But hold on.
< toshal> What do you wish to add next?
< ShikharJ> rcurtin: Hmm, can't say it's boring, I have tried Pretzels / Bagels / Doughnuts / Sandwiches dipped in hundreds of other sauces. But cold food isn't really my preference. For an Indian, the amount of hospitality he perceives is directly proportional to the temperature of the food he is eating.
< rcurtin> ohhh, I thought you meant spice level :)
< ShikharJ> toshal: Any of the features that we decided upon in the first official meeting. The choice is yours.
< ShikharJ> rcurtin: Yeah, I don't mind non-spicy, but it must be hot :)
< ShikharJ> toshal: Better to look up Saksham's proposal. It had a few other proposed features IIRC.
< toshal> I can go with weight normalization. But I am not as sure of its implementation as I was of my previous implementations. It would go in as a layer.
< toshal> I have read that paper quite a lot earlier.
< ShikharJ> Maybe try any of the regularization techniques he had mentioned?
< toshal> Hmm, I am not sure about orthogonal normalization. I will need to go through the paper.
< ShikharJ> toshal: In that case, let's stick with Dual Optimizer, and try familiarizing yourself with these techniques. I'll provide the reviews in the meantime.
< toshal> I know weight normalization so if you wish I can start working on it.
< ShikharJ> toshal: Sure, feel free to open a draft. Or let me know here if you have some doubts regarding that.
< toshal> And it will go hand in hand with the Dual Optimizer.
< toshal> ShikharJ: okay.
< ShikharJ> Sounds good.
< toshal> I was just having some doubts about ssh. Are you free now? Can I ask them now?
< ShikharJ> sakshamB: Have a fun week. Feel free to log off.
< ShikharJ> toshal: Yeah?
< robertoh1eso> ShikharJ: I think it's the same for Spaniards; I like warm food, especially for lunch. Luckily German universities have good inexpensive places to eat :)
< toshal> Okay. On the ssh login, will I need all the basic tools like git, and will I need to clone my fork, or is mlpack already there?
< ShikharJ> I guess it will already be there. Have you tried logging in?
< toshal> Yes. I ran `ls`; it looks like an empty directory.
< ShikharJ> toshal: I think it would be easier to explain if we're on a video chat. Maybe install zoom, and I can contact you there?
< toshal> Okay
< toshal> It may take some time.
< ShikharJ> Let me know when you're ready.
< toshal> It looks like its download will take quite a lot of time; I have a slow network. We can have a video chat in our next meeting. So for now just let me know how to get started. What are the things I would need to set up on savannah?
< toshal> I also got acquainted with tmux.
< ShikharJ> Hmm, so after you login, try determining the folder you're in.
< ShikharJ> Ideally, it would be /home/Toshal or something.
< ShikharJ> Then if that is empty, install Git and clone mlpack via HTTPS. Then add your remotes to the repo, and check out your branch.
< ShikharJ> Then simply follow the build and make instructions and test it out there, inside tmux.
< toshal> Yes
< ShikharJ> toshal: Anything else you wanted to know?
< toshal> ShikharJ: Thanks.
< ShikharJ> Okay great. I'll be off for now then. Have fun.
< toshal> Sorry, my internet is not working correctly.
< toshal> ShikharJ: Sorry for being late.
zoq has quit [Read error: Connection reset by peer]
zoq has joined #mlpack
blurryface has joined #mlpack
blurryface has quit [Quit: Page closed]
robertohueso has joined #mlpack
< sreenik> We don't have support for padding in the maxpool function; is there any workaround for that?
< sreenik> I mean, any separate layer for padding/dimension manipulation without having to use armadillo functions directly?
< zoq> sreenik: hm, is that used in a model you'd like to load?
< sreenik> zoq: Yes
< zoq> mlJens: Ahh, I see, sorry no update on that one.
< zoq> sreenik: Do you have a link? I might be able to implement it tomorrow.
< sreenik> zoq: Do you mean a link to the relevant paper? Or to the onnx implementation?
< zoq> to the onnx implementation
vivekp has quit [Ping timeout: 258 seconds]
< zoq> I guess, I could just take a look at keras or tf
< sreenik> Yes it's the same thing there I guess
vivekp has joined #mlpack
< sreenik> The code is (let me just search for it and post the link)
< zoq> okay, great
< sreenik> zoq: Thanks! Ahh, the onnx backend is a messed-up thing, rather confusing. tf or keras would be a better option I suppose
< sreenik> On a different note, what code editor do you use? I use vscode but it gradually eats up all the RAM (I have 8GB, sadly). A similar but less expensive alternative would be great :)
sreenik has quit [Quit: Page closed]
< zoq> Not sure you'll like my answer: I use vim, or to be more precise, nvim.
< zoq> I'll take a look at the keras interface then.
< zoq> Also let me know if you need any help.