ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< PrinceGuptaGitte> Since training networks on datasets like ImageNet is practically impossible (unless I have a powerful workstation), is there a way to transfer pre-trained weights to mlpack models?
< kartikdutt18Gitt> @prince776 Take a look at https://github.com/sreenikSS/mlpack-Tensorflow-Translator . Since not every layer is supported, we can't really exchange weights between all layers.
< kartikdutt18Gitt> Also, if you set up mlpack remotely on a system with CUDA, you can use NVBLAS.
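A rough sketch of what loading externally converted weights into an mlpack FFN could look like; the architecture, the "weights.csv" file, and the flattened parameter ordering are assumptions, not part of the translator:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Build a model whose architecture matches the pre-trained network.
      FFN<> model;
      model.Add<Linear<>>(784, 100);
      model.Add<ReLULayer<>>();
      model.Add<Linear<>>(100, 10);
      model.Add<LogSoftMax<>>();

      // Allocate the parameter matrix so it can be overwritten.
      model.ResetParameters();

      // "weights.csv" is a hypothetical file holding the flattened parameters
      // produced by an external converter; the ordering and total size must
      // match model.Parameters().
      arma::mat pretrained;
      mlpack::data::Load("weights.csv", pretrained, true);
      model.Parameters() = arma::vectorise(pretrained);

      return 0;
    }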
< jenkins-mlpack2> Project docker mlpack nightly build build #631: STILL FAILING in 2 hr 49 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/631/
AbhiSaphire has joined #mlpack
AbhiSaphire has quit [Ping timeout: 260 seconds]
mohona[m] has joined #mlpack
< rcurtin> well... that's unfortunate
< rcurtin> I now have a meeting with my company's CEO exactly at the mlpack video meet-up time
< rcurtin> it should only be half an hour though, so I'll join, but I'll probably be late (unless our meeting goes over...)
< rcurtin> I think the CEO is the one person I shouldn't reschedule with :)
< zoq> rcurtin: I don't mind postponing; it might not work for me either.
< rcurtin> do you mean, e.g., postpone for like half an hour, or for like a week?
< zoq> a week
< zoq> But if it works for some people, fine for me.
< rcurtin> hmm, I don't think that the two of us need to be there every time... how about we just do this week at the scheduled time, and then do two weeks from now?
< rcurtin> it's totally casual so I don't think it's a big deal either way
< zoq> fine for me, I might join the meeting, but I don't know for sure
< rcurtin> likewise, I'll send an email to point that out
< zoq> Sounds good.
< AbishaiEbenezerG> How long does the video chat usually last? I think the meetup is around 11:30 IST where I stay...
< sreenik[m]> freenode_gitter_abishaiema[m]: It is generally an hour long, though you can join or leave at any time you wish
< AbishaiEbenezerG> cool
< PrinceGuptaGitte> Finally I'll attend it this time
M_slack_mlpack_7 has joined #mlpack
M_slack_mlpack_7 is now known as M_slack_9
< M_slack_9> Would you please give the details of the video chat?
< M_slack_9> I am interested to join.
ImQ009 has joined #mlpack
< rcurtin> M_slack_9: details on the website: https://www.mlpack.org/community.html#real-time-chat
< rcurtin> just got those deployed :)
< mohona[m]> Thank you!
lozhnikov has quit [Ping timeout: 265 seconds]
lozhnikov has joined #mlpack
< Param-29Gitter[m> I was working on decision trees but can't make sense of my results. I first tried to speed up the Classify() function; its execution time did decrease, but training the model takes more time as the number of threads increases.
< Param-29Gitter[m> Can someone help me understand why this is happening?
< Param-29Gitter[m> Training time: 59s with 1 thread, 75s with 4 threads. Testing time: 0.11s with 1 thread, 0.051s with 4 threads.
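A minimal sketch of that kind of timing comparison, assuming an OpenMP-enabled build of mlpack and a placeholder dataset file ("covertype.csv", labels in the last row); the file name and thread counts are just examples:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/decision_tree/decision_tree.hpp>
    #include <omp.h>
    #include <iostream>

    using namespace mlpack;

    int main()
    {
      arma::mat dataset;
      data::Load("covertype.csv", dataset, true);

      // Labels are assumed to be stored in the last row of the file.
      arma::Row<size_t> labels =
          arma::conv_to<arma::Row<size_t>>::from(dataset.row(dataset.n_rows - 1));
      dataset.shed_row(dataset.n_rows - 1);
      const size_t numClasses = arma::max(labels) + 1;

      for (const int threads : { 1, 4 })
      {
        omp_set_num_threads(threads);
        arma::wall_clock clock;

        clock.tic();
        tree::DecisionTree<> d(dataset, labels, numClasses);
        std::cout << threads << " thread(s): training " << clock.toc() << "s, ";

        arma::Row<size_t> predictions;
        clock.tic();
        d.Classify(dataset, predictions);
        std::cout << "classification " << clock.toc() << "s" << std::endl;
      }

      return 0;
    }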
< saksham189Gitter> Hey @zoq, I wanted your opinion on the adaptive mean and max pooling layers PR ( https://github.com/mlpack/mlpack/pull/2195 ). Do you think we should implement the layer as a wrapper over the original pooling layer, since most of the code is exactly the same once the stride and kernel parameters have been calculated?
< PrinceGuptaGitte> Hi @kartikdutt18, thanks for providing the mlpack-Tensorflow-Translator source. I'm able to get a general idea of how it works. Do you have some code in which a Keras model saved as ONNX is loaded into mlpack?
< kartikdutt18Gitt> I don't have code for that right now; maybe Sreenik can help you with that.
< sreenik[m]> freenode_gitter_prince776[m]: Hello, the convert_model() function in https://github.com/sreenikSS/mlpack-Tensorflow-Translator/blob/master/src/onnx_to_mlpack.hpp is what you are looking for
< sreenik[m]> Let me know if it does not produce the expected results for your model; I'm not 100% sure it is bug-free.
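A rough sketch of the intended flow, assuming ONNX's C++ protobuf headers are available; the exact signature of convert_model() should be checked in onnx_to_mlpack.hpp, so the call is left as a comment:

    #include <fstream>
    #include <onnx/onnx_pb.h>  // ONNX protobuf definitions (path may vary by install)

    int main()
    {
      // "model.onnx" is a placeholder for a Keras model exported to ONNX
      // (e.g. via keras2onnx or tf2onnx).
      onnx::ModelProto onnxModel;
      std::ifstream in("model.onnx", std::ios::binary);
      onnxModel.ParseFromIstream(&in);

      // The converter from onnx_to_mlpack.hpp would be called here; verify the
      // actual signature in the header. It might look roughly like:
      //   auto mlpackModel = convert_model(onnxModel.graph());
      return 0;
    }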
AbhiSaphire has joined #mlpack
< kartikdutt18Gitt> Also, Sreenik, I think the issue of the convolution layer not supporting groups will be solved. I currently have a PR for depthwise convolution; I can make changes so that convolution accepts groups as a parameter rather than having a separate layer.
< sreenik[m]> kartikdutt18[m]: That would be great
< PrinceGuptaGitte> Thanks sreenik @kartikdutt18
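For reference, a naive sketch of what a "groups" parameter means for convolution: the input and output maps are split into equal groups and each group is convolved independently (groups == 1 is ordinary convolution; groups equal to the number of input maps gives depthwise). This illustrates the semantics only, not the PR's implementation:

    #include <armadillo>

    // kernels(o) holds one kernel slice per input map in output map o's group.
    arma::cube GroupedConv(const arma::cube& input,
                           const arma::field<arma::cube>& kernels,
                           const size_t groups)
    {
      const size_t inMaps = input.n_slices;
      const size_t outMaps = kernels.n_elem;
      const size_t inPerGroup = inMaps / groups;
      const size_t outPerGroup = outMaps / groups;

      arma::cube output(input.n_rows, input.n_cols, outMaps, arma::fill::zeros);
      for (size_t g = 0; g < groups; ++g)
      {
        for (size_t o = g * outPerGroup; o < (g + 1) * outPerGroup; ++o)
        {
          // Each output map only sees the input maps of its own group.
          for (size_t i = 0; i < inPerGroup; ++i)
          {
            output.slice(o) += arma::conv2(input.slice(g * inPerGroup + i),
                                           kernels(o).slice(i), "same");
          }
        }
      }
      return output;
    }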
togo_ has joined #mlpack
azwn[m] has joined #mlpack
< rcurtin> well, my meeting got postponed, so I actually will be able to make the whole video meetup
< PrinceGuptaGitte> Great!
< rcurtin> to run an individual test suite:
< rcurtin> bin/mlpack_test -t NameOfTestSuite
< rcurtin> and to run an individual test case:
< rcurtin> bin/mlpack_test -t NameOfTestSuite/NameOfTestCase
< rcurtin> valgrind+gdb: https://tromey.com/blog/?p=731
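For reference, the usual valgrind gdbserver workflow looks roughly like this (run the test under valgrind, then attach gdb from a second terminal; the suite/case name is a placeholder):

    valgrind --vgdb=yes --vgdb-error=0 bin/mlpack_test -t NameOfTestSuite/NameOfTestCase
    # in a second terminal:
    gdb bin/mlpack_test
    (gdb) target remote | vgdb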
AbhiSaphire has quit [Remote host closed the connection]
< kartikdutt18Gitt> Thanks @rcurtin, I'll try it out.
ImQ009 has quit [Quit: Leaving]
Manav-KumarGitte has joined #mlpack
< Manav-KumarGitte> Hello everyone, I am Manav Kumar, a 3rd-year computer science student. I have beginner-level experience in ML and deep learning and want to participate in GSoC by working on one of this organization's ideas, 'Improvisation and Implementation of ANN Modules'. Can somebody guide me with it?
< zoq> Manav-KumarGitte: Hello, https://www.mlpack.org/community.html and https://www.mlpack.org/gsoc.html should help you get started.
< zoq> saksham189Gitter: About adaptive pooling, that sounds like a good idea to me; ideally we avoid code duplication, because every line has to be maintained.
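A sketch of the wrapper idea being discussed: derive the kernel and stride from the input and output sizes, then hand off to the existing MeanPooling layer. The formulas and the hypothetical AdaptiveMeanPooling() helper below are assumptions, not the PR's code:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/layer/mean_pooling.hpp>

    // Hypothetical helper: build a MeanPooling layer that reduces an
    // (inWidth x inHeight) input to roughly (outWidth x outHeight).
    mlpack::ann::MeanPooling<> AdaptiveMeanPooling(
        const size_t inWidth, const size_t inHeight,
        const size_t outWidth, const size_t outHeight)
    {
      // One common way to derive the parameters (an assumption, not mlpack's).
      const size_t strideW = inWidth / outWidth;
      const size_t strideH = inHeight / outHeight;
      const size_t kernelW = inWidth - (outWidth - 1) * strideW;
      const size_t kernelH = inHeight - (outHeight - 1) * strideH;

      return mlpack::ann::MeanPooling<>(kernelW, kernelH, strideW, strideH);
    }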
< shrit[m]> Does anyone know if Armadillo iterators have a value_type trait? I was not able to find this trait for the iterators.
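One quick check, assuming a dense matrix whose begin()/end() iterators are raw pointers (true for recent Armadillo; the row/col and sparse iterators are classes, so whether they expose value_type depends on the version):

    #include <armadillo>
    #include <iterator>
    #include <type_traits>

    int main()
    {
      // For arma::mat, the plain iterator is double*, so std::iterator_traits
      // recovers value_type without the iterator defining it itself.
      using It = arma::mat::iterator;
      using Value = std::iterator_traits<It>::value_type;
      static_assert(std::is_same<Value, double>::value,
                    "value_type of arma::mat::iterator should be double");
      return 0;
    }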
togo_ has quit [Quit: Leaving]