ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
ayushwashere has joined #mlpack
ayushwashere has quit [Remote host closed the connection]
eadwu has quit [Remote host closed the connection]
eadwu has joined #mlpack
eadwu has quit [Quit: ERC (IRC client for Emacs 28.0.50)]
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#715 (master - 29ff827 : Ryan Birmingham): The build passed.
travis-ci has left #mlpack []
eadwu_is_running has joined #mlpack
eadwu_is_running has left #mlpack []
eadwu has joined #mlpack
eadwu has quit [Remote host closed the connection]
< kartikdutt18Gitt> Hi @zoq, Got my mlpack stickers. I absolutely love them.
< kartikdutt18Gitt> Thanks a lot.
< himanshu_pathak[> Hey rcurtin, in https://ci.appveyor.com/project/mlpack/mlpack-wheels/builds/31128643/job/703b3or2uwmtwnxa can you tell me what the value of PYTHONPATH is?
ImQ009 has joined #mlpack
< jenkins-mlpack2> Project docker mlpack nightly build build #627: FAILURE in 3 hr 4 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/627/
< SaraanshTandonGi> What is the `Deterministic` value for a layer?
robotcatorGitter has joined #mlpack
< robotcatorGitter> Hi, has anybody run into STB_IMAGE_FOUND not working on macOS? https://github.com/mlpack/mlpack/blob/master/CMakeLists.txt#L347
< robotcatorGitter> It only works when I change the if statement to `if (TRUE)`; only then does CMake download the STB image dependency. https://github.com/mlpack/mlpack/blob/master/CMakeLists.txt#L347
< GauravSinghGitte> @saraansh1999 it determines whether the layer is in training or testing mode.
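(For context, a hypothetical, simplified dropout-style layer, not mlpack's actual source, showing how a `deterministic` flag switches between training and testing behaviour:)

```cpp
#include <armadillo>

// Hypothetical dropout-style layer: when deterministic == true (testing
// mode) the layer is a pass-through; when false (training mode) it applies
// a random mask.
class DropoutSketch
{
 public:
  explicit DropoutSketch(const double ratio) :
      ratio(ratio), deterministic(false) { }

  void Forward(const arma::mat& input, arma::mat& output)
  {
    if (deterministic)
    {
      // Testing mode: no randomness, the output is reproducible.
      output = input;
    }
    else
    {
      // Training mode: zero a random fraction of activations and rescale
      // so the expected activation stays the same.
      mask = arma::conv_to<arma::mat>::from(
          arma::randu<arma::mat>(input.n_rows, input.n_cols) > ratio);
      output = (input % mask) / (1.0 - ratio);
    }
  }

  // Training would set this to false; prediction to true.
  bool& Deterministic() { return deterministic; }

 private:
  double ratio;
  bool deterministic;
  arma::mat mask;
};
```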
TanviAgarwalGitt has joined #mlpack
< TanviAgarwalGitt> I am not able to get Armadillo on my system. Can anybody suggest an alternative?
< SriramSKGitter[m> @tanviagwl98 : mlpack is built on Armadillo, so it is a necessary component. What problems are you facing getting it on your system?
sayantanu has joined #mlpack
sayantanu has quit [Remote host closed the connection]
< TanviAgarwalGitt> I am not able to get a runnable form of Armadillo, @sriramsk1999 sir. I downloaded it but can't extract it.
< SriramSKGitter[m> No need to call me sir :) . What do you mean when you say can't extract it?
< TanviAgarwalGitt> By "can't extract" I mean I am not getting a zip file after the download; the downloaded file is not supported on my system @sriramsk1999
< SriramSKGitter[m> Do you mean the `.tar.xz` file? If you are on Linux, `tar -xvf file.tar.xz` should do the trick. On Windows, I assume 7zip will be able to extract it.
< chopper_inbound[> Is there a concept of broadcasting in armadillo? (I need to operate on vectors and matrices)
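(For reference: Armadillo's closest equivalent to broadcasting is `.each_col()` / `.each_row()`, which apply a vector operation across every column or row of a matrix. A minimal sketch:)

```cpp
#include <armadillo>

int main()
{
  arma::mat A(4, 3, arma::fill::randu);
  arma::vec v(4, arma::fill::randu);
  arma::rowvec r(3, arma::fill::randu);

  // "Broadcast" the column vector v across every column of A.
  arma::mat B = A.each_col() + v;

  // Likewise for rows: element-wise multiply each row of A by r.
  arma::mat C = A.each_row() % r;

  B.print("B:");
  C.print("C:");
  return 0;
}
```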
taapas1128[m] has left #mlpack []
< TanviAgarwalGitt> okay i will try again.
< rcurtin> himanshu_pathak[: for that step on line 9301 of the output, we do `SET PYTHONPATH=.`, which corresponds to build/src/mlpack/bindings/python/
< rcurtin> you can actually see the problem I have with that build just a few lines above: lmnn.pyx and cf.pyx are empty (they have size 0)
< rcurtin> this means that when the programs generate_pyx_lmnn and generate_pyx_cf were run, they failed to run properly
< rcurtin> on Windows this generally happens when the right .dlls can't be found
< rcurtin> and I know that's what's happening here because I used remote desktop to connect to the build system and see
< rcurtin> so my first step (which I've been trying to do for many days now...) is just to get those files to be non-empty
< AbishaiEbenezerG> hi mlpack! I've been learning neural networks for the past few weeks, and still have a lot to do. But I do feel that this is the right time to start contributing, at least in a small way. I'm having some difficulty finding issues I can work on. Some guidance here would be really helpful. Thanks!
< AbishaiEbenezerG> I know I could open my own issue and claim it, but if anyone has some more guidance, that would be great
< himanshu_pathak[> rcurtin (@freenode_rcurtin:matrix.org): Also, the problem is only with the 32-bit version, not the 64-bit one. I will also try to find out what's going on; if I get something I will notify you.
< kartikdutt18Gitt> Hi @abishaiema, Feel free to do so. Also there are some good first issues that you might want to take a look at.
< AbishaiEbenezerG> yes @kartikdutt18, I did take a look at them.
< AbishaiEbenezerG> but most (if not all) have already been claimed...
< AbishaiEbenezerG> I wanted to know how I should approach this, as I'm very new to this
< kartikdutt18Gitt> Hmm, well if you find something interesting, feel free to open a PR for it.
< AbishaiEbenezerG> sure. Thanks tho !!!
< kartikdutt18Gitt> I think you could get familiar with the codebase, and if you find something that you might need to use, or something that's interesting, you can open a PR for it; or if you need help with anything you can open an issue / discuss it here.
< AbishaiEbenezerG> I agree. I think I should just spend time with the codebase now and then take it from there...
< rcurtin> himanshu_pathak[: right, this has confused me from the outset---why are the libraries in the right place for 64-bit (even without any of the copying in .appveyor.yml)? I haven't figured that out
harshitaarya[m] has joined #mlpack
< rcurtin> mlpack has just been merged into the Julia package registry
< rcurtin> so now you can do
< rcurtin> julia> import Pkg; Pkg.add("mlpack")
< rcurtin> and then you can use the Julia mlpack bindings :)
< rcurtin> this unblocks the 3.3.0 release (which was waiting on this), so now we can finish up the other issues that are open and get the release done soon :)
< pickle-rick[m]> Awesome!!
< himanshu_pathak[> Yeah!!
< chopper_inbound[> wow
< kartikdutt18Gitt> Great.
< SaraanshTandonGi> What is layer -> Delta()?
< SaraanshTandonGi> I think it is the upstream gradient. Is it? And where is the code that sets it?
Pranshu54 has joined #mlpack
< SaraanshTandonGi> same for layer -> Loss(), I see it being returned by the LossVisitor, but where is its value set?
< PrinceGuptaGitte> Hi @saraansh1999, Delta is the gradient matrix of that layer, and loss is set when we call the `Backward()` function and pass in the loss of that layer through a visitor; the backward function updates it.
< PrinceGuptaGitte> That's the same way `outputParameter` gets updated, in the forward function.
< SaraanshTandonGi> yeah. Just saw it in the code. Thanks :)
< Pranshu54> Hi KhizirSiddiquiGi, after reading the SummerofCodeIdeas page I would like to know if we can add a module for "cyclic learning" to the codebase under "Improvisation and Implementation of ANN Modules". Cyclic learning is used for faster training of neural networks.
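(For context: the triangular cyclical learning rate policy from Smith's CLR paper oscillates the step size linearly between a base and a maximum value. A minimal sketch of such a schedule; an illustration only, not an existing mlpack/ensmallen API:)

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>

// Triangular cyclical learning rate (Smith, 2015): the rate ramps linearly
// from baseLr up to maxLr and back down over each cycle of 2 * stepSize
// iterations.
double CyclicalLearningRate(const size_t iteration,
                            const size_t stepSize,
                            const double baseLr,
                            const double maxLr)
{
  const double cycle = std::floor(1.0 + iteration / (2.0 * stepSize));
  const double x = std::abs(iteration / (double) stepSize - 2.0 * cycle + 1.0);
  return baseLr + (maxLr - baseLr) * std::max(0.0, 1.0 - x);
}

int main()
{
  // Print one full cycle of the schedule (stepSize = 1000 iterations).
  for (size_t i = 0; i <= 2000; i += 250)
    std::printf("iteration %4zu -> lr %.4f\n",
                i, CyclicalLearningRate(i, 1000, 0.001, 0.006));
  return 0;
}
```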
< SaraanshTandonGi> @prince776 I see forward setting layer -> outputParameter(), and backward setting layer -> Delta().
< SaraanshTandonGi> Where is layer -> Loss() being set?
< PrinceGuptaGitte> You see, the backward() function of any layer class takes 3 parameters: input, error, delta.
< PrinceGuptaGitte> Error is the delta of the previous layer (while propagating in the backward direction).
< PrinceGuptaGitte> And Loss(), as far as I remember, is only for certain layers, not every layer.
< PrinceGuptaGitte> I might have gone off track with the explanation, but I wanted to clear it up since it's really confusing sometimes
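(To make that concrete: a hypothetical, heavily simplified layer, not mlpack's actual code. Forward() fills the layer's outputParameter, and Backward() receives the next layer's delta as `error` and writes this layer's delta into `g`:)

```cpp
#include <armadillo>

// Hypothetical, heavily simplified linear layer showing where
// outputParameter and delta are written during the two passes.
class LinearSketch
{
 public:
  LinearSketch(const size_t inSize, const size_t outSize) :
      weights(outSize, inSize, arma::fill::randn) { }

  // Forward pass: the result is stored in the layer's outputParameter.
  void Forward(const arma::mat& input, arma::mat& output)
  {
    output = weights * input;
  }

  // Backward pass: `error` is the delta of the next layer (in the backward
  // direction); the gradient w.r.t. this layer's input is written into `g`,
  // which the network wires up to this layer's delta.
  void Backward(const arma::mat& /* input */,
                const arma::mat& error,
                arma::mat& g)
  {
    g = weights.t() * error;
  }

  arma::mat& OutputParameter() { return outputParameter; }
  arma::mat& Delta() { return delta; }

 private:
  arma::mat weights;
  arma::mat outputParameter;
  arma::mat delta;
};

int main()
{
  LinearSketch layer(784, 10);
  arma::mat input(784, 50, arma::fill::randu);   // batch of 50 columns
  layer.Forward(input, layer.OutputParameter()); // fills outputParameter
  arma::mat error(10, 50, arma::fill::randu);    // "next" layer's delta
  layer.Backward(input, error, layer.Delta());   // fills delta (784 x 50)
  return 0;
}
```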
< SaraanshTandonGi> @prince776 Thanks. What you explained above is what I just took like 2 hours to understand on my own. :P
< PrinceGuptaGitte> I also struggled to understand it, since I was totally new to Boost and template programming
< PrinceGuptaGitte> But it's so beautifully done; I thought they would definitely need inheritance for this, but no, everything was done avoiding vtables
< SaraanshTandonGi> yeah!
Pranshu54 has quit [Remote host closed the connection]
< SaraanshTandonGi> What could be causing this error in the Train function of ffn when I increase the MAX_ITERATIONS_PER_CYCLE value?
< SaraanshTandonGi> batch size = 50
< SaraanshTandonGi> train set size = 37800
eadwu has joined #mlpack
< SaraanshTandonGi> MAX_ITERATIONS_PER_CYCLE = 40 is giving this error
< SaraanshTandonGi> MAX_ITERATIONS_PER_CYCLE = 20 is not
< SaraanshTandonGi> Everything above MAX_ITERATIONS_PER_CYCLE=23 is failing.
< SaraanshTandonGi> @prince776 do you have any idea regarding this? ^^^
< SaraanshTandonGi> Also @prince776, if you have not already put in a lot of work, I would like to work on TripletMarginLoss
< PrinceGuptaGitte> Not really; can you share your code? I might get an idea
< SaraanshTandonGi> will do
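(For reference: the triplet margin loss mentioned above is typically L = max(0, ||a - p||^2 - ||a - n||^2 + margin). A standalone sketch with a hypothetical signature, not the eventual mlpack API:)

```cpp
#include <armadillo>
#include <iostream>

// Batch triplet margin loss: one sample per column; anchor/positive pairs
// should end up closer than anchor/negative pairs by at least `margin`.
double TripletMarginLoss(const arma::mat& anchor,
                         const arma::mat& positive,
                         const arma::mat& negative,
                         const double margin = 1.0)
{
  // Squared Euclidean distance of each column pair.
  const arma::rowvec dPos = arma::sum(arma::square(anchor - positive), 0);
  const arma::rowvec dNeg = arma::sum(arma::square(anchor - negative), 0);

  // Hinge: only triplets violating the margin contribute to the loss.
  return arma::accu(arma::clamp(dPos - dNeg + margin, 0.0,
      arma::datum::inf)) / anchor.n_cols;
}

int main()
{
  arma::mat a(16, 8, arma::fill::randu);  // 8 triplets of 16-d embeddings
  arma::mat p(16, 8, arma::fill::randu);
  arma::mat n(16, 8, arma::fill::randu);
  std::cout << "loss: " << TripletMarginLoss(a, p, n) << std::endl;
  return 0;
}
```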
Cecca has joined #mlpack
Cecca has left #mlpack []
< PrinceGuptaGitte> What is the batch size and number of columns in your dataset?
< SaraanshTandonGi> > batch size = 50
< SaraanshTandonGi> > train set size = 37800
< SaraanshTandonGi> >
< SaraanshTandonGi> MNIST data, so 784 cols
< SaraanshTandonGi> rows in mlpack
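(Context: Armadillo matrices are column-major and mlpack treats each column as one data point, so MNIST's 784 features become rows. A small sketch:)

```cpp
#include <armadillo>

int main()
{
  // mlpack convention: columns are data points, rows are dimensions.
  // 37800 MNIST images of 784 pixels each -> a 784 x 37800 matrix.
  arma::mat dataset(784, 37800, arma::fill::zeros);

  arma::vec firstImage = dataset.col(0);  // one training example
  firstImage.head(5).print("first 5 pixels of image 0:");
  return 0;
}
```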
< PrinceGuptaGitte> I'll try to check it tomorrow; I have a maths exam. Sorry
< SaraanshTandonGi> OK :)
< SaraanshTandonGi> Best of luck!
< SaraanshTandonGi> @zoq
< SaraanshTandonGi> @rcurtin
< SaraanshTandonGi> can you take a look at #2248 and #2249
ImQ009 has quit [Quit: Leaving]
< PrinceGuptaGitte> Thanks
eadwu has quit [Remote host closed the connection]
< metahost> rcurtin: Ryan, I have added the documentation separately in doc/functions_types.md and made the other changes you suggested, please have a look when you can! :)
xps3 has joined #mlpack
xps3 has quit [Client Quit]
xps3 has joined #mlpack
xps3 has quit [Client Quit]
xps has joined #mlpack
xps has quit [Client Quit]
eadwu has joined #mlpack
eadwu has quit [Client Quit]
eadwu has joined #mlpack
eadwu has quit [Client Quit]
eadwu has joined #mlpack
ayushwashere has joined #mlpack
eadwu has quit [Remote host closed the connection]
travis-ci has joined #mlpack
< travis-ci> shrit/models#8 (digit - 0ada054 : Omar Shrit): The build is still failing.
travis-ci has left #mlpack []
ayushwashere has quit [Remote host closed the connection]
eadwu has joined #mlpack