ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
ImQ009 has joined #mlpack
< zoq> AakashkaushikGi4: Will add an example notebook.
chopper_inbound[ has quit [Quit: Idle for 30+ days]
< AakashkaushikGi4> Hi @zoq, I checked this out in the mlpack docs, but in PyTorch's ImageFolder we can just specify the folder and it loads all the classes present in it, then provides an iterator for the data, which can be further customised; in mlpack, when we have to load multiple images, we need to specify all the file names (images), right?
< AakashkaushikGi4> Also, thanks for adding an example notebook.
< AakashkaushikGi4> This helped me a lot, thanks.
< kartikdutt18[m]> <AakashkaushikGi4 "Does mlpack have something simil"> Also, you can take a look at the image loader in the models repo.
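To make the comparison concrete: mlpack has no ready-made ImageFolder equivalent as far as I know, but a rough sketch of emulating one is to walk a directory tree (one subfolder per class), collect the filenames, and pass them to mlpack's multi-image `data::Load()` overload. The `dataset/` layout below is hypothetical, and all images are assumed to share the same dimensions:

```cpp
// Sketch only: emulating PyTorch's ImageFolder on top of mlpack's image
// loading utilities (mlpack >= 3.3 built with STB image support assumed).
#include <mlpack/core.hpp>
#include <filesystem>
#include <string>
#include <vector>

int main()
{
  std::vector<std::string> files;
  std::vector<size_t> labels;

  // Hypothetical layout: dataset/<class name>/<image files>.
  size_t classId = 0;
  for (const auto& classDir : std::filesystem::directory_iterator("dataset/"))
  {
    if (!classDir.is_directory())
      continue;
    for (const auto& entry : std::filesystem::directory_iterator(classDir.path()))
    {
      files.push_back(entry.path().string());
      labels.push_back(classId);
    }
    ++classId;
  }

  // One call loads every file; each image becomes one column of `images`.
  // All images must have the same width/height/channel count.
  arma::Mat<unsigned char> images;
  mlpack::data::ImageInfo info;
  mlpack::data::Load(files, images, info, false);

  return 0;
}
```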
< say4n> > Hacktoberfest just started
< say4n> Wait, isn't it supposed to be in October?!
< say4n> zoq: ^
< zoq> say4n: Hm, good point. Fun fact: most of the German Oktoberfest takes place in September, not in October.
< AakashkaushikGi4> I read the details and I think only PRs created on or after 1 October will be counted, so only the registrations have started, I think; not so sure about it.
< AakashkaushikGi4> Also, I was having a problem with the mnist_cnn.cpp example and found out that I had an older version of ensmallen. As far as I know there is an option, ON by default in mlpack, to install ensmallen, so does it not check whether an already-installed version is too old to work with all the current mlpack functionality?
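For what it's worth, ensmallen exposes its version as preprocessor macros, so one crude guard against this is a compile-time check in the example itself. A minimal sketch follows; the 2.10 threshold is illustrative, not the real minimum the mnist_cnn example needs:

```cpp
#include <ensmallen.hpp>

// Illustrative threshold only; the actual minimum version required by the
// mnist_cnn example may differ.
static_assert(ENS_VERSION_MAJOR > 2 ||
    (ENS_VERSION_MAJOR == 2 && ENS_VERSION_MINOR >= 10),
    "this example assumes ensmallen >= 2.10; please upgrade");

int main() { return 0; }
```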
< jonpsy[m]> Quick question: in CNNs, why don't we just take the IFFT (i.e., convolve via the frequency domain) instead of convolving over the image? That seems to be MUCH faster. Wonder why this isn't the norm?
xiaohong has joined #mlpack
xiaohong has quit [Write error: Connection reset by peer]
< AakashkaushikGi4> > `jonpsy` Quick question: in CNNs, why don't we just take the IFFT (i.e., convolve via the frequency domain) instead of convolving over the image? That seems to be MUCH faster. Wonder why this isn't the norm?
< AakashkaushikGi4> I am not completely sure, but I think convolutions are used because they preserve the positional connectivity between convolution operations; but then I am not so sure what exactly the IFFT is for images, rather than in signal processing.
< jonpsy[m]> Open question, anybody else would like to weigh in as well? :)
< AakashkaushikGi4> Don't you think computing the Fourier transform (I am leaving the inverse aside for a minute) will cost you more computation power and time, because of taking exponents of numbers? Also, what exactly would the learnable parameter be there, and if there somehow is one, will the optimization space it creates be good enough to solve the problem? Because, from what I understand, convolutional layers dropped the number of learnable parameters by a huge margin, and as I mentioned in my point above, they introduced a lot of image-specific qualities.
< jonpsy[m]> As far as computation power is concerned, there are FFT algorithms which are **much** faster, so that's ruled out.
< jonpsy[m]> On to learnable parameters: yes, indeed there are fewer in the spatial domain, but why does that matter? If the reason is "training will be faster", then it's not firm ground to stand on, as I've said before.
< jonpsy[m]> I'm not very sure about the optimization space; well, that's why it's an open question xD, but I suppose the nature of the loss function should be unchanged (or I think so??)
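To pin down what "convolve via the frequency domain" means here, a rough Armadillo sketch (Armadillo being mlpack's linear algebra backend) of the convolution theorem follows: zero-pad both operands, pointwise-multiply their 2-D transforms, and invert. This is illustrative only, not how mlpack's convolution layer is implemented:

```cpp
#include <armadillo>

// "Full" 2-D linear convolution via FFTs: conv(A, B) = ifft2(fft2(A) % fft2(B)),
// with both inputs zero-padded to the full output size first.
arma::mat fftConv2(const arma::mat& image, const arma::mat& kernel)
{
  const arma::uword rows = image.n_rows + kernel.n_rows - 1;
  const arma::uword cols = image.n_cols + kernel.n_cols - 1;

  const arma::cx_mat product = arma::fft2(image, rows, cols) %
                               arma::fft2(kernel, rows, cols);

  // The inverse transform is complex; the imaginary part is numerical noise.
  return arma::real(arma::ifft2(product));
}

int main()
{
  arma::mat image(28, 28, arma::fill::randu);
  arma::mat kernel(5, 5, arma::fill::randu);
  arma::mat result = fftConv2(image, kernel);  // 32x32 "full" output
  result.print("full convolution:");
  return 0;
}
```

The usual trade-off is that the FFT route wins for large kernels, while for the small (3x3, 5x5) kernels common in CNNs, direct spatial convolution (or im2col + GEMM) tends to be at least as fast in practice.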
< AakashkaushikGi4> Hey @zoq, here is the [link](https://medium.com/@kaushikaakash7539/a-convolutional-neural-network-cnn-in-c-52c9ed47a6ea) to the article; everyone else is also welcome to provide their feedback. **:)**
< AakashkaushikGi4> In case you can't see it from that link, here is a friends [link](https://medium.com/@kaushikaakash7539/52c9ed47a6ea?source=friends_link&sk=84202980cf495652ec8eb654f0ee58bb).
< AakashkaushikGi4> The code is mostly taken from the mnist_cnn example with some slight modifications.
ImQ009 has quit [Quit: Leaving]
< say4n> zoq: Ah, I see!
< AakashkaushikGi4> > In case you can't see it from that link, here is a friends [link](https://medium.com/@kaushikaakash7539/52c9ed47a6ea?source=friends_link&sk=84202980cf495652ec8eb654f0ee58bb).
< AakashkaushikGi4> Hey @zoq, here is the link to the article; everyone else is also welcome to provide feedback, or any edits and additions to it.
AbishaiEbenezer4 has joined #mlpack
Eddie-XiaoGitte4 has joined #mlpack
SriramSKGitter[4 has joined #mlpack
SakshamRastogiG4 has joined #mlpack
SriramSKGitter[m has quit [Ping timeout: 246 seconds]
SakshamRastogiGi has quit [Ping timeout: 246 seconds]
Eddie-XiaoGitter has quit [Ping timeout: 246 seconds]
AbishaiEbenezerG has quit [Ping timeout: 246 seconds]
SlackIntegration has quit [Ping timeout: 246 seconds]