ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
birm[m] has joined #mlpack
UmarJ has quit [Ping timeout: 264 seconds]
< AyushSingh[m]>
Okay, I will again look into it and come up with a suggestion.
perfexion[m] has joined #mlpack
< mehulkc[m]>
hello, can someone please help me with setting up the mlpack environment?
ImQ009 has joined #mlpack
__kyrre has joined #mlpack
ImQ009 has quit [Read error: Connection reset by peer]
ImQ009 has joined #mlpack
ImQ009 has quit [Read error: Connection reset by peer]
ImQ009 has joined #mlpack
ImQ009 has quit [Ping timeout: 260 seconds]
ImQ009 has joined #mlpack
__kyrre has quit [Quit: Connection closed for inactivity]
< jeffin143[m]>
ryan : it took 8 years for the above pr to get merged
< jeffin143[m]>
You are fast 😂😝
_slack_mlpack_31 is now known as VedantaJha[m]
_slack_mlpack_31 has joined #mlpack
ImQ009 has quit [Quit: Leaving]
ib07 has quit [Ping timeout: 240 seconds]
ib07 has joined #mlpack
< ShahAnwaarKhalid>
Hi! I'm trying to understand what the concat_performance layer does. Is it meant for very large inputs, where the forward and backward passes for the corresponding layer are computed by iteratively taking small sections of the input, calling forward() or backward() on each section, and appending the result to the output?
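[Editor's note: the sketch below is not mlpack's actual ConcatPerformance implementation; it is a minimal standalone Armadillo example illustrating the pattern described in the question above: splitting a large input into column blocks, running a per-block forward pass, and appending each block's result to the output. The block size and the placeholder forwardBlock function are assumptions for illustration only.]

#include <armadillo>
#include <algorithm>
#include <iostream>

int main()
{
  // Hypothetical large input: 10 features x 1000 points (columns are points,
  // as in mlpack/Armadillo convention).
  arma::mat input(10, 1000, arma::fill::randu);

  const size_t blockSize = 100;  // assumed chunk width
  arma::mat output;              // accumulated result

  // Placeholder per-block "forward" pass; a real layer would transform the
  // block instead of simply copying it.
  auto forwardBlock = [](const arma::mat& block) { return arma::mat(block); };

  for (size_t start = 0; start < input.n_cols; start += blockSize)
  {
    const size_t end =
        std::min(start + blockSize, (size_t) input.n_cols) - 1;

    // Forward pass on one small section of the input.
    arma::mat blockOut = forwardBlock(input.cols(start, end));

    // Append the section's result to the overall output.
    output = output.is_empty() ? blockOut
                               : arma::join_rows(output, blockOut);
  }

  std::cout << "output: " << output.n_rows << " x " << output.n_cols << "\n";
  return 0;
}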