ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
ib07 has quit [Ping timeout: 260 seconds]
ib07 has joined #mlpack
< zoq> ShahAnwaarKhalid: The Concat layer takes at least two layers and concatenates their results into a single output matrix.
< zoq> ShahAnwaarKhalid: One example: you have a network that branched out into two paths, and you now want to combine them again.
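A minimal sketch of that branching pattern, assuming the mlpack 3.x `ann` API (the `Linear`/`ReLULayer` choices and the layer sizes are arbitrary, and whether `Concat` stacks the branch outputs along rows by default is an assumption worth checking):

```cpp
#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack::ann;

int main()
{
  // A small network whose hidden output feeds two parallel branches.
  FFN<> model;
  model.Add<Linear<>>(10, 8);
  model.Add<ReLULayer<>>();

  // The Concat layer forwards its input through every added sub-layer and
  // concatenates the results into one output matrix.
  Concat<>* branches = new Concat<>();
  branches->Add<Linear<>>(8, 4);  // first branch
  branches->Add<Linear<>>(8, 6);  // second branch
  model.Add(branches);            // combined output: 4 + 6 = 10 rows (assumed)

  model.Add<Linear<>>(10, 2);
  model.Add<LogSoftMax<>>();
}
```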
< mrityunjay[m]> Shah Anwaar Khalid: Are you asking about the `ConcatPerformance` class?
< mrityunjay[m]> I think the input may or may not be very large... we just want to compute the loss in chunks.
< mrityunjay[m]> If you are working on PR #2777, you might get an overload error because the Forward function in this case returns a `double`. Maybe it makes sense to move it to the loss functions.
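For reference, the overload clash being discussed is roughly the following (argument types simplified here; the exact signatures are in the layer headers):

```cpp
#include <armadillo>

// Layer-style Forward, the shape the base layer interface expects:
void Forward(const arma::mat& input, arma::mat& output);

// ConcatPerformance-style Forward, which instead returns the accumulated loss,
// the way a loss function does:
double Forward(const arma::mat& input, const arma::mat& target);
```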
< ShahAnwaarKhalid> Yes! I was asking about the ConcatPerformance class. 😅 I was actually trying to understand the control flow of the code through ann_layers_test and found that there's no test case for ConcatPerformance.
< ShahAnwaarKhalid> I was hoping to write a test case for it. However, I have two doubts:
< ShahAnwaarKhalid> 1. Is it only applied to the input layer, since it takes chunks only from the first column of the input matrix (`InputType subInput = input.submat(i, 0, i + elements - 1, 0);`)?
< ShahAnwaarKhalid> 2. It iteratively takes chunks of size `input.n_elem / inSize`. What is inSize?
< mrityunjay[m]> Shah Anwaar Khalid: 1. It takes chunks out of the input and uses each one to compute the loss against the `target`. We can use any loss function as the `outputLayer`, but by default it is `NegativeLogLikelihood`.
< mrityunjay[m]> 2. You can call it anything :) It just splits the previously concatenated input, computes the loss for each chunk, and then accumulates all the losses.
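Piecing together the fragments quoted in this exchange, the chunked loss computation works roughly as follows. This is only a sketch: `ChunkedLoss` is a hypothetical stand-in for `ConcatPerformance::Forward`, and a single-column `input` is assumed.

```cpp
#include <armadillo>

// Hypothetical stand-in for ConcatPerformance::Forward(): split the previously
// concatenated (single-column) input into inSize chunks and accumulate the
// loss that outputLayer (NegativeLogLikelihood by default) assigns to each.
template<typename OutputLayerType>
double ChunkedLoss(const arma::mat& input,
                   const arma::mat& target,
                   const size_t inSize,
                   OutputLayerType& outputLayer)
{
  const size_t elements = input.n_elem / inSize;
  double loss = 0.0;
  for (size_t i = 0; i < input.n_elem; i += elements)
  {
    // Take the next chunk from the first column of the concatenated input.
    arma::mat subInput = input.submat(i, 0, i + elements - 1, 0);
    loss += outputLayer.Forward(subInput, target);
  }
  return loss;
}
```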
< ShahAnwaarKhalid> Oh, thanks! That makes sense... I'll try to write a test for it 🙂
gitter-badger has joined #mlpack
< ShahAnwaarKhalid> There are some issues with the implementation of the concat_performance layer:
< ShahAnwaarKhalid> 1. The default value of `inSize` is 0, which will give a divide-by-zero error, since `elements = input.n_elem / inSize`.
< ShahAnwaarKhalid> 2. The value of j should be initialized to 1, since column 0 is already dealt with outside the for loop (marked in red in the attached picture).
< ShahAnwaarKhalid> Also, there's a conflict in the return type of Forward as defined in this layer (the base class assumes Forward has a void return type, while it is defined to return double in this layer).
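To make the first two points above concrete, a hedged sketch (identifiers taken from the chat; the guard and the loop bound are suggestions, not the actual fix in the codebase):

```cpp
#include <armadillo>
#include <stdexcept>

// Hypothetical helper illustrating issues 1 and 2 above.
void IllustrateConcatPerformanceIssues(const arma::mat& input,
                                       const size_t inSize)
{
  // Issue 1: inSize defaults to 0, so `input.n_elem / inSize` needs a guard.
  if (inSize == 0)
    throw std::invalid_argument("ConcatPerformance: inSize must be positive.");

  const size_t elements = input.n_elem / inSize;
  (void) elements;  // used when taking chunks, as in the Forward sketch above

  // Issue 2: the chunk for column 0 is already handled before the loop in the
  // existing code, so the counter should presumably start at j = 1.
  for (size_t j = 1; j < inSize; ++j)
  {
    // ... take chunk j from the input and process it ...
  }
}
```

The third point is the same signature clash sketched after the earlier PR #2777 message.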
ib07 has quit [Ping timeout: 260 seconds]
ib07 has joined #mlpack
ImQ009 has joined #mlpack
ib07 has quit [Ping timeout: 256 seconds]
ib07 has joined #mlpack
< mrityunjay[m]> Shah Anwaar Khalid: I think you are right. If you like, you can open a separate pull request to fix that.
< mrityunjay[m]> I think it is better to move it to loss functions as well.
ib07 has quit [Ping timeout: 256 seconds]
excalibur_ has joined #mlpack
< ShahAnwaarKhalid> Oh! So it should not inherit from the layer class, and should be moved to the loss_functions folder instead? Also, should I open a pull request against the main branch, or leave a link to my repo on PR #2777?
excalibur_ has quit [Client Quit]
ib07 has joined #mlpack
< mrityunjay[m]> Yes.
< mrityunjay[m]> Whichever is fine with you.
yashwants19[m] has quit [Quit: Idle for 30+ days]
< AlexNguyenGitter> @jeffin143 it gets even more humorous when you read the comments on that pull request.
ib07 has quit [Ping timeout: 256 seconds]
ib07 has joined #mlpack
ImQ009 has quit [Quit: Leaving]
ib07 has quit [Ping timeout: 256 seconds]
ib07 has joined #mlpack
ib07 has quit [Ping timeout: 264 seconds]
ib07 has joined #mlpack