naywhayare changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< oldbeardo>
naywhayare: just one question, what is lowBound() in step 3 of algorithm 3
sumedhghaisas has joined #mlpack
< oldbeardo>
sumedhghaisas: hi, I didn't understand the problem that you mentioned in the ticket
< sumedhghaisas>
okay... I will explain... Since there are parameters to GetRecommendations, it can be called several times with the same dataset, right?
< sumedhghaisas>
oldbeardo: hence the matrix factorization function will be called that many times...
< sumedhghaisas>
which is unnecessary as the computation is exactly the same...
< oldbeardo>
right, now I get it
< oldbeardo>
you are basically saying that we should put 'factorizer.Apply(cleanedData, rank, w, h);' in a different function
< sumedhghaisas>
Yeah, not just any other function... but one which is called exactly once...
< sumedhghaisas>
irrespective of the number of calls to GetRecommendations
< sumedhghaisas>
we can just store the factorized matrices rather than storing the dataset...
< oldbeardo>
right, that makes sense
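(A minimal sketch of the idea being discussed: factorize once and cache the factors. Only the factorizer.Apply(cleanedData, rank, w, h) call and the w/h names come from the chat above; the CachedCF class and everything else here is a hypothetical illustration, not mlpack's actual CF class.)

    #include <mlpack/core.hpp>

    // Hypothetical wrapper: the factorization runs exactly once, at
    // construction, and only the factor matrices are kept afterwards.
    template<typename FactorizerType>
    class CachedCF
    {
     public:
      CachedCF(const arma::mat& cleanedData, const size_t rank,
               FactorizerType factorizer)
      {
        // Done once, no matter how many recommendation queries follow.
        factorizer.Apply(cleanedData, rank, w, h);
      }

      // Repeated queries reuse the cached factors; nothing is recomputed.
      arma::mat PredictedRatings() const { return w * h; }

     private:
      arma::mat w, h; // cached factor matrices
    };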
< sumedhghaisas>
So the constructor seems the obvious option to me...??
< oldbeardo>
I'm not so sure about that
< oldbeardo>
if you look at the GetRecommendations() function, at the top you will see the rank estimation code
< oldbeardo>
this wasn't present before, the rank was hard coded as 2
< oldbeardo>
when I changed it, Ryan shifted it to the GetRecommendations() function
< oldbeardo>
I had originally put it in the constructor
< oldbeardo>
so maybe he has a reason we cannot see, but yes, I agree with you
< oldbeardo>
putting it in the constructor makes sense
< sumedhghaisas>
Yes, which is correct, as GetRecommendations should be parameterized with the rank...
< sumedhghaisas>
so that the user can look at the same dataset from a different perspective...
< sumedhghaisas>
putting it in the constructor would force the user to use the same rank factorization every time...
< sumedhghaisas>
I had this discussion with Ryan and he seemed to agree :) umm... if my memory serves me right, I guess he himself suggested this alternative...
< oldbeardo>
what you are suggesting will be quite efficient, so if the user wants to specify a different rank he can always use another object
< sumedhghaisas>
yeah, I agree... But the simpler solution would be to call GetRecommendations on the same object with a different rank...
< sumedhghaisas>
since the factorization is already done in the constructor, GetRecommendations adds no overhead...
< oldbeardo>
that cannot be done
< oldbeardo>
this is because the factorization is itself dependent on the rank
< sumedhghaisas>
Yes... you have a point...
< sumedhghaisas>
GetRecommendations has other parameters too... So in this case we would have to shift the parameterization along with the factorization...
< sumedhghaisas>
I just forgot to take the rank into consideration... :)
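(Why the rank cannot vary per GetRecommendations() call once the constructor has factorized: the factor shapes themselves depend on the rank. A dimensions-only sketch with illustrative names, not mlpack's API:)

    #include <armadillo>

    void FactorShapes(const size_t n, const size_t m, const size_t rank)
    {
      arma::mat W(n, rank);  // left factor:  n x rank
      arma::mat H(rank, m);  // right factor: rank x m
      // The n x m rating matrix V is approximated by W * H; choosing a
      // different rank changes both shapes, so the whole factorization
      // would have to be redone -- per object, the rank is fixed.
    }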
< oldbeardo>
I would say that we should leave it as it is for now
< oldbeardo>
because as we go about completing the stuff we have in our applications, the module will change a lot anyhow
< sumedhghaisas>
Yeah you are right...
< oldbeardo>
by the way, which algorithms did you include in your application?
< sumedhghaisas>
Umm... first, adding regularization to the current ALS (NMF)... then variant SVD methods... like batch learning, complete incremental learning, and learning with momentum...
< sumedhghaisas>
These are all for explicit feedback...
< sumedhghaisas>
for Implicit feedback I found a paper which provides a robust technique...
< sumedhghaisas>
And some initialization methods... those are li8 :)
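(For reference, "adding regularization to the current ALS (NMF)" usually means minimizing a penalized objective of roughly the following form, where V is the rating matrix, W and H the factors, and lambda_W, lambda_H the regularization weights; this is a standard formulation, not something spelled out in the chat:)

    \min_{W \ge 0,\, H \ge 0} \; \lVert V - W H \rVert_F^2 + \lambda_W \lVert W \rVert_F^2 + \lambda_H \lVert H \rVert_F^2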
< sumedhghaisas>
you are going to implement QuickSVD right??
< oldbeardo>
yes, it's one of 3
< oldbeardo>
also, let's keep BITS lingo away from these chats
< oldbeardo>
I'm going to implement QUIC-SVD, Regularized SVD and PMF
< sumedhghaisas>
haha.. yeah you are right...
< sumedhghaisas>
is the QuickSVD method iteration based??
sumedhghaisas has quit [Ping timeout: 252 seconds]
sumedhghaisas has joined #mlpack
< oldbeardo>
sumedhghaisas: it's complicated; it's not really iterative at first glance
< oldbeardo>
but it certainly has some iterative steps involved in the required computations
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas__ has joined #mlpack
sumedhghaisas has quit [Ping timeout: 252 seconds]
oldbeardo has quit [Ping timeout: 240 seconds]
sumedhghaisas__ has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Quit: Leaving]
sumedhghaisas has joined #mlpack
< sumedhghaisas>
naywhayare: did you take a look at LMF code??
< sumedhghaisas>
naywhayare: by the way... I had a chat with oldbeardo today about shifting the matrix factorization
sumedhghaisas has quit [Ping timeout: 245 seconds]