ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 250 seconds]
vivekp has joined #mlpack
Toshal has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
< KimSangYeon-DGU>
ShikharJ: I saw your Mar 15th message, congratulations!! :)
Toshal has quit [Read error: Connection reset by peer]
pd09041999 has joined #mlpack
niteya has joined #mlpack
niteya has quit [Ping timeout: 256 seconds]
xyz_ has joined #mlpack
pd09041999 has quit [Ping timeout: 250 seconds]
xyz_ has quit [Client Quit]
niteya has joined #mlpack
< niteya>
rcurtin: hey, can you please tell me what I should do with my AdamW class wrapper? I really want to finish that PR.
niteya has quit [Client Quit]
pd09041999 has joined #mlpack
niteya has joined #mlpack
drdr_ has joined #mlpack
drdr_ has quit [Client Quit]
niteya has quit [Quit: Leaving]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 246 seconds]
jeffinsam has joined #mlpack
vivekp has joined #mlpack
Subrajaa has joined #mlpack
Subrajaa has quit [Ping timeout: 256 seconds]
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
Shubhangi100 has joined #mlpack
< ShikharJ>
KimSangYeon-DGU: Thanks!
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
jeffinsam has quit [Quit: Page closed]
Shubhangi100 has quit [Ping timeout: 256 seconds]
Sayam has joined #mlpack
Sayam has quit [Client Quit]
Abc__ has joined #mlpack
Viserion has joined #mlpack
< Viserion>
What is the difference between a dense matrix and a sparse matrix of users and items in collaborative filtering?
< favre49>
zoq: I mailed you a proposal, it would be great if you could give me some feedback when you get the time. Thanks :)
favre49 has quit [Client Quit]
Nitish has joined #mlpack
sadda11asm has joined #mlpack
sadda11asm has left #mlpack []
< sreenik>
It's a bit of C++ code that I'm having some trouble with. I have a function whose job is to choose (i.e. return) among a few layer classes, say Relu, Sigmoid, Selu, and tanh, based on the parameters I provide. So basically it has to return a class without knowing its type beforehand. Is there any way out?
< sreenik>
I also cannot find any class that all of these inherit from (I thought about LayerTypes initially)
< sreenik_>
I want to return FFN<SomeLoss<>, SomeInit<>>
< sreenik_>
Is that possible?
Viserion has joined #mlpack
sreenik_ has quit [Quit: Page closed]
< Viserion>
Can anyone explain the use and execution of the statement ReportIgnoredParam({{ "iteration_only_termination", true }}, "min_residue"); in cf
< Viserion>
Can anyone explain the use and execution of the statement ReportIgnoredParam({{ "iteration_only_termination", true }}, "min_residue"); in cf_main.hpp in collaborative filtering?
< ShikharJ>
shubhangi: Please read my comment from the previous day.
pd09041999 has quit [Ping timeout: 245 seconds]
pd09041999 has joined #mlpack
< rcurtin>
Nitish: sounds good, go ahead
shubhangi has quit [Ping timeout: 256 seconds]
ng0227 has joined #mlpack
Toshal has quit [Read error: Connection reset by peer]
ng0227 has quit [Quit: Page closed]
naman has joined #mlpack
naman has quit [Client Quit]
Toshal has joined #mlpack
naman has joined #mlpack
naman has quit [Client Quit]
Toshal has quit [Read error: Connection reset by peer]
sreenik has joined #mlpack
pd09041999 has quit [Ping timeout: 255 seconds]
jeffinsam has joined #mlpack
pd09041999 has joined #mlpack
pd09041999 has quit [Ping timeout: 246 seconds]
Shubhangi has joined #mlpack
riaash04 has joined #mlpack
pd09041999 has joined #mlpack
Toshal has joined #mlpack
< riaash04>
rcurtin: In the isomap PR you suggested using a minimum spanning tree to find all-pairs shortest paths. For sparse graphs it is a good approximation, but it might give some wrong results when the graph is dense. Do you think it would be reasonable to use it? It would have lower time complexity than Dijkstra's and Floyd's, but since isomap depends on geodesic distance calculations, it might sometimes produce wrong results.
< sreenik>
zoq: thanks :)
chdinesh1089 has joined #mlpack
dinesh has joined #mlpack
< dinesh>
can we load .mat files with mlpack?
< rcurtin>
riaash04: I thought that the MST would be exactly what's needed there, so I haven't managed to understand the discrepancy quite yet
< rcurtin>
I'll take the time and leave more comments when I can, but it may be a little while
< rcurtin>
dinesh: do you mean .mat files produced by Armadillo, or by matlab?
< dinesh>
matlab
< rcurtin>
ok, I see. I don't believe that Armadillo will directly import MATLAB matrices (you should double check their documentation), so I'd suggest writing your matlab matrix as CSV then importing like that
< rcurtin>
or you could write the matlab matrix as HDF5 too, if you have HDF5 available on your system
< dinesh>
in .mat files we can have more than 2 matrices in one file; could we do that with CSV or HDF5?
< rcurtin>
huh, didn't know that
< rcurtin>
for mlpack it'll only load one matrix out of a file
< rcurtin>
same with Armadillo
< rcurtin>
so you'd need to save multiple HDF5s or CSVs with one matrix in each I think
< dinesh>
ok..thanks
< rcurtin>
sure, happy to help
< riaash04>
rcurtin: ok, thanks
riaash04 has quit [Quit: Page closed]
dfsfsf has joined #mlpack
dfsfsf has quit [Client Quit]
< jeffinsam>
zoq, rcurtin: is there any way to print a confusion matrix, I mean a function?
< rcurtin>
jeffinsam: hmmm, I thought I remembered a function like this existing once
< rcurtin>
but when I do 'grep -r Confusion src/' I don't see anything
< rcurtin>
so perhaps no such function exists
< jeffinsam>
rcurtin: would it be apt to implement one to get recall and precision in one go?
< rcurtin>
jeffinsam: it could be useful; however, take a look at the cross-validation and hyper-parameter tuner code in src/mlpack/core/cv/ and src/mlpack/core/hpt/. These directories may have a good part of what you are looking for, and you may be able to reuse some of that code
< jeffinsam>
rcurtin : thanks
dinesh has quit [Read error: Connection reset by peer]
< rcurtin>
jeffinsam: sure, happy to help
< rcurtin>
I think I saw you opened some PRs... I'll try and review when I have a chance, but there are a lot of PRs and I'm pretty underwater with reviews right now
< rcurtin>
so it might be a little while :)
< rcurtin>
(at least from my end. someone else may get to it before I do, and I think actually Shikhar might have already, if I'm remembering right from my quick glance at emails this morning)
< jeffinsam>
rcurtin: no issues. Should I write some tests for it and update CMake, to check whether there are issues with the code?
chdinesh1089 has quit [Ping timeout: 256 seconds]
< rcurtin>
yeah, definitely any new functionality should have some tests to make sure it works right :)
< jeffinsam>
rcurtin: I was also working on the dictionary encoding idea and I have some doubts. If there are multiple documents, for example [ hi hello how are you ] [ i am good ] ..., then the encoding vectors may have 2 different sizes
< jeffinsam>
so should I pad with 0 or just leave it to the user to handle it
< jeffinsam>
padding with 0 might introduce some bias
< rcurtin>
if the API is such that a user passes in a single document, then there's no need for padding; but if the user can pass in multiple documents, then the encoding vector must be able to fit all words in the dictionary across all documents
< jeffinsam>
rcurtin: umm, then I must pad it with 0 I guess.
< jeffinsam>
ok, so I will try to code that too.
< sreenik>
Is there any superclass that all the loss type classes inherit from, so that in a function I may use the superclass as the return type and then return a particular loss (say MeanSquaredError)?
< zoq>
sreenik: You could use LayerTypes.
< sreenik>
Sounds good. I will give it a try
sreenik has quit [Quit: Page closed]
last_comer has joined #mlpack
Toshal has quit [Read error: Connection reset by peer]
Toshal has joined #mlpack
pd09041999 has quit [Remote host closed the connection]
Viserion has joined #mlpack
ollie has joined #mlpack
ollie has quit [Client Quit]
ollie has joined #mlpack
Shubhangi has quit [Ping timeout: 256 seconds]
ollie has quit [Ping timeout: 256 seconds]
< Viserion>
I would like to work on the issue for adding support for alternative normalization... how should I begin?
< zoq>
Viserion: Each normalization for cf follows a common interface, so you should implement that.
< Viserion>
The first thing I thought about is to get the normalization algorithm, which we will get from the input string alongside the decomposition algorithm.
< Viserion>
Why are we making a copy of the data before normalizing it, in cf_impl?
Toshal has quit [Ping timeout: 250 seconds]
Toshal has joined #mlpack
Toshal has quit [Ping timeout: 245 seconds]
< Viserion>
@zoq?
< zoq>
Viserion: To keep the input data untouched.
< zoq>
Viserion: Normalize takes the input data as a reference.
< Viserion>
I basically changed the functions to get the normalization algorithm in the related files... how can I verify them?
< zoq>
You can run the cf test suite: 'mlpack_test -t CFTest'