ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
KimSangYeon-DGU has joined #mlpack
< ShikharJ>
sakshamB: Toshal: Are you guys there?
< sakshamB>
ShikharJ: yes I am here
< ShikharJ>
sakshamB: Great, let's begin. How's Spectral Norm coming along? Do you think you'd be able to wrap it up by next week?
< sakshamB>
ShikharJ: currently I am working on spectral norm only as a wrapper for the linear layer. I haven't been able to generalize it as a wrapper for the convolutional layer.
< sakshamB>
ShikharJ: they have different classes of spectral norm for different layers
< sakshamB>
ShikharJ: I think I will be able to finish it over the weekend
< ShikharJ>
sakshamB: Okay, cool, no pressure. I think the rest of your work is only pending review.
< ShikharJ>
sakshamB: Toshal just cleared up the layer limit issue, so we're full steam ahead for merging your work.
< sakshamB>
ShikharJ: yes I think we could merge it soon after review
< ShikharJ>
sakshamB: Please update those PRs as soon as you can.
< sakshamB>
ShikharJ: I have already done that
< ShikharJ>
sakshamB: Perfect. Is there something you need help with from my side?
< sakshamB>
ShikharJ: no, nothing right now :)
< ShikharJ>
sakshamB: I'm glad we could get the work done over the summer. I'm anticipating a full release, as soon as your and Toshal's work gets merged in :)
mlutra has joined #mlpack
< sakshamB>
ShikharJ: yes that would be great! Thanks for your constant reviews and time. :)
< ShikharJ>
sakshamB: Alright, let's wrap this up. I'll see you on Monday then. Have a nice weekend :) I'll be off for now.
< sakshamB>
ShikharJ: alright have a great weekend!
mlutra has left #mlpack []
mlutra has joined #mlpack
< mlutra>
Hello. After reviewing the mlpack docs and some threads on GitHub, I didn't find a way to obtain the performance value of each training epoch from the optimizer when training an FNN. It would be nice if someone could help me with this. Thanks in advance, and thank you for the work on this great library.
mlutra has quit [Ping timeout: 246 seconds]
mlutra has joined #mlpack
mlutra has quit [Read error: Connection reset by peer]
mlutra has joined #mlpack
< rcurtin>
mlutra: I don't have the best solution for you today, but I can say that we are about to add callbacks to the ensmallen optimization library, and then we will be able to easily add this support to mlpack's FFN code
< rcurtin>
however, if you want to get the performance value at each epoch, currently the best way is a little bit clunky but it should work:
< rcurtin>
when you call FFN::Train(), pass a custom optimizer that you have configured to only perform one epoch of training
< rcurtin>
then, manually compute the error measure and print it
< rcurtin>
you can do this in a for loop over the total number of epochs you hope to perform; each time you call Train(), it should not reset the parameters, so the net result is the same
< rcurtin>
that's an example of the idea. like I said, it's just a little clunky, but it will get better soon :)
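A minimal sketch of the one-epoch-at-a-time loop rcurtin describes, assuming the mlpack FFN API and ensmallen's Adam optimizer; the network architecture, hyperparameters, and variable names (trainData, trainLabels) are placeholders, and FFN::Evaluate(predictors, responses) is assumed to be available for computing the objective:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <ensmallen.hpp>
    #include <iostream>

    using namespace mlpack::ann;

    int main()
    {
      arma::mat trainData, trainLabels; // Fill these with your dataset.

      FFN<NegativeLogLikelihood<>> model;
      model.Add<Linear<>>(trainData.n_rows, 10);
      model.Add<SigmoidLayer<>>();
      model.Add<Linear<>>(10, 2);
      model.Add<LogSoftMax<>>();

      // Restrict the optimizer to a single pass over the data: maxIterations
      // is counted in points visited, so one epoch is trainData.n_cols.
      ens::Adam opt(0.01, 32, 0.9, 0.999, 1e-8, trainData.n_cols);

      for (size_t epoch = 0; epoch < 20; ++epoch)
      {
        // Train() does not reset the parameters, so each call continues from
        // where the previous epoch left off.
        model.Train(trainData, trainLabels, opt);

        // Manually compute and print the error measure after this epoch.
        std::cout << "epoch " << epoch << ": objective "
                  << model.Evaluate(trainData, trainLabels) << std::endl;
      }
    }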
vivekp has joined #mlpack
< mlutra>
Thank you very much for your quick answer, rcurtin. I will try that. BTW, do you have plans to add the Levenberg-Marquardt method to ensmallen?
jeffin143 has joined #mlpack
< jeffin143>
rcurtin: is there any way to find the type of the values stored in arma::mat?
< jeffin143>
Suppose I want to declare a map whose key type is the type of the values stored in an arma::mat; how should I do that?
< jeffin143>
Something like map<typeof(arma::mat), size_t> name
KimSangYeon-DGU has quit [Remote host closed the connection]
< lozhnikov>
jeffin143: arma::mat is an alias for arma::Mat<double>. So, it always stores doubles.
< sreenik[m]>
zoq: I don't seem to be able to figure out what the variables inputParameter and outputParameter do. They are present in most ANN layers, but could you let me know their exact purpose?
< lozhnikov>
jeffin143: I think each armadillo structure has the following typedef: typedef eT elem_type.
< lozhnikov>
jeffin143: If the type of the matrix is a template parameter e.g. MatType then you could write typename MatType::elem_type
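A small sketch of what lozhnikov describes, using elem_type to key a map on the matrix's element type; the BuildMap function and its body are hypothetical, just to show the typedef in use:

    #include <armadillo>
    #include <map>

    // elem_type recovers the element type of any Armadillo matrix: double
    // for arma::mat (an alias for arma::Mat<double>), float for arma::fmat,
    // and so on.
    template<typename MatType>
    std::map<typename MatType::elem_type, size_t> BuildMap(const MatType& m)
    {
      using ElemType = typename MatType::elem_type;
      std::map<ElemType, size_t> mapping;
      for (size_t i = 0; i < m.n_elem; ++i)
        mapping[m[i]] = i; // But see below: floating-point keys are risky.
      return mapping;
    }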
< jeffin143>
Yes just found out that
< jeffin143>
lozhnikov , yeh that was what I was looking for
< jeffin143>
elem_type, thanks :) once again
< lozhnikov>
jeffin143: I think it's not safe to use floating point values as the key due to various machine precision issues.
< jeffin143>
Um, then what should I do?
< lozhnikov>
Probably it's better to find another way.
< jeffin143>
Ok, I will find a way out
mlutra2 has joined #mlpack
< zoq>
sreenik[m]: They store intermediate results, like the output activation.
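A rough sketch of the pattern zoq describes; the exact member and method signatures differ between mlpack versions, so this is illustrative rather than the real layer code:

    #include <armadillo>

    template<typename InputDataType = arma::mat,
             typename OutputDataType = arma::mat>
    class ExampleLayer
    {
     public:
      // The network wires each layer's outputParameter in as `output`, so the
      // activation computed here is kept on the layer for later use (e.g. in
      // the backward pass).
      template<typename eT>
      void Forward(const arma::Mat<eT>& input, arma::Mat<eT>& output)
      {
        output = 2 * input; // Some transformation of the input.
      }

      InputDataType& InputParameter() { return inputParameter; }
      OutputDataType& OutputParameter() { return outputParameter; }

     private:
      InputDataType inputParameter;   // Intermediate result: the layer's input.
      OutputDataType outputParameter; // Intermediate result: the output activation.
    };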
mlutra has quit [Ping timeout: 244 seconds]
< zoq>
mlutra2: It's on my list of optimizers I'd like to implement, but it's not a priority.
< rcurtin>
zoq: someone sent me a link to RAdam today, maybe that's interesting too. there is a FastAI blog post about it but honestly the blog post is of fairly low quality
< rcurtin>
next time someone asks what they can write for ensmallen I'll have an idea though :)
< rcurtin>
in my company we are doing a lot of work that focuses on proximal gradient algorithms, which could be another interesting way to handle constraints
< rcurtin>
just something to think about though :)
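As an illustration of the idea rcurtin mentions (a sketch of the general technique, not anything from ensmallen or his company's work): a proximal gradient method minimizes f(x) + g(x) by alternating a gradient step on the smooth part f with the proximal operator of g; for g(x) = lambda * ||x||_1 that operator is soft-thresholding, which handles the penalty without ever differentiating g:

    #include <armadillo>

    // One ISTA-style proximal gradient step for min_x f(x) + lambda * ||x||_1.
    arma::vec ProximalGradientStep(const arma::vec& x,
                                   const arma::vec& gradF, // gradient of f at x
                                   const double stepSize,
                                   const double lambda)
    {
      // Gradient step on the smooth term f.
      const arma::vec y = x - stepSize * gradF;

      // Proximal operator of stepSize * lambda * ||.||_1: soft-thresholding.
      const double t = stepSize * lambda;
      return arma::sign(y) % arma::max(arma::abs(y) - t,
                                       arma::zeros<arma::vec>(y.n_elem));
    }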
< jeffin143>
lozhnikov: you were right about handling floating point, but what if we used a lower_bound search instead of find? Would it be ok?
< jeffin143>
It was for one-hot encoding
< jeffin143>
If the labels are 1, 1.5, 2, 2.5, then to support them we need to map them to different values
< zoq>
rcurtin: Will take a look at the blogpost.
< rcurtin>
zoq: the paper could be just fine (I only skimmed it) but I thought the blog post... needed some more clarity and correctness :)
< zoq>
rcurtin: The medium post?
< lozhnikov>
jeffin143: I think there are some cases when lower_bound() doesn't work very well. However you can call lower_bound(X - eps) in order to find X. This should work provided that you choose eps properly.
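A tiny sketch of the lookup lozhnikov suggests; the eps value is a placeholder and, as he notes below, it really depends on the data:

    #include <cmath>
    #include <map>

    // Approximate lookup of a floating-point key x in a std::map<double, size_t>.
    bool FindApprox(const std::map<double, size_t>& m,
                    const double x,
                    const double eps,
                    size_t& value)
    {
      // lower_bound(x - eps) returns the first key >= x - eps; if that key is
      // within eps of x, treat it as a match for x.
      const auto it = m.lower_bound(x - eps);
      if (it != m.end() && std::abs(it->first - x) <= eps)
      {
        value = it->second;
        return true;
      }
      return false;
    }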
< jeffin143>
Or maybe I can write a comparator function for the map, where I do x - eps?
< jeffin143>
Or x - key < eps?
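A sketch of that comparator idea (hypothetical code; note the caveat lozhnikov gives below that it depends on the comparator, since "within eps of each other" is not a transitive relation):

    #include <map>

    // Order keys only when they differ by more than eps, so that two keys
    // within eps of each other compare as equivalent.
    struct ApproxLess
    {
      double eps;
      bool operator()(const double a, const double b) const
      {
        return a < b - eps;
      }
    };

    // Usage: std::map<double, size_t, ApproxLess> m(ApproxLess{1e-8});
    // find(x) then matches any stored key within eps of x, but if stored keys
    // are themselves closer than eps apart, the strict weak ordering breaks.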
< rcurtin>
zoq: yeah, I think that was it
< zoq>
jeffin143: New picture, nice.
< jeffin143>
:) thanks zoq
< jeffin143>
Also I got placed at Nutanix :) Bangalore, India
< jeffin143>
I will be joining them in the first week of January
< zoq>
jeffin143: Congratulations!
< rcurtin>
jeffin143: congratulations!
< rcurtin>
:)
< jeffin143>
:) Thanks
< rcurtin>
I know Nutanix is a cloud infrastructure company but every time I hear the name I think that it must be something one spreads on toast...
< jeffin143>
Haha :)
< zoq>
:)
< rcurtin>
I guess I am mixing it up with Nutella
< sreenik[m]>
zoq: Oh, thanks.
< jeffin143>
rcurtin: wasn't there a video conference planned?
< rcurtin>
jeffin143: I'd like to but I just haven't had the time to catch up and schedule it
< rcurtin>
I wanted to come up with a list of things to address before a release first
< jeffin143>
Ohh, no issues. I thought it had been planned and I'd missed the schedule, so I just asked :)
< rcurtin>
yeah, don't worry, I'll send an email to the mlpack list and mention it here
< rcurtin>
I hope to have a chance to get this figured out this weekend, but there is some family in town so they will take first priority :)
< jeffin143>
Maybe after GSoC we could finally do a release with all the work
< jeffin143>
lozhnikov: sklearn has something known as CountVectorizer; should I implement it in mlpack as an encoding policy?
< lozhnikov>
jeffin143: I haven't looked at the bindings yet. But I think it's better to spend the last week on the existing PRs.
< lozhnikov>
jeffin143: Perhaps a comparator could work. It depends on the comparator.
< lozhnikov>
I hadn't heard of Nutanix before. But congratulations!
< jeffin143>
:) thanks, and yeah, I will spend the last week clearing up the existing PRs
< lozhnikov>
jeffin143: Regarding floating point values as the key: I think lower_bound(X - eps) is the most reasonable way. However, it has its own disadvantages.
< lozhnikov>
For example, eps depends on the data. You can't use a static value.
< lozhnikov>
I still think it's better to avoid FP keys.
< jeffin143>
Ok, then we should avoid one-hot encoding for double data types
< jeffin143>
And only allow int
< lozhnikov>
jeffin143: Could you point out the exact place where you want to introduce the map?