ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< chopper_inbound4>
hey Joel Joseph (Gitter), can you check again whether the softmax layer works for you? I've made the required changes.
< AbishaiEbenezerG>
@nishantkr18 I'm having a look at #1912 and would be happy to help...
< AbishaiEbenezerG>
also @zoq, we don't currently have any implementation of a policy gradient method other than PPO, right?
< AbishaiEbenezerG>
i mean - i couldn't find any other PR or issue related to having a pg method
< AbishaiEbenezerG>
basically, i'm looking to train an agent with a simple policy gradient method and see how it fares in a gym env.
< AbishaiEbenezerG>
could i open an issue on this?
< zoq>
AbishaiEbenezerG: Sure, feel free.
< sreenik[m]>
freenode_gitter_prince776[m]: Awesome. I was thinking about something else, this is alright
< PrinceGuptaGitte>
Alright, then I'll complete the most interesting part and add 'std::string name' to all the remaining layers :)
< AnjishnuGitter[m>
Hi @zoq, if you find some time, could you take a look at #2345 ? Thanks!
< ShikharJaiswalGi>
@rcurtin Can the existing decision tree implementations be used for regression?
< rcurtin>
ShikharJaiswalGi: in practice, no, but it wouldn't be too hard to adapt it :) basically new loss functions are needed, and then a new internal structure held by each node to store the regression coefficients / etc. needed for prediction
< rcurtin>
it's definitely useful support that would be awesome to add :)
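For a concrete picture of the "new loss function" part, here is a minimal sketch of what a regression gain could look like. The shape loosely mirrors mlpack's classification gains (e.g. `GiniGain`), but regression support doesn't exist yet, so the class name and interface here are hypothetical:

```cpp
#include <armadillo>

// Hypothetical variance-reduction gain for a regression tree: a split
// is good if it lowers the mean squared error of predicting the mean
// response in each child.  The real interface a regression tree would
// need is exactly the adaptation being discussed above.
class MSEGain
{
 public:
  // Returns the negative MSE of predicting the mean response;
  // higher is better, so a perfectly pure leaf scores 0.
  static double Evaluate(const arma::rowvec& responses)
  {
    if (responses.n_elem == 0)
      return 0.0;
    const double mean = arma::mean(responses);
    return -arma::mean(arma::square(responses - mean));
  }
};
```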
< ShikharJaiswalGi>
Also, is the current implementation making use of gradients? Is it Gradient Boosted Decision Trees?
< rcurtin>
nope, it's just a regular decision tree, but I do believe it can be trained in a weighted way
< rcurtin>
so, it would be very easy to write a gradient boosting wrapper around it (in fact, you can use AdaBoost with decision trees as the weak learner to get *close* to GBDTs, but I don't think that the algorithms *exactly* line up, and AdaBoost was designed for a weak learner, not a full decision tree)
< rcurtin>
our random forest implementation is basically an OpenMP for loop around the DecisionTree constructor
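That idea, reduced to a sketch (the real RandomForest also subsamples dimensions at each split and aggregates votes at prediction time; the bootstrap indexing below is just one way to do it):

```cpp
#include <vector>
#include <mlpack/methods/decision_tree/decision_tree.hpp>

using namespace mlpack;

std::vector<tree::DecisionTree<>> TrainForest(const arma::mat& data,
                                              const arma::Row<size_t>& labels,
                                              const size_t numClasses,
                                              const size_t numTrees)
{
  std::vector<tree::DecisionTree<>> trees(numTrees);

  // Essentially the whole algorithm: independent trees, trained in
  // parallel on bootstrap resamples of the data.
  #pragma omp parallel for
  for (size_t t = 0; t < numTrees; ++t)
  {
    const arma::uvec idx = arma::randi<arma::uvec>(data.n_cols,
        arma::distr_param(0, (int) data.n_cols - 1));
    const arma::mat bootData = data.cols(idx);
    const arma::Row<size_t> bootLabels = labels.cols(idx);
    trees[t] = tree::DecisionTree<>(bootData, bootLabels, numClasses);
  }
  return trees;
}
```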
< ShikharJaiswalGi>
Hmm, I don't think AdaBoost would be that effective, but I haven't tried a full-fledged decision tree either. I've tried using DecisionStumps with ensembles in the past; they work well, though I'm not sure they surpass individual GBTs, and they certainly wouldn't in terms of training time.
< rcurtin>
give it a shot, AdaBoost is old and not trendy but it is an effective boosting technique :) if your data match the assumptions of the technique (can't remember what exactly those are at the moment), it may be quite effective
< ShikharJaiswalGi>
I don't think we have tree pruning support either?
< rcurtin>
there were PRs opened for MDL-based pruning, but, they were never finished and merged, unfortunately
< rcurtin>
you can do something kind of like pruning by setting minimum_leaf_size or minimum_gain_split
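In code, those growth-limiting parameters go straight into the `DecisionTree` constructor (the values below are arbitrary examples):

```cpp
#include <mlpack/methods/decision_tree/decision_tree.hpp>

using namespace mlpack;

void PrePrunedTree(const arma::mat& data,
                   const arma::Row<size_t>& labels,
                   const size_t numClasses)
{
  const size_t minimumLeafSize = 20;    // no leaf may hold fewer than 20 points
  const double minimumGainSplit = 1e-4; // refuse splits with tiny gain
  tree::DecisionTree<> tree(data, labels, numClasses,
                            minimumLeafSize, minimumGainSplit);
}
```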
< ShikharJaiswalGi>
Yeah, but that would be "pre-pruning" and not "post-pruning" I feel.
< rcurtin>
exactly, you're right
< ShikharJaiswalGi>
Apparently people have done studies comparing AdaBoost and gradient boosting in the past. Gradient boosting is apparently the more general technique.
dendre has joined #mlpack
ImQ009 has quit [Quit: Leaving]
< PrinceGuptaGitte>
@joeljosephjin This is because you haven't added Softmax layer to `LayerTypes` which is present in `layer_types.hpp` file.
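For anyone following along, the change being described is roughly this (heavily abbreviated; the real `LayerTypes` in src/mlpack/methods/ann/layer/layer_types.hpp is a long boost::variant over every layer's pointer type):

```cpp
// Forward declaration of the new layer, as layer_types.hpp does for
// the other layers.
template<typename InputDataType, typename OutputDataType>
class Softmax;

// The new layer's pointer type must appear in the LayerTypes variant,
// or the network's visitor machinery won't know about it.
using LayerTypes = boost::variant<
    /* ...all the existing layer pointer types..., */
    Softmax<arma::mat, arma::mat>*
>;
```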
< PrinceGuptaGitte>
I think you're using `std::move()` when calling the forward and backward functions, but we don't need that with an lvalue reference, maybe.
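The `std::move()` point in one small example (the signatures here are hypothetical, just layer-style functions):

```cpp
#include <armadillo>
#include <utility>

// Hypothetical layer-style signature: input is taken by reference.
void Forward(const arma::mat& input, arma::mat& output)
{
  output = input; // reads input; the caller keeps ownership
}

void Caller(arma::mat& input, arma::mat& output)
{
  Forward(input, output);            // fine: the reference binds directly
  Forward(std::move(input), output); // pointless here: a const lvalue
                                     // reference binds to the rvalue too,
                                     // and nothing is actually moved
}
```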
< JoelJosephGitter>
i'm not using it i think, i copied the code directly from the PR
< JoelJosephGitter>
copy-pasted everything from softmax.hpp and softmax_impl.hpp, and made changes in layer.hpp and layer_types.hpp