<lumi_232>
Hello, I am Dhruv Nagill, a third-year undergraduate from NIT Trichy. I have two years of experience in the reinforcement learning domain. I saw last year's project, "Addition of PPO, Twin Delayed DDPG, Hindsight Experience Replay to RL Codebase". I noticed that PPO is still listed as one of the algorithms to be implemented. I want to implement PPO and
<lumi_232>
the other algorithms listed in the Ideas for Google Summer of Code 2024. May I please know what steps I can take to maximise my chances of being selected for this project? PPO is a fundamental algorithm and is very often used as a reference; mlpack not having a PPO implementation is a major gap. I wish to implement and integrate it
<lumi_232>
before GSoC itself.
lumi_232 has quit [Quit: Client closed]
Dhruv has joined #mlpack
Dhruv is now known as lumi_232
Guest35 has joined #mlpack
<Guest35>
hey
<Guest35>
I am getting low cross-validation test accuracy with random forest. Can someone point out what is wrong with my code?