verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#4469 (master - 97d8883 : Marcus Edel): The build passed.
< MystikNinja>
zoq: Is there any way I could view the lrsdp.hpp code from when the mvu.cpp code was written? There are methods of the LRSDP object like A(), B(), and C() whose purpose I can't figure out.
< MystikNinja>
Those methods are being used in the mvu.cpp code, but they are nowhere to be found in the current lrsdp.hpp code
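(For background, the LRSDP class implements the low-rank (Burer-Monteiro) formulation of a standard-form semidefinite program, so the old A(), B(), and C() accessors most likely exposed the constraint matrices A_i, the constraint values b_i, and the objective matrix C of

\operatorname{opt}_{X \succeq 0} \; \operatorname{tr}(C X) \quad \text{subject to} \quad \operatorname{tr}(A_i X) = b_i, \qquad X = R R^{\top}.

This is a hedged reading of the 2012-era interface, not a description of the current lrsdp.hpp.)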
< rcurtin>
MystikNinja: just check out the last revision from when mvu.cpp was last modified
< MystikNinja>
Whew, the relevant code is from way back in 2012!
< MystikNinja>
Thanks rcurtin, this helps
MystikNinja has quit [Quit: Page closed]
< rcurtin>
MystikNinja: yeah, it was a long time ago now...
ricklly has joined #mlpack
< ricklly>
hello everyone, I'm interested in MVU, a fun project!
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
Nisha_ has quit [Quit: Page closed]
Nisha_ has joined #mlpack
ricklly has quit [Quit: Page closed]
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
sujith has joined #mlpack
< Atharva>
sumedhghaisas: on the GSoC ideas page, for the VAE project, it is mentioned that we need to reproduce the results from the two papers. Should this be done along with the unit tests for the framework, or would it be better if I make sample models in mlpack/models and reproduce them there?
< Atharva>
Trying to do this along with the unit tests doesn't seem right; we would be testing entire models.
Nisha_ has quit [Ping timeout: 260 seconds]
IAR has joined #mlpack
IAR_ has joined #mlpack
IAR has quit [Ping timeout: 246 seconds]
vivekp has quit [Ping timeout: 256 seconds]
IAR_ has quit [Remote host closed the connection]
IAR has joined #mlpack
IAR has quit [Read error: Connection reset by peer]
IAR has joined #mlpack
IAR has quit [Ping timeout: 245 seconds]
vivekp has joined #mlpack
nikhilgoel1997 has joined #mlpack
< sumedhghaisas>
@Atharva: unit tests will involve checking functionality; the models repository would be ideal for reproducing the results
< sumedhghaisas>
apart from reproducing results, unit tests should check the API that you have built for the VAE
< Atharva>
Okay, that’s what I will propose.
< sumedhghaisas>
check feedforward and recurrent tests for examples
< Atharva>
Yeah
< Atharva>
I am writing my proposal in markdown, I will upload the draft as soon as I can
< sumedhghaisas>
good to hear :)
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Read error: No route to host]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
IAR has joined #mlpack
sumedhghaisas2 has joined #mlpack
IAR has quit [Ping timeout: 240 seconds]
sumedhghaisas has quit [Ping timeout: 264 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
Ravi_ has joined #mlpack
Ravi_ has quit [Client Quit]
ravikiran0606 has joined #mlpack
nikhilgoel1997 has quit [Quit: Connection closed for inactivity]
travis-ci has joined #mlpack
< travis-ci>
ShikharJ/mlpack#114 (ConvolutionalLayer - 49e3cda : Shikhar Jaiswal): The build has errored.
IAR has quit [Read error: Connection reset by peer]
IAR has joined #mlpack
< ravikiran0606>
Hello, I am Ravi Kiran S, in my third year of B.E. Computer Science and Engineering at Anna University, College of Engineering, Guindy Campus.
< ravikiran0606>
I am very much interested in contributing to the mlpack project. I would like to work on "String Processing Utilities" and "Essential Deep Learning Modules". Is there any preliminary set of tasks that I need to do before applying to GSoC 2018? I would like to submit my draft proposal soon; is there a template available for GSoC proposals?
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
IAR has quit [Remote host closed the connection]
IAR has joined #mlpack
govg has joined #mlpack
< Atharva>
zoq: What do you think about the discussion on VAEs on yesterday’s irc logs?
IAR has quit [Ping timeout: 252 seconds]
yashsharan has joined #mlpack
< yashsharan>
Hello, I was working on the issue of implementing new OpenAI environments and I had some doubts
csoni has joined #mlpack
csoni has quit [Ping timeout: 256 seconds]
< zoq>
Atharva: I'll have to think about it, but introducing a separate class is probably the easiest.
< zoq>
yashsharan: Let me know if you need help.
< Atharva>
zoq: yes, for now that's what I will be proposing. I have been thinking about it and I think we can make it work really well.
< yashsharan>
Okay, so here is where I am facing some confusion. I'm implementing the MountainCarContinuous environment and have written the code for it. But I saw that the rendering part of the environment is done by gym_tcp_api. However, I am not able to figure out where the API is being called in the mlpack repository.
csoni has quit [Read error: Connection reset by peer]
sujith has quit [Ping timeout: 260 seconds]
< yashsharan>
Oh, I get it now. Thanks for clearing my doubts. Also, I was wondering why gym_tcp_api isn't already included in the mlpack codebase, since ultimately if you want to run your RL agent you would need that API.
< zoq>
That would add another dependency (Boost Asio); for a simple script you could also just link against it.
ImQ009 has joined #mlpack
< yashsharan>
Oh, I see. Thanks. So ultimately, if I want to train my RL algorithms I would be doing it via gym_tcp_api, right?
< zoq>
right
< yashsharan>
And the algorithms for reinforcement learning are written under methods/reinforcement_learning, right? But when I looked into the example file in gym_tcp_api, I saw that there isn't any algorithm being called from the reinforcement learning methods
< yashsharan>
so if I want to train my RL agent using a specific algorithm, say Double DQN, how will I call that method?
< yashsharan>
Also, I had a suggestion. In the mlpack documentation it's not mentioned that to train an RL agent you would need gym_tcp_api. If that could be added to the documentation it would be helpful.
< zoq>
Actually, it's not necessary; Gym is neat for running your RL method against some environments, but in the end you define your own problem and use mlpack to solve it.
yashsharan_ has joined #mlpack
yashsharan has quit [Ping timeout: 260 seconds]
< yashsharan_>
Ohh yes, it seems you are right, since Gym isn't the only option where people train there RL agents.
< yashsharan_>
*their
< zoq>
Adding a section as an example is still a good idea.
manthan has joined #mlpack
csoni has joined #mlpack
< yashsharan_>
yeah that would be a good option
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
s1998_ has joined #mlpack
< manthan>
@rcurtin @zoq, I set the tolerance high for the GradientBatchNormTest(), as I manually calculated the values.
< manthan>
can you let me know if you found something wrong with the implementation?
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< s1998_>
zoq: I have submitted a proposal for essential deep learning modules. Can you please take a look?
< manthan>
@rcurtin, @zoq could you please have a look at the proposal draft that I have submitted?
< zoq>
manthan: On my system, the gradient check failed about 50% of the time with different random seeds. Also, the tolerance was quite high.
rf_sust2018 has joined #mlpack
< manthan>
the tolerance you mean is the tolerance for BatchNormTest, right?
< zoq>
right
< manthan>
ya, that's because I manually calculated the values for the passes using the formula, and so there were precision problems
< zoq>
I was talking about the CheckGradient tolerance (BOOST_REQUIRE_LE(CheckGradient(function), 1e-3);)
< manthan>
ohh, ya, I agree it's more than the 1e-4 that is used for other tests, but I think that is because changes to the parameters in this case will cause larger gradient shifts, since we are essentially changing the shift and scale parameters.
< manthan>
but I thought since 1e-3 is comparable, it shouldn't be a problem.
< manthan>
However, I didn't try with random seeds, so I could have missed something in that case.
< zoq>
So, even if we lower the tolerance once more, it shouldn't fail like 50% of the time.
< zoq>
If you have the time, please feel free to recheck the backward step.
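For reference, the kind of gradient check being discussed compares a layer's analytic gradient against a central-difference estimate and requires their relative error to stay below a tolerance. A minimal sketch of the idea (illustrative only; mlpack's actual CheckGradient() helper in the test suite differs in its details):

#include <algorithm>
#include <cstddef>
#include <functional>
#include <armadillo>

// f: objective evaluated at a parameter vector; grad: analytic gradient at x.
double GradientCheck(const std::function<double(const arma::vec&)>& f,
                     const arma::vec& x,
                     const arma::vec& grad,
                     const double eps = 1e-6)
{
  arma::vec numGrad(x.n_elem);
  for (size_t i = 0; i < x.n_elem; ++i)
  {
    arma::vec xp = x, xm = x;
    xp(i) += eps;
    xm(i) -= eps;
    // Central differences: (f(x + eps) - f(x - eps)) / (2 * eps).
    numGrad(i) = (f(xp) - f(xm)) / (2.0 * eps);
  }
  // Relative error between analytic and numerical gradients; a test would
  // require this to be below some tolerance (e.g. 1e-3 in the quoted check).
  return arma::norm(grad - numGrad, 2) /
      std::max(arma::norm(grad, 2) + arma::norm(numGrad, 2), 1e-12);
}

A check that only passes with a loosened tolerance, or that fails for some random seeds, usually points at a small error in the backward pass rather than at pure floating-point noise.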
csoni has joined #mlpack
< manthan>
@zoq, sure, I will go through it. Hope something interesting comes up :D
csoni has quit [Read error: Connection reset by peer]
s1998_ has quit [Quit: Page closed]
< ckeshavabs>
Hello, I had a doubt regarding some of the implementation details used in DQN. Is there any module in mlpack that helps us compute clipped gradients after performing back-propagation? I read that clipping gradients stabilises the learning process.
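For context, gradient clipping itself is easy to apply by hand with Armadillo, independent of whatever ready-made module mlpack may or may not expose; a minimal sketch of the two common variants (element-wise clamping and norm-based rescaling), with purely illustrative names:

#include <armadillo>

// Element-wise clipping: bound every gradient component to [minValue, maxValue].
inline void ClipElementwise(arma::mat& gradient,
                            const double minValue,
                            const double maxValue)
{
  gradient = arma::clamp(gradient, minValue, maxValue);
}

// Norm-based clipping: rescale the whole gradient if its Frobenius norm
// exceeds a threshold, preserving its direction.
inline void ClipByNorm(arma::mat& gradient, const double maxNorm)
{
  const double norm = arma::norm(gradient, "fro");
  if (norm > maxNorm)
    gradient *= (maxNorm / norm);
}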
sumedhghaisas has quit [Ping timeout: 256 seconds]
< ckeshavabs>
@zoq, thank you for the resources. I will check them out.
< zoq>
Let me know if I should clarify anything.
sumedhghaisas has joined #mlpack
navneet has quit [Ping timeout: 276 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
rf_sust2018 has quit [Quit: Leaving.]
navneet has joined #mlpack
yashsharan_ has quit [Ping timeout: 260 seconds]
IAR has joined #mlpack
IAR has quit [Ping timeout: 264 seconds]
Prabhat-IIT has joined #mlpack
Prabhat-IIT has quit [Ping timeout: 260 seconds]
Prabhat-IIT has joined #mlpack
< Prabhat-IIT>
zoq: you there?
IAR has joined #mlpack
< zoq>
Prabhat-IIT: yes
< Prabhat-IIT>
zoq: Regarding the handling of constraints, after much thought I've concluded that we can modify the existing functions to return the simple inequality constraints involving the lower and upper bounds of the feasible space. To handle custom-made constraints like x^2 + y^2 <= 1, the function should return true or false depending on whether the custom constraints are satisfied.
< Prabhat-IIT>
Now the function behaviour is clear to us.
< Prabhat-IIT>
Then, further handling depends on the optimizer
< Prabhat-IIT>
There can be many optimizers apart from PSO implemented in the future for nonlinear constrained optimization. Each can handle constraints in its own way.
< zoq>
Agreed, I had the same thought.
< Prabhat-IIT>
For example, I've thought that a simple way to handle constraints in PSO would be to update the particle position and velocity only if the particle's next position is in the feasible space.
< zoq>
Prabhat-IIT: To me, this sounds like it would cover a lot of problems, so for now I say let's go with the simple solution.
< Prabhat-IIT>
zoq: Then I'll mention a simple approach in my proposal and add a clause that the final method will be flexible and will be implemented only after thorough discussion with the mentors.
< zoq>
Prabhat-IIT: Sounds good to me.
< Prabhat-IIT>
zoq: One thing I'd like to ask about is the initialization phase. In PSO we initialize randomly. As far as the upper and lower bounds are concerned, we can generate a uniform random distribution within those bounds, but the problem arises with custom-made constraints like x^2 + y^2 < 1.
< Prabhat-IIT>
If we try to generate initial positions randomly until all the positions satisfy this custom constraint, there's no guarantee at all that they ever will.
< Prabhat-IIT>
zoq: So, do you have something in mind regarding this? How can we effectively handle initialization for custom constraints?
< zoq>
Yes, there is no guarantee that it will converge, but a user could specify the maximum number of iterations. And a user could also provide initial parameters.
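A rough sketch of the scheme discussed above: box bounds plus a feasibility predicate for custom constraints, rejection-based initialization capped by a maximum number of attempts, and an update that only accepts feasible moves. The interface names here (LowerBound(), UpperBound(), Feasible()) are illustrative assumptions, not an existing mlpack API:

#include <cstddef>
#include <armadillo>

// Illustrative constrained-problem interface (hypothetical, not an existing
// mlpack API): simple box bounds plus a predicate for custom constraints
// such as x^2 + y^2 <= 1.
class ConstrainedProblem
{
 public:
  arma::vec LowerBound() const { return arma::vec({ -1.0, -1.0 }); }
  arma::vec UpperBound() const { return arma::vec({  1.0,  1.0 }); }

  bool Feasible(const arma::vec& x) const
  {
    return arma::dot(x, x) <= 1.0;  // x^2 + y^2 <= 1.
  }
};

// Rejection-based initialization: sample uniformly inside the box bounds and
// keep only feasible points, capped by a maximum number of attempts.
template<typename ProblemType>
bool InitializeParticle(const ProblemType& problem,
                        arma::vec& position,
                        const size_t maxAttempts = 1000)
{
  const arma::vec lo = problem.LowerBound();
  const arma::vec hi = problem.UpperBound();
  for (size_t attempt = 0; attempt < maxAttempts; ++attempt)
  {
    position = lo + (hi - lo) % arma::randu<arma::vec>(lo.n_elem);
    if (problem.Feasible(position))
      return true;
  }
  return false;  // No feasible point found within the attempt budget.
}

// Feasibility-gated PSO step: the move is only accepted if the candidate
// position satisfies the constraints; otherwise position and velocity stay.
template<typename ProblemType>
void UpdateParticle(const ProblemType& problem,
                    arma::vec& position,
                    arma::vec& velocity,
                    const arma::vec& newVelocity)
{
  const arma::vec candidate = position + newVelocity;
  if (problem.Feasible(candidate))
  {
    velocity = newVelocity;
    position = candidate;
  }
}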
< Prabhat-IIT>
zoq: Then it'll be all good to go :)
< zoq>
Prabhat-IIT: Nice :)
< Prabhat-IIT>
I hope we can turn our PSO implementation into a full-fledged toolbox suitable for everyone from newcomers to researchers :)
< Prabhat-IIT>
zoq: one more thing, how do you like the idea of the `PARTICLE` and `topologies` abstractions?
< Prabhat-IIT>
I've introduced them just because there are so many variants based on different topologies and particle behaviours. If we stick to just one, it won't be flexible enough to be used in varied engineering applications.
< Prabhat-IIT>
These will allow anyone to implement their own topology and particle easily, with minimal effort, to suit their specific needs.
< zoq>
Yeah, I get the idea, I think we could slightly simplify the classes, but it works as it is and could be used as a basis for a discussion.
< Prabhat-IIT>
zoq: The classes will be more elegant and synchronized once we get into the actual implementation. I'll also work on simplifying them.
< Prabhat-IIT>
zoq: Also, how much technical detail should be included in the proposal? Too much technical detail will make it messy, I think.
< ckeshavabs>
please correct me if I am wrong about any of my assumptions
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 276 seconds]
Prabhat-IIT has quit [Ping timeout: 260 seconds]
ckeshavabs has quit [Quit: Page closed]
Prabhat-IIT has joined #mlpack
Nisha_ has joined #mlpack
< Nisha_>
Hi @zoq, as you suggested, I am working on my draft proposal for quasi-RNNs. Should I include the mathematical details (regarding forget gates, input gates, etc.) too?
Prabhat-IIT has quit [Ping timeout: 260 seconds]
< Nisha_>
Also, which issues (related to this topic) can I solve to increase my chances of getting selected?
Nisha_ has quit [Quit: Page closed]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
Nisha_ has joined #mlpack
witness has quit [Quit: Connection closed for inactivity]
manthan has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 246 seconds]
< Atharva>
I am facing a weird problem: on every execution, the randu/randn functions of the arma library give me the same values. Why is this happening?
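For reference, Armadillo's random number generator starts from the same default seed in every run, so randu()/randn() repeat the same sequence unless the seed is set explicitly. A minimal sketch (the commented mlpack::math::RandomSeed() call is, to the best of my knowledge, the usual mlpack-side wrapper):

#include <ctime>
#include <mlpack/core.hpp>

int main()
{
  // Seed Armadillo's RNG; without this, every run of randu()/randn()
  // produces the identical sequence.
  arma::arma_rng::set_seed_random();    // random seed each run, or
  // arma::arma_rng::set_seed(42);      // fixed seed for reproducible tests.

  // mlpack's wrapper seeds both Armadillo's and the standard RNG:
  // mlpack::math::RandomSeed((size_t) std::time(NULL));

  arma::mat samples = arma::randu<arma::mat>(3, 3);
  samples.print("samples");
  return 0;
}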
< zoq>
Nisha_: No need to go into the mathematical details; however, if you think there is something interesting you'd like to describe, please feel free to provide some more details.
< zoq>
Nisha_: Ideally you can show that you get the overall idea.
< zoq>
Nisha_: I'm not sure there is an open issue at this point, but you can always try to improve an existing method.
< zoq>
Prabhat-IIT: Yeah, that is the tricky part, you can expect that any reviewer is kinda familiar with the topic.
< Nisha_>
Okay, that sounds great. Will submit my draft proposal asap :)
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
IAR has quit [Quit: Leaving...]
< rcurtin>
I gave an hour-long talk today on machine learning, C++, and mlpack; maybe some of you might find the slides interesting: http://www.ratml.org/misc/mlpack_cb.pdf
< rcurtin>
it's tough to gauge the feedback from the talk (I think it went well, the audience seemed to enjoy it), but one of the themes that I hear is that data scientists prefer not to work in C++, so I think this emphasizes the importance of the Python bindings and bindings to other languages
< Atharva>
rcurtin: I am planning to write a tutorial on VAEs in mlpack as a part of the project. Do you think it's a good idea, or should I use that time to implement more functionality?
< rcurtin>
Atharva: personally I think documentation is extremely important---you can write the best code in the world, but if nobody knows how to use it, it will never see the light of day :)
< Atharva>
rcurtin: yeah, I would also want the code to be an integral part of the library that people know how to use. So I will include a tutorial. :)
< rcurtin>
sounds good
< Atharva>
I have also put a CLI/Python binding for the VAE class as one of my objectives.
< zoq>
rcurtin: Looks really interesting; not sure I get the image reference on page 19
< yashsharan>
I was wondering, is it possible to have GPU support for mlpack, maybe in the future?
< zoq>
yashsharan: Currently, you could build against NVBLAS, and in the future Bandicoot (GPU accelerator add-on for Armadillo) will help.
< yashsharan>
ohh nice. Also, I'm stuck on the AppVeyor CI integration issue in the models repository. I have mentioned the error in the pull request. If you could have a look at it, that would be great. Thanks. https://github.com/mlpack/models/pull/10
yashsharan has quit [Quit: Page closed]
< zoq>
"This will _definitely_ get me best paper at ICML! I can // feel it!" :)
Arshdeep has joined #mlpack
rf_sust2018 has quit [Quit: Leaving.]
ImQ009 has quit [Quit: Leaving]
< Nisha_>
Great slides @rcurtin, I found them very interesting. As you mentioned, binding to Python or other languages is extremely important. I had a doubt regarding this: how exactly do we go about implementing Python bindings in mlpack? Will they be specific to a particular class? If so, I would like to include this too :)
Arshdeep has quit [Remote host closed the connection]
< rcurtin>
Nisha_: yeah, basically we use an automatic binding generator to provide the exact same interface for Python, the command-line, and other languages (although nothing else is implemented yet)
Arshdeep has joined #mlpack
< Arshdeep>
I want to solve issues in string processing utilities; could you guide me to where I can find them?
< rcurtin>
Arshdeep: there are currently no open issues for the string processing utilities project
< rcurtin>
since the functionality is new, there is nothing related to be solved at this time
< Arshdeep>
thanks, could you also tell me how to approach the string processing utilities project, as I want to work on it