verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ASamir has joined #mlpack
ASamir has quit [Remote host closed the connection]
ASamir has joined #mlpack
eugene_ has joined #mlpack
eugene_ has quit [Client Quit]
Nie has joined #mlpack
ASamir has quit [Remote host closed the connection]
< Nie>
Hello guys, I installed mlpack on Ubuntu 16.04 with this command: sudo apt-get install libmlpack-dev. But when I try a command like mlpack_fastmks, it says: The program 'mlpack_fastmks' is currently not installed. You can install it by typing: sudo apt install mlpack-bin
< Nie>
I'm confused about it.
govg has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
luffy1996 has quit [Read error: Connection reset by peer]
ironstark has quit [Read error: Connection reset by peer]
anirudhm has quit [Read error: Connection reset by peer]
lozhnikov has quit [Ping timeout: 260 seconds]
petris has quit [Read error: Connection reset by peer]
gtank has quit [Ping timeout: 256 seconds]
prashanthd has quit [Ping timeout: 256 seconds]
luffy1996 has joined #mlpack
gtank has joined #mlpack
anirudhm has joined #mlpack
prashanthd has joined #mlpack
Nie has quit [Quit: Page closed]
Nie has joined #mlpack
petris has joined #mlpack
Nie has quit [Client Quit]
ironstark has joined #mlpack
lozhnikov has joined #mlpack
Prabhat-IIT has joined #mlpack
< Prabhat-IIT>
zoq: I've been trying to implement the SAGA optimizer that you proposed to me, along with the Xavier initialization. I wasn't able to fully understand the code the authors provide, but I do understand the algorithm, so I've implemented a straightforward version of it in Python first. I'd like to know whether this straightforward version could be used, as it seems to be only 3 times slower than the authors' implementation.
< Prabhat-IIT>
The implementation's memory usage can also be improved by using mini-batches for each epoch.
< Prabhat-IIT>
If it can be incorporated into mlpack in this form, then I can go ahead and code it; otherwise I'll have to spend some more time understanding the authors' implementation. I think the only difference will be in supporting sparse gradients instead of dense ones!
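For reference, the SAGA update being discussed (Defazio et al., https://arxiv.org/abs/1407.0202) keeps a table of the last gradient computed for each sample and uses it to variance-reduce each stochastic step. Below is a minimal dense sketch in C++/Armadillo for a least-squares objective; the function name, objective, and step-size handling are illustrative assumptions, not mlpack or author code.

    #include <armadillo>

    // Sketch of the SAGA update for least squares:
    //   f_i(w) = 0.5 * (dot(x_i, w) - y_i)^2.
    // Illustrative only -- not mlpack code.
    arma::vec SagaLeastSquares(const arma::mat& X,  // n x d data matrix
                               const arma::vec& y,  // n responses
                               const double stepSize,
                               const size_t iterations)
    {
      const size_t n = X.n_rows;
      arma::vec w(X.n_cols, arma::fill::zeros);

      // Table of the last gradient evaluated for each sample, and its average.
      arma::mat gradTable(X.n_cols, n, arma::fill::zeros);
      arma::vec gradAvg(X.n_cols, arma::fill::zeros);

      for (size_t t = 0; t < iterations; ++t)
      {
        // Pick one sample uniformly at random.
        const size_t j = arma::as_scalar(
            arma::randi<arma::uvec>(1, arma::distr_param(0, (int) n - 1)));

        // Fresh gradient of f_j at the current iterate.
        const arma::vec gNew = X.row(j).t() * (arma::dot(X.row(j), w) - y(j));
        const arma::vec gOld = gradTable.col(j);

        // SAGA step: unbiased, variance-reduced gradient estimate.
        w -= stepSize * (gNew - gOld + gradAvg);

        // Refresh the stored gradient and the running average.
        gradAvg += (gNew - gOld) / double(n);
        gradTable.col(j) = gNew;
      }

      return w;
    }

The d x n gradient table is the memory cost mentioned above; the paper notes that for linear models each stored gradient is x_i times a scalar, so only the scalar needs to be kept, and mini-batching shrinks the table further.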
Prabhat-IIT has quit [Ping timeout: 260 seconds]
Prabhat-IIT has joined #mlpack
rajeshdm9 has joined #mlpack
Nie has joined #mlpack
Nie has quit [Client Quit]
qwqw has joined #mlpack
qwqw has quit [Quit: Page closed]
rajeshdm9 has quit [Ping timeout: 260 seconds]
Sayan98 has joined #mlpack
Sayan98 has left #mlpack []
avtansh has quit [Ping timeout: 260 seconds]
Prabhat-IIT has quit [Ping timeout: 260 seconds]
wenhao has joined #mlpack
sumedhghaisas has joined #mlpack
robertohueso has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 268 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
qwert123 has joined #mlpack
qwert123 has quit [Client Quit]
robust has joined #mlpack
robertohueso has quit [Ping timeout: 276 seconds]
< zoq>
Prabhat-IIT: Hello, can you point me to the faster implementation? Also, if you like, open a PR with the current implementation and we can figure out how to make it faster. For sparse support, check out http://www.mlpack.org/docs/mlpack-git/doxygen/optimizertutorial.html
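For context, the tutorial linked above describes the decomposable function API that mlpack's SGD-style optimizers consume, and a SAGA optimizer would take the same template parameter. A rough sketch of such a function class follows; it is simplified and the class name is made up, so check the tutorial for the authoritative signatures.

    #include <armadillo>

    // Sketch of a decomposable objective as mlpack's SGD-style optimizers
    // expect it: NumFunctions(), Evaluate(), and Gradient() per term.
    // Simplified illustration -- see the tutorial for the exact contract.
    class LeastSquaresFunction
    {
     public:
      LeastSquaresFunction(const arma::mat& X, const arma::vec& y) :
          X(X), y(y) { }

      // Number of separable terms f_i in the objective.
      size_t NumFunctions() const { return X.n_rows; }

      // Value of the i'th term at the given coordinates.
      double Evaluate(const arma::mat& w, const size_t i) const
      {
        const double r = arma::dot(X.row(i), w) - y(i);
        return 0.5 * r * r;
      }

      // Gradient of the i'th term; writing an arma::sp_mat here instead is
      // where sparse-gradient support would come in.
      void Gradient(const arma::mat& w, const size_t i,
                    arma::mat& gradient) const
      {
        gradient = X.row(i).t() * (arma::dot(X.row(i), w) - y(i));
      }

     private:
      const arma::mat& X;
      const arma::vec& y;
    };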
< zoq>
dk97[m]: Can you elaborate on the idea?
robertohueso has joined #mlpack
Avtansh has joined #mlpack
namratamukhija has joined #mlpack
robertohueso has quit [Quit: Leaving.]
< namratamukhija>
hi, I'm trying to run all the test cases in mlpack using the command ./mlpack_test. I get the error: 55 failures detected in test suite "mlpackTest". Is this the right way to run the test suite? Most of the errors are of the type: fatal error in "DualTreeVsNaive2": Cannot load test dataset test_data_3_1000.csv!. The others are memory access violations and runtime errors.
< rcurtin>
namratamukhija: be sure that you run the test from the build directory, not the build/bin directory
< rcurtin>
you can also type 'make test' from your build directory
< rcurtin>
you will see that test_data_3_1000.csv is in your build directory but not in the bin/ directory
wenhao has quit [Ping timeout: 260 seconds]
< namratamukhija>
rcurtin: thank you for pointing that out. I ran make test from the build directory; it seems stuck on "Running tests... Start 1: mlpack_test" for the past 2-3 minutes. I'm not sure how long it's supposed to take. Can you let me know?
Avtansh has quit [Ping timeout: 260 seconds]
< namratamukhija>
rcurtin: It worked! Thanks! It took about 400 seconds. Not sure, but should there be some kind of verbosity option while running all the tests? Something like: running test file *name of test file*. If that's a good idea, I'll be happy to work on it.
< zoq>
namratamukh: run with -p to get more information
jenkins-mlpack has quit [Read error: Connection reset by peer]
< rcurtin>
oops, sorry jenkins-mlpack... that is what happens when you assign two hosts to the same IP... :)
jenkins-mlpack has joined #mlpack
< namratamukhija>
zoq: Thanks
ShikharJ has joined #mlpack
namratamukhija has quit [Quit: Page closed]
17WAAVQQJ has joined #mlpack
< 17WAAVQQJ>
mlpack/mlpack#4205 (master - 1572e91 : Ryan Curtin): The build has errored.
< Prabhat-IIT>
zoq: Here's a faster implementation given by the authors themselves in Cython: https://arxiv.org/pdf/1407.0202.pdf (see the end of the paper). It's for sparse least squares and ridge regression.
Prabhat-IIT has quit [Ping timeout: 260 seconds]
ImQ009 has joined #mlpack
< rcurtin>
ok, all of the build infrastructure is back online, including the sun systems
< rcurtin>
I wish it hadn't taken so long, but now we should not need to have any more downtime for quite a while (...I hope...)
< dk97[m]>
@zoq I was thinking of making a separate folder for loss functions, similar to how there are different folders for activation functions and initialization methods.
< dk97[m]>
The loss folder would contain implementations of the different loss functions.
< dk97[m]>
Like MSE, KL divergence, and cross entropy.
Rithesh has joined #mlpack
Rithesh has left #mlpack []
< dk97[m]>
This could be useful for localising the loss functions in one place. Right now, for example, the KL divergence loss is defined inside the sparse autoencoder implementation, but the same loss will be needed in the VAE as well.
< zoq>
dk97[m]: I see; I'm fine with moving the existing ann layers into a separate folder.
< zoq>
rcurtin: awesome
< zoq>
Prabhat-IIT: Okay, thanks, will take a look at the paper.
< dk97[m]>
zoq: Actually, I was thinking of having the loss function inside the ann folder.
< dk97[m]>
ann->loss_functions->MSE.hpp
< dk97[m]>
something like this
< zoq>
dk97[m]: sounds good
< dk97[m]>
So should I open an issue?
< dk97[m]>
Or just make a PR?
< zoq>
dk97[m]: If you like to work on this one, there is no need to open an issue first.
< dk97[m]>
Okay then, thanks!
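To make the proposal concrete, a file like ann/loss_functions/mean_squared_error.hpp could follow the Forward()/Backward() shape of mlpack's existing output-layer classes. The sketch below is hypothetical, not the code that was eventually merged:

    #include <armadillo>

    // Hypothetical sketch of ann/loss_functions/mean_squared_error.hpp.
    template<typename InputDataType = arma::mat,
             typename OutputDataType = arma::mat>
    class MeanSquaredError
    {
     public:
      // Loss value: mean of the squared differences from the target.
      template<typename InputType, typename TargetType>
      double Forward(const InputType& input, const TargetType& target)
      {
        return arma::accu(arma::square(input - target)) / input.n_elem;
      }

      // Gradient of the loss with respect to the input, used in backprop.
      template<typename InputType, typename TargetType, typename OutputType>
      void Backward(const InputType& input,
                    const TargetType& target,
                    OutputType& output)
      {
        output = 2.0 * (input - target) / input.n_elem;
      }
    };

KL divergence and cross entropy would each get a sibling file with the same two-method shape, which is what makes pulling them out of individual layers such as the sparse autoencoder straightforward.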
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
Sayan98 has joined #mlpack
Sayan98 has quit [Client Quit]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
aman_ has joined #mlpack
< aman_>
hello contributors! I went through the goals and the work done at mlpack, and I believe my interests and skills overlap to a great extent with this org's. I really liked your project ideas, especially the one about the VAE and the other about reinforcement learning. I want to get my hands dirty so I'm ready to take up one of these projects during the summer as a GSoC project. Any heads-up on how to start and go about it? Thanks in advance!
< aman_>
Btw, I am Aman from IIIT-H, India, pursuing CSE. I have more than 3 years of experience with coding and have completed multiple ML and DL projects. :)
< aman_>
Thanks zoq, I have already built mlpack on my local machines. I will start off by solving some of the issues, then.
< zoq>
aman_: Sounds good.
< aman_>
cool
K4k has quit [Read error: Connection reset by peer]
Prabhat-IIT has quit [Ping timeout: 260 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
samidha1705 has joined #mlpack
daivik has joined #mlpack
< samidha1705>
Hey, I've been trying to work on the PSO algorithm and I've written down pseudocode for it. What exactly does the proposal want?
< samidha1705>
I specifically meant the technical details.
< samidha1705>
Do I have to write the code and submit it? Can I send the technical proposal to any of the mentors right now?
< zoq>
There is no need to write pseudocode; if you lay out the general idea, that is just fine. The more important part is the interface, e.g. how the project fits into the existing codebase.
< zoq>
Once the student application period starts, mentors are able to take a look at the proposal and give feedback, if the student enables that setting.
< samidha1705>
Thanks, but I think that with optimizers as the category, the PSO algorithm will just be another addition to that category, like the other optimization algorithms.
< samidha1705>
Do I need to describe how it will fit into the existing codebase from scratch, or can I take the help of the pre-existing optimizer implementations?
< zoq>
That is true for the standard PSO method; in that case you can just point that out. The constrained method might need some modifications.
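To illustrate how a standard PSO would slot into the codebase: mlpack optimizers expose an Optimize() method templated on a FunctionType, and unconstrained PSO only needs Evaluate() from it. A minimal sketch, with all names and constants as illustrative assumptions rather than existing mlpack code:

    #include <armadillo>
    #include <vector>

    // Minimal unconstrained PSO sketch against mlpack's optimizer shape:
    // FunctionType only needs a double Evaluate(const arma::mat&) member.
    class PSO
    {
     public:
      PSO(const size_t numParticles = 40,
          const size_t maxIterations = 1000,
          const double inertia = 0.729,
          const double cognitive = 1.49445,
          const double social = 1.49445) :
          numParticles(numParticles), maxIterations(maxIterations),
          inertia(inertia), cognitive(cognitive), social(social) { }

      template<typename FunctionType>
      double Optimize(FunctionType& function, arma::mat& iterate)
      {
        // Scatter particles around the starting iterate.
        std::vector<arma::mat> pos(numParticles), vel(numParticles),
            best(numParticles);
        arma::vec bestObj(numParticles);
        for (size_t i = 0; i < numParticles; ++i)
        {
          pos[i] = iterate + arma::randn(arma::size(iterate));
          vel[i] = arma::zeros(arma::size(iterate));
          best[i] = pos[i];
          bestObj(i) = function.Evaluate(pos[i]);
        }
        size_t g = bestObj.index_min(); // Index of the global best particle.

        for (size_t t = 0; t < maxIterations; ++t)
        {
          for (size_t i = 0; i < numParticles; ++i)
          {
            // Velocity: inertia plus pulls toward personal and global bests.
            vel[i] = inertia * vel[i]
                + cognitive * arma::randu(arma::size(iterate)) % (best[i] - pos[i])
                + social * arma::randu(arma::size(iterate)) % (best[g] - pos[i]);
            pos[i] += vel[i];

            const double obj = function.Evaluate(pos[i]);
            if (obj < bestObj(i)) { bestObj(i) = obj; best[i] = pos[i]; }
            if (i != g && obj < bestObj(g)) g = i;
          }
        }

        iterate = best[g];
        return bestObj(g);
      }

     private:
      size_t numParticles, maxIterations;
      double inertia, cognitive, social;
    };

A constrained variant would additionally need a way to express the constraints, which is the part flagged above as possibly needing modifications to this interface.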