ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< Manav-KumarGitte>
I have added all 3 namespaces in one cell
kartikdutt18Gitt has joined #mlpack
< kartikdutt18Gitt>
They need to be in separate cells to work.
< zoq>
Yeah, I think there is some issue with that.
< Manav-KumarGitte>
It's working
< zoq>
okay, good
< zoq>
I could write a workaround for the namespace issue, but maybe there is already a solution for that, have to check.
< kartikdutt18Gitt>
Hey zoq, for utils in the models repo I have added the following dependencies: libarchive, libcurl, and cryptopp. Are these okay? I am finishing up tests for them now.
< kartikdutt18Gitt>
> It's working
< kartikdutt18Gitt>
Great.
< zoq>
kartikdutt18Gitt: Hm, are those optional?
< zoq>
kartikdutt18Gitt: And cryptopp is used for?
< kartikdutt18Gitt>
I am using them for downloading, unzipping, and verifying checksums. So if a user downloads a dataset, weights, or a model, they will be needed.
< zoq>
I wonder what cmake is using to do that.
< kartikdutt18Gitt>
cryptopp is used for calculating checksums on files / strings. Similar to OpenSSL (which I used earlier, but it gives some error on macOS Catalina).
< zoq>
That seems like a lot of dependencies to download a bunch of files.
< kartikdutt18Gitt>
I can check that too.
< kartikdutt18Gitt>
Should I remove the checksum portion? That should remove cryptopp. And if we don't download zips we can remove libarchive as well.
< zoq>
We currently use cmake to download ensmallen if not already installed.
< zoq>
and that includes a checksum check as well
< zoq>
if I remember right
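For reference, CMake's built-in `file(DOWNLOAD)` command can verify a hash by itself; a minimal sketch of that pattern, with a placeholder URL and hash (not the real ensmallen values):

```cmake
# Hypothetical URL and hash, for illustration only.
file(DOWNLOAD
  "https://example.com/ensmallen-x.y.z.tar.gz"
  "${CMAKE_BINARY_DIR}/deps/ensmallen.tar.gz"
  EXPECTED_HASH SHA256=0000000000000000000000000000000000000000000000000000000000000000
  STATUS download_status)

# CMake >= 3.18 can also extract the archive without any external tool:
# file(ARCHIVE_EXTRACT INPUT "${CMAKE_BINARY_DIR}/deps/ensmallen.tar.gz"
#      DESTINATION "${CMAKE_BINARY_DIR}/deps")
```

If the hash doesn't match, `file(DOWNLOAD ... EXPECTED_HASH ...)` fails with an error, so no separate checksum library is needed at configure time.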
< zoq>
so if we use tar.gz or something we don't need libarchive?
< kartikdutt18Gitt>
We would need libarchive to unzip at runtime. Earlier I was using `system(Command)` to do that, but rcurtin suggested that it might not work on Windows.
< kartikdutt18Gitt>
All dependencies are for downloading, unzipping, and calculating checksums at runtime.
< zoq>
Looks like boost could decompress gzip files.
< zoq>
And I guess it can compute a CRC as well?
< zoq>
Maybe that is an alternative?
< zoq>
Not sure, just thought since we already have a boost dependency it makes sense to use boost here as well.
< kartikdutt18Gitt>
Yep, just checked the docs for both of them. That makes more sense. I can switch to that.
< zoq>
Haven't used any of those features, so not sure it will work.
< zoq>
I guess Boost.Asio to download the file would work as well?
< zoq>
Or have some CMake files that we can call from C++ that do everything for us?
< zoq>
Ideally we can keep the list of dependencies low.
< kartikdutt18Gitt>
I can try them out; hopefully they will work as well.
v_bat has joined #mlpack
< pickle-rick[m]>
Hey there! I had been bogged down with online exams and assignments at uni for the past couple of weeks. Phew.
< pickle-rick[m]>
Anyways, I'd been going through the split_by_any_of.hpp file that mlpack uses to tokenize strings.
< pickle-rick[m]>
Can anyone help me understand what the following line does: `using MaskType = std::array<bool, 1 << CHAR_BIT>;`
< pickle-rick[m]>
I guess it's the `1 << CHAR_BIT` part that's throwing me off
< rcurtin>
pickle-rick[m]: that value is 1 shifted by CHAR_BIT bits
< rcurtin>
so, that turns out to be equivalent to 2^CHAR_BIT
< pickle-rick[m]>
ohhhh, because 1 gets shifted "CHAR_BIT" number of times
< pickle-rick[m]>
I see. I understand now. Thanks
< v_bat>
Hi, I've been getting segfaults from an ANN model I'm working on. As far as I can tell, the layers, inputs, and outputs are all the appropriate sizes. Am I missing something when setting up the network?
< rcurtin>
pickle-rick[m]: happy to help :)
< rcurtin>
v_bat: sorry to hear that... what version of mlpack are you using? there were recently some fixes for memory handling, released as part of mlpack 3.3.0
< v_bat>
Oh, that's probably it then, since I'm on 3.2.2.
< rcurtin>
also, how are you compiling your model? if you compile with, e.g., -DARMA_NO_DEBUG, it'll remove all Armadillo debugging, so you won't see any "incorrect matrix size" errors, just segfaults
< rcurtin>
yeah... try 3.3.0 and see if that helps. if it doesn't, it might be easier to open an issue on Github? up to you
< v_bat>
I'll open that issue if it persists after switching to 3.3, thank you :)
< rcurtin>
v_bat: sounds good, hopefully 3.3.0 fixes it :)
< v_bat>
Hmm, it's still happening, though I'm not sure if it's linking against the correct library yet
< v_bat>
Hmm, yeah, it's 3.3
< kartikdutt18Gitt>
Also, an error I ran into once was when the input size shrinks in subsequent layers to the point that the kernel size is greater than the input size.
ImQ009 has quit [Quit: Leaving]
v_bat has quit [Remote host closed the connection]
sreenik[m] has joined #mlpack
< himanshu_pathak[>
Hey rcurtin, zoq: when you get time, can you please review https://github.com/mlpack/mlpack/pull/2324? One more thing: I also need to test WGAN and WGANGP. Every layer is being tested except the Sequential and AddMerge layers. Copy constructors for BRNN and RNN are also added.
< zoq>
himanshu_pathak[: Nice, will take a look soon.