rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<jonpsy[m]> hmm, looks like we don't have batched matrix multiplication like ```torch.bmm```
<jonpsy[m]> or do we?
<heisenbuugGopiMT> @shrit:matrix.org Small issue. Actually there are some tests failing for `sp_mat`. They are related to formats other than `.csv`, I believe.
<heisenbuugGopiMT> As you can see the `warning: SpMat::load(): couldn't load from the given stream`
<heisenbuugGopiMT> Actually I forgot to uncomment all the tests in `load_save_test`, so that's why I missed these. Sorry...
<say4n[m]> <jonpsy[m]> "hmm, looks like we don't have mu" <- Armadillo [doesn't support](http://arma.sourceforge.net/docs.html#operators) multiplication of two arma::Cube, so you'll need to implement the multiplication for each item in the batch. Iterate with batch as first dim of arma::Cube (b x m x n @ b x n x p = b x m x p)?
<say4n[m]> (looked at the documentation, I hope it is up to date :))
<jonpsy[m]> yeap, I was thinking we could omp-parallelize it
<jonpsy[m]> or at least some way to parallelize it, because it's [embarrassingly parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel)
<jonpsy[m]> some way to SIMD it
<jonpsy[m]> Also, in our replay folder: should ```states``` be of type ```std::vector<StateType>``` rather than ```arma::mat```? Because if we have a stack of images as input, then we won't be able to store it.
<heisenbuugGopiMT> I made some changes to the code; the build was successful and even all test cases passed.
<heisenbuugGopiMT> But when I try to use the build, i.e. run some program, I am getting this exception:
<heisenbuugGopiMT> `free(): invalid next size (normal)`
<heisenbuugGopiMT> Any idea why?
<shrit[m]> What is the program you are trying to run?
<shrit[m]> You can try to run the CLI programs too in order to be sure what is happening
<shrit[m]> What is the full error?
<heisenbuugGopiMT> This is it.
<heisenbuugGopiMT> I am just getting this as an exception
<shrit[m]> I mean does the program run?
<shrit[m]> do you get that at the start, or at the end?
<shrit[m]> Do you have an idea whether it is in the Load function?
<shrit[m]> or before?
<shrit[m]> Also do you get this error if you are not using the Dataset mapper?
<heisenbuugGopiMT> Yeah, I am entering `Load()` and everything is running perfectly, but as soon as we are done with [this](https://github.com/mlpack/mlpack/blob/6ee199db0fa340ae5d22ae8c2949a4cdc31776f9/src/mlpack/core/data/load_impl.hpp#L239)
<heisenbuugGopiMT> it throws the exception
<shrit[m]> Are you sure, that this is the exception that is thrown?
<shrit[m]> at line 241?
<shrit[m]> It feels like an Armadillo error
<heisenbuugGopiMT> Yea, but like, I set a breakpoint on Load and navigated.
<heisenbuugGopiMT> No error without datasetmapper
<heisenbuugGopiMT> I made changes in functions related to `DatasetMapper`, but how come the tests are not failing if there is an exception?
<shrit[m]> Would you use GDB?
<shrit[m]> I mean, I agree there is an error, but I do not understand anything.
<shrit[m]> It seems that it is like a pointer error.
<heisenbuugGopiMT> Although I made changes after the push
<heisenbuugGopiMT> But I didn't try to use the code locally even once, so I have no idea how long we have had this issue.
<shrit[m]> Maybe
<heisenbuugGopiMT> Can you tell me what command shall I use in GDB?
<shrit[m]> use gdb ./program_name
<shrit[m]> and, then type run
<shrit[m]> when it fails, use `bt`
<shrit[m]> I would recommend adding a `std::cout <<` to print the token inside the parsing loop, and recompiling
<shrit[m]> this will help us to see what is happening
<heisenbuugGopiMT> Okay, give me 5 mins.
<shrit[m]> no worries
<heisenbuugGopiMT> This is what I got from gdb
<heisenbuugGopiMT> I think I got what's happening, I will update soon
<heisenbuugGopiMT> @shrit:matrix.org It was due to my changes only, it's resolved.
<heisenbuugGopiMT> Can we have a meeting today?
<shrit[m]> Yes, we can have a meeting today
<shrit[m]> I am not sure of time
<shrit[m]> I will let you know
<shrit[m]> Did you push all the modifications?
<heisenbuugGopiMT> It's fine, let me know whenever you are free...
<shrit[m]> I will need to have a look at the code before the meeting
<heisenbuugGopiMT> Yup, working on some more...
<shrit[m]> Okay let me know when you finish everything, I will have a look at the code and then we can have a quick call
<heisenbuugGopiMT> OKay...
<heisenbuugGopiMT> And this looks different with and without `DatasetMapper`
<shrit[m]> I think there is no need for line 5
<shrit[m]> no, sorry
<shrit[m]> I misunderstood the code
<shrit[m]> Would you clarify your question a little bit?
<heisenbuugGopiMT> I was planning to turn these parts into functions; now my doubt is that we already have something called NumericParse and CategoricalParse.
<heisenbuugGopiMT> And in both of these we again have two cases: either we are parsing to map, where we care about the token, or just to get the size, where we don't need to worry about the token.
<heisenbuugGopiMT> Just a small difference between the two cases.
<heisenbuugGopiMT> Now, should I make separate functions for these, or can I add a bool `isNumeric` and use an if condition to optionally execute the token part?
<shrit[m]> So, you mean that the difference is only the two loops?
<shrit[m]> You can make it a template parameter
<heisenbuugGopiMT> Ahh, I think I didn't make it clear. Give me a sec.
<shrit[m]> template<bool Numeric> 👍️
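A hedged sketch of what that template-parameter trick could look like (the function name and the split on `,` are made up for illustration; this is not the actual mlpack parser): the token-handling branch is decided at compile time, so the "just get the size" instantiation does no string work at all.

```cpp
#include <string>
#include <vector>

// Hypothetical parser sketch: KeepToken = false only counts fields
// (the "just get the size" case); KeepToken = true also collects them.
template<bool KeepToken>
size_t ParseLine(const std::string& line, std::vector<std::string>& tokens)
{
  size_t fields = 1;
  std::string tok;
  for (const char c : line)
  {
    if (c == ',')
    {
      ++fields;
      if (KeepToken) { tokens.push_back(tok); tok.clear(); }
    }
    else if (KeepToken)
    {
      tok += c;  // only compiled into the KeepToken = true instantiation
    }
  }
  if (KeepToken)
    tokens.push_back(tok);
  return fields;
}
```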
<heisenbuugGopiMT> Ohh
<heisenbuugGopiMT> Okay...
<zoq[m]> <jonpsy[m]> "Also, in our replay folder. Shou" <- When do you have a stack of images? Usually you get the observations and perform an action, get the observations, perform an action.
<heisenbuugGopiMT> Now consider `CategoricalParse`, we can have a case where we don't need to manipulate token.
<shrit[m]> I cannot see the difference between the last two
<shrit[m]> why not just use the last one?
<heisenbuugGopiMT> If we ignore the mapping part the only difference is of two lines, where we are manipulating `token`
<heisenbuugGopiMT> But we will not need it always, so is it okay to do those extra string manipulations?
<heisenbuugGopiMT> Also `tok` will use some extra (maybe negligible) memory, is that okay?
<shrit[m]> I see what you mean, the second case is only needed if we have the `,` delimiter inside the phrase
<heisenbuugGopiMT> Yup yup.
<heisenbuugGopiMT> No No
<shrit[m]> but the code in the first case is not correct
<heisenbuugGopiMT> There are two cases we need to parse
<heisenbuugGopiMT> * 1st just to get size, we don't care about the token itself
<heisenbuugGopiMT> * 2nd to get a proper token, so that we can map it
<heisenbuugGopiMT> Hence we have extra lines in if condition
<shrit[m]> Let us have a quick call
<heisenbuugGopiMT> OKay, coming
<shrit[m]> I am not sure if I understand what is happening
<jonpsy[m]> > When do you have a stack of images, usually you get the observations and perform an action, get the observations perform an action.
<jonpsy[m]> so like, in Mario we take a stack of images as input, right? Which we pass to a convolutional network
<rcurtin[m]> ensmallen finally published in JMLR: https://jmlr.org/papers/v22/20-416.html
<rcurtin[m]> hopefully this will help with visibility :)
<jonpsy[m]> > ensmallen finally published in JMLR: https://jmlr.org/papers/v22/20-416.html
<jonpsy[m]> the added support for multi-objective framework wasn't mentioned :((
<rcurtin[m]> yeah, unfortunately the review process was super, super long
<rcurtin[m]> our original submission was May 2020...
<jonpsy[m]> ah!
<rcurtin[m]> so I think the hope with a paper like that is just that it piques people's interest (it is only a 4 page summary anyway), and then they go check it out and find out about all the cool stuff it can do 😃
<jonpsy[m]> we could've compared with ```pymoo``` and ```pagmo``` and other ```moo``` frameworks :)
<rcurtin[m]> yeah, definitely---maybe we can do this in a follow-up paper or something like this? the 4-page limit was absolutely brutal (especially with the JMLR style) so basically all you can do in 4 pages is say "we made a library, here is a short API snippet, here is a super simple experiment, the license is <X>, go check it out"
<jonpsy[m]> For sure!
<jonpsy[m]> in fact, Marcus did suggest it, but I was too lazy ;). Benchmarking MOO algorithms is definitely on my TODO list (if I get the time)
<rcurtin[m]> 👍️
<zoq[m]> > so like, in mario we take a stack of images as input right? Which we pass to convolutional network
<zoq[m]> Do we? I think it's just one image, I mean we are in a specific state, right?
<shrit[m]> one year for four pages
<shrit[m]> This is a lot of time
<zoq[m]> In this case we got some actual feedback, that had to be addressed.
<jonpsy[m]> > Do we? I think it's just one image, I mean we are in a specific state right?
<jonpsy[m]> yes, but remember a single image doesn't tell much. So we stack a few consecutive images into one and then call it s_t
<zoq[m]> But how do you get the observation from the future without an action?
<jonpsy[m]> 5:18
<zoq[m]> Previous observations, should be learned by the model itself, (recurrent neural network).
<jonpsy[m]> i mean, how do we handle image inputs in RL?
<zoq[m]> The video talks about previous frames, and that is something you can already do, you can just stack the images together.
<jonpsy[m]> yes
<jonpsy[m]> oh!
<jonpsy[m]> but!
<jonpsy[m]> each image is a matrix
<jonpsy[m]> but here we were talking of ```sampledStates```
<jonpsy[m]> so we should have an array of ```arma::mat``` right?
<jonpsy[m]> not just ```arma::mat```
<zoq[m]> I mean you could put multiple frames into one single mat, having an array or a single mat is just another way to format the data.
<zoq[m]> At the end you feed a vector to the network.
<jonpsy[m]> if it's an FC, yeah, but it can have conv layers, right?
<jonpsy[m]> > I mean you could put multiple frames into one single mat, having an array or a single mat is just another way to format the data.
<jonpsy[m]> yes but ``` states.col(position) = state.Encode(); ``` would be totally meaningless then.
<jonpsy[m]> I guess you're suggesting you're storing each image as a vector?
<zoq[m]> Even for conv nets, we are currently passing a matrix as well that contains multiple images.
<zoq[m]> each col represents an image.
<zoq[m]> you can pass multiple cols to the conv layer.
<jonpsy[m]> > I guess you're suggesting you're storing each image as vector?
<jonpsy[m]> so, this?
<zoq[m]> Yes, at least this is what we are currently doing.
<jonpsy[m]> hm, but to pass to `CNN` each image needs to be a matrix, right?
<zoq[m]> Kinda; we pass a matrix to the CNN, where each col represents one image. Internally the conv layer unpacks the matrix into a `cube` to make the code easier to understand, which means transforming each col into a matrix.
<jonpsy[m]> that kinda sucks ngl
<jonpsy[m]> why go this far?
<jonpsy[m]> I guess it's because we can't support 4D tensors?
<zoq[m]> To have a unified interface, I mean what is the downside?
<zoq[m]> This way we can use `arma::mat` for every method, can easily transfer data between different methods.
<zoq[m]> We could make an interface for `arma::cube` armadillo supports that.
<zoq[m]> But what is the point in doing that.
<rcurtin[m]> in the `ann-vtable` branch, some things are being changed around. our fundamental issue is that Armadillo doesn't have support for higher-order tensors
<jonpsy[m]> exactly!
<rcurtin[m]> so, I think the API we have to settle on is that a feedforward network takes an `arma::mat`; in this `arma::mat`, each column corresponds to a data point, so the dimensions are `(n_features, batch_size)`
<rcurtin[m]> then, we have a function `InputDimensions()` that allows you to pass a `std::vector<size_t>` that specifies the actual dimensions to interpret each column as
<rcurtin[m]> so, for instance, if you have an image, you might set `InputDimensions()` to `{ width, height, channels }`
<rcurtin[m]> and then, the `arma::mat` you pass to `Evaluate()` or `Train()` will have dimensions `(width * height * channels, batchSize)`
<zoq[m]> Sure, but we still pass a matrix that contains all the images.
<zoq[m]> Which is what jonpsy said "that kinda sucks ngl"
<zoq[m]> I'm not sure I agree on that because I don't see the problem.
<rcurtin[m]> yeah, my thought is that it's not perfect, but with `InputDimensions()` it cleans it up a decent bit
<jonpsy[m]> rcurtin[m]: question: we handle the image unrolling and reconstruction, right? Because there's a good chance that converting to ```vec``` and then retrieving back to ```mat``` might mess up the image due to the reshape
<rcurtin[m]> at least I don't see any realistic alternative. we can write some wrappers so if you have, e.g., a `std::vector<>` of some images or something, we can pack them into an `arma::mat` and get a `std::vector<size_t>` of input dimensions
<zoq[m]> I mean what is the difference between using `vector<mat> data; data[i]` instead of `mat data; data.col(i)`
<rcurtin[m]> jonpsy: yeah, right now the user has to handle that, but I think we can provide utilities that do this conversion correctly and handle most cases
<jonpsy[m]> yep, that sucks
<zoq[m]> We already do, if you use the `data::Load` function it will handle that for you.
<rcurtin[m]> I mean, you might be better off here if you consider the constraints and consider what we can do to make things work within those constraints
<rcurtin[m]> I think zoq is right, we have some loading functionality for images. I'm not sure exactly the format it packs things into, but it wouldn't be too hard to modify it to return a packed `arma::mat` plus a `std::vector<size_t>` of dimensions
<jonpsy[m]> yeah, given the constraints. This is nice
<rcurtin[m]> well, that's still not a very positive outlook either ;)
<rcurtin[m]> given those constraints, I believe we can make things a little nicer, we just have to consider those constraints in figuring out what to do
<rcurtin[m]> imagine a C++ dataframe class that actually does have support for higher dimensions, and just wraps an `arma::mat`
<rcurtin[m]> it doesn't support higher-order tensor operations, but maybe it provides an interface so you can do, e.g., `image.at(x, y, z, channel)` and get a value
<heisenbuugGopiMT> I was somewhat thinking that this discussion will lead to dataframe
<rcurtin[m]> then, you can imagine the `FFN` taking in one of these dataframe-like classes, setting `InputDimensions()` automatically, and learning on the underlying `arma::mat` representation
<jonpsy[m]> sounds like a plan!
<zoq[m]> Hm, in most of the cases you don't actually need to reshape; it's more the other way around. In the case of the conv layer we reshape internally, because it's easier to read, but in the end it's a vector operation.
<shrit[m]> The issue is related to manipulating the `arma::cube` right?
<rcurtin[m]> the other comment worth pointing out here is that it's not always the best choice to imitate the design choices of more popular toolkits. there is an advantage to that, which is that it is a familiar interface to users, but, in many cases, popular toolkits have really counterintuitive and complicated APIs (one decent example is TensorFlow's original API)
<zoq[m]> rcurtin[m]: In this case it always comes down to the actual implementation; in some cases it makes sense to interpret the data as a matrix, in some cases it makes sense to use a vector.
<zoq[m]> Like passing a matrix to the conv layer makes sense, but for e.g. a ReLU activation function a vector would be better.
<rcurtin[m]> ah, change "in many cases" to "in some cases". I don't want to overstate it 😃 but, it is worth keeping in mind when considering API design that sometimes it is a good thing to explicitly choose something different than the 'typical' approach
<rcurtin[m]> in our convolution layer, what it really does at the moment is take in an `arma::vec` representing a single image/input, then "reinterprets" it as something of the correct dimensions. other layers that also expect higher-dimensional inputs do the same thing
<rcurtin[m]> that, too, is perhaps a thing we could improve upon 😃
<rcurtin[m]> the only issue is the amount of effort it takes to do so 😃
<jonpsy[m]> yep! That was exactly my concern
<jonpsy[m]> the re-shape to matrix thing
<jonpsy[m]> and potential mess up when re-constructing
<rcurtin[m]> internally, that's what utility functions and unit tests are good for :) but for a user, I agree, we can do better than we're currently doing
<zoq[m]> That is the "reshape"
<zoq[m]> It's just a reinterpretation of the input.
<jonpsy[m]> hm, so it's using the same memory space
<jonpsy[m]> like ```torch.view```
<rcurtin[m]> yeah, exactly, pretty much any tensor library or matrix library will work exactly like this. it allocates a big block of memory, and then simply partitions it logically for each dimension
<zoq[m]> Right, no copy involved; it's really just a reinterpretation because people like to think about images as a matrix.
<jonpsy[m]> I suppose that operation guarantees that the image will be reconstructed as originally intended
<rcurtin[m]> assuming the user has the dimensions specified in the same "order" as we reconstruct them in, yeah
<rcurtin[m]> (and, technically, if the user passes in something as `{height, width, channels}`, so long as they specify their convolution filter shape as `{height, width}`, too, then everything will work out fine)
<jonpsy[m]> okay so the ```reshape``` and "potential mess up" thing is cleared
<jonpsy[m]> now we're counting on the user to correctly convert the image to ```vec```
<zoq[m]> Right, if you stay in the mlpack universe and use the `data::Load` function that is already handled for you.
<zoq[m]> But yeah, we don't provide a utility function that does that for you.
<rcurtin[m]> and maybe someday this magical dataframe class will exist and provide a nice interface on top of it 😃 (maybe XTensor could do the job, but we'd still need to integrate it)
<jonpsy[m]> zoq[m]: sounds like a PR :)
<jonpsy[m]> xtensor's pretty cool
<jonpsy[m]> well, this was very educational!
<rcurtin[m]> :) another nice thing about xtensor and most matrix/tensor libraries using a big block of memory, is that it would theoretically be possible to take an xtensor dataframe and wrap the underlying block of memory in an `arma::mat` without a copy
<rcurtin[m]> this is actually what is done to transition between Armadillo and Numpy matrices for the Python bindings
<rcurtin[m]> (and same thing in Go, R, Julia, and I think the Java bindings PR too but I haven't had any time to return to that :( ...)
<jonpsy[m]> rcurtin[m]: hm, so you'll take the ```mem_ptr``` and use advanced_ctor
<rcurtin[m]> yeah, exactly
<jonpsy[m]> we could wrap it in a function saying ```xTensorToMat```
<jonpsy[m]> ez
<rcurtin[m]> right, exactly, I think it would be straightforward (but perhaps not trivial) to create a tool like that
<rcurtin[m]> I'm still waiting on an email from Sylvain Corlay (CEO of QuantStack, who produces xtensor) to see what his level of interest in a collaboration is
<rcurtin[m]> (apparently it must not be high enough to respond quickly 😃 but I'm sure we'll hear from him eventually. I know he is interested in C++ data science, so if someone who works on XTensor wants to collaborate on mlpack integration, I think it could be a pretty nice project)
<jonpsy[m]> there's also ArrayFire
<rcurtin[m]> they're actually based in Atlanta---I did a job interview with them at some point but didn't end up going that direction
<rcurtin[m]> it seems like they have done a lot of work recently and gotten some press, but, I haven't checked in. at least in 2015, it didn't seem like there was much momentum
<shrit[m]> rcurtin: it is vacation time, everyone is on vacation in France now
<rcurtin[m]> ahhhh, I see 😃
<jonpsy[m]> they're super popular since PyTorch is using ArrayFire in their C++ API
<shrit[m]> vacations are from 14 July to 15 August, and sometimes to 23 August
<rcurtin[m]> the complexity for ArrayFire is that they are built on GPUs---and Armadillo can't just wrap GPU memory
<rcurtin[m]> but, presumably one could wrap a Bandicoot matrix around an ArrayFire `array` (but there is still a lot of work to be done to make bandicoot work in general)
<jonpsy[m]> i see.
<jonpsy[m]> this was fun. Thanks for all the details regarding conv neural networks, I couldn't have figured it out in 100 years
<rcurtin[m]> 100 years is a long time, I am sure you could have figured it out by then 😃
<rcurtin[m]> I enjoyed the discussion too---it's always good to talk about the larger picture and how we can improve things 👍️
<heisenbuugGopiMT> @shrit:matrix.org, I pushed the code.
<heisenbuugGopiMT> I will add some comments, working on it.
<heisenbuugGopiMT> But I think you can have a look now.
<heisenbuugGopiMT> Only failing tests are related to `sp_mat`
<shrit[m]> Are there any new failing tests?
<heisenbuugGopiMT> Nope, nothing new, just those stream related issues...
<heisenbuugGopiMT> And that's only for formats other than `csv`
<shrit[m]> The code looks good, much better than before, thanks for the organization
<shrit[m]> If you would like, I can review it now, or in a couple of hours when you finish all the modifications
<shrit[m]> Let me know what you think 👍️
<heisenbuugGopiMT> I am having my dinner now, although it's late...
<heisenbuugGopiMT> I will take an hour to get back to work, so up to you...
<shrit[m]> Okay, I will review it at the end of the day. You can incorporate my modifications tomorrow
<heisenbuugGopiMT> okayy...