<jonpsy[m]>
hey zoq say4n, I've mailed the final updates on the GSoC project. Let me know what you guys think
<jonpsy[m]>
and looks like lab.mlpack is still down?
<heisenbuugGopiMT>
Can you have a look at it?
<heisenbuugGopiMT>
@shrit:matrix.org I pushed the code.
<heisenbuugGopiMT>
I can see some green ticks, feels good.
<heisenbuugGopiMT>
But there are some reds as well; I don't think they are related to the parser.
<heisenbuugGopiMT>
Also, what about the benchmark?
<heisenbuugGopiMT>
Can we use the same code that we used earlier?
<heisenbuugGopiMT>
Also, how can I compare the `boost::spirit` implementation and our implementation? They are not on the same branch, so how can I call them both?
<heisenbuugGopiMT>
Or should I run them separately and then plot those results?
<shrit[m]>
I will look at the code in a couple of hours
<shrit[m]>
Yeah, you need to run both the master branch and your branch to compare the performance of the two parsers.
<shrit[m]>
<heisenbuugGopiMT> "Or should I run them seperatly a" <- This is good for me
<heisenbuugGopiMT>
Oh okay, somehow I'm getting a segmentation fault when the input is very small. But it was running earlier...
<heisenbuugGopiMT>
@shrit:matrix.org I am thinking of running the benchmark code on my Raspberry Pi; is that a good idea? It takes up a lot of RAM, and then I am unable to use my laptop.
<heisenbuugGopiMT>
So I thought maybe use Pi for that?
<heisenbuugGopiMT>
But when I tried installing CMake, I got this: `Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 2765 (unattended-upgr)`
<ABHINAVANAND[m]>
zoq I built the group norm PR on Colab and ran the test suite. It's passing there, but for some reason it's failing on Azure. Can you take a look? This is the Colab notebook link where I built the group norm PR.
<shrit[m]>
I would prefer that you do it on your laptop; it is harder to do a benchmark on the RPi.
<zoq[m]>
<jonpsy[m]> "and looks like lab.mlpack is..." <- Will be up again later today.
<jonpsy[m]>
hey, what you doin rn?
<zoq[m]>
<jonpsy[m]> "See [this](https://pastebin.com..." <- `res += boost::apply_visitor(lossVisitor, network[i]);` checks if the layer implements `Loss()` and, if it does, adds the loss to the final loss. In your case, since you know you have two loss functions, why not just run them sequentially: `double res = outputLayerA.Forward(boost::apply_visitor(outputParameterVisitor, network.back()), targets);` and then `res += outputLayerB.Forward(boost::apply_visitor(`
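As a rough sketch of that "run both output layers sequentially" idea, assuming the network, visitor, and targets objects already exist the way they do inside mlpack's FFN (the function name and template parameters here are only illustrative):
```
// Sketch: compute the combined loss by running two output layers on the
// output of the network's last layer and summing the results.
#include <boost/variant.hpp>

template<typename OutputLayerA, typename OutputLayerB,
         typename NetworkType, typename VisitorType, typename TargetType>
double TwoLossForward(OutputLayerA& outputLayerA,
                      OutputLayerB& outputLayerB,
                      NetworkType& network,
                      VisitorType& outputParameterVisitor,
                      const TargetType& targets)
{
  // Loss of the first output layer on the network's final output.
  double res = outputLayerA.Forward(
      boost::apply_visitor(outputParameterVisitor, network.back()), targets);

  // Add the loss of the second output layer on the same output.
  res += outputLayerB.Forward(
      boost::apply_visitor(outputParameterVisitor, network.back()), targets);

  return res;
}
```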
<jonpsy[m]>
but it's a single loss, right?
<jonpsy[m]>
I mean, it's a sum of two losses, but how do we do it?
<jonpsy[m]>
and who will handle ```LambdaAnneal()```, i.e. ```L = lambda * L_a + (1 - lambda) * L_b```?
<zoq[m]>
I think a better idea would be to implement a custom loss function.
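One possible shape for such a custom loss, combining two existing loss layers with the annealed weight `L = lambda * L_a + (1 - lambda) * L_b` from above (the class and member names are placeholders, not existing mlpack code):
```
// Hypothetical combined loss with an annealed mixing weight.  LossA/LossB are
// assumed to follow the usual Forward(prediction, target) /
// Backward(prediction, target, gradient) layer interface.
#include <algorithm>
#include <armadillo>

template<typename LossA, typename LossB>
class CombinedLoss
{
 public:
  CombinedLoss(const double lambda = 0.0, const double step = 0.01) :
      lambda(lambda), step(step) { }

  // Weighted sum of the two losses: lambda * L_a + (1 - lambda) * L_b.
  double Forward(const arma::mat& prediction, const arma::mat& target)
  {
    return lambda * lossA.Forward(prediction, target) +
        (1.0 - lambda) * lossB.Forward(prediction, target);
  }

  // The gradient is the same weighted combination of the two gradients.
  void Backward(const arma::mat& prediction, const arma::mat& target,
                arma::mat& gradient)
  {
    arma::mat gradA, gradB;
    lossA.Backward(prediction, target, gradA);
    lossB.Backward(prediction, target, gradB);
    gradient = lambda * gradA + (1.0 - lambda) * gradB;
  }

  // Called once per episode to move lambda towards 1.
  void LambdaAnneal() { lambda = std::min(1.0, lambda + step); }

 private:
  LossA lossA;
  LossB lossB;
  double lambda;
  double step;
};
```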
<jonpsy[m]>
that was already on my mind, but I was wondering how I would do that
<jonpsy[m]>
brb
<zoq[m]>
Not sure I follow; you already have a custom EQLForward/Backward function, so why not pass all the necessary parameters and calculate the loss there?
<zoq[m]>
But in this case you have to pass additional parameters, so we would need to add another visitor that selects the right function to call.
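A sketch of what that extra visitor might look like; `EQLForward()` is the hypothetical method you would add to your layer, and unlike mlpack's real visitors this version has no SFINAE fallback for layers that don't implement it:
```
// Hypothetical visitor that forwards additional parameters (targets, lambda)
// to a custom EQLForward() method on whichever layer the variant holds.
#include <boost/variant.hpp>

template<typename MatType>
class EQLForwardVisitor : public boost::static_visitor<double>
{
 public:
  EQLForwardVisitor(const MatType& target, const double lambda) :
      target(target), lambda(lambda) { }

  // Called with whichever layer pointer the variant currently holds; assumes
  // every layer in the variant implements EQLForward().
  template<typename LayerType>
  double operator()(LayerType* layer) const
  {
    return layer->EQLForward(target, lambda);
  }

 private:
  const MatType& target;
  const double lambda;
};

// Usage (inside the network code, network.back() being the layer variant):
//   double loss = boost::apply_visitor(
//       EQLForwardVisitor<arma::mat>(targets, lambda), network.back());
```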
<jonpsy[m]>
back!
<jonpsy[m]>
> <@marcusedel:matrix.org> `res += boost::apply_visitor(lossVisitor, network[i]);` checks if the layer implements `Loss()` and, if it does, adds the loss to the final loss. In your case, since you know you have two loss functions, why not just run them sequentially: `double res = outputLayerA.Forward(boost::apply_visitor(outputParameterVisitor, network.back()), targets);` and then `res += outputLayerB.Forward(boost::apply_visitor(`
<jonpsy[m]>
hm, but we only have one output layer, right?
<jonpsy[m]>
<zoq[m]> "Not sure I follow you already ha" <- sounds like a good idea. I only have EQLBackward
<zoq[m]>
jonpsy[m]: Right, in this case we would somewhat ignore the existing output layer.
<zoq[m]>
jonpsy[m]: We could add the Forward pass as well.
<jonpsy[m]>
I'm confused :)
<zoq[m]>
Like make the FFN class custom to the EQL method.
<jonpsy[m]>
quick question, ```network[i]``` gives the ```ith``` layer?
<zoq[m]>
correct
<zoq[m]>
Like we could copy the FFN class and either remove the output layer template parameter or add a second one.
<jonpsy[m]>
ah
<jonpsy[m]>
okay wait
<zoq[m]>
And either implement the loss function internally the way we need it or combine the two.
<jonpsy[m]>
you said another way would be to use our current API, create our own loss function, and pass parameters to the Loss function using some sorcery?
<zoq[m]>
In this case we still have to add another ForwardEQL function.
<jonpsy[m]>
and after every episode, lambda increases.
<jonpsy[m]>
zoq[m]: so from the final layer, we will branch off into two layers, right? One will calculate ```L_a```, the other ```L_b```?
<zoq[m]>
jonpsy[m]: Don't think that is necessary. If we implement `FFNEQL`, I would remove the output layer template parameter and implement the loss function as part of the `FFNEQL` class, which replaces every call to `outputLayer` with the custom loss function.
<jonpsy[m]>
then we could do the ```outputLayerA``` and ```outputLayerB``` thing you were talking about, right?
<zoq[m]>
zoq[m]: Every call of `Evaluate` or `EvaluateWithGradient` would also update the annealing.
<zoq[m]>
jonpsy[m]: Yes, if you want to outsource the loss function into a separate layer, sure.
<zoq[m]>
zoq[m]: But you could also implement the loss as a part of the class itself, inside the Forward/Backward function.
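Putting those pieces together, a very rough skeleton of the `FFNEQL` idea might look like this; `NetworkType` and `CombinedLossType` are placeholders, and this is only the shape of the class being discussed, not a drop-in replacement for mlpack's `FFN`:
```
// Sketch: FFN-like class with no OutputLayerType template parameter.  The
// combined loss lives inside the class, and every call to Evaluate() /
// EvaluateWithGradient() advances the annealing.
#include <armadillo>
#include <utility>

template<typename NetworkType, typename CombinedLossType>
class FFNEQL
{
 public:
  FFNEQL(NetworkType network, CombinedLossType loss) :
      network(std::move(network)), loss(std::move(loss)) { }

  // Objective used by the optimizer: forward pass through the network, then
  // the combined (annealed) loss instead of a separate output layer.
  double Evaluate(const arma::mat& predictors, const arma::mat& responses)
  {
    arma::mat output;
    network.Forward(predictors, output);

    // Advance lambda once per evaluation, as discussed above.
    loss.LambdaAnneal();

    return loss.Forward(output, responses);
  }

  double EvaluateWithGradient(const arma::mat& predictors,
                              const arma::mat& responses,
                              arma::mat& gradient)
  {
    arma::mat output;
    network.Forward(predictors, output);
    loss.LambdaAnneal();

    // Backpropagate the combined loss gradient through the network.
    arma::mat lossGradient;
    loss.Backward(output, responses, lossGradient);
    network.Backward(predictors, lossGradient, gradient);

    return loss.Forward(output, responses);
  }

 private:
  NetworkType network;
  CombinedLossType loss;
};
```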
<jonpsy[m]>
so ```boost::apply_visitor(outputParameterVisitor, network.back())``` would give me `Q`?