<jonesv>
I am trying to understand better how capabilities are implemented in Cap'n Proto. Here (https://capnproto.org/rpc.html) it says: "the host only assigns it an ID specific to the connection over which it was sent". Say that I have a capability called `File`, does that mean that somehow, the host will give it an ID (or is it the ID as defined in the capnp file?) and remember that e.g. this specific connection has access to it?
<jonesv>
Which would boil down to having a table saying "over that TCP session, I allow the use of `File`". Now I can imagine that this is fine for TCP, but I am not completely clear for other connections, e.g. a named pipe or a UDP connection (if that did exist). Which part of the code would be responsible for checking whether a UDP message coming from some_ip:some_port has access to the capability `File`?
<isd>
Concretely, the library maintains a per-connection table called "exports" which just maps integers to objects -- so if you call a method on a File on the other side of a connection, at the network layer you send a message that looks something like: "call method foo on export id #4"
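A minimal sketch of that idea (hypothetical names, not the actual capnproto internals): each connection keeps its own integer-keyed export table, so the same object gets independent, connection-local IDs on different connections.

```python
# Hypothetical sketch of a per-connection "exports" table: small integer
# IDs map to live objects, and the IDs are local to each connection.
class Connection:
    def __init__(self):
        self.exports = {}   # export id -> object
        self.next_id = 0

    def export(self, obj):
        """Assign a connection-local ID to a capability being sent."""
        eid = self.next_id
        self.next_id += 1
        self.exports[eid] = obj
        return eid

    def call(self, export_id, method, *args):
        """Handle an incoming 'call method on export #N' message."""
        target = self.exports[export_id]
        return getattr(target, method)(*args)
```

Because the table is per-connection, export #4 on one connection and export #4 on another can be completely different objects.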
<isd>
You can't speak capnp rpc over raw UDP; it expects an actual connection.
<isd>
So it assumes something lower level is going to provide that connection abstraction.
<jonesv>
Well I can implement a `MessageStream` interface (similar to your WebSocket one) that sends over UDP :)
<jonesv>
(I did it, and it works as long as messages are not lost, which is not part of this question :D)
<isd>
Yeah, if you do that naively it will just treat everything coming in on that port as the same connection.
<isd>
It really wants to think in terms of connections.
<isd>
You could use some other transport like QUIC or such. But you need more machinery than a raw UDP socket.
<jonesv>
Right. But then for UDP I could `connect` the sockets so that only one remote can talk to my socket. Would that count as a connection?
<jonesv>
(maybe doing the `connect` thing would be similar to that machinery you are talking about? 🤔)
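For what it's worth, the `connect` trick on a plain UDP socket looks like this (standard-library sockets only, nothing capnp-specific): after `connect()` the kernel pins the socket to one peer and silently discards datagrams from any other address.

```python
import socket

# A "connected" UDP socket: connect() fixes the remote peer, so the
# kernel filters out datagrams arriving from anyone else.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.connect(server.getsockname())  # fix the remote peer
client.send(b"hello")                 # send() (no address) now works

data, peer = server.recvfrom(1024)
server.connect(peer)  # the server side can pin itself to that client too
```

This gives you a single-peer socket, but not reliability or ordering, which is the separate problem with raw UDP.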
<isd>
I mean, you could just use the peer IP/port number as a connection ID. But also capnp rpc expects reliable, in-order delivery.
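A sketch of the peer-address-as-connection-ID idea (all names hypothetical): demultiplex datagrams from one socket into per-peer state, with reliability and ordering assumed to be handled by some other layer.

```python
# Hypothetical demultiplexer: each distinct (ip, port) pair gets its own
# logical "connection" state. Reliable, in-order delivery is assumed to
# be provided elsewhere (e.g. by a QUIC-like layer on top).
connections = {}  # (ip, port) -> per-peer state

def dispatch(datagram, peer):
    conn = connections.setdefault(peer, {"received": []})
    conn["received"].append(datagram)
    return conn
```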
<jonesv>
Which should work with unreliable, out-of-order messages. But until now I only used it with a TCP transport
<jonesv>
Currently I'm trying to find a nice way to handle a second stream in a `TwoPartyVatNetwork`, so that I could have a TCP stream _and_ a UDP stream in the same `VatNetwork` (for reliable and realtime messages)
<jonesv>
But that brings the question of the "connection" concept, because suddenly I kind of want two streams with one connection, somehow
<isd>
Yeah, just glancing at it it seems like you could let the transport drop those and it would be fine -- but you still need to ensure it is processed after any messages it actually depends on (e.g. if it's pipelined on some other call).
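The "processed after any messages it actually depends on" constraint could be sketched like this (a toy scheduler, not anything in the capnproto codebase): messages carry explicit dependency IDs and get buffered until those have run.

```python
# Toy dependency-ordered delivery: messages may arrive in any order, but
# a handler only runs once everything it depends on has been processed.
processed = set()
pending = []

def deliver(msg_id, deps, handler):
    pending.append((msg_id, set(deps), handler))
    made_progress = True
    while made_progress:
        made_progress = False
        for entry in list(pending):
            mid, wanted, run = entry
            if wanted <= processed:  # all dependencies already handled
                run()
                processed.add(mid)
                pending.remove(entry)
                made_progress = True
```

So if `write` is pipelined on `getStream`, delivering `write` first just parks it until `getStream` has been processed.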
<jonesv>
Hmm I probably forgot some details (I did that back in March), but... streams cannot be pipelined, can they?
<jonesv>
There I just added this `realtime` keyword to streams, and IIRC, I was assuming that they could not be pipelined 🤔
<isd>
Where write would be your method declared "realtime stream"
<isd>
there's still a dependency on the call message for the invocation of getStream()
<isd>
So the call message for write() can't go ahead of that.
<isd>
I've toyed with the idea of doing something like capnp-rpc that doesn't assume it's running over a TCP like transport, and keeps track of dependencies between messages itself. But that would be major surgery, to the point that it would be a different protocol.
<jonesv>
Hmm I need to think about that. Doesn't `getStream()` return a promise? So there would be a `.then()` before going into the `write()` calls, right?
<isd>
That's one way you could do it -- just wait until the result actually comes back. But that means giving up pipelining, which maybe is ok.
<jonesv>
Oh, I think I see what you mean. I just did not know that pipelining could be written like this. I thought it was really about passing an argument. Yeah then handling pipelining for that call between the reliable stream (e.g. a TCP connection) and the unreliable stream (e.g. a UDP connection) may be tricky 😕
<isd>
I have some more thoughts, but I'm being pulled out the door. I will brain dump later.
<jonesv>
Sure, no worries. And don't lose too much time on that, at this point I don't think my PR will go anywhere upstream, I'm mostly exploring in my fork for myself. But thanks a lot for the insights, that's helpful!
<isd>
Fwiw, you would *also* need to make sure the `write()` message doesn't get processed after the reference to `stream` is dropped, so there's another ordering constraint there.
<isd>
I've had the thought that we could generalize the file-descriptor-passing stuff to support other types of objects that the underlying transport knows about; in particular I want to do this with objects transferable via postMessage() in the browser. I could also see building a transport that uses this to attach real-time streams to capabilities to be passed around, but whose actual use would be slightly out of band.
<jonesv>
I see
<jonesv>
I am still a bit confused about your snippet above, though. I am not super comfortable with the pipelining implementation yet, but I see it as "send the pipelined requests to the remote peer and let it run them" (maybe that's too naive). So if you do `foo.getStream().write()`, you delegate the `.write()` call to the remote, in which case you don't have a say about the transport used by the remote to call `.write()`, right?
<jonesv>
But that's always the case, with or without the realtime stream. If that is the case, then it would make sense to say that you have to `getStream().then(write())` in order to make sure that you are the one calling `write()`, hence going over your transport (in this case UDP). If you call `getStream().write()`, that's still a realtime call, but you just don't know which transport will be used for the pipelined request.
<jonesv>
Probably I'm just missing something, I need to study the codebase some more 😇
FredJones has joined #sandstorm
<FredJones>
Morning team. Is there a way to use MFA/2FA with LDAP integration on our self-hosted Sandstorm Server?
<ocdtrekkie>
I think you'd need to use SAML for that. LDAP doesn't really provide the ability to, does it?
<ocdtrekkie>
Because to use your identity provider's MFA you need to get redirected to your identity provider, which SAML (and OIDC, I think) do, but LDAP doesn't?
<FredJones>
yeah LDAP integration doesn't do it automatically
<FredJones>
from what i found in the docs, it looks like gmail and github logins are the only ones that support MFA?
FredJones has quit [Ping timeout: 252 seconds]
<ocdtrekkie>
It's more that because they redirect out to google.com or github.com, those services' inherent MFA support will work.
<ocdtrekkie>
Similarly, if you use SAML, it redirects out to your login provider as well, but that provider can be self-hosted.
FredJones has joined #sandstorm
<FredJones>
Is there an allowed_ips list, kind of like Squid Proxy has, to restrict access without necessarily doing iptables, ufw, etc.?
<ocdtrekkie>
To Sandstorm from the Internet? No
<FredJones>
ok gotcha. So I think we will just change over to the github / google setup then.
<FredJones>
thanks for the help ocdtrekkie!