CCF uses an HTTP REST interface to call programs inside the enclave, so the typical flow is: read the request, call the enclave function and receive its response (via ring buffer messages), then send the response back to the client.
However, the TCP implementation in CCF is generic and could be adapted to other common communication patterns, although that would likely require changing how its users (RPC, node-to-node) interact with it.
The TCPImpl class (in src/host/tcp.h) implements all TCP logic (using the asynchronous libuv library) and is used by both RPCConnections and NodeConnections.
Because TCPImpl does not have access to the ring buffer, it uses behaviour classes to allow users to register callbacks for actions (e.g. on_read, on_accept).
Most of the callbacks are for logging purposes, but the two important ones are:

- on_accept on servers, which creates a new socket to communicate with the particular connecting client
- on_read, which takes the data that is read and writes it to the ring buffer
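To make the division of responsibilities concrete, here is a minimal sketch of the behaviour-callback idea. The names and signatures are illustrative rather than the exact interface in src/host/tcp.h: TCPImpl owns the socket and the libuv interaction, and the registered behaviour decides what to do with each event, e.g. forwarding read data towards the ring buffer.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative behaviour interface: TCPImpl drives libuv and invokes these
// hooks; users override the ones they care about.
struct TCPBehaviour
{
  virtual ~TCPBehaviour() = default;

  // Server-side: a client connected to the listening socket.
  virtual void on_accept() {}

  // Data arrived on a connected socket.
  virtual void on_read(const uint8_t* data, size_t len) {}

  // Remaining hooks are mostly used for logging.
  virtual void on_connect() {}
  virtual void on_disconnect() {}
};

// Illustrative incoming behaviour: hand the read data over to the enclave.
struct IncomingBehaviour : public TCPBehaviour
{
  void on_read(const uint8_t* data, size_t len) override
  {
    // In CCF this is where the data would be written to the ring buffer for
    // the enclave to process; write_to_ringbuffer is a hypothetical helper.
    write_to_ringbuffer(data, len);
  }

  void write_to_ringbuffer(const uint8_t*, size_t) {}
};
```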
For node-to-node connections, the behaviours are:

- NodeServerBehaviour, on the main listening socket, which, on_accept, creates a new socket to communicate with a particular connecting client
- NodeIncomingBehaviour, on the socket created above, which waits for input and passes it to the enclave
- NodeOutgoingBehaviour, on a socket created by the enclave (via ring buffer messages into the host) to connect to external nodes
For RPC connections, the behaviours are:

- RPCServerBehaviour, the same as the NodeServerBehaviour above
- RPCClientBehaviour, a misnomer, used for both the incoming and outgoing behaviours above
Here’s a diagram with the types of behaviours and their relationships:
TCPImpl has an internal state machine whose states change in reaction to callbacks from libuv.
Since it implements both server (listen, peer, read) and client (connect, write) logic, the state lets common functions know how to continue once an operation completes.
The complete state machine diagram, without failed states, is:
Some failed states transition to retries/reconnects, while others are terminal and close the connection.
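As an illustration of why the state matters, here is a simplified sketch of such a state machine. The state names are made up for the example and do not match the full set in src/host/tcp.h, but they show how a shared completion callback (e.g. after address resolution) uses the current state to pick the server or client path.

```cpp
// Simplified, illustrative connection states (the real enum is larger and
// includes the failure states omitted from the diagram above).
enum class ConnectionState
{
  FRESH,                 // just constructed
  RESOLVING_FOR_LISTEN,  // server: resolving the address to listen on
  LISTENING,             // server: accepting connections
  RESOLVING_FOR_CONNECT, // client: resolving the address to connect to
  CONNECTING,            // client: connect in flight
  CONNECTED,             // peer/client: ready to read and write
  DISCONNECTED           // closed, or waiting for a reconnect
};

struct Connection
{
  ConnectionState state = ConnectionState::FRESH;

  // A common callback, e.g. invoked once address resolution completes: the
  // state tells it whether to continue down the server or the client path.
  void on_resolved()
  {
    switch (state)
    {
      case ConnectionState::RESOLVING_FOR_LISTEN:
        // bind and listen on the resolved address
        state = ConnectionState::LISTENING;
        break;
      case ConnectionState::RESOLVING_FOR_CONNECT:
        // issue the connect to the resolved address
        state = ConnectionState::CONNECTING;
        break;
      default:
        // unexpected here: would move to a failure state
        break;
    }
  }
};
```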
The main cycle of a server is the following:

- create a main socket and listen for connections
- on accepting a new connection, create a new (peer) socket to communicate with that client
- read the request, communicate with the enclave, get the response back
- send the response to the client
- close the socket
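The cycle above maps closely onto plain libuv calls. The sketch below is a standalone illustration of that mapping rather than CCF's actual code: where it fabricates a fixed reply, CCF would instead pass the request to the enclave over the ring buffer and write back the response it receives.

```cpp
#include <uv.h>
#include <cstdlib>

static void on_alloc(uv_handle_t*, size_t suggested, uv_buf_t* buf)
{
  buf->base = static_cast<char*>(malloc(suggested));
  buf->len = suggested;
}

static void on_close(uv_handle_t* handle)
{
  free(handle);
}

static void on_write(uv_write_t* req, int /*status*/)
{
  // Response sent: close the peer socket.
  uv_close(reinterpret_cast<uv_handle_t*>(req->handle), on_close);
  free(req);
}

static void on_read(uv_stream_t* peer, ssize_t nread, const uv_buf_t* buf)
{
  if (nread > 0)
  {
    // Here CCF would forward the request to the enclave and wait for its
    // response; this sketch just writes back a fixed reply.
    static char reply[] = "response";
    uv_buf_t out = uv_buf_init(reply, sizeof(reply) - 1);
    auto* req = static_cast<uv_write_t*>(malloc(sizeof(uv_write_t)));
    uv_write(req, peer, &out, 1, on_write);
  }
  else if (nread < 0)
  {
    uv_close(reinterpret_cast<uv_handle_t*>(peer), on_close);
  }
  free(buf->base);
}

static void on_connection(uv_stream_t* server, int status)
{
  if (status < 0)
    return;

  // A new (peer) socket per connecting client.
  auto* peer = static_cast<uv_tcp_t*>(malloc(sizeof(uv_tcp_t)));
  uv_tcp_init(server->loop, peer);
  if (uv_accept(server, reinterpret_cast<uv_stream_t*>(peer)) == 0)
  {
    uv_read_start(reinterpret_cast<uv_stream_t*>(peer), on_alloc, on_read);
  }
  else
  {
    uv_close(reinterpret_cast<uv_handle_t*>(peer), on_close);
  }
}

int main()
{
  uv_tcp_t server;
  sockaddr_in addr;
  uv_tcp_init(uv_default_loop(), &server);
  uv_ip4_addr("0.0.0.0", 8080, &addr);
  uv_tcp_bind(&server, reinterpret_cast<const sockaddr*>(&addr), 0);
  uv_listen(reinterpret_cast<uv_stream_t*>(&server), 128, on_connection);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```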
There can be several peer sockets open at the same time, communicating with different clients, and it is up to libuv to handle the asynchronous tasks.
Here’s a diagram of the control flow for a server connection:
The control flow of the peer connection is similar to the client's (below), but the order is reversed.
The client first writes the request and then waits for the response, while the peer first waits for the request and then writes the response back.
Clients don’t have a cycle, as they connect to an existing server, send the request, wait for the response and disconnect.
Clients are used from the enclave side (Node-to-node and RPC), via a ring buffer message.
Node-to-node clients are used for pings across nodes, electing a new leader, etc.
RPC clients are used for REST service callbacks from other services (e.g. metrics).
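For comparison with the server cycle, here is a minimal libuv client sketch that follows this flow: connect, write the request, wait for the response, then disconnect. It is again an illustration rather than CCF code; in CCF the request originates from the enclave, and the response is handed back to it, via ring buffer messages.

```cpp
#include <uv.h>
#include <cstdio>
#include <cstdlib>

static void on_alloc(uv_handle_t*, size_t suggested, uv_buf_t* buf)
{
  buf->base = static_cast<char*>(malloc(suggested));
  buf->len = suggested;
}

static void on_read(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf)
{
  if (nread > 0)
  {
    // Response received: in CCF this would be passed back to the enclave.
    fwrite(buf->base, 1, nread, stdout);
  }
  if (nread != 0)
  {
    // Got the response (or the connection dropped): disconnect.
    uv_close(reinterpret_cast<uv_handle_t*>(stream), nullptr);
  }
  free(buf->base);
}

static void on_write(uv_write_t* req, int status)
{
  // Request sent: start waiting for the response.
  if (status == 0)
    uv_read_start(req->handle, on_alloc, on_read);
  free(req);
}

static void on_connect(uv_connect_t* req, int status)
{
  if (status == 0)
  {
    // Connected: write the request.
    static char request[] = "request";
    uv_buf_t out = uv_buf_init(request, sizeof(request) - 1);
    auto* wreq = static_cast<uv_write_t*>(malloc(sizeof(uv_write_t)));
    uv_write(wreq, req->handle, &out, 1, on_write);
  }
  free(req);
}

int main()
{
  uv_tcp_t client;
  sockaddr_in addr;
  uv_tcp_init(uv_default_loop(), &client);
  uv_ip4_addr("127.0.0.1", 8080, &addr);
  auto* creq = static_cast<uv_connect_t*>(malloc(sizeof(uv_connect_t)));
  uv_tcp_connect(
    creq, &client, reinterpret_cast<const sockaddr*>(&addr), on_connect);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```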
Here’s the diagram of the client control flow:
Note that some clients have a client_host parameter, separate from host, that is used for testing and uses the BINDING state.
The client_host is resolved separately and bound to the client handle (via uv_tcp_bind), but the call to uv_tcp_connect is made on the host address.
This lets us bind a separate address to the client side while connecting to the host, so that external packet filters (like iptables) can restrict traffic.
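A small sketch of that sequence, assuming both addresses have already been resolved (the helper name is hypothetical): the client handle is first bound to the client_host address, and the connect is then issued against the host address, so outgoing packets carry the bound source address.

```cpp
#include <uv.h>

// Bind the outgoing socket to a chosen local (client_host) address, then
// connect to the remote (host) address.
int connect_from(
  uv_tcp_t* client,
  const sockaddr* client_host_addr, // local source address (client_host)
  const sockaddr* host_addr,        // remote server address (host)
  uv_connect_t* req,
  uv_connect_cb on_connect)
{
  // Bind first, so the connection uses the client_host source address...
  int rc = uv_tcp_bind(client, client_host_addr, 0);
  if (rc != 0)
    return rc;

  // ...then connect to the host address as usual.
  return uv_tcp_connect(req, client, host_addr, on_connect);
}
```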