Example app (C++)

A C++ application exposes itself to CCF by implementing:

/** To be implemented by the application. Creates a collection of endpoints
 * which will be exposed to callers under /app.
 *
 * @param context Access to node and host services
 *
 * @return Unique pointer to the endpoint registry instance
 */
std::unique_ptr<ccf::endpoints::EndpointRegistry> make_user_endpoints(
  ccf::AbstractNodeContext& context);

The Logging example application simply has:

std::unique_ptr<ccf::endpoints::EndpointRegistry> make_user_endpoints(
  ccf::AbstractNodeContext& context)
{
  return std::make_unique<loggingapp::LoggerHandlers>(context);
}

Note

ccf::kv::Map tables are the only interface between CCF and the replicated application, and the sole mechanism for it to have distributed state.

The Logging application keeps its state in a pair of tables, one containing private encrypted logs and the other containing public unencrypted logs. Their type is defined as:

using RecordsMap = ccf::kv::Map<size_t, std::string>;

These tables are then accessed by type and name. For instance, in the handlers for the public and private tables respectively:

auto records_handle =
  ctx.tx.template rw<RecordsMap>(public_records(ctx));

auto records_handle =
  ctx.tx.template rw<RecordsMap>(private_records(ctx));
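
Once a handle has been obtained, it exposes the usual key-value operations. The following is a minimal sketch only; the ExampleMap type, its table name, and the values are hypothetical and not part of the sample app:

using ExampleMap = ccf::kv::Map<std::string, size_t>;

// Inside an endpoint handler, with access to a ccf::kv::Tx as ctx.tx
auto example_handle = ctx.tx.template rw<ExampleMap>("app.examples");
example_handle->put("counter", 42);                  // write a value
const auto current = example_handle->get("counter"); // std::optional<size_t>
example_handle->remove("counter");                   // delete the entry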

Application Endpoints

The implementation of ccf::make_user_endpoints() should return a subclass of ccf::endpoints::EndpointRegistry, containing the endpoints that constitute the app.

class LoggerHandlers : public ccf::UserEndpointRegistry

The logging app defines loggingapp::LoggerHandlers, which creates and installs handler functions or lambdas for several different HTTP endpoints. Each of these functions takes as input the details of the current request (such as the URI which was called, the query string, and the request body), interacts with the KV tables using the given ccf::kv::Tx object, and returns a result:

auto record = [this](auto& ctx, nlohmann::json&& params) {
  const auto in = params.get<LoggingRecord::In>();

  if (in.msg.empty())
  {
    return ccf::make_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidInput,
      "Cannot record an empty log message.");
  }

  auto records_handle =
    ctx.tx.template rw<RecordsMap>(private_records(ctx));
  records_handle->put(in.id, in.msg);
  return ccf::make_success(true);
};

This example uses the json_adapter wrapper function, which handles parsing of a JSON params object from the HTTP request body.

Each function is installed as the handler for a specific HTTP resource, defined by a verb and URI:

make_endpoint(
  "/log/private", HTTP_POST, ccf::json_adapter(record), auth_policies)
  .set_auto_schema<LoggingRecord::In, bool>()
  .install();

This example installs the handler at "/app/log/private" for the HTTP_POST verb, so it will be invoked for HTTP requests beginning POST /app/log/private.

The return value from make_endpoint is an Endpoint& object which can be used to alter how the handler is executed. For example, the handler for POST /app/log/private shown above sets a schema declaring the types of its request and response bodies. These will be used in calls to the GET /app/api endpoint to populate the relevant parts of the OpenAPI document. That OpenAPI document in turn is used to generate the entries in this documentation describing POST /app/log/private.

There are other endpoints installed for the URI path /app/log/private with different verbs, to handle GET and DELETE requests. Requests with those verbs will be executed by the appropriate handler. Any verb without an installed endpoint will not be accepted: the framework will return a 405 Method Not Allowed response.
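
For illustration, here is a sketch of such an installation for the DELETE verb on the same path, mirroring the POST registration above. The remove handler exists in the sample app but is not shown here, and LoggingRemove::Out is defined in the API Schema section below, so treat this as indicative rather than verbatim:

make_endpoint(
  "/log/private", HTTP_DELETE, ccf::json_adapter(remove), auth_policies)
  .set_auto_schema<void, LoggingRemove::Out>()
  .add_query_parameter<size_t>("id")
  .install();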

To process the raw body directly, a handler should use the general lambda signature which takes a single EndpointContext& parameter. Examples of this are also included in the logging sample app. For instance the log_record_text handler takes a raw string as the request body:

auto log_record_text = [this](auto& ctx) {
  const auto expected = ccf::http::headervalues::contenttype::TEXT;
  const auto actual =
    ctx.rpc_ctx->get_request_header(ccf::http::headers::CONTENT_TYPE)
      .value_or("");
  if (expected != actual)
  {
    ctx.rpc_ctx->set_error(
      HTTP_STATUS_UNSUPPORTED_MEDIA_TYPE,
      ccf::errors::InvalidHeaderValue,
      fmt::format(
        "Expected content-type '{}'. Got '{}'.", expected, actual));
    return;
  }

  const auto& path_params = ctx.rpc_ctx->get_request_path_params();
  const auto id_it = path_params.find("id");
  if (id_it == path_params.end())
  {
    ctx.rpc_ctx->set_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidInput,
      "Missing ID component in request path");
    return;
  }

  const auto id = strtoul(id_it->second.c_str(), nullptr, 10);

  const std::vector<uint8_t>& content = ctx.rpc_ctx->get_request_body();
  const std::string log_line(content.begin(), content.end());

  auto records_handle =
    ctx.tx.template rw<RecordsMap>(private_records(ctx));
  records_handle->put(id, log_line);

  ctx.rpc_ctx->set_response_status(HTTP_STATUS_OK);
};
make_endpoint(
  "/log/private/raw_text/{id}", HTTP_POST, log_record_text, auth_policies)
  .install();

Rather than parsing the request body as JSON and extracting the message from it, in this case the entire body is the message to be logged, and the ID to associate it with is passed as a path parameter in the URI. This requires some additional code in the handler, but provides complete control of the request and response formats.

This general signature also allows a handler to see additional caller context. An example of this is the log_record_prefix_cert handler:

auto log_record_prefix_cert = [this](auto& ctx) {
  const auto& caller_ident =
    ctx.template get_caller<ccf::UserCertAuthnIdentity>();

  const nlohmann::json body_j =
    nlohmann::json::parse(ctx.rpc_ctx->get_request_body());

  const auto in = body_j.get<LoggingRecord::In>();
  if (in.msg.empty())
  {
    ctx.rpc_ctx->set_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidInput,
      "Cannot record an empty log message");
    return;
  }

  const auto log_line =
    fmt::format("{}: {}", caller_ident.user_id.value(), in.msg);
  auto records_handle =
    ctx.tx.template rw<RecordsMap>(private_records(ctx));
  records_handle->put(in.id, log_line);

  ctx.rpc_ctx->set_response_status(HTTP_STATUS_OK);
  ctx.rpc_ctx->set_response_header(
    ccf::http::headers::CONTENT_TYPE,
    ccf::http::headervalues::contenttype::JSON);
  ctx.rpc_ctx->set_response_body(nlohmann::json(true).dump());
};
make_endpoint(
  "/log/private/prefix_cert",
  HTTP_POST,
  log_record_prefix_cert,
  {ccf::user_cert_auth_policy})
  .set_auto_schema<LoggingRecord::In, bool>()
  .install();

This uses the identity derived from the caller's TLS certificate, and prefixes the logged message with the caller's user ID.

If a handler makes no writes to the KV, it may be installed as read-only:

make_read_only_endpoint(
  "/log/private",
  HTTP_GET,
  ccf::json_read_only_adapter(get),
  auth_policies)
  .set_auto_schema<void, LoggingGet::Out>()
  .add_query_parameter<size_t>("id")
  .install();

This offers some additional type safety (accidental puts or removes will be caught at compile-time) and also enables performance scaling since read-only operations can be executed on any receiving node, whereas writes must always be executed on the primary node.
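
For reference, a read-only handler compatible with ccf::json_read_only_adapter could look like the following sketch. It is a simplified version rather than the sample's exact code, reusing only calls that appear elsewhere in this document (LoggingGet::Out is defined in the next section):

auto get = [this](auto& ctx, nlohmann::json&&) {
  // Parse id from the query string
  const auto parsed_query =
    ccf::http::parse_query(ctx.rpc_ctx->get_request_query());

  std::string error_reason;
  size_t id;
  if (!ccf::http::get_query_value(parsed_query, "id", id, error_reason))
  {
    return ccf::make_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidQueryParameterValue,
      std::move(error_reason));
  }

  // Read-only access to the private records table
  auto records_handle =
    ctx.tx.template ro<RecordsMap>(private_records(ctx));
  const auto record = records_handle->get(id);

  if (record.has_value())
  {
    return ccf::make_success(LoggingGet::Out{record.value()});
  }

  return ccf::make_error(
    HTTP_STATUS_NOT_FOUND,
    ccf::errors::ResourceNotFound,
    fmt::format("No such record: {}.", id));
};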

API Schema

Instead of taking and returning nlohmann::json objects directly, the endpoint handlers use a macro-generated schema and parser, converting compliant requests into a plain old data (POD) C++ object:

struct LoggingRecord
{
  struct In
  {
    size_t id;
    std::string msg;
    bool record_claim = false;
  };
};

struct LoggingGet
{
  struct Out
  {
    std::string msg;
  };
};

struct LoggingRemove
{
  using Out = bool;
};

struct LoggingPut
{
  struct Out
  {
    bool success;
    std::string tx_id;
  };
};

struct LoggingGetReceipt
{
  struct In
  {
    size_t id;
  };

  struct Out
  {
    std::string msg;
    nlohmann::json receipt;
  };
};

DECLARE_JSON_TYPE_WITH_OPTIONAL_FIELDS(LoggingRecord::In);
DECLARE_JSON_REQUIRED_FIELDS(LoggingRecord::In, id, msg);
DECLARE_JSON_OPTIONAL_FIELDS(LoggingRecord::In, record_claim);

DECLARE_JSON_TYPE(LoggingGet::Out);
DECLARE_JSON_REQUIRED_FIELDS(LoggingGet::Out, msg);

DECLARE_JSON_TYPE(LoggingPut::Out);
DECLARE_JSON_REQUIRED_FIELDS(LoggingPut::Out, success, tx_id);

DECLARE_JSON_TYPE(LoggingGetReceipt::In);
DECLARE_JSON_REQUIRED_FIELDS(LoggingGetReceipt::In, id);
DECLARE_JSON_TYPE(LoggingGetReceipt::Out);
DECLARE_JSON_REQUIRED_FIELDS(LoggingGetReceipt::Out, msg, receipt);

A handler wrapped by the JSON adapter can then convert the parsed params directly into the declared type:

const auto in = params.get<LoggingRecord::In>();

This produces validation error messages with a low performance overhead, and ensures the schema and parsing logic stay in sync, but it is only suitable for simple schemas: an object with some required and some optional fields, each of a supported type.
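
As a small usage illustration of the macros above (the literal values are arbitrary), an absent optional field keeps its in-class default after parsing:

const auto parsed =
  nlohmann::json::parse(R"({"id": 42, "msg": "hello"})")
    .get<LoggingRecord::In>();
// parsed.id == 42, parsed.msg == "hello"
// parsed.record_claim keeps its default of false, since the field was absent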

Authentication

Each endpoint must provide a list of associated authentication policies in the call to make_endpoint. Inside the handler, the caller identity constructed by the accepting policy can be retrieved with get_caller or try_get_caller; the latter should be used when multiple policies are present, to detect which policy accepted the request. When multiple policies are listed, they are tested in order, meaning any of policy A or policy B or … is acceptable. To instead form a conjunction of policies, such that all policies must pass, combine them with the built-in ccf::AllOfAuthnPolicy.

For example, the /log/private endpoint above lists a single policy stating that requests must come from a known user cert, over mutually authenticated TLS. This is one of several built-in policies provided by CCF. The built-in policies check that the caller's TLS cert is a known user or member identity, that the request is HTTP-signed by a known user or member identity, or that the request contains a JWT signed by a known issuer. There is also an empty policy which accepts all requests; used as the final entry in the list, it declares that the endpoint is optionally authenticated (either an earlier-listed policy passes and provides a real caller identity, or the empty policy passes and the endpoint is invoked with no caller identity). To declare that an endpoint has no authentication requirements and should be accessible by any caller, use the special value no_auth_required.
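
The following sketch illustrates the optional-authentication pattern with try_get_caller. The /whoami endpoint and its handler are hypothetical and not part of the sample app; the policies and identity type are the built-ins named above:

auto whoami = [](auto& ctx) {
  const auto* user_ident =
    ctx.template try_get_caller<ccf::UserCertAuthnIdentity>();
  if (user_ident != nullptr)
  {
    // The user cert policy accepted the request
    ctx.rpc_ctx->set_response_body(
      fmt::format("Authenticated as user {}", user_ident->user_id.value()));
  }
  else
  {
    // Only the empty policy accepted: no caller identity is available
    ctx.rpc_ctx->set_response_body("Authenticated anonymously");
  }
  ctx.rpc_ctx->set_response_status(HTTP_STATUS_OK);
};

make_endpoint(
  "/whoami",
  HTTP_GET,
  whoami,
  {ccf::user_cert_auth_policy, ccf::empty_auth_policy})
  .install();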

Applications can extend this system by writing their own authentication policies. There is an example of this in the C++ logging app. First it defines a type describing the identity details it aims to find in an acceptable request:

struct CustomIdentity : public ccf::AuthnIdentity
{
  std::string name;
  size_t age;
};

Next it defines the policy itself. The core functionality is the implementation of the authenticate() method, which looks at each request and returns either a valid new identity if it accepts the request, or nullptr if it does not. In this demo case it is looking for a pair of headers and doing some validation of their values:

class CustomAuthPolicy : public ccf::AuthnPolicy
{
public:
  std::unique_ptr<ccf::AuthnIdentity> authenticate(
    ccf::kv::ReadOnlyTx&,
    const std::shared_ptr<ccf::RpcContext>& ctx,
    std::string& error_reason) override
  {
    const auto& headers = ctx->get_request_headers();

    {
      // If a specific header is present, throw an exception to simulate a
      // dangerously implemented auth policy
      constexpr auto explode_header_key = "x-custom-auth-explode";
      const auto explode_header_it = headers.find(explode_header_key);
      if (explode_header_it != headers.end())
      {
        throw std::logic_error(explode_header_it->second);
      }
    }

    constexpr auto name_header_key = "x-custom-auth-name";
    const auto name_header_it = headers.find(name_header_key);
    if (name_header_it == headers.end())
    {
      error_reason =
        fmt::format("Missing required header {}", name_header_key);
      return nullptr;
    }

    const auto& name = name_header_it->second;
    if (name.empty())
    {
      error_reason = "Name must not be empty";
      return nullptr;
    }

    constexpr auto age_header_key = "x-custom-auth-age";
    const auto age_header_it = headers.find(age_header_key);
    if (age_header_it == headers.end())
    {
      error_reason =
        fmt::format("Missing required header {}", age_header_key);
      return nullptr;
    }

    const auto& age_s = age_header_it->second;
    size_t age;
    const auto [p, ec] =
      std::from_chars(age_s.data(), age_s.data() + age_s.size(), age);
    if (ec != std::errc())
    {
      error_reason =
        fmt::format("Unable to parse age header as a number: {}", age_s);
      return nullptr;
    }

    constexpr auto min_age = 16;
    if (age < min_age)
    {
      error_reason = fmt::format("Caller age must be at least {}", min_age);
      return nullptr;
    }

    auto ident = std::make_unique<CustomIdentity>();
    ident->name = name;
    ident->age = age;
    return ident;
  }

  std::optional<ccf::OpenAPISecuritySchema> get_openapi_security_schema()
    const override
  {
    // There is no OpenAPI-compliant way to describe this auth scheme, so we
    // return nullopt
    return std::nullopt;
  }

  std::string get_security_scheme_name() override
  {
    return "CustomAuthPolicy";
  }
};

Note that authenticate() is also passed a ReadOnlyTx object, so more complex authentication decisions can depend on the current state of the KV. For instance, the built-in TLS cert auth policies look up the currently known user and member certs stored in the KV, which change over the life of the service.
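
As a sketch of that idea, a custom policy could consult an application table from inside authenticate(). The helper function, the table name, and the use of ccf::kv::Set below are assumptions for illustration, not part of the sample app:

// Hypothetical lookup, callable from authenticate() with the ReadOnlyTx it receives
using AllowedNames = ccf::kv::Set<std::string>;

static bool is_name_allowed(ccf::kv::ReadOnlyTx& tx, const std::string& name)
{
  auto allowed_handle = tx.ro<AllowedNames>("public:app.allowed_names");
  return allowed_handle->contains(name);
}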

The final piece is the definition of the endpoint itself, which uses an instance of this new policy when it is constructed and then retrieves the custom identity inside the handler:

auto custom_auth = [](auto& ctx) {
  const auto& caller_identity = ctx.template get_caller<CustomIdentity>();
  nlohmann::json response;
  response["name"] = caller_identity.name;
  response["age"] = caller_identity.age;
  response["description"] = fmt::format(
    "Your name is {} and you are {}",
    caller_identity.name,
    caller_identity.age);
  ctx.rpc_ctx->set_response_status(HTTP_STATUS_OK);
  ctx.rpc_ctx->set_response_body(response.dump(2));
};
auto custom_policy = std::make_shared<CustomAuthPolicy>();
make_endpoint("/custom_auth", HTTP_GET, custom_auth, {custom_policy})
  .set_auto_schema<void, nlohmann::json>()
  // To test that custom auth works on both the receiving node and a
  // forwardee, we always forward it
  .set_forwarding_required(ccf::endpoints::ForwardingRequired::Always)
  .install();

Default Endpoints

The logging app sample exposes several built-in endpoints which are provided by the framework for convenience, such as GET /app/tx, GET /app/commit, and GET /app/receipt. It is also possible to write an app which does not expose these endpoints, either to build a minimal user-facing API or to re-wrap this common functionality in your own format or authentication. A sample of this is provided in samples/apps/nobuiltins. Whereas the logging app declares a registry inheriting from ccf::CommonEndpointRegistry, this app inherits from ccf::BaseEndpointRegistry which does not install any default endpoints:

class NoBuiltinsRegistry : public ccf::BaseEndpointRegistry

This app can then define its own endpoints from a blank slate. If it wants to provide similar functionality to the default endpoints, it does so using the APIs provided by ccf::BaseEndpointRegistry. For instance to retrieve the hardware quote of the executing node:

ccf::QuoteInfo quote_info;
result = get_quote_for_this_node_v1(ctx.tx, quote_info);
if (result != ccf::ApiResult::OK)
{
  ctx.rpc_ctx->set_error(
    HTTP_STATUS_INTERNAL_SERVER_ERROR,
    ccf::errors::InternalError,
    fmt::format(
      "Failed to get quote: {}", ccf::api_result_to_str(result)));
  return;
}

// `summary` is an app-defined response struct populated by this handler
summary.quote_format = quote_info.format;
summary.quote = quote_info.quote;
summary.endorsements = quote_info.endorsements;
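
Another such API, shown here as a sketch, reports the latest committed transaction ID, similar to what the built-in GET /app/commit endpoint returns (the surrounding handler and its response formatting are assumed for illustration):

ccf::View view;
ccf::SeqNo seqno;
const auto result = get_last_committed_txid_v1(view, seqno);
if (result != ccf::ApiResult::OK)
{
  ctx.rpc_ctx->set_error(
    HTTP_STATUS_INTERNAL_SERVER_ERROR,
    ccf::errors::InternalError,
    fmt::format(
      "Failed to get committed transaction ID: {}",
      ccf::api_result_to_str(result)));
  return;
}

ctx.rpc_ctx->set_response_body(fmt::format("{}.{}", view, seqno));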

Historical Queries

This sample demonstrates how to define a historical query endpoint with the help of the historical adapter (ccf::historical::read_only_adapter_v4() in the code below). Most endpoints operate over the current state of the KV, but these historical queries operate over old state, specifically over the writes made by a previous transaction. The adapter handles extracting the target Transaction ID from the user's request, and interacting with the Historical Queries API to asynchronously fetch this entry from the ledger. The deserialised and verified transaction is then presented to the handler code below, which performs reads and constructs a response like any other handler.

The handler passed to the adapter is very similar to a read-only endpoint definition, but receives a read-only ccf::historical::StatePtr in addition to the endpoint context, and performs its reads against that historical state rather than the current transaction.

auto get_historical = [this](
                        ccf::endpoints::ReadOnlyEndpointContext& ctx,
                        ccf::historical::StatePtr historical_state) {
  // Parse id from query
  const auto parsed_query =
    ccf::http::parse_query(ctx.rpc_ctx->get_request_query());

  std::string error_reason;
  size_t id;
  if (!ccf::http::get_query_value(parsed_query, "id", id, error_reason))
  {
    ctx.rpc_ctx->set_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidQueryParameterValue,
      std::move(error_reason));
    return;
  }

  auto historical_tx = historical_state->store->create_read_only_tx();
  auto records_handle =
    historical_tx.template ro<RecordsMap>(private_records(ctx));
  const auto v = records_handle->get(id);

  if (v.has_value())
  {
    LoggingGetHistorical::Out out;
    out.msg = v.value();
    nlohmann::json j = out;
    ccf::jsonhandler::set_response(std::move(j), ctx.rpc_ctx);
  }
  else
  {
    ctx.rpc_ctx->set_response_status(HTTP_STATUS_NO_CONTENT);
  }
};

auto is_tx_committed =
  [this](ccf::View view, ccf::SeqNo seqno, std::string& error_reason) {
    return ccf::historical::is_tx_committed_v2(
      consensus, view, seqno, error_reason);
  };
make_read_only_endpoint(
  "/log/private/historical",
  HTTP_GET,
  ccf::historical::read_only_adapter_v4(
    get_historical, context, is_tx_committed),
  auth_policies)
  .set_auto_schema<void, LoggingGetHistorical::Out>()
  .add_query_parameter<size_t>("id")
  .set_forwarding_required(ccf::endpoints::ForwardingRequired::Never)
  .install();

Indexing

The historical endpoint described above must process each target transaction on a specific node, asynchronously, before the result can be served. For some use cases, in particular where the response is repeated often rather than dynamically constructed, this may be extremely inefficient. Instead, we would prefer to pre-process all committed transactions and construct an efficient index of their contents, geared towards responding to a known pattern of user queries.

For instance, if we want to list every value written to a specific key but know that writes are relatively rare, we could build an index of such writes. When this historical query comes in, rather than fetching every transaction - to extract useful writes from a small fraction - the historical query endpoint can first ask the index which transactions should be processed and fetch only those. If the response format is known, the index could even pre-construct the response itself.

In CCF, this is achieved by implementing a ccf::indexing::Strategy. The strategy is constructed on each node, in-enclave, and processes every committed transaction, in order, in its implementation of ccf::indexing::Strategy::handle_committed_transaction(). It can then return its aggregated results to the calling endpoint in whatever format is appropriate. A ccf::indexing::Strategy may offload partial results to disk to avoid unbounded memory growth, via the automatically encrypted LFS (Large File Storage) system. Since the indexing system and all the strategies it manages exist entirely within the enclave, this has the same trust guarantees as any other in-enclave code: users can trust that the results are accurate and complete, and the query may process private data.

An example ccf::indexing::Strategy is included in the logging app, to accelerate historical range queries. This strategy stores the list of seqnos at which each key is written, offloading completed ranges to disk to cap the total memory usage. In the endpoint handler, rather than requesting every transaction in the requested range, the node relies on its index to fetch only the interesting transactions: those which write to the target key:

const auto interesting_seqnos =
  index_per_public_key->get_write_txs_in_range(
    id, range_begin, range_end);

See the sample app for full details of how this strategy is installed and used.

Receipts

Historical state always contains a receipt. Users wishing to implement a receipt endpoint may return it directly, or include it along with other historical state in the response.

auto get_historical_with_receipt =
  [this](
    ccf::endpoints::ReadOnlyEndpointContext& ctx,
    ccf::historical::StatePtr historical_state) {
    // Parse id from query
    const auto parsed_query =
      ccf::http::parse_query(ctx.rpc_ctx->get_request_query());

    std::string error_reason;
    size_t id;
    if (!ccf::http::get_query_value(parsed_query, "id", id, error_reason))
    {
      ctx.rpc_ctx->set_error(
        HTTP_STATUS_BAD_REQUEST,
        ccf::errors::InvalidQueryParameterValue,
        std::move(error_reason));
      return;
    }

    auto historical_tx = historical_state->store->create_read_only_tx();
    auto records_handle =
      historical_tx.template ro<RecordsMap>(private_records(ctx));
    const auto v = records_handle->get(id);

    if (v.has_value())
    {
      LoggingGetReceipt::Out out;
      out.msg = v.value();
      assert(historical_state->receipt);
      out.receipt = ccf::describe_receipt_v1(*historical_state->receipt);
      ccf::jsonhandler::set_response(std::move(out), ctx.rpc_ctx);
    }
    else
    {
      ctx.rpc_ctx->set_response_status(HTTP_STATUS_NO_CONTENT);
    }
  };
make_read_only_endpoint(
  "/log/private/historical_receipt",
  HTTP_GET,
  ccf::historical::read_only_adapter_v4(
    get_historical_with_receipt, context, is_tx_committed),
  auth_policies)
  .set_auto_schema<void, LoggingGetReceipt::Out>()
  .add_query_parameter<size_t>("id")
  .set_forwarding_required(ccf::endpoints::ForwardingRequired::Never)
  .install();

User-Defined Claims in Receipts

A user wanting to tie transaction-specific values to a receipt can do so by attaching a claims digest to their transaction. This is conceptually equivalent to getting a signature from the service for claims made by the application logic.

if (in.record_claim)
{
  ctx.rpc_ctx->set_claims_digest(ccf::ClaimsDigest::Digest(in.msg));
}

CCF will record this transaction as a leaf in the Merkle tree constructed from the combined digest of the write set, this claims_digest, and the Commit Evidence.

This claims_digest will be exposed in receipts under leaf_components. The claims themselves can then be revealed externally, or by the endpoint directly if they have been stored in the ledger. The receipt object deliberately makes the claims_digest optional, to allow the endpoint to remove it when the claims themselves are revealed.

Receipt verification can then only succeed if the revealed claims are digested and their digest combined into a leaf that correctly combines with the proof to form the root that the signature covers. Receipt verification therefore establishes the authenticity of the claims.

// Claims are expanded as out.msg, so the claims digest is removed
// from the receipt to force verification to re-compute it.
auto full_receipt =
  ccf::describe_receipt_v1(*historical_state->receipt);
out.receipt = full_receipt;
out.receipt["leaf_components"].erase("claims_digest");

A client consuming the output of this endpoint must digest the claims themselves and combine the digest with the other leaf components (write_set_digest and hash(commit_evidence)) to obtain the equivalent leaf. See Receipt Verification for the full set of steps.

As an example, a logging application may register the contents being logged as a claim:

auto record_public = [this](auto& ctx, nlohmann::json&& params) {
  const auto in = params.get<LoggingRecord::In>();

  if (in.msg.empty())
  {
    return ccf::make_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidInput,
      "Cannot record an empty log message.");
  }

  auto records_handle =
    ctx.tx.template rw<RecordsMap>(public_records(ctx));
  const auto id = params["id"].get<size_t>();

  MatchHeaders match_headers(ctx.rpc_ctx);
  if (match_headers.conflict())
  {
    return ccf::make_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidHeaderValue,
      "Cannot have both If-Match and If-None-Match headers.");
  }

  // The presence of a Match header requires a read dependency
  // to check the value matches the constraint
  if (!match_headers.empty())
  {
    auto current_value = records_handle->get(id);
    if (current_value.has_value())
    {
      ccf::crypto::Sha256Hash value_digest(current_value.value());
      auto etag = value_digest.hex_str();

      // On a POST operation, If-Match failing or If-None-Match passing
      // both cause a 412 Precondition Failed to be returned, and have no
      // side-effect.
      if (match_headers.if_match.has_value())
      {
        ccf::http::Matcher matcher(match_headers.if_match.value());
        if (!matcher.matches(etag))
        {
          return ccf::make_error(
            HTTP_STATUS_PRECONDITION_FAILED,
            ccf::errors::PreconditionFailed,
            "Resource has changed.");
        }
      }

      if (match_headers.if_none_match.has_value())
      {
        ccf::http::Matcher matcher(match_headers.if_none_match.value());
        if (matcher.matches(etag))
        {
          return ccf::make_error(
            HTTP_STATUS_PRECONDITION_FAILED,
            ccf::errors::PreconditionFailed,
            "Resource has changed.");
        }
      }
    }
  }

  records_handle->put(id, in.msg);
  if (in.record_claim)
  {
    ctx.rpc_ctx->set_claims_digest(ccf::ClaimsDigest::Digest(in.msg));
  }
  CCF_APP_INFO("Storing {} = {}", id, in.msg);

  ccf::crypto::Sha256Hash value_digest(in.msg);
  // Successful calls set an ETag
  ctx.rpc_ctx->set_response_header("ETag", value_digest.hex_str());

  return ccf::make_success(true);
};

And expose an endpoint returning receipts, with that claim expanded:

auto get_public_historical_with_receipt =
  [this](
    ccf::endpoints::ReadOnlyEndpointContext& ctx,
    ccf::historical::StatePtr historical_state) {
    // Parse id from query
    const auto parsed_query =
      ccf::http::parse_query(ctx.rpc_ctx->get_request_query());

    std::string error_reason;
    size_t id;
    if (!ccf::http::get_query_value(parsed_query, "id", id, error_reason))
    {
      ctx.rpc_ctx->set_error(
        HTTP_STATUS_BAD_REQUEST,
        ccf::errors::InvalidQueryParameterValue,
        std::move(error_reason));
      return;
    }

    auto historical_tx = historical_state->store->create_read_only_tx();
    auto records_handle =
      historical_tx.template ro<RecordsMap>(public_records(ctx));
    const auto v = records_handle->get(id);

    if (v.has_value())
    {
      LoggingGetReceipt::Out out;
      out.msg = v.value();
      assert(historical_state->receipt);
      // Claims are expanded as out.msg, so the claims digest is removed
      // from the receipt to force verification to re-compute it.
      auto full_receipt =
        ccf::describe_receipt_v1(*historical_state->receipt);
      out.receipt = full_receipt;
      out.receipt["leaf_components"].erase("claims_digest");
      ccf::jsonhandler::set_response(std::move(out), ctx.rpc_ctx);
    }
    else
    {
      ctx.rpc_ctx->set_response_status(HTTP_STATUS_NO_CONTENT);
    }
  };
make_read_only_endpoint(
  "/log/public/historical_receipt",
  HTTP_GET,
  ccf::historical::read_only_adapter_v4(
    get_public_historical_with_receipt, context, is_tx_committed),
  auth_policies)
  .set_auto_schema<void, LoggingGetReceipt::Out>()
  .add_query_parameter<size_t>("id")
  .set_forwarding_required(ccf::endpoints::ForwardingRequired::Never)
  .install();

Receipts from this endpoint will then look like:

{'msg': 'Public message at idx 5 [0]',
 'receipt': {'cert': '-----BEGIN CERTIFICATE-----\n'
                     'MIIBzzCCAVWgAwIBAgIRANKoegKBViucMxSPzftnDB4wCgYIKoZIzj0EAwMwFjEU\n'
                     'MBIGA1UEAwwLQ0NGIE5ldHdvcmswHhcNMjIwMzE1MjExODIwWhcNMjIwMzE2MjEx\n'
                     'ODE5WjATMREwDwYDVQQDDAhDQ0YgTm9kZTB2MBAGByqGSM49AgEGBSuBBAAiA2IA\n'
                     'BG+RJ5qNPOga8shCF3w64yija/ShW46JxrE0n9kDybyRf+L3810GjCvjxSpzTQhX\n'
                     '5WEF2dou1dG2ppI/KSNQsSfk081lbaB50NADWw+jDCtrq/fKuZ+w9wQSaoSvE5+0\n'
                     '1qNqMGgwCQYDVR0TBAIwADAdBgNVHQ4EFgQU7tFQR91U1EDhup1XPS3u0w5+R2Yw\n'
                     'HwYDVR0jBBgwFoAU3aI0vfJMBdWckvv9dKK2UzNCLU0wGwYDVR0RBBQwEocEfwAA\n'
                     'AYcEfxoNCocEfwAAAjAKBggqhkjOPQQDAwNoADBlAjAiOmvGpatg4Uq8phQkwj/p\n'
                     'Wj33fih6SUtRHOpdsIKvbV8TDNHRdSo1RKPArDd1w1wCMQDnw9zziS5G8qwvucP3\n'
                     'gn3htz+2ZPBJRr98AqmRNmgflhgqLQp+jAVPrJaWtD3fDpw=\n'
                     '-----END CERTIFICATE-----\n',
             'leaf_components': {'commit_evidence': 'ce:2.25:54571ec6d0540b364d8343b74dff055932981fd72a24c1399c39ca9c74d2f713',
                                 'write_set_digest': '08b044fc5b0e9cd03c68d77c949bb815e3d70bd24ad339519df48758430ac0f7'},
             'node_id': '95baf92969b4c9e52b4f8fcde830dea9fa0286a8c3a92cda4cffcf8251c06b39',
             'proof': [{'left': '50a1a35a50bd2c5a4725907e77f3b1f96f1f9f37482aa18f8e7292e0542d9d23'},
                       {'left': 'e2184154ac72b304639b923b3c7a0bc04cecbd305de4f103a174a90210cae0dc'},
                       {'left': 'abc9bcbeff670930c34ebdab0f2d57b56e9d393e4dccdccf2db59b5e34507422'}],
             'signature': 'MGUCMHYBgZ3gySdkJ+STUL13EURVBd8354ULC11l/kjx20IwpXrg/aDYLWYf7tsGwqUxPwIxAMH2wJDd9wpwbQrULpaAx5XEifpUfOriKtYo7XiFr05J+BV10U39xa9GBS49OK47QA=='}}

Note that the claims_digest is deliberately omitted from leaf_components, and must be re-computed by digesting the msg.
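
A verifier with access to the CCF headers could recompute it using the same construction the endpoint used when recording the claim, shown here as a sketch:

// msg is the expanded claim string returned in the response body above
const auto recomputed_claims_digest = ccf::ClaimsDigest::Digest(msg);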

Client-side Concurrency Control

Clients of a CCF application submit transactions concurrently. Single transactions are always handled atomically by the service, transparently for clients and for application writers. But some clients may wish to submit sequences of multiple, dependent transactions. For example:

  1. Client reads value at key K

  2. Performs some client-side processing

  3. Writes a new value at key K, but only if the server-side value has not changed

Implementing If-Match and If-None-Match HTTP headers in endpoint logic is a common pattern for this use case. The following endpoints of the C++ logging app demonstrate this:

POST /app/log/public

MatchHeaders match_headers(ctx.rpc_ctx);
if (match_headers.conflict())
{
  return ccf::make_error(
    HTTP_STATUS_BAD_REQUEST,
    ccf::errors::InvalidHeaderValue,
    "Cannot have both If-Match and If-None-Match headers.");
}

// The presence of a Match header requires a read dependency
// to check the value matches the constraint
if (!match_headers.empty())
{
  auto current_value = records_handle->get(id);
  if (current_value.has_value())
  {
    ccf::crypto::Sha256Hash value_digest(current_value.value());
    auto etag = value_digest.hex_str();

    // On a POST operation, If-Match failing or If-None-Match passing
    // both cause a 412 Precondition Failed to be returned, and have no
    // side-effect.
    if (match_headers.if_match.has_value())
    {
      ccf::http::Matcher matcher(match_headers.if_match.value());
      if (!matcher.matches(etag))
      {
        return ccf::make_error(
          HTTP_STATUS_PRECONDITION_FAILED,
          ccf::errors::PreconditionFailed,
          "Resource has changed.");
      }
    }

    if (match_headers.if_none_match.has_value())
    {
      ccf::http::Matcher matcher(match_headers.if_none_match.value());
      if (matcher.matches(etag))
      {
        return ccf::make_error(
          HTTP_STATUS_PRECONDITION_FAILED,
          ccf::errors::PreconditionFailed,
          "Resource has changed.");
      }
    }
  }
}

And before returning 200 OK:

ccf::crypto::Sha256Hash value_digest(in.msg);
// Successful calls set an ETag
ctx.rpc_ctx->set_response_header("ETag", value_digest.hex_str());

GET /app/log/public/{idx}

// If there is no value, the response is always Not Found,
// regardless of Match headers
if (record.has_value())
{
  MatchHeaders match_headers(ctx.rpc_ctx);
  if (match_headers.conflict())
  {
    return ccf::make_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidHeaderValue,
      "Cannot have both If-Match and If-None-Match headers.");
  }

  // If a record is present, compute an Entity Tag, and apply
  // If-Match and If-None-Match.
  ccf::crypto::Sha256Hash value_digest(record.value());
  const auto etag = value_digest.hex_str();

  if (match_headers.if_match.has_value())
  {
    ccf::http::Matcher matcher(match_headers.if_match.value());
    if (!matcher.matches(etag))
    {
      return ccf::make_error(
        HTTP_STATUS_PRECONDITION_FAILED,
        ccf::errors::PreconditionFailed,
        "Resource has changed.");
    }
  }

  // On a GET, If-None-Match passing returns 304 Not Modified
  if (match_headers.if_none_match.has_value())
  {
    ccf::http::Matcher matcher(match_headers.if_none_match.value());
    if (matcher.matches(etag))
    {
      return ccf::make_redirect(HTTP_STATUS_NOT_MODIFIED);
    }
  }

  // Successful calls set an ETag
  ctx.rpc_ctx->set_response_header("ETag", etag);
  CCF_APP_INFO("Fetching {} = {}", id, record.value());
  return ccf::make_success(LoggingGet::Out{record.value()});
}

DELETE /app/log/public/{idx}

// If there is no value, we don't need to look at the Match
// headers to report that the value is deleted (200 OK)
if (current_value.has_value())
{
  MatchHeaders match_headers(ctx.rpc_ctx);
  if (match_headers.conflict())
  {
    return ccf::make_error(
      HTTP_STATUS_BAD_REQUEST,
      ccf::errors::InvalidHeaderValue,
      "Cannot have both If-Match and If-None-Match headers.");
  }

  if (!match_headers.empty())
  {
    // If a Match header is present, we need to compute the ETag
    // to resolve the constraints
    ccf::crypto::Sha256Hash value_digest(current_value.value());
    const auto etag = value_digest.hex_str();

    if (match_headers.if_match.has_value())
    {
      ccf::http::Matcher matcher(match_headers.if_match.value());
      if (!matcher.matches(etag))
      {
        return ccf::make_error(
          HTTP_STATUS_PRECONDITION_FAILED,
          ccf::errors::PreconditionFailed,
          "Resource has changed.");
      }
    }

    if (match_headers.if_none_match.has_value())
    {
      ccf::http::Matcher matcher(match_headers.if_none_match.value());
      if (matcher.matches(etag))
      {
        return ccf::make_redirect(HTTP_STATUS_NOT_MODIFIED);
      }
    }
  }
}

The framework provides a ccf::http::Matcher class, which can be used to evaluate these conditions.
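
As a standalone sketch based only on the calls shown above, the matcher is constructed from a raw header value and then queried against a computed ETag (the variables here are placeholders):

// if_match_value would come from the request's If-Match header,
// etag from digesting the current value, as in the handlers above
ccf::http::Matcher matcher(if_match_value);
if (!matcher.matches(etag))
{
  // The precondition failed, so the request should be rejected with
  // 412 Precondition Failed
}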