File any issues / feature requests / questions etc. you may have in the Issues for this repo.
This project provides connectors for Kafka Connect to read data from and write data to Azure Cosmos DB (SQL API).
The connectors in this repository are specifically for the Cosmos DB SQL API. If you are using Cosmos DB with other APIs then there is likely a specific connector for that API, but it’s not this one.
The sink and source connectors can be configured to support the following data formats:
Format Name | Description |
---|---|
JSON (Plain) | JSON record structure without any attached schema. |
JSON with Schema | JSON record structure with explicit schema information to ensure the data matches the expected format. |
AVRO | A row-oriented remote procedure call and data serialization framework developed within Apache’s Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format. |
Since key and value settings, including the format and serialization, can be independently configured in Kafka, it is possible to work with different data formats for records’ keys and values respectively.
To cater for this, converter configuration is provided for both key.converter and value.converter.
If you need to use JSON without Schema Registry for Connect data, you can use the JsonConverter that ships with Kafka. The example below shows the JsonConverter key and value properties that are added to the configuration:
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
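With schemas disabled, each record is just the JSON document itself, with nothing attached. For instance, a plain-JSON record value corresponding to the schema-and-payload example shown later in this section would look like:
{
  "userid": 123,
  "name": "user's name"
}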
When the properties key.converter.schemas.enable and value.converter.schemas.enable are set to true, the key or value is not treated as plain JSON, but rather as a composite JSON object containing both an internal schema and the data.
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
The resulting message to Kafka would look like the example below, with schema and payload top-level elements in the JSON:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "userid"
},
{
"type": "string",
"optional": false,
"field": "name"
}
],
"optional": false,
"name": "ksql.users"
},
"payload": {
"userid": 123,
"name": "user's name"
}
}
NOTE: The message written is made up of the schema + payload. Notice the size of the message, as well as the proportion of it that is made up of the payload vs. the schema. This is repeated in every message you write to Kafka. In scenarios like this, you may want to use a serialization format like JSON Schema or Avro, where the schema is stored separately and the message holds just the payload.
This connector supports AVRO. To use AVRO you need to configure an AvroConverter so that Kafka Connect knows how to work with AVRO data. This connector has been tested with the AvroConverter supplied by Confluent, under the Apache 2.0 license, but another custom converter can be used in its place if you prefer.
Because Kafka deals with keys and values independently, you need to specify the key.converter and value.converter properties as required in the worker configuration.
When using the AvroConverter, an additional converter property must be added that provides the URL for the Schema Registry. The example below shows the AvroConverter key and value properties that are added to the configuration:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://schema-registry:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081
To summarize: for JSON (Plain), use the JsonConverter (org.apache.kafka.connect.json.JsonConverter) with schemas.enable=false, as per the JSON (Plain) configuration; for AVRO, use the AvroConverter (io.confluent.connect.avro.AvroConverter), as per the AVRO configuration.

Misconfigured converters are a common source of errors in Kafka Connect. These errors show up in the sinks you configure for Kafka Connect, as that is the point at which you try to deserialize the messages already stored in Kafka. Converter problems tend not to occur in sources, because serialization is set at the source.
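As one illustrative example (the exact stack trace will vary), attempting to deserialize non-JSON data, such as AVRO, with the JsonConverter typically fails in the sink with an error along these lines:
org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error: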
The Sink and Source connectors share the following common configuration properties:
Name | Type | Description | Required/Optional |
---|---|---|---|
connect.cosmos.connection.endpoint | uri | Cosmos endpoint URI string | Required |
connect.cosmos.master.key | string | The Cosmos primary key that the connector uses to connect | Required |
connect.cosmos.databasename | string | The name of the Cosmos database the connector reads from or writes to | Required |
connect.cosmos.containers.topicmap | string | Mapping between Kafka topics and Cosmos containers, formatted as CSV: topic#container,topic2#container2 | Required |
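To illustrate how these properties fit together, below is a minimal sketch of a sink connector configuration as it could be submitted to the Kafka Connect REST API. All values are placeholders, and the connector class name should be verified against the Sink Connector documentation:
{
  "name": "cosmosdb-sink-connector",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "tasks.max": "1",
    "topics": "topic",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "connect.cosmos.connection.endpoint": "https://<cosmos-account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<cosmos-primary-key>",
    "connect.cosmos.databasename": "<database-name>",
    "connect.cosmos.containers.topicmap": "topic#container"
  }
}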
For Sink connector specific configuration, please refer to the Sink Connector Documentation
For Source connector specific configuration, please refer to the Source Connector Documentation
Please refer to the Developer Walkthrough and Project Setup for initial setup instructions.
For more information on the performance tests run for the Sink and Source connectors, refer to the Performance testing document.
Refer to the Performance Environment Setup for exact steps on deploying the performance test environment for the Connectors.
The connectors support the standard Kafka Connect dead letter queue (DLQ). For more information, see: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues
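As a minimal sketch, the standard Kafka Connect error-handling properties below route records that fail processing in a sink connector to a DLQ topic; the topic name is a placeholder:
errors.tolerance=all
errors.deadletterqueue.topic.name=<dlq-topic-name>
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true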