Supporting Volume Tracking through Schema Registry¶
Since volume tracking is built on a messaging architecture, producers and consumers serialize and deserialize the messages they publish and receive. To do so, producers and consumers must agree on a set of schemas that govern this serialization and deserialization. For example, if a consumer expects an attribute \(a_{i}\) in a message and the producer fails to include it, the volume tracking process would fail: the message would likely be dead-lettered because the consumer cannot process it. Therefore, we use the RedPanda Schema Registry to host the schemas, which lets producers and consumers publish and consume with compatible schemas.
Tip
Schemas evolve. The set of attributes (and the nature of those attributes) an application starts with may change as the application matures. To support schema evolution, schema registries provide schema versioning. As long as the producers and consumers agree on the versions they use, the messaging process succeeds. However, if either the producer or the consumer changes the schema version it uses, the system can run into a compatibility conflict, introducing friction throughout the messaging process.
RedPanda Schema Registry has different compatibility modes to choose from when a new subject (schema) is created.
The default is BACKWARD, under which a consumer using a given schema version (i.e., \(v_{k}\)) can read messages produced with \(v_{k}\) or with the version immediately before it (i.e., \(v_{k-1}\)), and only those.
Other compatibility modes, such as FULL and FULL_TRANSITIVE, provide more flexibility, but we have chosen the default mode, BACKWARD.
RedPanda enforces these compatibility modes by restricting what you can change when a schema evolves.
A good explanation of these restrictions is given here.
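As a minimal sketch (not our production tooling), the compatibility mode can be set and a new schema version registered through the registry's Confluent-compatible REST API, which RedPanda implements. The host address and the schema excerpt below are hypothetical.

```python
import json

import requests

REGISTRY = "http://localhost:8081"  # hypothetical registry address
SUBJECT = "create-aliquot-in-mlwh"

# Set the compatibility mode for the subject (BACKWARD is the default anyway).
requests.put(
    f"{REGISTRY}/config/{SUBJECT}",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"compatibility": "BACKWARD"},
).raise_for_status()

# Register a new version of the subject's schema. The registry rejects the
# request if the schema is not backward compatible with the latest version.
schema = {
    "type": "record",
    "name": "Aliquot",
    "fields": [{"name": "volume", "type": "double"}],  # illustrative field only
}
resp = requests.post(
    f"{REGISTRY}/subjects/{SUBJECT}/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(schema)},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"id": 1}
```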
Avro¶
To facilitate serialization and deserialization using schemas, we chose Avro as our specification. Avro was originally developed at Apache for efficient, compact storage of big data. Our use case, however, relies on its ability to lay out a schema for our messages, serialize them (following validation), and validate them upon deserialization. The internals of how Avro performs these validations are not discussed here; instead, a short guide follows on the process and the tools we used for serialization and deserialization.
It is worth noting that the producers and consumers involved in the volume tracking process were written with different technologies. Since both sides nevertheless share schemas as a common concept, a few points deserve highlighting.
Producers¶
Producers for volume tracking were written in Ruby (Ruby on Rails as the web framework).
The library (i.e., Ruby gem) we use to serialize messages against the schemas is avro.
We use the avro library's Avro::IO::BinaryEncoder to encode the data into binary format using the schema, and we send the binary payload over the AMQP channel established with the message broker. A sketch of the same flow is shown below.
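Our producer code is Ruby, but for consistency with the consumer examples the equivalent flow is sketched here in Python, whose apache avro package exposes an analogous avro.io.BinaryEncoder. The schema and record are illustrative only, not the real create-aliquot-in-mlwh schema.

```python
import io

import avro.schema
from avro.io import BinaryEncoder, DatumWriter

# Illustrative schema; the real subject schema is much larger.
schema = avro.schema.parse("""
{
  "type": "record",
  "name": "Aliquot",
  "fields": [
    {"name": "volume", "type": "double"},
    {"name": "concentration", "type": ["null", "double"], "default": null}
  ]
}
""")

record = {"volume": 1.5, "concentration": None}

buf = io.BytesIO()
writer = DatumWriter(schema)
# DatumWriter validates the record against the schema and raises
# AvroTypeException if it does not conform (the "fail early" behaviour).
writer.write(record, BinaryEncoder(buf))

payload = buf.getvalue()  # binary body to publish over the AMQP channel
```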
Consumers¶
Consumers for volume tracking were written in Python.
The library we use to deserialize messages against the schemas is fastavro. We use the fastavro library's schemaless_reader to decode the binary data into Python types, giving us access to the data inside the messages; a sketch follows.
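A minimal sketch of the decode step, assuming the `payload` produced above and the same illustrative schema (fastavro expects the schema as a parsed dict):

```python
import io

from fastavro import parse_schema, schemaless_reader

# Same illustrative schema as above, expressed as a dict for fastavro.
schema = parse_schema({
    "type": "record",
    "name": "Aliquot",
    "fields": [
        {"name": "volume", "type": "double"},
        {"name": "concentration", "type": ["null", "double"], "default": None},
    ],
})

# `payload` is the binary message body received from the broker.
record = schemaless_reader(io.BytesIO(payload), schema)
print(record["volume"])
```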
The general process of message serialization and deserialization¶
The following figures give a bird's-eye view of message serialization and deserialization.
Publisher¶
flowchart LR
A[Producer] --> B[Fetch schema];
B --> C[Encode the message];
C --> F{Validation?};
F --> |Failure| G[Log Error];
F --> |Success| D[Push it to queue]
D --> |consume| E[Consumer];
Consumer¶
graph LR
A[Consumer] --> B[Fetch schema];
B --> C[Decode the message];
C --> F{Validation?};
F --> |Success| E[Push it to warehouse];
F --> |Failure| D[Dead Letter];
Schema¶
The hosted schema can be accessed through a REST API provided by RedPanda.
It can also be viewed in the RedPanda Console.
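For instance, here is a sketch of fetching the schema over the registry's Confluent-compatible REST API (the host address is hypothetical):

```python
import json

import requests

REGISTRY = "http://localhost:8081"  # hypothetical registry address

resp = requests.get(f"{REGISTRY}/subjects/create-aliquot-in-mlwh/versions/1")
resp.raise_for_status()

# The response carries the Avro schema as an escaped JSON string.
schema = json.loads(resp.json()["schema"])
print(schema["name"])
```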
The subject of the schema hosted for volume tracking is create-aliquot-in-mlwh. Given below is version 1 of the schema.
*(Avro schema for the `create-aliquot-in-mlwh` subject, version 1; 124 lines.)*
Note that each attribute has its own type, and only some of them can accept null values.
It is also worth noting that attributes such as sourceType, which describes the type of the aliquot's source, can only take one of a predefined set of values defined in the schema itself (i.e., an enum with the possible values well, sample, library, pool).
Enforcing the optionality and type of such attributes makes the messaging process more robust, since a publisher attempting to publish a message that does not conform to the schema fails at the earliest possible point (i.e., it fails early rather than late). The sketch below illustrates this.
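As an illustration of this fail-early behaviour, fastavro's validate helper rejects a record whose sourceType falls outside the enum before anything is published. The schema excerpt is illustrative, not the full subject schema.

```python
from fastavro import parse_schema
from fastavro.validation import ValidationError, validate

# Illustrative excerpt: sourceType is constrained to an enum.
schema = parse_schema({
    "type": "record",
    "name": "Aliquot",
    "fields": [
        {
            "name": "sourceType",
            "type": {
                "type": "enum",
                "name": "SourceType",
                "symbols": ["well", "sample", "library", "pool"],
            },
        },
    ],
})

try:
    # "plate" is not one of the enum symbols, so validation fails
    # before the message is ever encoded or published.
    validate({"sourceType": "plate"}, schema, raise_errors=True)
except ValidationError as err:
    print(f"Rejected early: {err}")
```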