The Bitfury Group releases Exonum version 0.3


The beginning of November saw the release of a new version of the Exonum framework, which Bitfury has been actively promoting. In this release the team focused on two main aspects:

  • the network code has been rewritten on top of Tokio;

  • RocksDB has become the default storage engine, and LevelDB support has been discontinued. Exonum now relies on a RocksDB-specific feature, column families, which makes its data storage more efficient. Work on further adoption of RocksDB functionality continues and will appear in future releases.

The database engine update was covered in detail in the 0.2 release article, so below we focus on the first aspect.

Tokio is an event-loop abstraction designed specifically for asynchronous programming. It is built around asynchronous I/O, meaning that event processing is non-blocking: individual events are processed in the background until their final results are ready, so new events can be polled without blocking the main thread.
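
To make this more concrete, here is a minimal sketch (assuming the tokio-core 0.1 and futures 0.1 crates of that era; this is illustrative code, not part of Exonum): the reactor polls two timer futures concurrently instead of blocking on each one in turn.

  extern crate futures;
  extern crate tokio_core;

  use std::time::Duration;
  use futures::Future;
  use tokio_core::reactor::{Core, Timeout};

  fn main() {
      // The event loop ("reactor") that polls futures as events become ready.
      let mut core = Core::new().unwrap();
      let handle = core.handle();

      let t1 = Timeout::new(Duration::from_millis(50), &handle).unwrap();
      let t2 = Timeout::new(Duration::from_millis(50), &handle).unwrap();

      // `join` resolves when both timers fire; since the loop polls them
      // concurrently, the total wait is about 50 ms rather than 100 ms.
      core.run(t1.join(t2)).unwrap();
  }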

The primitive that particularly benefits asynchronous event management is the future – a value that will be computed at some later time but is not ready yet. Futures are composable and can form a chain of actions that implements the business logic of some process. In terms of Exonum, for example, such a chain can be described as follows:

  Connect to node B --> Process bytes from node B and split them into messages --> Forward each message to a node channel
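
Expressed with futures 0.1 combinators, such a chain might look roughly like the sketch below (the types and payloads are stand-ins for illustration, not actual Exonum code):

  extern crate futures;

  use futures::future::{self, Future};

  struct Connection { peer: &'static str }
  struct Message(String);

  fn main() {
      let pipeline = future::ok::<_, ()>(Connection { peer: "node B" })
          // Step 2: read bytes from the connection and split them into
          // messages (faked here with a fixed payload).
          .and_then(|conn| {
              let raw = format!("hello from {}", conn.peer);
              future::ok(vec![Message(raw)])
          })
          // Step 3: forward each decoded message to the node's channel
          // (simply printed here).
          .and_then(|messages| {
              for Message(m) in messages {
                  println!("forwarding: {}", m);
              }
              future::ok(())
          });

      // In a real node the Tokio event loop would poll this future;
      // `wait` drives it to completion synchronously for the example.
      pipeline.wait().unwrap();
  }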

Besides futures, there is a similar primitive for asynchronous I/O called a stream, which represents a flow of bytes that, with the help of a converter, can be decoded into other elements. Whereas a future yields only one result, a stream can yield several results before it completes. In other words, streams deal with series of events and are particularly effective for complicated interaction schemes. In Exonum, for example, the TCP stream is transformed into a stream of Exonum messages; that stream is then forwarded through a channel to the consensus code, which, as the streams are combined, receives the messages it needs from the network for processing.
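
The pattern can be sketched as follows (again with stand-in types and a recent futures 0.1, not the actual Exonum networking code): a stream of decoded messages is forwarded into a channel, and the receiving end is consumed as another stream by the "consensus" side.

  extern crate futures;

  use futures::{Future, Sink, Stream};
  use futures::sync::mpsc;

  fn main() {
      // Channel connecting the "network" side to the "consensus" side.
      let (tx, rx) = mpsc::channel::<String>(16);

      // Stand-in for a framed TCP stream already decoded into messages.
      let decoded = futures::stream::iter_ok::<_, ()>(vec!["Propose", "Prevote", "Precommit"]);

      // Forward every message into the channel...
      let network_side = decoded
          .map(|m| m.to_string())
          .forward(tx.sink_map_err(|_| ()))
          .map(|_| ());

      // ...while the consensus side processes messages as they arrive.
      let consensus_side = rx.for_each(|msg| {
          println!("consensus got: {}", msg);
          Ok(())
      });

      // A real node would hand both halves to the Tokio event loop.
      network_side.join(consensus_side).wait().unwrap();
  }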

Another significant improvement brought by the switch to Tokio is that nodes now process events in three main queues instead of the single queue used in the previous Mio-based network code. There are three main types of events a node processes while running consensus: incoming transactions, network messages and timer events. When the single queue got clogged with the first two, timer events could not get in; consequently, no new rounds could start and consensus came to a halt. Processing these event types separately naturally makes node operation more stable.
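
A much simplified sketch of the idea (not Exonum's actual event loop) is shown below: each class of events lives in its own queue, and the event loop merges them, so a burst of transactions can no longer crowd out timer events.

  extern crate futures;

  use futures::stream::{self, Stream};

  #[derive(Debug)]
  enum Event {
      Transaction(u32),
      NetworkMessage(&'static str),
      Timer(u64),
  }

  fn main() {
      // Three independent queues, modeled here as ready-made streams.
      let transactions = stream::iter_ok::<_, ()>((0..5).map(Event::Transaction));
      let messages = stream::iter_ok::<_, ()>(vec![Event::NetworkMessage("Prevote")]);
      let timers = stream::iter_ok::<_, ()>(vec![Event::Timer(1), Event::Timer(2)]);

      // `select` alternates between its inputs, so timer events keep arriving
      // even while the transaction queue is busy.
      let merged = timers.select(messages.select(transactions));

      for event in merged.wait() {
          println!("{:?}", event.unwrap());
      }
  }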

In addition, the new code operates in two threads: one for processing external events from the network (responding to network messages), and a second for the consensus logic itself. (There is also a separate thread for forwarding transactions received via the REST API.) As a result, the speed and stability of the network have increased: new blocks can now be accepted at even intervals of 0.5 seconds. The rate of block acceptance can, of course, be lowered if required.
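
The two-thread layout can be pictured with plain threads and a channel (a rough sketch of the idea, not the actual Exonum code):

  use std::sync::mpsc;
  use std::thread;

  fn main() {
      let (to_consensus, from_network) = mpsc::channel::<String>();

      // "Network" thread: reacts to incoming messages and forwards them.
      let network = thread::spawn(move || {
          for i in 0..3 {
              to_consensus.send(format!("message {}", i)).unwrap();
          }
          // Dropping the sender closes the channel and lets the consensus
          // thread finish its loop.
      });

      // "Consensus" thread: consumes messages and runs the consensus logic.
      let consensus = thread::spawn(move || {
          for msg in from_network {
              println!("consensus handles {}", msg);
          }
      });

      network.join().unwrap();
      consensus.join().unwrap();
  }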

Besides, the node update speed has doubled (i.e. the speed at which a lagging node obtains missing blocks). For example, in a network of 4 nodes, accepting a new block while a lagging node was catching up took from 200 to 300 milliseconds. Switching the lagging node to Tokio-based code has already reduced this interval to only 100 milliseconds. With the whole network switched to the new code, these values are expected to be even lower.

These statistics are for empty blocks and may differ for a network under load.

We have also roughly estimated the impact of the described changes on network performance using a network of 4 validator nodes in a single data center. The average transaction processing speed when accepting blocks of 1,000 transactions was:

  • 7,318 transactions per second for the code on Mio and LevelDB;
  • 20,237 transactions per second for the code on Tokio and LevelDB;
  • 31,571 transactions per second for the code on Tokio and RocksDB.

Besides boosting performance, the adoption of Tokio helped structure the code better, which in turn improves the maintainability and extensibility of the Exonum codebase.

Another new feature is a new index type in the storage. Specifically, a SparseListIndex has been added: an ordered list of items that may contain "spaces". Even if an element is removed from the middle of the list, the list continues to operate, unlike ListIndex, where items are strictly numbered from 0 to n-1 (n being the number of elements) and elements can only be deleted from the end of the list.
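
As a toy illustration of the "list with spaces" idea (this is just the concept, not the Exonum API): indices stay stable after a removal instead of shifting.

  use std::collections::BTreeMap;

  struct SparseList<T> {
      items: BTreeMap<u64, T>,
      next_index: u64,
  }

  impl<T> SparseList<T> {
      fn new() -> Self {
          SparseList { items: BTreeMap::new(), next_index: 0 }
      }
      fn push(&mut self, value: T) -> u64 {
          let i = self.next_index;
          self.items.insert(i, value);
          self.next_index += 1;
          i
      }
      // Removing an element in the middle leaves a "space" instead of
      // renumbering the remaining elements.
      fn remove(&mut self, index: u64) -> Option<T> {
          self.items.remove(&index)
      }
      fn get(&self, index: u64) -> Option<&T> {
          self.items.get(&index)
      }
  }

  fn main() {
      let mut list = SparseList::new();
      list.push("a");
      list.push("b");
      list.push("c");
      list.remove(1);                       // index 1 becomes a space
      assert_eq!(list.get(0), Some(&"a"));
      assert_eq!(list.get(1), None);        // the space; other indices unchanged
      assert_eq!(list.get(2), Some(&"c"));
  }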

Further, basic infrastructure has been added for collecting metrics and statistics in Exonum and its services. This will be useful for monitoring and assessing node performance. The idea will be fully implemented in future releases.

Finally, we would like to draw the attention of service developers to the following points:

  • The events_pool_capacity field in MemoryPoolConfig has been replaced by the new EventsPoolCapacity configuration;
  • NodeBuilder now works with ServiceFactory as a trait object;
  • The signature of the gen_prefix function in the schema module has changed;
  • The index constructor has changed: thanks to the column families available in RocksDB, indices can now be defined with a string name, as is usually done in databases (e.g. "Transactions", "Wallets"), instead of with prefixes represented as byte sequences.

These changes should be applied to existing service code to keep it up to date with the framework update.

A detailed list of the changes, fixes and additions included in this release is available under the following link.

Keep track of our updates and feel free to reach out to our team with any issues or suggestions on Gitter or GitHub.