Version 0.4 of the Exonum Framework by the Bitfury Group Is Now Available


The Bitfury Group closes 2017 with the Exonum 0.4 release. This time, greater attention was given to the Exonum toolset: the team produced a kit for testing Exonum services and a library for verifying the anchoring of an Exonum blockchain to Bitcoin with the Exonum light client. Below we focus on these two tools and some other new features of Exonum.

Testkit for Exonum Services
The testkit is a Rust crate that makes it easy to test the operation of a whole Exonum service. The testkit emulates the activity of an Exonum node, facilitating tests of transaction execution and API operation (correctness of the responses returned to GET requests). Testing is synchronous and runs within a single process, without the consensus stage or network setup. This lets you check results faster, as you do not have to wait in real time for a new block to be accepted before verifying the outcome of transaction execution.
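The synchronous model can be illustrated with a toy analogue (this is a sketch of the idea, not the actual exonum-testkit API; the `TestKit`, `Tx`, `send`, and `create_block` names here are invented for illustration): transactions are pooled, and creating a block executes them immediately in the same process, so a test can assert on the result right away.

```rust
// Toy analogue of the testkit idea: transactions execute synchronously
// when a block is created, with no networking or consensus involved.

struct Tx {
    amount: u64,
}

#[derive(Default)]
struct TestKit {
    pool: Vec<Tx>, // pending transactions
    balance: u64,  // toy service state
    height: u64,   // blockchain height
}

impl TestKit {
    fn send(&mut self, tx: Tx) {
        self.pool.push(tx);
    }

    // Creating a block applies every pooled transaction immediately,
    // so the test can check the outcome without waiting for consensus.
    fn create_block(&mut self) {
        for tx in self.pool.drain(..) {
            self.balance += tx.amount;
        }
        self.height += 1;
    }
}

fn main() {
    let mut kit = TestKit::default();
    kit.send(Tx { amount: 10 });
    kit.send(Tx { amount: 32 });
    kit.create_block();
    println!("height={} balance={}", kit.height, kit.balance);
}
```

Because block creation is an ordinary function call, assertions can follow it directly, with no real-time waiting involved.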

The testkit also makes it possible to create completely deterministic test cases, in particular, to check different orderings of transactions. Moreover, tests may include transactions for multiple services; for example, the configuration and anchoring services can be tested within a single scenario. The testkit can even reproduce boundary cases that are quite difficult (though not impossible) to produce in a real network.

Testing may also cover more complex scenarios, such as testing oracles and configuration changes. An oracle here is a service that can produce transactions with external data after block commit, with the help of the handle_commit method. A good example is the exonum-btc-anchoring oracle: the validators regularly query Bitcoin nodes for new anchoring transactions included in the Bitcoin blockchain. An anchoring transaction is published in Exonum as a LECT transaction signed by some validator. Said transaction updates the chain of anchoring transactions and becomes the basis for the next anchoring transaction. The handle_commit method provides validators with up-to-date LECT data.
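The oracle pattern can be sketched as an after-commit hook (a toy illustration, not the actual Exonum Service trait; the `OracleService` struct and the `"lect-update"` payload format are invented here): after each committed block, the service inspects external data and may emit a follow-up transaction, much as the anchoring service publishes LECT updates.

```rust
// Toy sketch of the oracle pattern: a hook runs after each committed
// block, reads "external" data, and emits a transaction carrying it.

#[derive(Default)]
struct OracleService {
    observed: Vec<u64>, // external data picked up after commits
}

impl OracleService {
    // Analogue of handle_commit: called once per committed block.
    fn handle_commit(&mut self, external_height: u64) -> Option<String> {
        self.observed.push(external_height);
        // Emit a transaction carrying the freshly observed data.
        Some(format!("lect-update:{}", external_height))
    }
}

fn main() {
    let mut oracle = OracleService::default();
    // Two blocks are committed; the oracle reacts to each of them.
    let txs: Vec<String> = (1..=2).filter_map(|h| oracle.handle_commit(h)).collect();
    println!("{:?}", txs);
}
```

In a testkit scenario, such a hook runs deterministically after each synchronous block creation, so the transactions it produces can be asserted on immediately.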

Similarly, if the configuration of a service can be customized, the testkit allows creating and committing configuration change proposals and verifying that the new configuration is scheduled and applied after the corresponding block is committed.
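The scheduling pattern looks roughly like this (again a toy sketch, not the testkit API; the `Chain`, `propose`, and `actual_from` names are assumptions for illustration): a proposal records the height from which it takes effect, and block creation applies it once that height is reached.

```rust
// Toy sketch of configuration-change testing: a proposal carries an
// "actual_from" height and takes effect only once the chain reaches it.

struct Chain {
    height: u64,
    config: &'static str,
    pending: Option<(u64, &'static str)>, // (actual_from, new config)
}

impl Chain {
    fn propose(&mut self, actual_from: u64, cfg: &'static str) {
        self.pending = Some((actual_from, cfg));
    }

    fn create_block(&mut self) {
        self.height += 1;
        if let Some((from, cfg)) = self.pending {
            if self.height >= from {
                self.config = cfg; // the scheduled configuration is applied
                self.pending = None;
            }
        }
    }
}

fn main() {
    let mut chain = Chain { height: 0, config: "v1", pending: None };
    chain.propose(2, "v2");
    chain.create_block();
    println!("h1 config={}", chain.config); // still the old configuration
    chain.create_block();
    println!("h2 config={}", chain.config); // new configuration applied
}
```

A test can thus verify both that the old configuration is still in force before the activation height and that the new one is in force afterwards.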

Anchoring Verification

The Bitcoin anchoring service is an important feature of the Exonum framework, providing its blockchain with a security level similar to that of a permissionless blockchain. The idea behind anchoring is that the service writes the hash of the latest Exonum block to a public source (for now, the Bitcoin blockchain only), thus saving a snapshot of the system in persistent, read-only public storage. This guarantees the immutability of the Exonum blockchain: even if the validators collude, the transaction history cannot be falsified, since any discrepancy between the actual Exonum blockchain state and the one written to the Bitcoin blockchain would be detected instantly.

The Exonum 0.4 release is accompanied by a library that allows checking the validity of blocks and transactions in an Exonum blockchain with the help of the light client. Using the hash of the data stored in Exonum, a user obtains through the light client a cryptographic proof containing the path from the user's data up to the corresponding anchoring transaction, together with information on the Bitcoin block where it is included.
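Checking such a proof amounts to folding the hash of the user's data with the sibling hashes along the path and comparing the result to the anchored root. The following is an illustrative sketch of that idea only (a toy 64-bit hash stands in for the real cryptographic hash, and the proof encoding is invented here):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash standing in for the real cryptographic hash function.
fn h(data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

fn combine(left: u64, right: u64) -> u64 {
    let mut buf = Vec::with_capacity(16);
    buf.extend_from_slice(&left.to_le_bytes());
    buf.extend_from_slice(&right.to_le_bytes());
    h(&buf)
}

// Each proof step holds the sibling hash and whether it sits on the left.
fn verify(data: &[u8], path: &[(u64, bool)], anchored_root: u64) -> bool {
    let mut acc = h(data);
    for &(sibling, sibling_is_left) in path {
        acc = if sibling_is_left {
            combine(sibling, acc)
        } else {
            combine(acc, sibling)
        };
    }
    acc == anchored_root
}

fn main() {
    // Tiny two-leaf tree: root = H(H(d0) || H(d1)).
    let (d0, d1) = (b"user data".as_slice(), b"other entry".as_slice());
    let root = combine(h(d0), h(d1));
    let proof = [(h(d1), false)]; // sibling of d0 sits on the right
    println!("proof valid: {}", verify(d0, &proof, root));
}
```

Since the root is committed in a Bitcoin anchoring transaction, a successful check ties the user's data all the way to the Bitcoin blockchain.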

The tool obtains the history of all Bitcoin anchoring transactions from an HTTP API with the help of a driver. As soon as the whole history is downloaded, regular checks for new anchoring transactions take place at even intervals. By default, two drivers are implemented in the library: one for the Blocktrail API and one for the Insight API. A custom driver for another API can be implemented by extending the driver class.
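The driver abstraction can be sketched as an interface with pluggable implementations (shown here as a Rust trait for illustration; the actual library's language and API may differ, and the `Driver` and `StubDriver` names and the `fetch_anchoring_txs` signature are invented):

```rust
// Sketch of the driver idea: each driver knows how to fetch anchoring
// transactions from a particular HTTP API; a new API is supported by
// adding a new implementation of the trait.

trait Driver {
    fn fetch_anchoring_txs(&self, from_height: u64) -> Vec<String>;
}

// Stub standing in for a built-in driver; a real one would issue HTTP
// requests to its API and parse the responses.
struct StubDriver {
    txs: Vec<(u64, String)>, // (Bitcoin block height, txid)
}

impl Driver for StubDriver {
    fn fetch_anchoring_txs(&self, from_height: u64) -> Vec<String> {
        self.txs
            .iter()
            .filter(|(h, _)| *h >= from_height)
            .map(|(_, txid)| txid.clone())
            .collect()
    }
}

fn main() {
    let driver = StubDriver {
        txs: vec![(100, "tx-a".into()), (200, "tx-b".into())],
    };
    // First pass downloads history; later passes poll for new transactions.
    println!("{:?}", driver.fetch_anchoring_txs(150));
}
```

The polling loop only depends on the trait, so swapping Blocktrail for Insight, or for a custom backend, does not touch the verification logic.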

As for the changes adopted into the core of the framework itself, the following items can be outlined:

  • A new auditing node can now be created from the command line. This improves usability, as you no longer need to adjust configuration files when launching an auditing node;
  • A new merge_sync function has been added, which applies bulk changes to the database while synchronously flushing new data from memory to the storage. This guarantees that, once the method returns, the data has been written to the storage and will not be lost in case of a failure. If the method encounters any I/O error during merging, the changes to the storage are not applied at all. Possible causes of such an error include lack of space in the storage, failure of the storage, or changed access rights to the corresponding storage directory.
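The all-or-nothing semantics of the merge can be illustrated with a toy in-memory storage (this is not the actual Exonum storage API; the `Storage` struct and the simulated `out_of_space` failure flag are invented for the sketch):

```rust
use std::collections::BTreeMap;

// Toy illustration of merge semantics: a patch of buffered changes is
// applied atomically, and an I/O-style error leaves the storage untouched.

struct Storage {
    data: BTreeMap<String, u64>,
    out_of_space: bool, // simulated failure condition
}

impl Storage {
    fn merge_sync(&mut self, patch: BTreeMap<String, u64>) -> Result<(), String> {
        if self.out_of_space {
            // On error, none of the changes are applied.
            return Err("no space left on device".into());
        }
        // Otherwise the whole patch is persisted before returning.
        for (k, v) in patch {
            self.data.insert(k, v);
        }
        Ok(())
    }
}

fn main() {
    let mut storage = Storage { data: BTreeMap::new(), out_of_space: false };

    let mut patch = BTreeMap::new();
    patch.insert("balance".to_string(), 42);
    assert!(storage.merge_sync(patch).is_ok());

    storage.out_of_space = true;
    let mut patch2 = BTreeMap::new();
    patch2.insert("balance".to_string(), 0);
    assert!(storage.merge_sync(patch2).is_err());

    // The failed merge left the previous state intact.
    println!("balance={}", storage.data["balance"]);
}
```

The point of the pattern is that a caller never observes a half-applied patch: either every change in the batch is durable, or none of them are.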

Finally, this release includes considerable code refactoring. The full list of changes, fixes, and additions can be found on its regular page.