
Java Binding User Guide

Exonum Java App is an application that includes the Exonum framework and a runtime environment for Java services.

Installation

To run a node with your Java service, you need the Exonum Java application.

There are several installation options:

Manual Installation

You can download an archive containing the application and all the necessary dependencies from the Releases page on GitHub. We suggest using the debug version during development and the release version for deployment.

  • Download and unpack the archive from the Releases page into some known location. To install the latest release to ~/bin:
mkdir -p ~/bin
cd ~/bin
unzip /path/to/downloaded/exonum-java-0.8.0-release.zip
  • Install Libsodium, a required runtime dependency.

Note

Exonum Java is built with Libsodium 23, which means it will not work on some older Linux distributions, like Ubuntu 16.04. Libsodium 23 is available in Ubuntu 18.04 or can be installed from a custom PPA.

Linux (Ubuntu)
sudo apt-get update && sudo apt-get install libsodium-dev
Mac OS
brew install libsodium

Homebrew Package

For Mac users, we provide a Homebrew repository, which offers the easiest way of installing the Exonum Java App:

brew tap exonum/exonum
brew install exonum-java

This will install the exonum-java binary with all the necessary dependencies. However, you still need to install Maven 3 and follow the steps mentioned in the After Install section below.

Build from Source

It is also possible to build the Exonum Java application from source. To do so, follow the instructions in the Contribution Guide.

After Install

  • Create an environment variable EXONUM_HOME pointing at the installation location:
# The path is provided in after-install message in case of Homebrew
export EXONUM_HOME=~/bin/exonum-java-0.8.0-release
# Setting PATH variable is not needed in case of Homebrew
export PATH="$PATH:$EXONUM_HOME/bin"
  • Install Maven 3, which is essential for developing and building Java services.

Creating Project

The easiest way to create a Java service project is to use a template project generator. After installing Maven 3, run the command:

mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=com.exonum.binding \
    -DarchetypeArtifactId=exonum-java-binding-service-archetype \
    -DarchetypeVersion=0.8.0 \
    -DgroupId=com.example.myservice \
    -DartifactId=my-service \
    -Dversion=1.0.0

You can also use the interactive mode:

mvn archetype:generate \
    -DarchetypeGroupId=com.exonum.binding \
    -DarchetypeArtifactId=exonum-java-binding-service-archetype \
    -DarchetypeVersion=0.8.0

The build definition files for other build systems (e.g., Gradle) can be created similarly to the template. For more information see an example.

Service Development

The service abstraction serves to extend the framework and implement the business logic of an application. A service defines the schema of the stored data that constitutes the service state; the transaction processing rules that can make changes to the stored data; handlers for events occurring in the ledger; and an API for external clients that allows interacting with the service from outside of the system. See more information on the software model of services in the corresponding section.

In Java, the abstraction of a service is represented by the Service interface. Implementations can use the abstract class AbstractService.
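For illustration, a minimal service built on AbstractService could look roughly like the sketch below. The service ID, name, and the MySchema class are hypothetical, and the exact constructor and method signatures should be checked against the Java Binding Javadocs.

public final class MyService extends AbstractService {

  public static final short ID = 42; // hypothetical service ID
  public static final String NAME = "my-service";

  @Inject
  public MyService(TransactionConverter transactionConverter) {
    super(ID, NAME, transactionConverter);
  }

  @Override
  protected Schema createDataSchema(View view) {
    // MySchema is a factory of this service's collections, sketched below
    return new MySchema(view);
  }
}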

Schema Description

Exonum provides several collection types to persist service data. The main types are sets, lists and maps. The collections come in two flavors – ordinary collections and Merkelized collections; the latter allow providing cryptographic evidence of the authenticity of data to the clients of the system (for example, that an element is stored in the collection under a certain key). Only the Merkelized collections influence the blockchain state.

For the detailed description of all Exonum collection types see the corresponding documentation section. In Java, implementations of collections are located in a separate package. Said package documentation describes their use.

Note

SparseListIndex is not yet supported in Java. Let us know if it may be useful for you!

Collections work with a database view – either Snapshot, which is read-only and represents the database state as of the latest committed block; or Fork, which is mutable and allows performing modifying operations. The database view is provided by the framework: a Snapshot can be requested at any time, while a Fork – only when a transaction is executed. The lifetime of these objects is limited by the scope of the method to which they are passed.

Exonum stores elements in collections as byte arrays. Therefore, serializers for values stored in collections must be provided. See Serialization for details.

Example of ProofMapIndex Creation

void updateBalance(Fork fork, HashCode id, long newBalance) {
  var name = "balanceById";
  var balanceById = ProofMapIndexProxy.newInstance(name, fork,
      StandardSerializers.hash(),
      StandardSerializers.longs());
  balanceById.put(id, newBalance);
}

A set of named collections constitutes a service schema. For convenient access to the service collections, you can implement a factory of service collections.

The state of the service in the blockchain is determined by a list of hashes. Usually, it comprises the hashes of its Merkelized collections. The state hashes of each service are aggregated into a single blockchain state hash, which is included in each committed block. When using AbstractService, the hash list must be defined in the schema class implementing the Schema interface; when implementing Service directly – in the service itself.
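As a sketch of such a schema factory (assuming a single Merkelized collection; the getRootHash method name is an assumption to verify against the collection Javadocs):

public final class MySchema implements Schema {

  private final View view;

  public MySchema(View view) {
    this.view = view;
  }

  public ProofMapIndexProxy<HashCode, Long> balanceById() {
    return ProofMapIndexProxy.newInstance("balanceById", view,
        StandardSerializers.hash(), StandardSerializers.longs());
  }

  @Override
  public List<HashCode> getStateHashes() {
    // The service state hashes are the root hashes of its Merkelized collections
    return List.of(balanceById().getRootHash());
  }
}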

Serialization

As Exonum storage accepts data in the form of byte arrays, storing user data requires serialization. Java Binding provides a set of built-in serializers that you can find in the StandardSerializers utility class. The list of serializers covers the most often-used entities and includes:

  • Standard types: boolean, float, double, byte[] and String, as well as integers with various encoding types; see the StandardSerializers Java documentation and the table below.
  • Exonum types: PrivateKey, PublicKey and HashCode.
  • Any Protobuf messages using StandardSerializers#protobuf.

Besides the built-in serializers, users can implement their own serializers to store their data in a custom format.
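For example, a custom serializer is just an implementation of the Serializer interface, defining the conversion of a value to bytes and back. A hypothetical sketch for java.time.Instant (imports omitted, as in the other examples):

public enum InstantSerializer implements Serializer<Instant> {
  INSTANCE;

  @Override
  public byte[] toBytes(Instant value) {
    // Encode the instant as epoch seconds followed by nanos
    return ByteBuffer.allocate(Long.BYTES + Integer.BYTES)
        .putLong(value.getEpochSecond())
        .putInt(value.getNano())
        .array();
  }

  @Override
  public Instant fromBytes(byte[] serialized) {
    ByteBuffer buf = ByteBuffer.wrap(serialized);
    return Instant.ofEpochSecond(buf.getLong(), buf.getInt());
  }
}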

Integer Encoding Types Comparison Table
Type    | Description                                      | Most efficient range
fixed32 | Always four bytes                                | Values often greater than 2^28
uint32  | Unsigned int that uses variable-length encoding  | Values in range [0; 2^21-1]
sint32  | Signed int that uses variable-length encoding    | Values in range [-2^20; 2^20-1]
fixed64 | Always eight bytes                               | Values often greater than 2^56
uint64  | Unsigned int that uses variable-length encoding  | Values in range [0; 2^49-1]
sint64  | Signed int that uses variable-length encoding    | Values in range [-2^48; 2^48-1]

Transactions Description

Exonum transactions allow you to perform atomic, modifying operations on the storage. Transactions are executed sequentially, in the order determined by the consensus of the nodes in the network.

For more details about transactions in Exonum – their properties and processing rules – see the corresponding section of our documentation.

Messages

Transactions are transmitted by external service clients to the framework as Exonum messages. A transaction message contains a header with identifying information, such as the ID of the service this transaction belongs to and the transaction ID within that service; a payload containing the transaction parameters; and the author's public key and a signature that authenticate the message.

The transaction payload in the message can be serialized using an arbitrary algorithm supported by both the service client and the service itself.

If the service itself needs to create a transaction on a particular node, it can use the Node#submitTransaction method. This method will create and sign a transaction message using the service key of that particular node (meaning that the node will be the author of the transaction), and submit it to the network. Invoking this method on each node unconditionally will produce N transactions that have the same payloads, but different authors’ public keys and signatures, where N is the number of nodes in the network.

Ed25519 is a standard cryptographic system for digital signing of Exonum messages. It is available through the CryptoFunctions#ed25519 method.
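For example, a client can build and sign a transaction message using the common library roughly as follows. This is a sketch: SERVICE_ID, TX_ID and payloadBytes are placeholders, and the exact TransactionMessage builder API should be checked against the Javadocs.

// Generate (or load) the author's Ed25519 key pair
KeyPair authorKeys = CryptoFunctions.ed25519().generateKeyPair();

TransactionMessage txMessage = TransactionMessage.builder()
    .serviceId(SERVICE_ID)     // ID of the target service
    .transactionId(TX_ID)      // ID of this transaction within the service
    .payload(payloadBytes)     // serialized transaction parameters
    .sign(authorKeys, CryptoFunctions.ed25519());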

Transaction Lifecycle

The lifecycle of a Java service transaction is the same as in any other Exonum service:

  1. A service client creates a transaction message, including the IDs of the service and of this transaction and the serialized transaction parameters as a payload, and signs the message with the author’s key pair.
  2. The client transmits the message to one of the Exonum nodes in the network. The transaction is identified by the hash of the corresponding message.
  3. The node verifies the correctness of the message: its header, including the service ID, and its cryptographic signature against the author’s public key included in it.
  4. The node verifies that the transaction payload can be correctly decoded by the service into an executable transaction.
  5. If all checks pass, the node that received the message adds it to its local transaction pool and broadcasts the message to all the other nodes in the network.
  6. Other nodes, having received the transaction message, perform all the previous verification steps, and, if they pass, add the message to the local transaction pool.
  7. When a majority of validator nodes agree to include this transaction into the next block, they take the message from the transaction pool, convert it into an executable transaction, and execute it.
  8. When all transactions in the block are executed, all changes are atomically applied to the database state and a new block is committed.

The transaction messages are preserved in the database regardless of the execution result, and can be later accessed via Blockchain class. For a more detailed description of transaction processing, see the Transaction Lifecycle section.

Transaction Execution

When the framework receives a transaction message, it must transform it into an executable transaction to process it. As every service has several transaction types, each with its own parameters, the service must provide a TransactionConverter for this purpose (see also Service#convertToTransaction). When the framework requests a service to convert a transaction, its message is guaranteed to have a correct cryptographic signature.
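A converter typically dispatches on the transaction ID and deserializes the payload into the corresponding executable transaction. A sketch is below; SetBalanceTx is a hypothetical transaction class (a sketch of it follows further down), and its fromBytes factory is assumed.

public final class MyTransactionConverter implements TransactionConverter {

  @Override
  public Transaction toTransaction(RawTransaction rawTransaction) {
    short txId = rawTransaction.getTransactionId();
    byte[] payload = rawTransaction.getPayload();
    switch (txId) {
      case SetBalanceTx.ID:
        return SetBalanceTx.fromBytes(payload);
      default:
        throw new IllegalArgumentException("Unknown transaction id: " + txId);
    }
  }
}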

An executable transaction is an instance of a class implementing the Transaction interface and defining the transaction business logic. The interface implementations must define an execution rule for the transaction in the execute method.

The Transaction#execute method describes the operations that are applied to the current storage state when the transaction is executed. Exonum passes an execution context as an argument, which provides a Fork – a view that allows performing modifying operations; and some information about the corresponding transaction message: its SHA-256 hash that uniquely identifies it, and the author’s public key. A service schema object can be used to access data collections of this service.

Also, the Transaction#execute method may throw TransactionExecutionException, which contains a transaction error report. This feature allows users to notify Exonum about an error that occurs during transaction execution. The method may check preconditions before applying the changes and either accept the transaction or throw an exception, which is then transformed into an Exonum core TransactionResult containing an error code and a message with error data. If transaction execution fails, the changes made by the transaction are rolled back, while the error data is stored in the database for further user reference. Light clients also provide access to information on the transaction execution result (which may be either success or failure) to their users.
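Putting this together, a transaction implementation might look like the sketch below. SetBalanceTx, MySchema and the error code are hypothetical; the context accessor names should be verified against the Javadocs.

public final class SetBalanceTx implements Transaction {

  static final short ID = 1;

  private final HashCode id;
  private final long newBalance;

  SetBalanceTx(HashCode id, long newBalance) {
    this.id = id;
    this.newBalance = newBalance;
  }

  @Override
  public void execute(TransactionContext context) throws TransactionExecutionException {
    // Check the preconditions and report an error if they do not hold
    if (newBalance < 0) {
      throw new TransactionExecutionException((byte) 1, "Negative balance: " + newBalance);
    }
    // Apply the changes to the service collections through the Fork
    Fork fork = context.getFork();
    MySchema schema = new MySchema(fork);
    schema.balanceById().put(id, newBalance);
  }
}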

An implementation of the Transaction#execute method must be a pure function, i.e. it must produce the same observable result on all the nodes of the system for the given transaction. An observable result is the one that affects the blockchain state hash: a modification of a collection that affects the service state hash, or an execution exception.

Blockchain Events

A service can also handle a block commit event that occurs each time the framework commits a new block. The framework delivers this event to implementations of Service#afterCommit(BlockCommittedEvent) callback in each deployed service. Each node in the network processes that event independently from other nodes. The event includes a Snapshot, allowing a read-only access to the database state exactly after the commit of the corresponding block.

As services can read the database state in this handler, they may detect any changes in it, e.g., that a certain transaction has been executed, or that some condition is met. Services may also create and submit new transactions using Node#submitTransaction. Using this callback to notify other systems is another common use case, but implementations must take care not to perform any blocking operations, such as synchronous I/O, in this handler, as it is invoked synchronously in the same thread that handles transactions. Blocking that thread will delay transaction processing on the node.
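As an illustration, a service might submit a follow-up transaction from this handler every hundredth block. This is a sketch: the node field, the transaction ID and the encodeHeight helper are hypothetical, and the RawTransaction builder API should be checked against the Javadocs.

@Override
public void afterCommit(BlockCommittedEvent event) {
  long height = event.getHeight();
  if (height % 100 == 0) {
    // `node` is a Node reference kept by the service,
    // e.g., saved in createPublicApiHandlers
    RawTransaction tx = RawTransaction.newBuilder()
        .serviceId(ID)
        .transactionId(HEARTBEAT_TX_ID)
        .payload(encodeHeight(height))
        .build();
    node.submitTransaction(tx);
  }
}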

Core Schema API

Users can access information stored in the blockchain by the framework using the methods of the Blockchain class. This API can be used both in transaction code and in read requests (see the sketch after the list below). The following functionality is available:

  • getHeight: long The height of the latest committed block in the blockchain
  • getBlockHashes: ListIndex<HashCode> The list of all block hashes, indexed by the block height
  • getBlockTransactions: ProofListIndexProxy<HashCode> The proof list of transaction hashes committed in the block with the given height or ID
  • getTxMessages: MapIndex<HashCode, TransactionMessage> The map of transaction messages identified by their SHA-256 hashes. Both committed and in-pool (not yet processed) transactions are returned
  • getTxResults: ProofMapIndexProxy<HashCode, TransactionResult> The map of transaction execution results identified by the corresponding transaction SHA-256 hashes
  • getTxLocations: MapIndex<HashCode, TransactionLocation> The map of transaction positions inside the blockchain identified by the corresponding transaction SHA-256 hashes
  • getBlocks: MapIndex<HashCode, Block> The map of block objects identified by the corresponding block hashes
  • getLastBlock: Block The latest committed block
  • getActualConfiguration: StoredConfiguration The configuration for the latest height of the blockchain, including services and their parameters
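For instance, a transaction or a read request can inspect the core schema roughly as follows (a minimal sketch using the methods listed above):

void inspectBlockchain(Snapshot snapshot) {
  Blockchain blockchain = Blockchain.newInstance(snapshot);

  long height = blockchain.getHeight();
  Block lastBlock = blockchain.getLastBlock();

  // Transaction messages, keyed by their SHA-256 hashes
  MapIndex<HashCode, TransactionMessage> txMessages = blockchain.getTxMessages();

  // Execution results of the committed transactions
  ProofMapIndexProxy<HashCode, TransactionResult> txResults = blockchain.getTxResults();
}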

External Service API

The external service API is used for the interaction between a service and external systems. A set of operations defined by a service usually includes read requests for the blockchain data with the provision of the corresponding cryptographic proof. Exonum provides an embedded web framework for implementing the REST-interface of the service.

Service#createPublicApiHandlers method is used to set the handlers for HTTP requests. These handlers are available at the common path corresponding to the service name. Thus, the /balance/:walletId handler for balance requests in the "cryptocurrency" service will be available at /api/services/cryptocurrency/balance/:walletId.
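A sketch of such a handler is shown below. It assumes the hypothetical MySchema from the examples above and that Node#withSnapshot passes a Snapshot to the given function and returns its result; check the Javadocs for the exact signatures.

@Override
public void createPublicApiHandlers(Node node, Router router) {
  // Will be served at /api/services/<service-name>/balance/:walletId
  router.get("/balance/:walletId").handler(rc -> {
    HashCode walletId = HashCode.fromString(rc.pathParam("walletId"));
    Long balance = node.withSnapshot(snapshot -> {
      MySchema schema = new MySchema(snapshot);
      return schema.balanceById().get(walletId);
    });
    if (balance != null) {
      rc.response().end(String.valueOf(balance));
    } else {
      rc.response().setStatusCode(404).end();
    }
  });
}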

See documentation on the possibilities of Vert.x used as a web framework.

Dependencies Management

Exonum uses Guice to describe the dependencies of the service components (both system-specific ones, for example, Exonum time service, and external ones). Each service should define a Guice module describing implementations of the Service and its dependencies, if any.

A service module shall:

  1. extend AbstractServiceModule;
  2. be annotated with @org.pf4j.Extension;
  3. be public.

Minimalistic Example of Service Module

@Extension
public class ServiceModule extends AbstractServiceModule {

  @Override
  protected void configure() {
    // Define the Service implementation.
    bind(Service.class).to(CryptocurrencyService.class).in(Singleton.class);

    // Define the TransactionConverter implementation required
    // by the CryptocurrencyService.
    bind(TransactionConverter.class).to(CryptocurrencyTransactionConverter.class);
  }
}

The fully-qualified name of the module class is recorded in the service artifact metadata and is used by the framework to instantiate services.

For more information on using Guice, see the project wiki.

Testing

Exonum Java Binding provides a powerful testing toolkit: the exonum-testkit library. TestKit allows testing transaction execution in a synchronous environment by offering simple network emulation (that is, without the consensus algorithm or network operation involved).

Project Configuration

New projects

exonum-testkit is already included in projects generated with exonum-java-binding-service-archetype, so you can skip the following instructions.

For existing projects, include the following dependency in your pom.xml:

<dependency>
  <groupId>com.exonum.binding</groupId>
  <artifactId>exonum-testkit</artifactId>
  <version>0.8.0</version>
  <scope>test</scope>
</dependency>

As the TestKit uses a library with the implementation of native methods, pass the java.library.path system property to the JVM:

-Djava.library.path=$EXONUM_HOME/lib/native

The EXONUM_HOME environment variable should point at the installation location, as specified in the After Install section.

Surefire/Failsafe for Maven should be configured as follows:

<plugin>
    <!-- You can also configure a failsafe to run integration tests during
         'verify' phase of a Maven build to separate unit tests and ITs. -->
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <argLine>
            -Djava.library.path=${path-to-java-bindings-library}
        </argLine>
    </configuration>
</plugin>

Creating Test Network

To perform testing, we first need to create a network emulation, that is, an instance of TestKit. The TestKit allows recreating the behavior of a single full node (a validator or an auditor) in an emulated Exonum blockchain network.

To instantiate the TestKit, use TestKit.Builder. It allows configuring the type of the emulated node (validator or auditor), the number of validators in the network, the deployed services and, optionally, the Time Oracle.

Note

Note that regardless of the configured number of validators, only a single node will be emulated. This node will create the service instances, execute operations of those instances (e.g., afterCommit(BlockCommittedEvent) method logic), and provide access to their state.

A default TestKit with a single validator as the emulated node, a single service and no Time Oracle can be instantiated in the following way:

try (TestKit testKit = TestKit.forService(MyServiceModule.class)) {
  // Test logic
}

The TestKit can also be instantiated using a builder if a different configuration is needed:

try (TestKit testKit = TestKit.builder()
    .withServices(MyServiceModule.class, MyServiceModule2.class)
    .withValidators(2)
    .build()) {
  // Test logic
}

Transactions Testing

The TestKit allows testing transaction execution by submitting blocks with the given transaction messages. Here is an example of a test that verifies the execution result of a valid transaction and the changes it made to the service schema:

try (TestKit testKit = TestKit.forService(MyServiceModule.class)) {
  // Construct a valid transaction
  TransactionMessage validTx = constructValidTransaction();

  // Commit block with this transaction
  Block block = testKit.createBlockWithTransactions(validTx);

  // Retrieve a snapshot of the current database state
  Snapshot view = testKit.getSnapshot();
  // It can be used to access the core schema, for example to check the
  // transaction execution result:
  Blockchain blockchain = Blockchain.newInstance(view);
  Optional<TransactionResult> validTxResult =
        blockchain.getTxResult(validTx.hash());
  assertThat(validTxResult).hasValue(TransactionResult.successful());
  // And also to verify the changes the transaction made to the service state:
  MySchema schema = new MySchema(view);
  // Perform assertions on the data in the service schema
}

Here is a test that verifies that a transaction that throws an exception during its execution fails:

try (TestKit testKit = TestKit.forService(MyServiceModule.class)) {
  // Construct a transaction that throws `TransactionExecutionException` during
  // execution
  byte errorCode = 1;
  String errorDescription = "Test";
  TransactionMessage errorTx =
      constructErrorTransaction(errorCode, errorDescription);

  // Commit block with this transaction
  Block block = testKit.createBlockWithTransactions(errorTx);

  // Check that transaction failed
  Snapshot view = testKit.getSnapshot();
  Blockchain blockchain = Blockchain.newInstance(view);

  Optional<TransactionResult> errorTxResult =
      blockchain.getTxResult(errorTx.hash());
  TransactionResult expectedTransactionResult =
      TransactionResult.error(errorCode, errorDescription);
  assertThat(errorTxResult).hasValue(expectedTransactionResult);
}

The TestKit also allows creating blocks that contain all current in-pool transactions:

try (TestKit testKit = TestKit.forService(MyServiceModule.class)) {
  // Put the transaction into the TestKit transaction pool
  MyService service = testKit.getService(MyService.SERVICE_ID, MyService.class);

  TransactionMessage message = constructTransactionMessage();
  RawTransaction rawTransaction = RawTransaction.fromMessage(message);
  service.getNode().submitTransaction(rawTransaction);

  // This block will contain the transaction submitted above
  Block block = testKit.createBlock();
  // Check the resulting block or blockchain state
}

The TestKit provides getTransactionPool() and findTransactionsInPool(Predicate<TransactionMessage> predicate) methods to inspect the transaction pool. These methods are useful when there is a need to verify transactions that the service instance submitted itself (e.g., in afterCommit method) into the transaction pool.

Note

Note that blocks that are created with TestKit.createBlockWithTransactions(Iterable<TransactionMessage> transactionMessages) will ignore in-pool transactions. As of 0.8.0, there is no way to create a block that would contain both given and in-pool transactions with a single method. To do that, put the given transactions into the TestKit transaction pool with Node.submitTransaction(RawTransaction rawTransaction).

Checking the Blockchain State

In order to test service read operations and verify changes in the blockchain state, the TestKit provides a snapshot of the current database state (i.e., the one that corresponds to the latest committed block). There are several ways to access it:

  • Snapshot getSnapshot() Returns a snapshot of the current database state
  • void withSnapshot(Consumer<Snapshot> snapshotFunction) Performs the given function with a snapshot of the current database state
  • <ResultT> ResultT applySnapshot(Function<Snapshot, ResultT> snapshotFunction) Performs the given function with a snapshot of the current database state and returns a result of its execution

Note

Note that the withSnapshot and applySnapshot methods destroy the snapshot once the passed closure completes. When using getSnapshot, the created snapshots are only disposed when the TestKit is closed, which might cause excessive memory usage if many snapshots are created. Therefore, it is recommended to use withSnapshot or applySnapshot if a large number (e.g., more than a hundred) of snapshots needs to be created.

Time Oracle Testing

The TestKit allows using the Time Oracle in integration tests if your service depends on it. To do that, the TestKit should be created with a TimeProvider. Its implementation, FakeTimeProvider, mocks the source of the external data (the current time) and therefore allows manually manipulating the time returned by the time service. Note that the time must be set in the UTC time zone.

@Test
void timeOracleTest() {
  ZonedDateTime initialTime = ZonedDateTime.now(ZoneOffset.UTC);
  FakeTimeProvider timeProvider = FakeTimeProvider.create(initialTime);
  try (TestKit testKit = TestKit.builder()
      .withService(MyServiceModule.class)
      .withTimeService(timeProvider)
      .build()) {
    // Create an empty block
    testKit.createBlock();
    // The time service submitted its first transaction in `afterCommit`
    // method, but it has not been executed yet
    Optional<ZonedDateTime> consolidatedTime1 = getConsolidatedTime(testKit);
    // No time is available till the time service transaction is processed
    assertThat(consolidatedTime1).isEmpty();

    // Increase the time value
    ZonedDateTime time1 = initialTime.plusSeconds(1);
    timeProvider.setTime(time1);
    testKit.createBlock();
    // The time service submitted its second transaction. The first must
    // have been executed, with consolidated time now available and equal to
    // initialTime
    Optional<ZonedDateTime> consolidatedTime2 = getConsolidatedTime(testKit);
    assertThat(consolidatedTime2).hasValue(initialTime);

    // Increase the time value
    ZonedDateTime time2 = time1.plusSeconds(1);
    timeProvider.setTime(time2);
    testKit.createBlock();
    // The time service submitted its third transaction, and processed the
    // second one. The consolidated time must be equal to time1
    Optional<ZonedDateTime> consolidatedTime3 = getConsolidatedTime(testKit);
    assertThat(consolidatedTime3).hasValue(time1);
  }
}

private Optional<ZonedDateTime> getConsolidatedTime(TestKit testKit) {
  return testKit.applySnapshot(s -> {
    TimeSchema timeSchema = TimeSchema.newInstance(s);
    return timeSchema.getTime().toOptional();
  });
}

TestKit JUnit 5 Extension

The TestKit JUnit 5 extension simplifies writing tests that use the TestKit. It allows injecting TestKit objects into test cases as parameters and disposes of them afterwards. To enable it, define a TestKitExtension object annotated with @RegisterExtension and provide it with a builder. The builder will be used to construct the injected TestKit objects:

@RegisterExtension
TestKitExtension testKitExtension = new TestKitExtension(
  TestKit.builder()
    .withService(MyServiceModule.class));

@Test
void test(TestKit testKit) {
  // Test logic
}

It is possible to configure the injected TestKit instance with the following annotations:

  • @Validator sets an emulated TestKit node type to validator
  • @Auditor sets an emulated TestKit node type to auditor
  • @ValidatorCount sets a number of the validator nodes in the TestKit network

These annotations should be applied to the TestKit parameter:

@RegisterExtension
TestKitExtension testKitExtension = new TestKitExtension(
  TestKit.builder()
    .withService(MyServiceModule.class));

@Test
void validatorTest(TestKit testKit) {
  // Injected TestKit has a default configuration, specified in the builder
  // above
}

@Test
void auditorTest(@Auditor @ValidatorCount(8) TestKit testKit) {
  // Injected TestKit has an altered configuration — "auditor" as an emulated
  // node and 8 validator nodes
}

Note

Note that after the TestKit is instantiated in the given test context, it is not possible to reconfigure it again. For example, if the TestKit is injected in @BeforeEach method, it can't be reconfigured in @Test or @AfterEach methods. Also note that the TestKit cannot be injected in @BeforeAll and @AfterAll methods.

API

To test an API implemented with Vert.x tools, use the tools described in the project documentation. You can use the Vert.x Web Client or a different HTTP client.

An example of API service tests can be found in ApiControllerTest.

Using Libraries

An Exonum service can use any third-party library as its dependency. At the same time, Exonum comes with its own dependencies, whose classes are used in Exonum public APIs.

Said dependencies are provided by the framework and must be used as provided. They will not be changed in an incompatible way in a compatible Exonum release. An up-to-date list is also available in the Exonum bill of materials (BOM).

On top of that, Guava can also be used as a provided library, and doing so is recommended.

Note

These dependencies do not have to be declared explicitly because any service depends on "exonum-java-binding-core", which has them as transitive dependencies.

These libraries must not be packaged into the service artifact. To achieve that in Maven, use the provided Maven dependency scope in the dependency declarations if you would like to specify them explicitly.

How to Build a Service Artifact

Exonum Java services are packaged as JAR archives with some extra metadata, required to identify the service and instantiate it.

If you used the service archetype to generate the project template, the build definition already contains all the required configuration. Hence you can invoke mvn verify and use the produced service artifact.

In case the service build definition needs to be configured, ensure that the following required metadata is present in the service artifact JAR:

  • Entries in the JAR manifest:
    • "Plugin-Id": must be set to "groupId:artifactId:version", e.g., com.exonum.example.timestamping:timestamping-demo:1.0.2.
    • "Plugin-Version": must be set to the project version, e.g., 1.0.2.
  • A fully-qualified name of the service module class in "META-INF/extensions.idx" file. This file is automatically generated by the annotation processor of @Extension.

How to Run a Service

  • Make sure you followed the steps mentioned in Installation section.
  • Follow the instructions in the Application Guide to configure and start an Exonum node with your service. The guide is provided inside the archive as well.

Built-In Services

Currently Java Binding includes the following built-in services:

  • Configuration Update Service. Although every node has its own configuration file, some settings should be changed for all nodes simultaneously. This service allows updating global configuration parameters of the network without stopping the nodes. The changes are agreed upon through the consensus mechanism.

  • Anchoring Service. The anchoring service writes the hash of the current Exonum blockchain state to the Bitcoin blockchain with a certain time interval. The anchored data is authenticated by a supermajority of validators using digital signature tools available in Bitcoin.

  • Time Oracle. Time oracle allows user services to access the calendar time supplied by validator nodes to the blockchain.

Services Activation

No services are enabled on the node by default. To enable services, define them in the services.toml configuration file. This file is required to run a node and should be located in the working directory of your project, where you run the commands. It consists of two sections: system_services and user_services.

The user_services section enumerates services in the form name = artifact, where name is a one-word description of the service and artifact is the path to the service artifact. The path must be absolute unless you want to depend on the application working directory.

Note

At least one service must be defined in the [user_services] section.

[user_services]
service_name1 = "/path/to/service1_artifact.jar"

The optional system_services section is used to enable built-in Exonum services.

system_services = ["service-name"]

where possible values for service-name are:

  • configuration for Configuration Update Service
  • btc-anchoring for Anchoring Service
  • time for Time Oracle

Note

If this section is absent, only the Configuration Service will be activated.

Below is a sample services.toml file that enables all the built-in Exonum services and two user services:

system_services = ["configuration", "btc-anchoring", "time"]
[user_services]
service_name1 = "/path/to/service1_artifact.jar"
service_name2 = "/path/to/service2_artifact.jar"

Logging Configuration

Java Binding uses two different methods for logging — Log4J in Java modules and env_logger in Rust modules.

Rust Logging

Rust logs are produced by Exonum Core and can be used to monitor the status of the blockchain node, including information about the block commitment and the consensus status.

Rust logs are disabled by default and controlled by the RUST_LOG environment variable. It is recommended to set info logging level for Exonum modules and warn level for all other modules:

export RUST_LOG=warn,exonum=info,exonum-java=info

Log entries go to stderr by default.

See env_logger documentation for more information on possible configuration options.

Java Logging

Logs produced by Java code (the framework and its dependencies, and the deployed services) are handled by Log4J framework. The services can use either Log4J or SLF4J logging APIs.

Java logging configuration is controlled by the configuration file specified by the ejb-log-config-path parameter. If no file is provided, the logs are disabled. The Exonum Java package provides an example log4j-fallback.xml configuration file that can be found in the installation directory. With this file, INFO-level messages are printed to stdout. Also, see the Application Guide for more information on configuring the Exonum Java App.

See Log4J documentation for more information on possible configuration options.

Common Library

Java Binding includes a library module that can be useful for Java client applications that interact with an Exonum service. The module does not depend on Java Binding Core, but contains the Java classes required by the core that can also be easily used in clients, if necessary. The library provides the ability to create transaction messages, check proofs, serialize/deserialize data and perform cryptographic operations. To use the library, include the following dependency in your pom.xml:

    <dependency>
      <groupId>com.exonum.binding</groupId>
      <artifactId>exonum-java-binding-common</artifactId>
      <version>0.8.0</version>
    </dependency>

Known Limitations

  • Core collections necessary to form a complete cryptographic proof for user service data (collections and their elements) are available only in a "raw" form – without deserialization of the content, which makes their use somewhat difficult.
  • Custom Rust services can be added to the application only by modifying and rebuilding it.
  • Exonum Java application does not support Windows yet.
